Show Posts

This section allows you to view all posts made by this member.


Messages - dan_c

1
Questions and Answers / Re: Obtain Fit Statistics Like in Quest
« on: April 12, 2021, 09:02:40 AM »
Hi Juliane,

In the show file you will find the output from the population model, which includes the regression output and the latent variance-covariance matrix (unconditional and conditional, plus the model R-squared if you include regressors in the population model). This is where you will find the mean and variance of the latent ability distribution. See e.g.:

Code: [Select]
REGRESSION COEFFICIENTS

Regression Variable

CONSTANT                   1.332 ( 0.115)
-----------------------------------------------
An asterisk next to a parameter estimate indicates that it is constrained
===============================================

UNCONDITIONAL COVARIANCE/CORRELATION MATRIX

Dimension

Dimension_1
-------------------------------------------
Variance                   0.625  ( 0.113)
-------------------------------------------
An asterisk next to a parameter estimate indicates that it is constrained

The person fit statistics in pfit are weighted mean squares/infit statistics with expectation 1.
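For context, a minimal sketch of how these outputs are usually requested (file names are placeholders):

Code: [Select]
estimate;
show >> run1.shw;                            /* the population model tables quoted above appear in this show file */
show cases ! estimates=wle >> run1_wle.txt;  /* person estimates; setting the pfit option adds the case fit (infit) column */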

2
Questions and Answers / Re: Obtain Fit Statistics Like in Quest
« on: March 16, 2021, 10:18:05 AM »
Check out the output of the commands show and fit.

https://conquestmanual.acer.org/s4-00.html#show

Quote
tables =value list
If parameters output is requested, a total of eleven different tables can be produced.
...

The contents of the tables are:
...

2. The estimates, errors and fit statistics for each of the parameters in the item response model.
...
4.7.54.4 Examples

Code: [Select]
show;
Produces displays with default settings and writes them to the output window.



https://conquestmanual.acer.org/s4-00.html#fit

Quote
Produces residual-based fit statistics.
...
4.7.25.4 Examples

Code: [Select]
fit >> fit.res;
Uses the default fit design matrix and writes results to the file fit.res.
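Putting those together, a minimal sketch (output file names are placeholders):

Code: [Select]
show ! tables=2 >> items.shw;   /* table 2: estimates, errors and fit statistics for each item parameter */
fit >> fit.res;                 /* default residual-based fit design */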



3
Hi Scherge,

The model you have specified:
Code: [Select]
item - book + item*book;
requests an item parameter for each item and book combination. That is, you have 8 levels of “book”, and therefore each traditional item is generalised to have a distinct item parameter for each book (in your case, the “book” main effect is the average shift in difficulty across the books, and each item*book effect represents the deviation from the main book effect for this item and book combination).

For this to be an identified model, there will need to be at least two valid responses for each item and book combination (one in the zero category and one in another category). This is because the identification constraint on the item*book effect is that the parameter for the last item within each book is the negative sum of the other parameters within that book. If you only have one valid response within that book then the parameter cannot be calculated (and in fact the design matrix for the model is mis-specified).

I have attached the itanal for the model:
Code: [Select]
item - book;
You will see that you have only one response to book 1. This model is not identified. If you remove booklet 1, you can estimate this model (show file attached). Just note that you are not modelling the steps/categories within the items in this model. Each item score is simply multiplied by the person theta in the probability function and this is quite a restrictive specification.
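For reference, a minimal sketch of a full control file around the item*book model discussed above (file names, column positions and the number of items are hypothetical):

Code: [Select]
datafile booklets.dat;                     /* hypothetical data file */
format id 1-5 book 7 responses 10-29;      /* "book" must be read as an explicit variable */
model item - book + item*book;
estimate;
show >> booklets.shw;
itanal >> booklets.itn;                    /* classical item analysis, as attached above */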

4
Hi Scherge,

I think you are encountering issues because the latent correlation between your two dimensions is approaching 1. This is strong evidence of a unidimensional structure.

There are several ways you can get deviances and fit. The first is to not estimate standard errors - they are not required.

Code: [Select]
estimate ! stderr = none;
You can also try quick standard errors (these assume the off-diagonal elements of the variance-covariance matrix of the model parameters are 0). See the details in the manual: https://conquestmanual.acer.org/s4-00.html#est
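If you do want standard errors, a sketch of the quick option (assuming the stderr argument accepts quick; see the estimate section of the manual linked above):

Code: [Select]
estimate ! stderr = quick;   /* quick errors: off-diagonals of the parameter variance-covariance matrix treated as 0 */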

Another way of considering dimensionality is user-defined residual fit statistics. You fit a 1D model and then test the hypothesis that the observed responses to items 1 and 4 (and alternatively 2 and 3) fit the expectation given the model (that is, that this group of items fits a 1D model). As with fit statistics for individual items, the expected value is 1. The fit command gives confidence intervals around the estimate. See https://conquestmanual.acer.org/s4-00.html#fit.

I have attached a show file and some fit analysis from a 1D run.

5
Of course - I have removed the attachments.


6
Hi Jonas,

Your issue is with the data file you have created. It is encoded as UTF-8 with a byte order marker (BOM). Early versions of ConQuest 4 did not support this encoding. I would recommend against creating files with a BOM - it is not required for UTF-8, and it only seems to appear when exporting files from MS Excel.

ConQuest 5 will attempt to strip the BOM when reading the file. I would encourage you to upgrade: https://shop.acer.edu.au/acer-conquest-5.html

Note that in ConQuest 5, you can read and write CSV files directly and do not need to create a text (.dat) or SPSS file.

Attached is a converted data file (I did this with VS Code, which is free to use and an excellent text editor) and output files. Note that the issue with the covariance matrix is still present in your 2D model; I would guess it is to do with the very high correlation between your dimensions (>0.95), the very narrow variance (~0.1-0.3) and the small sample size. Note that the model deviance implies the 1D model is a better fit anyway.
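As a rough sketch of the ConQuest 5 CSV route (assuming the datafile command accepts a filetype option analogous to the one on show; the file name is a placeholder):

Code: [Select]
datafile responses.csv ! filetype = csv;   /* no need to export a .dat or .sav file first */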


7
Questions and Answers / Re: Differential Item Functioning
« on: December 13, 2020, 11:42:16 PM »
I notice you are using the ETS DIF categories. ConQuest will give you those directly using the mh command:

https://conquestmanual.acer.org/s4-00.html#mh

Note that to calculate mh, you can't estimate a facet for your grouping variable; instead, specify a group in your syntax.
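A minimal sketch of what that looks like (the grouping variable name and output file are placeholders, and the variable must also be read in via your format statement):

Code: [Select]
group level;         /* declare the grouping variable instead of including it in the model */
model item;
estimate;
mh >> dif_mh.txt;    /* Mantel-Haenszel DIF statistics, including the ETS categories */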

If I imagine your "TERM 2: level" effect is zero, then it looks to me like your item parameter for the omitted level of SUPK01 is -2.012, and the item parameter for the group "low" should be (-2.012 + 0 + -0.755) = -2.767. If your term 2 is not zero, then you need to consider that you have subtracted out the mean shift amongst the items first, and then estimated the item-specific shift for that group. If you only have two levels in your term 2, then you can just take the Wald statistic: abs(-0.755/0.247) ≈ 3.06, p < 0.05.

If you take two separate calibrations, I think the usual approach would be to mean-centre the calibrations first (which has a similar effect to subtracting out the main effect for group), unless you had some other way of establishing the metric (e.g., an anchored item). Otherwise you are assuming the two calibrations produce meaningful zero points, which may or may not be reasonable (e.g., whether you can assume parallel/equivalent samples by group).

Perhaps you can share your syntax and data?




8
Questions and Answers / Re: Differential Item Functioning
« on: December 11, 2020, 12:17:49 AM »
Can you insert a table with your working?

In this case I would have thought that the Wald statistic is given by xsi/SE and is testing the null that the item-by-group interaction for that item and that group is 0.

The link you give to memo 25 is using two separate calibrations (one for each group) and calculating the difference in xsi and the SE of the difference by hand.

Just another thing to keep in mind: if you are using "quick" errors, you may be under-estimating the SE (and increasing your chance of observing DIF). See the manual for a discussion.

9
Questions and Answers / Re: Differential Item Functioning
« on: December 09, 2020, 03:02:57 AM »
I think that is sound logic.

In the case of a binary DIF variable this will be equivalent - depending on specification you will either get a single DIF parameter (deviation of this group from the item parameter estimate for the omitted group), or you will get two (deviation of this group from the mean of the groups, where the second group is constrained to be the negative sum of the first).

In the case of more than two groups, then your proposed chi square test is reasonable.

10
Questions and Answers / Re: Read EAP file
« on: November 23, 2020, 08:46:05 PM »
The format of the EAP file is described in the manual:

https://conquestmanual.acer.org/s4-00.html#show

Quote
For plausible values (estimates=latent) and expected a-posteriori estimates (estimates=eap):

The file will contain one row for each case. Each row will contain (in order):

Sequence ID
PID (if PID is not specified in datafile or format then this is equal to the Sequence ID)
Plausible values. Note there will be np plausible values (default is 5) for each of nd dimensions. Dimensions cycle faster than plausible values, such that for nd = 2, and np = 3, the columns are in the order PV1_D1, PV1_D2, PV2_D1, PV2_D2, PV3_D1, PV3_D2.
the posterior mean (EAP), posterior standard deviation, and the reliability for the case, for each dimension. Note that these columns cycle faster than dimensions such that for nd = 2, and np = 3, the columns are in the order EAP_1, PosteriorSD_1, Reliability_1, EAP_2, PosteriorSD_2, Reliability_2.

If you use the option filetype to export a CSV, SPSS, or Excel file, you will see column headers providing a name for each of these columns.
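For example, a minimal sketch of writing the EAP file with column headers (the file name is a placeholder):

Code: [Select]
show cases ! estimates = eap, filetype = csv >> eap.csv;   /* one row per case, with the columns described above */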

11
Questions and Answers / Re: Fit indices in ConQuest
« on: November 09, 2020, 10:39:26 PM »
Isa,

BIC is reported in the output from the SHOW command in ConQuest Version 5. See example from Example 1 (https://www.acer.org/au/conquest/notes-tutorials):

Code: [Select]
The Data File: ex1.dat
The format:  id 1-5 responses 12-23
No case weights
The regression model:
Grouping Variables:
The item model: item
Slopes are fixed
Cases in file: 1000  Cases in estimation: 1000
Final Deviance:                                13274.87615
Akaike Information Criterion (AIC):            13300.87615
Akaike Information Criterion Corrected (AICc): 13300.56785
Bayesian Information Criterion (BIC):          13364.67697
Total number of estimated parameters: 13
The number of iterations: 45
Termination criteria:  Max iterations=1000, Parameter Change= 0.00010
                       Deviance Change= 0.00010
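For reference, a minimal sketch of the control file behind that summary (based on the data file and format statement shown in the output):

Code: [Select]
datafile ex1.dat;
format id 1-5 responses 12-23;
model item;
estimate;
show >> ex1.shw;   /* the deviance, AIC, AICc and BIC appear in the summary section of this file */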

You can download ConQuest 5 from the shop: https://shop.acer.edu.au/acer-conquest-5.html

12
Hi Isa,

If you have pre and post data, I would suggest estimating the growth in a single IRT model. One way is to create a wide file, load the time 1 items onto one latent, load the time 2 items onto a second latent, and anchor the difficulties of the items to be equal. The growth is then a function of the thetas (theta2 - theta1 = growth). If you want to analyse the growth (e.g., regress covariates on the growth) you could choose a slightly different parameterisation, or you could take PVs out of ConQuest and fit whatever model (ANCOVA, mixed effects, etc.) you like in another stats package.
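A rough sketch of the wide-file specification, assuming 20 items administered at both time points (the item ranges, file name and anchoring route are illustrative; importing anchor values for the time 2 copies of the items is one way to fix their difficulties to the time 1 estimates):

Code: [Select]
model item;
score (0,1) (0,1) ()  ! items(1-20);    /* time 1 items load on dimension 1 */
score (0,1) () (0,1)  ! items(21-40);   /* the same items at time 2 load on dimension 2 */
import anchor_xsi << anchors.txt;       /* anchor the time 2 item difficulties to the time 1 values */
estimate;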

If you want to do it all in ConQuest, I would encourage you to read this article:

Quote
Wilson, M., Zheng, X., & McGuire, L. (2012). Formulating latent growth using an explanatory item response model approach. Journal of Applied Measurement, 13(1), 1.

I ran your model, and I think the difference between PV and WLE reliabilities is as described above. You have:

- Dim 1 and 2 have items that are very easy
- some cases have no observations on a dimension and therefore a WLE cannot be calculated
- generally few items per dimension (see the Spearman-Brown prophecy formula to think about the maximum reliability you might expect)

13
Questions and Answers / Re: How to read a wle.csv file?
« on: October 13, 2020, 10:52:25 PM »
Hi Isa,

Check out the new, online ConQuest Manual. It includes lots of key information, including the formatting of output files:

https://conquestmanual.acer.org/s4-00.html#show

Quote
For maximum likelihood estimates and weighted likelihood estimates (estimates=mle or estimates=wle):

The file will contain one row for each case that provided a valid response to at least one of the items analysed (one item per dimension is required for multidimensional models). The row will contain the case number (the sequence number of the case in the data file being analysed), the raw score and maximum possible score on each dimension, followed by the maximum likelihood estimate and error variance for each dimension. The format is (i5, nd(2(f10.5, 1x)), nd(2(f10.5, 1x))). If the pfit option is set then an additional column is added containing the case fit statistics. The format is then (i5, nd(2(f10.5, 1x)), nd(2(f10.5, 1x)), f10.5)
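For example, a sketch of writing that file as a CSV so it includes column headers (the file name is a placeholder):

Code: [Select]
show cases ! estimates = wle, filetype = csv >> run_wle.csv;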

14
Hi Isa,

I can only see your labels file attached. Can you attach the rest of your files?

If you are trying to explain a population parameter, like average learning gain over time, I would recommend you use PVs. Point estimates, including EAPs, will result in biased secondary analyses. See for example:

https://doi.org/10.1016/j.stueduc.2005.05.005

15
ConQuest News / Re: ACER ConQuest Manual and Command Reference
« on: July 06, 2020, 11:15:34 PM »
The ConQuest manual has a new home:

http://conquestmanual.acer.org
