ConQuest Community
Help desk => Questions and Answers => Topic started by: peklyn98 on August 18, 2021, 07:30:59 AM

Hi,
I have been analyzing three different (but similar) instruments with the GPCM and the PCM. With the PCM, the EAP/PV reliability estimates are above .8 for all three instruments. With the GPCM, the estimates are in the .3.5 range. What could be going on here?

Can you tell me what version of ConQuest you are using by running:
about;
Can you also let me know what sort of analysis you are running? Can you reproduce the issue with one of our notes and tutorials? https://www.acer.org/au/conquest/notestutorials. Perhaps something simple like "Tutorial 1 - Rasch model multiple choice test"? If not, can you upload your data and syntax?

The version is 4.14.2.
I have thought about this and I think the problem lies in the study design and the cases constraint needed in the GPCM analysis. The study follows patients undergoing a certain surgical procedure, and the data come from symptom severity questionnaires that the patients have taken preop and postop. When analyzing with the PCM, I could calibrate items (and persons) based on preop data and then use these item parameters when estimating symptom severity in the postop data. This doesn't work very well with the GPCM, because the identification constraint must be placed on the persons/cases. So basically the program is forced to produce a latent trait distribution with mean 0 from a group that had a mean of 0 preop and now has far fewer symptoms (and this would then affect the postop reliability estimates). Does this reasoning make sense to you?
What I did to avoid this problem was to estimate preop and postop data jointly (because I am really only interested in the symptom severity). I don't know if this is a reasonable strategy theoretically, but it seems to work fine in practice. The only things that bother me now are that:
1) the EAP reliability estimates are 1.000, which makes me distrust the EAP estimation (so I also estimate with MLE and WLE), and
2) the infit and outfit values do not seem to be centered around 1. This was not the case when I ran the GPCM on preop and postop data separately.
Any thoughts on these issues?
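For reference, the identification setup being described would look something like this in ConQuest syntax (a sketch only: the file name, column positions, and the exact form of the set statement are assumptions to be checked against the ConQuest manual):

```
/* GPCM run on postop data alone: with free score parameters,
   identification has to come from constraining the case (person)
   distribution, so the postop latent mean is forced to 0 */
datafile postop.dat;              /* placeholder file name     */
format id 1-5 responses 7-26;     /* placeholder column layout */
set constraints=cases;            /* latent mean fixed at 0    */
model item + item*step ! scoresfree;
estimate;
```

It is this forced mean of 0 on a postop sample with far fewer symptoms that makes the preop-anchored PCM workflow awkward to reproduce under the GPCM.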

If you estimated a many-facets model you could get a joint calibration that would account for the pre-post design, and you could still use the GPCM specification.
You would have (up to) two response vectors per case, and your model statement would look like:
model item + item*step + time ! scoresfree;
If you had enough data, you could also empirically assess whether items behave consistently over time:
model item + item*step + time + item*time ! scoresfree;
(your expectation is that item*time interactions are 0, and deviations from 0 are distributed chi-square with DF = 1).
That would give you a set of anchor values to use in further analysis. You could then drop the location constraint altogether (in ConQuest 5, you could also drop the scale constraint) by anchoring the item parameters (both locations and discriminations in ConQuest 5, if you want to estimate the variance). You would then use those in a two-dimensional model (preop and postop) to yield a mixed-effects model that gives you a fixed average growth between time points and random growth within persons (via PVs), as well as unattenuated correlation estimates.
here's an example: https://research.acer.edu.au/rc2130/rc2021/papers/4/
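A sketch of what the follow-up two-dimensional run could look like (the import/anchor statement, the score-statement layout, and all column positions are assumptions from memory, so verify them against the ConQuest manual before use):

```
/* pre-post data treated as 40 generalised items: the preop block
   loads on dimension 1, the postop block on dimension 2, with item
   parameters anchored from the joint calibration so that no
   location/scale constraint is needed for identification */
datafile prepost.dat;                      /* placeholder         */
format id 1-5 responses 7-46;              /* placeholder         */
import anchor_parameters << joint.anc;     /* anchors from step 1 */
set constraints=none;
model item + item*step ! scoresfree;
score (0,1,2) (0,1,2) ()  ! items(1-20);   /* preop  -> dim 1 */
score (0,1,2) () (0,1,2)  ! items(21-40);  /* postop -> dim 2 */
estimate;
show >> twodim.shw;
```

Under this setup the estimated latent covariance matrix gives the unattenuated pre-post correlation, and the difference between the two dimension means gives the fixed average change.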

Hi,
Many thanks for the suggestion! I added a time variable (0 = preop; 1 = postop) and ran the model item + step*item + time ! scoresfree, but then, for some reason, ConQuest finds a third time category (a 4 or an 8). I have checked the data (the time variable) over and over again, and I can only find 0's and 1's. Any thoughts on how to remedy this issue?
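One possible culprit worth ruling out (an assumption, not a diagnosis of this dataset): if the format statement's column ranges overlap or are off by one, a response code can be read as a "time" value, which would appear as an extra category such as a 4 or an 8. Roughly, with placeholder positions:

```
/* 'time' must be read from its own column; if 'time 7' accidentally
   points into the response block, response codes (e.g. 4 or 8) will
   be picked up as extra time categories */
format id 1-5 time 7 responses 9-28;   /* all positions are placeholders */
codes 0,1,2,3,4;                       /* restrict valid response codes  */
model item + item*step + time ! scoresfree;
```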

you'll have to upload your data and syntax for me to take a closer look - is that possible?
alternatively dan.cloneyATacerDOTorg.