Hi Shirley

Your choice of reliability estimate depends on your choice of ability estimate.

Plausible values are the only estimates that give unbiased population estimates, provided they are drawn from the correct model. You would need to run a conditional model that includes all the “other independent variables” you will use in secondary analyses as regressors. Using only one plausible value per student will also give unbiased population estimates, but if you would like to estimate the measurement error you will need all 5 plausible values. However, plausible values are biased at the individual level; you can only use them to make inferences about the population.
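For the measurement-error part, analyses over the 5 PVs are typically combined with Rubin's rules for multiple imputation: run the analysis once per PV, average the results, and add the between-PV variance to the average sampling variance. A minimal sketch in Python (the estimates and variances below are made-up numbers, not real output):

```python
import statistics

# Hypothetical: the same statistic (e.g. a regression coefficient)
# estimated once per plausible value, with its sampling variance.
pv_estimates = [0.52, 0.47, 0.55, 0.50, 0.49]    # made-up numbers
pv_variances = [0.010, 0.011, 0.009, 0.010, 0.012]

m = len(pv_estimates)                            # m = 5 plausible values
point = statistics.mean(pv_estimates)            # combined point estimate
within = statistics.mean(pv_variances)           # average sampling variance
between = statistics.variance(pv_estimates)      # variance across the PVs
total_var = within + (1 + 1 / m) * between       # Rubin's combining rule
```

The `between` term is the measurement-error component: with one PV per student it is unobservable, which is why all 5 are needed to quantify it.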

If you do not have a defined population, you may prefer to use WLEs. Each WLE corresponds to one raw score. They are the easiest to estimate and they are unbiased at the individual level. However, they can lead to biased population estimates: the variance can be overestimated and, unlike PVs, they are affected by floor and ceiling effects.

EAPs are almost the same as the average of the five PVs. As with PVs, you will need to include all your other variables of interest as regressors in the conditioning model. EAPs are biased at the individual level and may give biased estimates for the population. If you have a large sample, enough items and good reliability, it is probably better to use one PV than the EAP.
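To see why the EAP can bias the population variance while a single PV does not, here is a toy simulation under a simple normal model (the error variance s2 and the shrinkage factor lam are illustrative assumptions, not output from any real analysis): the EAP is the posterior mean, which shrinks towards the population mean, whereas a PV adds a draw of the posterior uncertainty back in.

```python
import math
import random
import statistics

random.seed(7)
n = 20000
s2 = 0.5                # assumed measurement error variance
lam = 1 / (1 + s2)      # shrinkage factor under a normal model

theta = [random.gauss(0, 1) for _ in range(n)]              # true abilities
x = [t + random.gauss(0, math.sqrt(s2)) for t in theta]     # observed scores

eap = [lam * xi for xi in x]                                # posterior means
pv = [lam * xi + random.gauss(0, math.sqrt(lam * s2))       # one posterior draw
      for xi in x]

var_eap = statistics.pvariance(eap)   # close to lam < 1: EAPs shrink the spread
var_pv = statistics.pvariance(pv)     # close to 1: a single PV recovers it
```

In this toy model the true population variance is 1; the EAP variance lands near lam (about 0.67 here), which is the overall pattern behind the advice to prefer a PV for population statistics.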

Regarding the Wright map, I can’t tell without looking at the show file. If you email it to me (gebhardt@acer.edu.au), I can have a look. Please make sure you have the latest version of ConQuest. The variance printed in the matrix is the latent variance, estimated before any individual abilities were estimated. The plot, on the other hand, is based on the estimates for the individual students (WLEs, I assume), so a mismatch may indicate a bias in the WLEs. Try making a map that uses PVs (show !estimates=latent) and compare it with the one you have now.

Regarding your last question, please have a look at “Reliability as a Measurement Design Effect” by Raymond J. Adams, Studies in Educational Evaluation 31 (2005), 162–172. (I can send you a copy if you email me.)

Best wishes

Eveline