Author Topic: contrasting results in ConQuest

navid

  • Newbie
  • Posts: 7
contrasting results in ConQuest
« on: December 06, 2013, 08:23:47 PM »
As a follow-up to my previous post, we are running an MRCMLM analysis in ConQuest. The test has 70 items and six subtests. We need to fit eight models and see which fits best: a unidimensional model, two between-item multidimensional models (with five and six dimensions), and five testlet models. The problem is that we get drastically different results each time we run the analysis. I know this may be partly due to the iterative estimation that ConQuest uses, but what should we do?
Any advice is appreciated.

Navid

Eveline Gebhardt

  • Administrator
  • Full Member
  • Posts: 103
Re: contrasting results in ConQuest
« Reply #1 on: December 08, 2013, 11:16:00 PM »
In what way are they different?

The item parameters are constrained to have an average difficulty of zero within each dimension. The easiest way to compare the item parameter estimates from two runs is to make a scatterplot; if the runs agree, you should see a straight line for each dimension.
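If it helps, here is a minimal sketch of that scatterplot check in Python. It assumes the item estimates from each run have been saved as simple CSV files; the file names and column layout are hypothetical placeholders, not something ConQuest produces by itself.

Code: [Select]
# Scatterplot of item difficulty estimates from two runs of the same model.
# Assumes each run was exported as CSV rows of item,dimension,estimate;
# the file names and column names here are hypothetical.
import csv
import matplotlib.pyplot as plt

def read_estimates(path):
    """Return {(item, dimension): estimate} from a simple CSV export."""
    estimates = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            estimates[(row["item"], row["dimension"])] = float(row["estimate"])
    return estimates

run1 = read_estimates("run1_items.csv")
run2 = read_estimates("run2_items.csv")
common = sorted(set(run1) & set(run2))

plt.scatter([run1[k] for k in common], [run2[k] for k in common])
plt.xlabel("Run 1 item estimate (logits)")
plt.ylabel("Run 2 item estimate (logits)")
plt.title("Item parameter estimates across two runs")
plt.show()

If the two runs have really found the same solution, the points should fall on the identity line within each dimension.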


navid

  • Newbie
  • Posts: 7
Re: contrasting results in ConQuest
« Reply #2 on: December 09, 2013, 12:44:31 PM »
Sorry, I guess my post was a bit vague.
We want to compare the relative fit of these models. Each time I run ConQuest, I get a different deviance! Hence, sometimes the between-item multidimensional model shows better fit, but if I run the analyses again I get a different deviance, and hence different AIC and BIC indices, and the testlet model may show better fit. That is, the estimates are not stable. Should I increase the number of nodes? How many nodes are needed for stable estimation?
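For reference, the AIC and BIC are simple functions of the deviance, so any run-to-run change in the deviance carries straight through to both indices. A minimal sketch in Python, with hypothetical deviances, parameter count, and sample size:

Code: [Select]
# AIC/BIC from a model's deviance. Any instability in the deviance shows up
# one-for-one in both indices, since the penalty terms are fixed per model.
from math import log

def aic(deviance, n_params):
    return deviance + 2 * n_params

def bic(deviance, n_params, n_cases):
    return deviance + n_params * log(n_cases)

# Two hypothetical runs of the same model: the parameter count is fixed,
# so the 5.5-point deviance difference moves both indices by exactly 5.5.
for run, dev in [("run 1", 51234.2), ("run 2", 51228.7)]:
    print(run, "AIC =", round(aic(dev, 97), 1), "BIC =", round(bic(dev, 97, 2000), 1))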

A related question is how we should compare ability estimates across the models. Since ConQuest doesn't offer an overall ability estimate for examinees, how should I compare the ability estimates from the between-item multidimensional model with those obtained from the testlet model?

Hope it makes sense this time!

Eveline Gebhardt

  • Administrator
  • Full Member
  • Posts: 103
Re: contrasting results in ConQuest
« Reply #3 on: December 10, 2013, 09:14:06 PM »
Did the model converge? If not, increase the maximum number of iterations or the number of nodes. If it did converge, you could tighten (i.e. decrease) the convergence criteria for the parameter estimates or the deviance.

Sometimes ConQuest tells you that the final solution is not the best solution. In that case, you need to play a little with the same options. You can also change the seed in the set command.

With complex models, the deviance can be a little unstable indeed. Exporting the log file could help you find out how to make the model converge. If you like, you can email me your input and output files and I can have a look as well (gebhardt@acer.edu.au).

I am not sure what you mean by an overall ability. If you run a multidimensional model, students receive an ability estimate for each dimension.

navid

  • Newbie
  • Posts: 7
Re: contrasting results in ConQuest
« Reply #4 on: December 13, 2013, 01:56:50 PM »
Thank you, Eveline,
I will try the options.
With respect to your last comment ("I am not sure what you mean by an overall ability. If you run a multidimensional model, students receive an ability estimate for each dimension."): yes, I know a separate ability estimate is reported for each dimension, but I have read papers in which ability estimates were compared across models with different dimensionality designs (testlet vs. between-item multidimensional models). They couldn't have made that comparison dimension by dimension, because the number of dimensions is not the same across models.
I once asked a well-known figure in Rasch modelling how we can get an overall ability estimate in an MRCMLM. He said, "simply add the individual ability estimates on each dimension"! This doesn't make much sense to me, because the ability estimates are not on the same scale. I thought there might be a more plausible option.
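To make the scale problem concrete, here is a minimal sketch with made-up numbers (an illustration only, not a method anyone in this thread recommends): a raw sum is dominated by whichever dimension happens to have the larger spread, which is why one common workaround standardizes each dimension before averaging.

Code: [Select]
# Raw sums of ability estimates from different dimensions are dominated by
# the dimension with the larger spread; z-scoring each dimension first puts
# them on a comparable (if still debatable) footing. Numbers are hypothetical.
from statistics import mean, pstdev

dim1 = [-0.2, 0.1, 0.4, 0.0, -0.3]   # narrow dimension (logits)
dim2 = [-2.5, 1.8, 0.9, -1.1, 0.7]   # much wider dimension (logits)

def zscores(xs):
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

raw_composite = [a + b for a, b in zip(dim1, dim2)]               # mostly dim2
z_composite = [(a + b) / 2 for a, b in zip(zscores(dim1), zscores(dim2))]
print("raw:", [round(x, 2) for x in raw_composite])
print("z:  ", [round(x, 2) for x in z_composite])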

Eveline Gebhardt

  • Administrator
  • Full Member
  • Posts: 103
Re: contrasting results in ConQuest
« Reply #5 on: December 17, 2013, 12:26:35 AM »
Hi Navid

In Ray Adams' view, you can't add across dimensions or compare across dimensions within a measurement paradigm; it would be like adding someone's height to their maximum running speed. That doesn't mean it is never reasonable to compare things normatively or to create composite indicators, e.g.:

  • weight plus height == size
  • or use percentiles: a person is at the 10th height percentile but at the 50th running percentile (a small sketch of this follows below)
  • or a kid is at the grade 4 level for reading but at the grade 5 level for maths
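A minimal sketch of that percentile idea in Python, with hypothetical ability estimates: each student's score is expressed relative to the group within its own dimension, so nothing is added across scales.

Code: [Select]
# Normative comparison across dimensions that share no common scale:
# convert each score to a percentile rank within its own dimension.
def percentile_rank(score, scores):
    """Percentage of the group scoring at or below this score."""
    return 100 * sum(s <= score for s in scores) / len(scores)

# Hypothetical dimension estimates (logits) for a small group of students.
reading = [-1.2, -0.4, 0.1, 0.3, 0.9, 1.5]
maths   = [-0.8, -0.5, 0.0, 0.6, 1.1, 2.0]

# One student's pair of estimates, placed within each distribution separately.
print("reading percentile rank:", round(percentile_rank(0.3, reading)))
print("maths percentile rank:  ", round(percentile_rank(-0.5, maths)))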

Eveline