Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - dan_c

Pages: [1] 2 3 ... 8
could you email me the original Excel file so I can see what is going on with the export?

If you don't want to upload it here, you can email it to me (

I use VS Code - you can download it here:

You can find instructions on how to view a hexdump of a file in VS Code here:

It does look like the same issue exists in this file too - you can fix it by deleting the trailing CRLF and "0" and then saving the file. We see this a fair bit with MS Excel - the file types created in MS do not conform to standards for text files (for example, we see BOM characters being prepended to files, and when exporting CSV there is no line feed for the last record). You could write an appropriate file using R, for example (the haven package writes SPSS files, the write.csv function writes CSVs, and the write_fwf function in the tidyverse library writes FWF).
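If you'd rather script the clean-up than edit by hand, here is a minimal Python sketch (not ConQuest functionality, just an illustration of the byte-level fix) that strips a UTF-8 BOM and normalises the end of the file. A stray byte sitting after the final newline, like the "0" above, would still need removing by hand:

```python
# Sketch only (not ConQuest functionality): clean up a text export from Excel.
# - removes a UTF-8 BOM if Excel prepended one
# - strips trailing CR/LF clutter and ends the file with exactly one LF
def clean_excel_export(data: bytes) -> bytes:
    if data.startswith(b"\xef\xbb\xbf"):  # UTF-8 BOM
        data = data[3:]
    return data.rstrip(b"\r\n") + b"\n"

print(clean_excel_export(b"\xef\xbb\xbf1,2,3\r\n"))  # b'1,2,3\n'
```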

There is something funky in the encoding of your data file. I would need to know more about how you created it to diagnose it.

I converted your data to CSV and the file is read in correctly. Note that CSV support is new in ConQuest 5, along with other improvements, including estimation speed and MCMC estimators.

Note that if I look at a hexdump of your data file, it appears there is some extra symbol at the end of the last record. This might be a hint to help you diagnose the issue at your end. Here are the last bytes:

Code: [Select]
00000b40: 32 31 32 32 31 32 32 32 32 31 32 32 32 32 32 32    2122122221222222
00000b50: 32 31 32 32 32 32 30 32 30 32 32 31 32 32 0A       21222202022122.

If I remove that character, then your model runs fine also. I have attached both data files below.
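For anyone without VS Code handy, a few lines of Python can show the tail of a file in the same style as the hexdump above (a sketch; the file path is whatever your data file is called):

```python
# Sketch: show the last bytes of a file in hex plus printable ASCII,
# to spot stray characters after the final record.
def tail_hexdump(data: bytes, n: int = 16) -> str:
    tail = data[-n:]
    hex_part = " ".join(f"{b:02x}" for b in tail)
    # Non-printable bytes (like LF, 0x0a) are shown as "."
    ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in tail)
    return f"{hex_part}  {ascii_part}"

# e.g. with open("mydata.dat", "rb") as f: print(tail_hexdump(f.read()))
print(tail_hexdump(b"2122\r\n0"))  # 32 31 32 32 0d 0a 30  2122..0
```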

Questions and Answers / Re: syntax help
« on: July 18, 2021, 05:39:21 AM »
Of course.

Here is a good paper on specifying the bifactor model in ConQuest:

Brandt, S. (2008). Estimation of a Rasch model including subdimensions. IERI Monograph Series. Issues and Methodologies in Large-Scale Assessments, 1, 51–70.

Note there are several ways of specifying a bifactor model.

The simplest specification is one where you assume the items are measuring a set of dimensions with a particular structure. Typically each item is reflective of a combination of a general dimension and a specific dimension. Every item loads on the general dimension, and a subset of items loads on each specific dimension. The specific dimensions are orthogonal to the general dimension.

In ConQuest this would mean you would impose a covariance constraint: the covariance between D1 (your general dimension) and D2 to Dd is anchored at 0, where d is the number of specific dimensions.

Attached is a worked example.

Questions and Answers / Re: syntax help
« on: July 16, 2021, 02:18:32 AM »
I'm not sure it makes sense to model a single testlet - that will yield a bifactor model where all items load on one testlet factor and one trait factor.

perhaps you can share more about what you are trying to achieve? 

Questions and Answers / Re: syntax help
« on: July 12, 2021, 11:45:18 PM »
can you share your data?

If you have a column that indicates which passage/testlet each record belongs to then you simply:

Code: [Select]
model item; /* Rasch Model */
Code: [Select]
model item + passage; /* Facet Model - gives average difficulty of each passage/testlet */
Code: [Select]
model item + passage + item*passage; /* Facet Model - gives average difficulty of each passage/testlet plus an item-by-passage interaction */
You can evaluate model fit by considering nested models:
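The comparison itself is a likelihood-ratio test: the drop in deviance between the simpler and the more complex model is compared against a chi-square distribution with degrees of freedom equal to the number of extra parameters. A small Python sketch (the deviances and parameter counts are made-up illustration values, not from a real run):

```python
# Sketch: likelihood-ratio test for nested models.
# The chi-square statistic is the drop in deviance; df is the
# number of extra parameters in the more complex model.
def lr_test(dev_simple: float, npar_simple: int,
            dev_complex: float, npar_complex: int):
    return dev_simple - dev_complex, npar_complex - npar_simple

# Made-up illustration values:
chi2, df = lr_test(10502.3, 41, 10440.8, 49)
print(chi2, df)  # compare chi2 to the chi-square critical value on df degrees of freedom
```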

Thanks for reaching out, Jerred.

First of all, I would encourage you to upgrade to ConQuest 5. Version 4 is no longer supported and hasn't had any development, improvements or new features for several years.

The error you are getting is quite simple to fix: add a line feed (hit "enter") after the last column of your last record (see attached).
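If you would rather script it than hit "enter" by hand, a small Python sketch (the file name is hypothetical) appends the missing line feed only when the last record lacks one:

```python
# Sketch: make sure a data file ends with a line feed,
# appending one only if it is missing.
def ensure_final_newline(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    if data and not data.endswith(b"\n"):
        with open(path, "ab") as f:
            f.write(b"\n")

# e.g. ensure_final_newline("mydata.dat")  # hypothetical file name
```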

ConQuest implements a standard for fixed-width and delimited files that defines a record as starting at the beginning of a line and terminating with a line feed (either CRLF or LF).

Code: [Select]
fit >> fit.txt
This exports the fit for each generalised item (i.e., the default design matrix). Each row is an item, and this is the same as the fit statistics shown in the "show file".

Fit statistics for item pairs would show you the fit of these two items, relative to the rest of the items in the model. This kind of analysis is typically used to interrogate whether pairs of items belong together (e.g., to assess dimensionality or dependence). This would look like:

Code: [Select]
fit 1-2:3-4 ... >> fit.txt
(that is, the fit of the sum of items 1 and 2 in the default design matrix, relative to the rest of the items)

More complex fit analysis can be done by importing a custom design matrix:

This appears to be the mean and variance of the population distribution.

This information is in the show file. You can output this file, after estimation, as a text file, an Excel file, or directly to the screen. You are looking for table 3: "Estimates for each of the parameters in the population model and reliability estimates.".

The mean of the population is given under Regression Coefficients ("CONSTANT"), and the variance is given in the unconditional covariance/correlation matrix ("Variance"). You will need to convert the latent variance to SD and SE (note the number of cases in the model is given in table 1, see "Cases in estimation").
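As a sketch of that conversion (the variance and its standard error below are illustration values in the style of a show file; the SE of the SD uses a standard delta-method approximation, not ConQuest output):

```python
import math

# Illustration values (not from your run): latent variance and its SE.
var, se_var = 0.625, 0.113

sd = math.sqrt(var)
# Delta method: SE(sqrt(v)) is approximately SE(v) / (2 * sqrt(v)).
se_sd = se_var / (2 * sd)

print(round(sd, 3), round(se_sd, 3))  # 0.791 0.071
```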

Note if you have a complex population model (e.g., more than an intercept-only model), then it may be easier to use the command "descriptives":

"Descriptives" will directly report the mean, variance, SD for you (and the error associated with each). You can also calculate other statistics, like percentiles and classification above and below benchmarks.

Screenshot is attached.

You can return tables of item parameters (and their standard errors) as well as generate a Wright map with the command show:

For example, after estimation, typing and running "show;" will generate output to the terminal. I think you are interested in the following (see the options for "table"):

2. The estimates, errors and fit statistics for each of the parameters in the item response model.
4. A map of the latent distribution and the parameter estimates for each term in the item response model.
5. A vertical map of the latent distribution and threshold estimates for each generalised item.

Is this what you are looking for?

Questions and Answers / Re: Obtain Fit Statistics Like in Quest
« on: April 27, 2021, 11:17:55 AM »
Hi Juliane,

the output from pfit does not include standard errors, and therefore also does not include t values.

It is relatively easy to add them in and I have put them on the work agenda for a future release of ConQuest.

Note that person fit statistics will generally be estimated relatively poorly (we tend to have fewer items per person than persons per item) and the t values may not be very useful. It may be more interesting to look at some visualisations - for example, a QQ plot of the quantiles of the pfits versus the quantiles of the standard normal distribution. This will tell you if you have strong departure from the expectation.
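A minimal sketch of that QQ plot idea in Python (the pfit values are made-up illustration data; statistics.NormalDist supplies the standard normal quantiles):

```python
from statistics import NormalDist

# Made-up person fit values for illustration (expectation is 1).
pfits = sorted([0.62, 0.80, 0.91, 0.98, 1.04, 1.13, 1.27, 1.55])
n = len(pfits)

# Theoretical standard normal quantiles at plotting positions (i + 0.5) / n.
theoretical = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]

# Plot theoretical (x) against sorted pfits (y); strong departure
# from a straight line flags misfit.
for t, p in zip(theoretical, pfits):
    print(f"{t:7.3f}  {p:7.3f}")
```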

Questions and Answers / Re: Obtain Fit Statistics Like in Quest
« on: April 12, 2021, 09:02:40 AM »
Hi Juliane,

in the show file you will find the output from the population model, which includes regression output and the latent variance-covariance matrix (unconditional and conditional, plus model R-square if you include regressors in the population model). This is where you will find the mean and variance of the latent ability distribution. See e.g.:

Code: [Select]

Regression Variable

CONSTANT                   1.332 ( 0.115)
An asterisk next to a parameter estimate indicates that it is constrained



Variance                   0.625  ( 0.113)
An asterisk next to a parameter estimate indicates that it is constrained

The person fit statistics in pfit are weighted mean squares/infit statistics with expectation 1.
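For reference, that weighted mean square can be sketched as follows (the observed scores, expectations, and variances are made-up illustration numbers, not from a real run):

```python
# Sketch: weighted mean square (infit) for one person across five items.
# Made-up illustration numbers: observed scores, model expectations,
# and model variances for each item.
observed = [1, 0, 1, 1, 0]
expected = [0.8, 0.3, 0.6, 0.9, 0.4]
variance = [0.16, 0.21, 0.24, 0.09, 0.24]

# Infit = sum of squared residuals / sum of variances; expectation 1.
infit = sum((o - e) ** 2 for o, e in zip(observed, expected)) / sum(variance)
print(round(infit, 3))  # 0.489
```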

Questions and Answers / Re: Obtain Fit Statistics Like in Quest
« on: March 16, 2021, 10:18:05 AM »
check out the output of the commands show and fit.

tables =value list
If parameter output is requested, a total of eleven different tables can be produced.

The contents of the tables are:

2. The estimates, errors and fit statistics for each of the parameters in the item response model.
... Examples

Code: [Select]
show;
Produces displays with default settings and writes them to the output window.

Produces residual-based fit statistics.
... Examples

Code: [Select]
fit >> fit.res;
Uses the default fit design matrix and writes results to the file fit.res.

Hi Scherge,

The model you have specified:
Code: [Select]
item - book + item*book;
requests an item parameter for each item and book combination. That is, you have 8 levels of “book”, and therefore each traditional item is generalised to have a distinct item parameter for each book (in your case, the “book” main effect is the average shift in difficulty across the books, and each item*book effect represents the deviation from the main book effect for this item and book combination).

For this to be an identified model, there need to be at least two valid responses for each item and book combination (one in the zero category, and one in another category). This is because the identification constraint on the item*book effect is that the parameter for the last item within each book is the negative sum of the other parameters within the book. If you only have one valid response within that book, then the parameter cannot be calculated (and in fact the design matrix for the model is mis-specified).

I have attached the itanal for the model:
Code: [Select]
item - book;
You will see that you have only one response to book 1. This model is not identified. If you remove booklet 1, you can estimate this model (show file attached). Just note that you are not modelling the steps/categories within the items in this model. Each item score is simply multiplied by the person theta in the probability function, and this is quite a restrictive specification.

Hi Scherge,

I think you are encountering issues because the latent correlation between your two dimensions is approaching 1. This is strong evidence of a unidimensional structure.

There are several ways you can get deviances and fit. The first is to not estimate standard errors - they are not required.

Code: [Select]
estimate ! stderr = none;
You can also try quick standard errors (assumes off diagonals of the variance-covariance matrix of the model parameters are 0). See the details in the manual:

Another way of considering dimensionality is user-defined residual fit statistics. You fit a 1D model and then test the hypothesis that the observed responses to items 1 and 4 (and alternatively 2 and 3) fit the expectation given the model (that is, that this group of items fits a 1D model). As with fit statistics for individual items, the expected value is 1. The fit command gives confidence intervals around the estimate. see

I have attached a show file and some fit analysis from a 1D run.
