| Field | Value |
|---|---|
| Title | Data Management and Analysis of Tests |
| Description | A system for the management, assessment, and psychometric analysis of data from educational and psychological tests. |
| Authors | Gunter Maris [aut], Timo Bechger [aut], Jesse Koops [aut, cre], Ivailo Partchev [aut] |
| Maintainer | Jesse Koops <[email protected]> |
| License | LGPL-3 |
| Version | 1.5.0 |
| Built | 2024-11-03 06:44:24 UTC |
| Source | CRAN |
Dexter provides a comprehensive solution for managing and analyzing educational test data. The main features are:

- project databases providing a structure for storing data about persons, items, responses and booklets
- methods to assess data quality using classical test theory and plots
- CML calibration of the extended nominal response model and the interaction model
To learn more about dexter, start with the vignettes: 'browseVignettes(package="dexter")'
Dexter uses the following global options:

- 'dexter.use_tibble': return tibbles instead of data.frames, defaults to FALSE
- 'dexter.progress': show progress bars, defaults to TRUE in interactive sessions
- 'dexter.max_cores': the maximum number of cores that dexter will use; defaults to the minimum of 'Sys.getenv("OMP_THREAD_LIMIT")' and 'getOption("Ncpus")', otherwise unlimited
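For example, these options can be set for the current session in the usual way:

options(dexter.use_tibble = TRUE)   # return tibbles instead of data.frames
options(dexter.progress = FALSE)    # suppress progress bars
options(dexter.max_cores = 2L)      # cap the number of cores dexter may use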
Maintainer: Jesse Koops [email protected]
Authors:
Gunter Maris
Timo Bechger
Ivailo Partchev
Useful links:
Report bugs at https://github.com/dexter-psychometrics/dexter/issues
Computes estimates of ability for persons or for booklet scores
ability(dataSrc, parms, predicate = NULL, method = c("MLE", "EAP", "WLE"),
        prior = c("normal", "Jeffreys"), parms_draw = "average",
        mu = 0, sigma = 4, merge_within_persons = FALSE)

ability_tables(parms, design = NULL, method = c("MLE", "EAP", "WLE"),
               prior = c("normal", "Jeffreys"), parms_draw = "average",
               mu = 0, sigma = 4)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| parms | object produced by fit_enorm |
| predicate | an optional expression to subset data; if NULL, all data is used |
| method | Maximum Likelihood (MLE), Expected A Posteriori (EAP) or Weighted Likelihood (WLE) |
| prior | for EAP estimates, a normal prior or Jeffreys prior, i.e. a prior proportional to the square root of the test information |
| parms_draw | when parms is Bayesian, the index of the posterior sample of the item parameters to use for generating abilities; if parms_draw = 'average', the posterior mean is used |
| mu | mean of the normal prior |
| sigma | standard deviation of the normal prior |
| merge_within_persons | for persons who were administered multiple booklets, whether to provide one ability value per person (TRUE) or one per booklet (FALSE) |
| design | a data.frame with columns item_id and optionally booklet_id. If booklet_id is not included, the score transformation table is based on all items found in the design. If design is NULL and parms is an enorm fit object, the score transformation table is computed from the test design used to fit the items |
MLE estimation produces -Inf and Inf ability estimates for the minimum (= 0) and the maximum score on a booklet. If this is undesirable, we advise using the WLE. The WLE was proposed by Warm (1989) to reduce bias in the MLE and is also known as the Warm estimator.
For ability: a data.frame with columns booklet_id, person_id, booklet_score, theta and optionally se (standard error).

For ability_tables: a data.frame with columns booklet_id, booklet_score, theta and optionally se (standard error).
Warm, T. A. (1989). Weighted likelihood estimation of ability in item response theory. Psychometrika, 54(3), 427-450.
db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
f = fit_enorm(db)
mle = ability_tables(f, method = "MLE")
eap = ability_tables(f, method = "EAP", mu = 0, sigma = 1)
wle = ability_tables(f, method = "WLE")
plot(wle$booklet_score, wle$theta, xlab = "test-score", ylab = "ability est.", pch = 19)
points(mle$booklet_score, mle$theta, col = "red", pch = 19)
points(eap$booklet_score, eap$theta, col = "blue", pch = 19)
legend("topleft", legend = c("WLE", "MLE", "EAP N(0,1)"),
       col = c("black", "red", "blue"), bty = "n", pch = 19)
close_project(db)
Add item response data in long or wide format.
add_booklet(db, x, booklet_id, auto_add_unknown_rules = FALSE)

add_response_data(db, data, design = NULL, missing_value = "NA",
                  auto_add_unknown_rules = FALSE)
| Argument | Description |
|---|---|
| db | a connection to a dexter database, i.e. the output of start_new_project or open_project |
| x | a data.frame containing the responses and, optionally, person_properties. It should have one row per respondent, and the column names should correspond to the item_id's in the rules or to the names of the person_properties. See details |
| booklet_id | a (short) string identifying the test form (booklet) |
| auto_add_unknown_rules | if FALSE (the default), an error is generated when one or more responses do not appear in the scoring rules. If TRUE, unknown responses are assumed to have a score of 0 and are added to your scoring rules |
| data | response data in normalized (long) format. Must contain columns person_id, booklet_id, item_id and response |
| design | data.frame with columns booklet_id, item_id and optionally item_position, specifying the design of any _new_ booklets in your data |
| missing_value | value to use for responses in missing rows in your data, see details |
It is common practice to keep response data in tables where each row contains the responses of a single person. add_booklet is provided to input data in that form, one booklet at a time.

If the data.frame x contains a variable named person_id, this variable will be used to identify unique persons. It is assumed that a person takes a booklet only once; otherwise an error is generated. If a person_id is not supplied, dexter generates a unique person_id for each row of data.

Any column whose name has an exact match in the scoring rules entered with start_new_project is treated as an item; any column whose name has an exact match in the person_properties is treated as a person property. If a name matches both a person_property and an item_id, the item takes precedence. Columns other than items, person properties and person_id are ignored.

add_response_data can be used to add data that is already normalized. It takes a data.frame in long format with columns person_id, booklet_id, item_id and response, such as is typically found in databases. For booklets that are not already known in your project, you must specify the design via the design argument; failure to do so results in an error. Responses to items that should be present according to the design but that have no corresponding row in data are added with missing_value used for the response. If this missing value is not defined in your scoring rules and auto_add_unknown_rules is FALSE, this leads to an error message.

Note that responses are always treated as strings (in both functions), and NA values are transformed to the string "NA".
A list with information about the recent import.
db = start_new_project(verbAggrRules, ":memory:", person_properties = list(gender = "unknown"))
head(verbAggrData)
add_booklet(db, verbAggrData, "agg")
close_project(db)
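The example above covers add_booklet for wide data. A companion sketch for add_response_data, assuming an open project db whose scoring rules already define the (hypothetical) items itemA and itemB:

# hypothetical long-format response data for a new booklet 'bk1'
resp = data.frame(person_id  = c("p1", "p1", "p2", "p2"),
                  booklet_id = "bk1",
                  item_id    = c("itemA", "itemB", "itemA", "itemB"),
                  response   = c("0", "1", "1", "1"))
# 'bk1' is not yet known in the project, so a design must be supplied
dsg = data.frame(booklet_id = "bk1", item_id = c("itemA", "itemB"))
add_response_data(db, resp, design = dsg)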
Add, change or define item properties in a dexter project
add_item_properties(db, item_properties = NULL, default_values = NULL)
| Argument | Description |
|---|---|
| db | a connection to a dexter database, e.g. the output of start_new_project or open_project |
| item_properties | a data.frame containing a column item_id (matching item_id's already defined in the project) and 1 or more other columns with item properties (e.g. item_type, subject) |
| default_values | a list where the names are item properties and the values are defaults. The defaults will be used wherever the item property is unknown |
When entering response data in the form of a rectangular person x item table, it is easy to provide person properties but practically impossible to provide item properties. This function provides a possibility to do so.
Note that it is not possible to add new items with this function; use touch_rules if you want to add new items to your project.
nothing
See also fit_domains and profile_plot for possible uses of item_properties.
## Not run:
db = start_new_project(verbAggrRules, "verbAggression.db")
head(verbAggrProperties)
add_item_properties(db, verbAggrProperties)
get_items(db)
close_project(db)
## End(Not run)
Add, change or define person properties in a dexter project. Person properties defined here will also be automatically imported with add_booklet.
add_person_properties(db, person_properties = NULL, default_values = NULL)
| Argument | Description |
|---|---|
| db | a connection to a dexter database, e.g. the output of start_new_project or open_project |
| person_properties | a data.frame containing a column person_id and 1 or more other columns with person properties (e.g. education_type, birthdate) |
| default_values | a list where the names are person properties and the values are defaults. The defaults will be used wherever the person property is unknown |
Due to limitations in the SQLite database backend that we use, the default value for a person property can only be defined once per person_property.
nothing
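A minimal sketch, assuming an open project db; the person ids and the education_type property and its values are hypothetical:

add_person_properties(db,
    person_properties = data.frame(person_id = c("p1", "p2"),
                                   education_type = c("vocational", "general")),
    default_values = list(education_type = "unknown"))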
This is just an alias for DBI::dbDisconnect(db), included for completeness.
close_project(db)
| Argument | Description |
|---|---|
| db | connection to a dexter database |
extract equating information
## S3 method for class 'p2pass'
coef(object, ...)
| Argument | Description |
|---|---|
| object | a p2pass object, generated by probability_to_pass |
| ... | further arguments are currently ignored |
A data.frame with, per target booklet and score, the following columns:

- the id of the target booklet
- the score on the target booklet
- the probability to pass on the reference test given this score
- the proportion that correctly passes
- the sensitivity (the proportion of positives that are correctly identified as such)
- the specificity (the proportion of negatives that are correctly identified as such)
- the proportion in the sample with this score
extract enorm item parameters
## S3 method for class 'prms'
coef(object, hpd = 0.95, what = c("items", "var", "posterior"), ...)
| Argument | Description |
|---|---|
| object | an enorm parameters object, generated by the function fit_enorm |
| hpd | width of the Bayesian highest posterior density interval around mean_beta; value must be between 0 and 1, default is 0.95 |
| what | which coefficients to return; defaults to "items" |
| ... | further arguments to coef are ignored |
The parametrisation of IRT models is far from uniform and depends on the author. Dexter uses the following parametrisation for the extended Nominal Response Model (NRM):

$$P(X = a_j \mid \theta) = \frac{\exp\left(a_j\theta - \sum_{g=1}^{j}\beta_g(a_g - a_{g-1})\right)}{1 + \sum_{h=1}^{m}\exp\left(a_h\theta - \sum_{g=1}^{h}\beta_g(a_g - a_{g-1})\right)}$$

where $a_j$ is a shorthand for the integer score belonging to the j-th category of an item (with $a_0 = 0$). For dichotomous items with $a_1 = 1$ (i.e. the only possible scores are 0 and 1), this formula simplifies to the standard Rasch model: $P(X = 1 \mid \theta) = \exp(\theta - \beta_1)/(1 + \exp(\theta - \beta_1))$. For polytomous items, when all scores are equal to the categories (i.e. $a_j = j$ for all $j$), the NRM is equal to the Partial Credit Model, although with a different parametrisation than is commonly used. For dichotomous items and for all polytomous items where $a_j - a_{j-1}$ is constant, the formulation is equal to the OPLM.
Depends on the calibration method and the value of 'what'. For what = "items":

- for a CML calibration: a data.frame with columns item_id, item_score, beta, SE_beta
- for a Bayesian calibration: a data.frame with columns item_id, item_score, mean_beta, SD_beta, <hpd_b_left>, <hpd_b_right>

If what = "var" or what = "posterior", a matrix is returned with the variance-covariance matrix or the posterior draws, respectively.
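A short sketch, built like the other examples on this page:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
f = fit_enorm(db)
head(coef(f))                        # item_id, item_score, beta, SE_beta
vcv = coef(f, what = "var")          # variance-covariance matrix of the estimates
fb = fit_enorm(db, method = "Bayes", nDraws = 500)
head(coef(fb, hpd = 0.9))            # posterior means with 90% HPD intervals
close_project(db)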
This function is useful to inspect incomplete designs
design_info(dataSrc, predicate = NULL)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| predicate | an optional expression to subset data; if NULL, all data is used |
a list with the following components:

- a data.frame with columns booklet_id, item_id, item_position, n_persons
- a data.frame with columns booklet_id, group; booklets with the same 'group' are connected to each other
- TRUE/FALSE, indicating whether the design is connected or not
- a data.frame with columns item_id and testlet; items within the same testlet always occur together in a booklet
- a list of two adjacency matrices, *weighted_by_items* and *weighted_by_persons*; these matrices can be useful for visually inspecting the design with a package like *igraph*
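A minimal sketch; since the element names of the returned list are not spelled out above, they are best inspected with str():

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
dsg = design_info(db)
str(dsg)    # inspect the components listed above, e.g. the connectedness flag
close_project(db)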
Exploratory test for Differential Item Functioning
DIF(dataSrc, person_property, predicate = NULL)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database or a data.frame with columns: person_id, item_id, item_score |
| person_property | defines the groups of persons for which to calculate DIF |
| predicate | an optional expression to subset data; if NULL, all data is used |
Tests for equality of relative item/category difficulties across groups. Supplements the confirmatory approach of the profile plot.
An object of class DIF_stats holding statistics for overall-DIF and a matrix of statistics for DIF in the relative position of item-category parameters in the beta-parameterization, where they represent locations on the ability scale where adjacent categories are equally likely. If there is DIF, the function plot can be used to produce an image of the pairwise DIF statistics.
Bechger, T. M. and Maris, G. (2015). A Statistical Test for Differential Item Pair Functioning. Psychometrika, 80(2), 317-340.
A plot of the result is produced by the function plot.DIF_stats
db = start_new_project(verbAggrRules, ":memory:", person_properties = list(gender = 'unknown'))
add_booklet(db, verbAggrData, "agg")
dd = DIF(db, person_property = "gender")
print(dd)
plot(dd)
str(dd)
close_project(db)
Produce a diagnostic distractor plot for an item
distractor_plot(dataSrc, item_id, predicate = NULL, legend = TRUE,
                curtains = 10, adjust = 1, col = NULL, ...)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database or a data.frame with columns: person_id, item_id, response, item_score and optionally booklet_id |
| item_id | the id of the item to plot. A separate plot is produced for each booklet that contains the item, or an error message if the item_id is not known. Each plot contains a non-parametric regression of each possible response on the total score |
| predicate | an optional expression to subset data; if NULL, all data is used |
| legend | logical, whether to include the legend; default is TRUE |
| curtains | 100 times the tail probability of the sum scores to be shaded; default is 10. Set to 0 to show no curtains at all |
| adjust | factor to adjust the smoothing bandwidth relative to the default value |
| col | vector of colors to use for plotting. The names of the vector can be responses; if the vector is not named, colors are assigned to the most frequent responses first |
| ... | further arguments to plot |
Customization of the title and subtitle can be done with the arguments main and sub. These arguments can contain references to the variables item_id, booklet_id, item_position (if available), pvalue, rit and rir. References are made by prefixing these variables with a dollar sign. Variable names may be postfixed with a sprintf-style format string, e.g.

distractor_plot(db, main = 'item: $item_id', sub = 'Item rest correlation: $rir:.2f')
Silently, a data.frame of response categories and colors used. Potentially useful if you want to customize the legend or print it separately
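A minimal example; the title and subtitle use the dollar-sign references described above:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
distractor_plot(db, "S1DoScold", main = "item: $item_id",
                sub = "Item rest correlation: $rir:.2f")
close_project(db)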
Estimate the parameters of the Rasch model and the Interaction model
fit_domains(dataSrc, item_property, predicate = NULL)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database or a data.frame with columns: person_id, item_id, item_score |
| item_property | the item property defining the domains (subtests) |
| predicate | an optional expression to subset data; if NULL, all data is used |
We have generalised the interaction model for items having more than two (potentially a large number of) response categories. This function represents scores on subtests as super-items and analyses these as normal items.
An object of class imp holding results for the Rasch model and the interaction model.

See also plot.rim, fit_inter and add_item_properties.
db = start_new_project(verbAggrRules, ":memory:") add_booklet(db, verbAggrData, "agg") add_item_properties(db, verbAggrProperties) mSit = fit_domains(db, item_property= "situation") plot(mSit) close_project(db)
db = start_new_project(verbAggrRules, ":memory:") add_booklet(db, verbAggrData, "agg") add_item_properties(db, verbAggrProperties) mSit = fit_domains(db, item_property= "situation") plot(mSit) close_project(db)
Fits an Extended Nominal Response Model (ENORM) using conditional maximum likelihood (CML) or a Gibbs sampler for Bayesian estimation.
fit_enorm(dataSrc, predicate = NULL, fixed_params = NULL,
          method = c("CML", "Bayes"), nDraws = 1000,
          merge_within_persons = FALSE)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| predicate | an optional expression to subset data; if NULL, all data is used |
| fixed_params | optionally, a prms object from a previous analysis or a data.frame with parameters, see details |
| method | if CML, the estimation method will be Conditional Maximum Likelihood; otherwise, a Gibbs sampler will be used to produce a sample from the posterior |
| nDraws | number of Gibbs samples when the estimation method is Bayes |
| merge_within_persons | whether to merge different booklets administered to the same person, enabling linking over persons as well as booklets |
To support some flexibility in fixing parameters, fixed_params can be a dexter prms object or a data.frame. If a data.frame, it should contain the columns item_id, item_score and a difficulty parameter. Three types of parameters are supported:

- beta: thresholds between subsequent item categories
- eta: item-category parameters
- b: exp(-eta)

Each type corresponds to a different parametrization of the model.
An object of type prms. The prms object can be cast to a data.frame of item parameters using the function coef, or used directly as input for other dexter functions.

Maris, G., Bechger, T.M. and San-Martin, E. (2015). A Gibbs sampler for the (extended) marginal Rasch model. Psychometrika, 80(4), 859-879.

Koops, J., Bechger, T.M. and Maris, G. (in press). Bayesian inference for multistage and other incomplete designs. In Research for Practical Issues and Solutions in Computerized Multistage Testing. Routledge, London.
Functions that accept a prms object as input: ability, plausible_values, plot.prms, and plausible_scores.
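A minimal example of both estimation methods:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
f_cml = fit_enorm(db)                                 # CML calibration
f_bay = fit_enorm(db, method = "Bayes", nDraws = 500) # Bayesian calibration
head(coef(f_cml))
close_project(db)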
Estimate the parameters of the Interaction model and the Rasch model
fit_inter(dataSrc, predicate = NULL)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| predicate | an optional expression to subset data; if NULL, all data is used |
Unlike the Rasch model, the interaction model cannot be computed concurrently for a whole design of test forms. This function therefore fits the Rasch model and the interaction model on complete data. This typically consists of responses to items in one booklet, but can also consist of the intersection (common items) of two or more booklets. If the intersection is empty (no items common to all persons), the function exits with an error message.
An object of class rim holding results for the Rasch model and the interaction model.
db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
m = fit_inter(db, booklet_id == 'agg')
plot(m, "S1DoScold", show.observed = TRUE)
close_project(db)
Retrieve information about the booklets entered in the db so far
get_booklets(db)
| Argument | Description |
|---|---|
| db | a connection to a dexter database, i.e. the output of start_new_project or open_project |
A data frame with columns: booklet_id, n_persons, n_items and booklet_max_score. booklet_max_score gives the maximum theoretically possible score according to the scoring rules
Retrieve all items that have been entered in the db so far by booklet and position in the booklet
get_design(dataSrc, format = c("long", "wide"),
           rows = c("booklet_id", "item_id", "item_position"),
           columns = c("item_id", "booklet_id", "item_position"),
           fill = NA)
| Argument | Description |
|---|---|
| dataSrc | a dexter database or any object from which a design can be inferred |
| format | return format, see below |
| rows | variable that defines the rows; ignored if format = 'long' |
| columns | variable that defines the columns; ignored if format = 'long' |
| fill | if set, missing values will be replaced with this value; ignored if format = 'long' |
A data.frame with the design. The contents depend on the rows, columns and format parameters:

- if format is 'long': a data.frame with columns booklet_id, item_id, item_position (if available)
- if format is 'wide': a data.frame with the rows defined by the rows parameter and the columns by the columns parameter, with the remaining variable (i.e. item_id, booklet_id or item_position) making up the cells
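For example, assuming an open project db with data already added:

get_design(db, format = "long")    # booklet_id, item_id, item_position
get_design(db, format = "wide", rows = "booklet_id", columns = "item_id")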
Retrieve all items that have been entered in the db so far together with the item properties
get_items(db)
| Argument | Description |
|---|---|
| db | a connection to a dexter database, e.g. the output of start_new_project or open_project |
A data frame with column item_id and a column for each item property
Retrieve all persons/respondents that have been entered in the db so far together with their properties
get_persons(db)
| Argument | Description |
|---|---|
| db | a connection to a dexter database, e.g. the output of start_new_project or open_project |
A data frame with columns person_id and columns for each person_property
These functions are meant for people who want to develop their own models based on the data management structure of dexter. The benefit is some extra speed and less memory usage compared to using get_responses or get_testscores. The return value of get_resp_data can be used as the 'dataSrc' argument in analysis functions.
get_resp_data(dataSrc, qtpredicate = NULL, extra_columns = NULL,
              summarised = FALSE, env = NULL, protect_x = TRUE,
              retain_person_id = TRUE, merge_within_persons = FALSE,
              parms_check = NULL, raw = FALSE)

get_resp_matrix(dataSrc, qtpredicate = NULL, env = NULL)
| Argument | Description |
|---|---|
| dataSrc | data.frame, integer matrix, dexter database or 'dx_resp_data' object |
| qtpredicate | quoted predicate, e.g. quote(booklet_id == 'agg') |
| extra_columns | columns to be returned in addition to person_id, booklet_id, item_score, item_id |
| summarised | if TRUE, no item scores are returned, just booklet scores |
| env | environment for evaluation of qtpredicate, defaults to the caller environment |
| protect_x | best set to TRUE (default) |
| retain_person_id | whether to retain the original person_id levels or just use arbitrary integers |
| merge_within_persons | merge different booklets for the same person together |
| parms_check | data.frame of item_id, item_score to check for coverage of the data |
| raw | if TRUE, no sum scores, booklets or design are provided, and the arguments 'parms_check' and 'summarised' are ignored |
Regular users are advised not to use these functions as incorrect use can crash your R-session or lead to unexpected results.
get_resp_data returns a list with class 'dx_resp_data' with elements:

- when summarised is FALSE, a tibble(person_id, booklet_id, item_id, item_score, booklet_score [, extra_columns]), sorted in such a way that all rows pertaining to the same person-booklet are together; when summarised is TRUE, a tibble(person_id, booklet_id, booklet_score [, extra_columns])
- the design: a tibble(booklet_id, item_id), sorted
returns a matrix of item scores as commonly used in other IRT packages, facilitating easy connection of your own package to the data management capabilities of dexter
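A short developer sketch, assuming an open project db; since the element names of the returned list are not spelled out above, inspect the structure with str():

rsp = get_resp_data(db, summarised = FALSE)
str(rsp)                    # the 'dx_resp_data' list described above
m = get_resp_matrix(db)     # persons x items matrix of item scores
dim(m)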
Extract data from a dexter database
get_responses(dataSrc, predicate = NULL,
              columns = c("person_id", "item_id", "item_score"))
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| predicate | an expression to select data on |
| columns | the columns you wish to select; can include any column in the project, see get_variables |
Many functions in Dexter accept a data source and a predicate. Predicates are extremely flexible, but they have a few limitations because they work on the individual response level. It is therefore not possible, for example, to remove complete person cases from an analysis based on responses to a single item by using just a predicate expression.
For such cases, Dexter supports selecting the data and manipulating it before passing it back to a Dexter function or possibly doing something else with it. The following example will hopefully clarify this.
a data.frame of responses
## Not run:
# goal: fit the extended nominal response model using only persons
# without any missing responses
library(dplyr)

# the following would not work since it will omit only the missing
# responses, not the persons; which is not what we want in this case
wrong = fit_enorm(db, response != 'NA')

# to select on an aggregate level, we need to gather the data and
# manipulate it ourselves
data = get_responses(db, columns = c('person_id', 'item_id', 'item_score', 'response')) |>
  group_by(person_id) |>
  mutate(any_missing = any(response == 'NA')) |>
  filter(!any_missing)

correct = fit_enorm(data)
## End(Not run)
Retrieve the scoring rules currently present in the dexter project db
get_rules(db)
| Argument | Description |
|---|---|
| db | a connection to a Dexter database |
data.frame of scoring rules containing columns: item_id, response, item_score
Supplies the sum of item scores for each person selected.
get_testscores(dataSrc, predicate = NULL)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| predicate | an optional expression to filter data; if NULL, all data is used |
A tibble with columns person_id, booklet_id, booklet_score
Inspect the variables defined in your dexter project and their datatypes
get_variables(db)
| Argument | Description |
|---|---|
| db | a dexter project database |
The variables in Dexter consist of the item properties and person properties you specified and a number of reserved variables that are automatically defined, like response and booklet_id.

Variables in Dexter are most useful when used in predicate expressions. A number of functions can take a dataSrc argument and an optional predicate; predicates are a concise and flexible way to filter data for the different psychometric functions in Dexter. The variables can also be used to retrieve data in get_responses.
a data.frame with name and type of the variables defined in your dexter project
Test individual differences
individual_differences(dataSrc, predicate = NULL)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| predicate | an optional expression to subset data; if NULL, all data is used |
This function uses a score distribution to test whether there are individual differences in ability. First, it estimates ability based on the score distribution. Then, the observed distribution is compared to the one expected from the single estimated ability. The data are typically from one booklet but can also consist of the intersection (i.e., the common items) of two or more booklets. If the intersection is empty (i.e., no common items for all persons), the function will exit with an error message.
An object of type tind. Printing the object will show test results. Plotting it will produce a plot of expected and observed score frequencies, the former under the hypothesis that there are no individual differences.
db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
dd = individual_differences(db)
print(dd)
plot(dd)
close_project(db)
returns information function, expected score function, score simulation function, or score distribution for a single item, an arbitrary group of items or all items
information(parms, items = NULL, booklet_id = NULL,
            parms_draw = c("average", "sample"))

expected_score(parms, items = NULL, booklet_id = NULL,
               parms_draw = c("average", "sample"))

r_score(parms, items = NULL, booklet_id = NULL,
        parms_draw = c("average", "sample"))

p_score(parms, items = NULL, booklet_id = NULL,
        parms_draw = c("average", "sample"))
| Argument | Description |
|---|---|
| parms | object produced by fit_enorm |
| items | vector of one or more item_id's. If NULL and booklet_id is also NULL, all items in parms are used |
| booklet_id | id of a single booklet (e.g. to compute the test information function for that booklet); ignored if items is not NULL |
| parms_draw | when the item parameters are estimated with method "Bayes" (see: fit_enorm), whether to use the posterior average of the item parameters or a random draw from the posterior |
Each function returns a new function which accepts a vector of theta's and returns the following values:

- information: an equal-length vector with the information estimate at each value of theta
- expected_score: an equal-length vector with the expected score at each value of theta
- r_score: a matrix with length(theta) rows and one column per item, containing simulated scores based on theta; to obtain test scores, use rowSums on this matrix
- p_score: a matrix with length(theta) rows and one column for each possible sum score, containing the probability of the score given theta
db = start_new_project(verbAggrRules, ':memory:')
add_booklet(db, verbAggrData, "agg")
p = fit_enorm(db)

# plot the information function for a single item
ifun = information(p, "S1DoScold")
plot(ifun, from = -4, to = 4)

# compare the test information function to the population ability distribution
ifun = information(p, booklet_id = "agg")
pv = plausible_values(db, p)

op = par(no.readonly = TRUE)
par(mar = c(5, 4, 2, 4))
plot(ifun, from = -4, to = 4, xlab = 'theta', ylab = 'test information')
par(new = TRUE)
plot(density(pv$PV1), col = 'green', axes = FALSE, xlab = NA, ylab = NA, main = NA)
axis(side = 4)
mtext(side = 4, line = 2.5, 'population density (green)')
par(op)
close_project(db)
For multiple choice items that will be scored as 0/1, derive the scoring rules from the keys to the correct responses
keys_to_rules(keys, include_NA_rule = FALSE)
| Argument | Description |
|---|---|
| keys | a data.frame containing columns item_id, noptions and key, see details |
| include_NA_rule | whether to add an option 'NA' (which is scored 0) to each item |
This function might be useful in setting up the scoring rules when all items are multiple-choice and scored as 0/1.
The input data frame must contain the exact id of each item, the number of options, and the key. If the keys are all integers, it is assumed that responses are coded as 1 through the number of options; if they are all letters, it is assumed that responses are coded as A, B, C, ... All other cases result in an error.
A data frame that can be used as input to start_new_project
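A sketch with hypothetical items; the column names item_id, noptions and key follow the description above:

keys = data.frame(item_id  = c("itm01", "itm02", "itm03"),
                  noptions = 4,
                  key      = c("A", "C", "D"))
rules = keys_to_rules(keys, include_NA_rule = TRUE)
db = start_new_project(rules, ":memory:")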
Estimates correlations between latent traits using plausible values, as described in Marsman et al. (2022). An item_property is used to distinguish the different scales.
latent_cor(dataSrc, item_property, predicate = NULL, nDraws = 500,
           hpd = 0.95, use = "complete.obs")
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database or a data.frame with columns: person_id, item_id, item_score and the item_property |
| item_property | the name of the item property used to define the domains (scales). If dataSrc is a dexter database, the item property must be defined in the project; if dataSrc is a data.frame, it must be the name of one of its columns |
| predicate | an optional expression to subset data; if NULL, all data is used |
| nDraws | number of draws for the plausible values |
| hpd | width of the Bayesian highest posterior density interval around the correlations; value must be between 0 and 1 |
| use | only 'complete.obs' at this time: respondents who don't have a score for one or more scales are removed |
This function uses plausible values so results may differ slightly between calls.
A list containing an estimated correlation matrix, the corresponding standard deviations, and the lower and upper limits of the highest posterior density interval.
Marsman, M., Bechger, T. M., & Maris, G. K. (2022). Composition algorithms for conditional distributions. In Essays on Contemporary Psychometrics (pp. 219-250). Cham: Springer International Publishing.
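A minimal sketch using the verbal aggression data, where 'mode' is one of the item properties; since the element names of the returned list are not spelled out above, inspect with str():

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
add_item_properties(db, verbAggrProperties)
lc = latent_cor(db, item_property = "mode", nDraws = 100)
str(lc)
close_project(db)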
Opens a database created by function start_new_project
open_project(db_name = "dexter.db")
| Argument | Description |
|---|---|
| db_name | the name of the database to be opened |
a database connection object
Draw plausible, i.e. posterior predictive, sum scores on a set of items.
plausible_scores(dataSrc, parms = NULL, predicate = NULL, items = NULL,
                 parms_draw = c("sample", "average"), covariates = NULL,
                 nPS = 1, prior_dist = c("normal", "mixture"),
                 keep.observed = TRUE, by_item = FALSE,
                 merge_within_persons = FALSE)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| parms | an object returned by function fit_enorm |
| predicate | an expression to filter data. If missing, the function will use all data in dataSrc |
| items | vector of item_id's specifying the item set to generate the test scores for. If NULL, all items occurring in dataSrc are used |
| parms_draw | when the item parameters are estimated Bayesianly (see: fit_enorm), whether to use a random sample from the posterior or the posterior average of the item parameters |
| covariates | name or a vector of names of the variables to group the population, used to update the prior. A covariate must be a discrete person covariate that indicates nominal categories, e.g. gender or school. If dataSrc is a data.frame, it must contain the covariate |
| nPS | number of plausible test scores to generate per person |
| prior_dist | use a normal prior for the plausible values or a mixture of two normals. A mixture is only possible when there are no covariates |
| keep.observed | if responses to one or more of the items have been observed, the user can choose to keep these observations or generate new ones |
| by_item | return scores per item instead of sum scores |
| merge_within_persons | if a person took multiple booklets, this indicates whether plausible scores are generated per person (TRUE) or per booklet (FALSE) |
A typical use of this function is to generate plausible scores on a complete item bank when data is collected using an incomplete design
A data.frame with columns booklet_id, person_id, booklet_score and nPS plausible scores named PS1...PSn.
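A minimal example:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
f = fit_enorm(db, method = "Bayes", nDraws = 500)
ps = plausible_scores(db, parms = f, nPS = 3)
head(ps)    # booklet_id, person_id, booklet_score, PS1..PS3
close_project(db)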
Draws plausible values based on test scores
plausible_values(dataSrc, parms = NULL, predicate = NULL, covariates = NULL,
                 nPV = 1, parms_draw = c("sample", "average"),
                 prior_dist = c("normal", "mixture"),
                 merge_within_persons = FALSE)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| parms | an object returned by function fit_enorm |
| predicate | an expression to filter data. If missing, the function will use all data in dataSrc |
| covariates | name or a vector of names of the variables to group the populations used to improve the prior. A covariate must be a discrete person property (e.g. not a float) that indicates nominal categories, e.g. gender or school. If dataSrc is a data.frame, it must contain the covariate |
| nPV | number of plausible values to draw per person |
| parms_draw | when the item parameters are estimated with method "Bayes" (see: fit_enorm), whether to use a random sample from the posterior or the posterior average of the item parameters |
| prior_dist | use a normal prior for the plausible values or a mixture of two normals. A mixture is only possible when there are no covariates |
| merge_within_persons | if a person took multiple booklets, this indicates whether plausible values are generated per person (TRUE) or per booklet (FALSE) |
When the item parameters are estimated using fit_enorm(..., method = 'Bayes') and parms_draw = 'sample', the uncertainty of the item parameter estimates is taken into account when drawing multiple plausible values.

If there are covariates, the prior distribution is a hierarchical normal with equal variances across groups. When there is only one group, this becomes a regular normal distribution. When there are no covariates and prior_dist = "mixture", the prior is a mixture of two normal distributions, which gives a little more flexibility than a normal prior.
A data.frame with columns booklet_id, person_id, booklet_score, any covariate columns, and nPV plausible values named PV1...PVn.
Marsman, M., Maris, G., Bechger, T. M., and Glas, C.A.C. (2016). What can we learn from plausible values? Psychometrika, 81, 274-289. See also the vignette.
db = start_new_project(verbAggrRules, ":memory:", person_properties = list(gender = "<unknown>"))
add_booklet(db, verbAggrData, "agg")
add_item_properties(db, verbAggrProperties)
f = fit_enorm(db)
pv_M = plausible_values(db, f, (mode == "Do") & (gender == "Male"))
pv_F = plausible_values(db, f, (mode == "Do") & (gender == "Female"))

par(mfrow = c(1, 2))
plot(ecdf(pv_M$PV1), main = "Do: males versus females", xlab = "Ability", col = "red")
lines(ecdf(pv_F$PV1), col = "green")
legend(-2.2, 0.9, c("female", "male"), lty = 1, col = c('green', 'red'), bty = 'n', cex = .75)

pv_M = plausible_values(db, f, (mode == "Want") & (gender == "Male"))
pv_F = plausible_values(db, f, (mode == "Want") & (gender == "Female"))
plot(ecdf(pv_M$PV1), main = "Want: males versus females", xlab = "Ability", col = "red")
lines(ecdf(pv_F$PV1), col = "green")
legend(-2.2, 0.9, c("female", "male"), lty = 1, col = c('green', 'red'), bty = 'n', cex = .75)
close_project(db)
plot method for pairwise DIF statistics
## S3 method for class 'DIF_stats'
plot(x, items = NULL, itemsX = items, itemsY = items, alpha = 0.05, ...)
| Argument | Description |
|---|---|
| x | object produced by DIF |
| items | character vector of item id's for a subset of the plot; useful if you have many items. If NULL, all items are plotted |
| itemsX | character vector of item id's for the X axis |
| itemsY | character vector of item id's for the Y axis |
| alpha | significance level used to color the plot (two-sided) |
| ... | further arguments to plot |
Plotting produces an image of the matrix of pairwise DIF statistics. The statistics are standard normal deviates and are colored to distinguish significant from non-significant values. If there is no DIF, a proportion alpha of the cells will be colored significant by chance alone.
Feskens, R., Fox, J. P., & Zwitser, R. (2019). Differential item functioning in PISA due to mode effects. In Theoretical and Practical Advances in Computer-based Educational Measurement (pp. 231-247). Springer, Cham.
Plot equating information from probability_to_pass
## S3 method for class 'p2pass'
plot(x, what = c("all", "equating", "sens/spec", "roc"), booklet_id = NULL, ...)
| Argument | Description |
|---|---|
| x | an object produced by the function probability_to_pass |
| what | information to plot: 'equating', 'sens/spec', 'roc' or 'all' |
| booklet_id | vector of booklet_id's to plot; if NULL, all booklets are plotted |
| ... | any additional plotting parameters, e.g. cex = 0.7 |
The plot shows 'fit' by comparing the expected score based on the model (grey line) with the average scores based on the data (black line with dots) for groups of students with similar estimated ability.
## S3 method for class 'prms'
plot(x, item_id = NULL, dataSrc = NULL, predicate = NULL, nbins = 5,
     ci = 0.95, add = FALSE, col = "black", col.model = "grey80", ...)
| Argument | Description |
|---|---|
| x | object produced by fit_enorm |
| item_id | which item to plot; if NULL, one plot is made for each item |
| dataSrc | data source, see details |
| predicate | an expression to subset data in dataSrc |
| nbins | number of ability groups |
| ci | confidence interval for the error bars, between 0 and 1. Use 0 to suppress the error bars. Default = 0.95 for a 95% confidence interval |
| add | logical; if TRUE, add to an already existing plot |
| col | color for the observed score average |
| col.model | color for the expected score based on the model |
| ... | further arguments to plot |
The standard plot shows the fit against the sample on which the parameters were fitted. If dataSrc is provided, the fit is shown against the observed data in dataSrc. This may be useful for plotting the fit in different subgroups as a visual test for item level DIF. The confidence intervals denote the uncertainty about the predicted pvalues within the ability groups for the sample size in dataSrc (if not NULL) or the original data on which the model was fit.
Silently, a data.frame with observed and expected values possibly useful to create a numerical fit measure.
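A minimal example of the item fit plot:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
f = fit_enorm(db)
plot(f, item_id = "S1DoScold", nbins = 5)
close_project(db)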
Plot the item-total regressions fit by the interaction (or Rasch) model
## S3 method for class 'rim'
plot(x, items = NULL, summate = TRUE, overlay = FALSE, curtains = 10,
     show.observed = TRUE, ...)
| Argument | Description |
|---|---|
| x | an object produced by the function fit_inter or fit_domains |
| items | the items to plot (item_id's); if NULL, all items will be plotted |
| summate | if FALSE, regressions for polytomous items will be shown for each response option separately; default is TRUE |
| overlay | if TRUE and more than one item is specified, there will be two plots, one for the Rasch model and one for the interaction model, with all items overlaid; otherwise, one plot for each item with the two models overlaid. Ignored if summate is FALSE. Default is FALSE |
| curtains | 100 times the tail probability of the sum scores to be shaded; default is 10. Set to 0 to show no curtains at all |
| show.observed | if TRUE, the observed proportion correct at each sum score will be shown as dots; default is TRUE |
| ... | any additional plotting parameters |
Customization of title and subtitle can be done by using the arguments main and sub. These arguments can contain references to the variables item_id (if overlay=FALSE) or model (if overlay=TRUE) by prefixing them with a dollar sign, e.g. plot(m, main='item: $item_id')
Given response data that form a connected design, compute the probability to pass on the reference set conditional on each score on one or more target tests.
probability_to_pass(dataSrc, parms, ref_items, pass_fail, predicate = NULL,
                    target_booklets = NULL, nDraws = 1000)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
| parms | object produced by fit_enorm |
| ref_items | vector with id's of items in the reference set; they must all occur in dataSrc |
| pass_fail | pass-fail score on the reference set, i.e. the lowest score with which one passes |
| predicate | an optional expression to subset data in dataSrc; if NULL, all data is used |
| target_booklets | the target test booklet(s): a data.frame with columns booklet_id (if multiple booklets) and item_id. If NULL (default), this is derived from dataSrc and the probability to pass is computed for each test score of each booklet in your data |
| nDraws | the function uses a Markov chain Monte Carlo method to calculate the probability to pass; this is the number of Monte Carlo samples used |
Note that this function is computationally intensive and can take some time to run, especially when computing the probability to pass for multiple target booklets. Further technical details can be found in a vignette.
An object of type p2pass. Use coef() to extract the probability to pass for each booklet and score. Use plot() to plot the probabilities, sensitivity and specificity, or a ROC curve.
The function used to plot the results: plot.p2pass
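A sketch with a hypothetical reference set and pass-fail score; since the verbal aggression data is a single booklet, the target booklet is derived from the data:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
f = fit_enorm(db)
ref = get_items(db)$item_id[1:12]     # hypothetical reference set
p2p = probability_to_pass(db, f, ref_items = ref, pass_fail = 10)
coef(p2p)
plot(p2p, what = "equating")
close_project(db)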
Profile plot
profile_plot(dataSrc, item_property, covariate, predicate = NULL,
             model = c("IM", "RM"), x = NULL, col = NULL,
             col.diagonal = "lightgray", ...)
| Argument | Description |
|---|---|
| dataSrc | a connection to a dexter database or a data.frame with columns: person_id, item_id, item_score, and the item_property and covariate of interest |
| item_property | the name of the item property defining the domains. The item property should have exactly two distinct values in your data |
| covariate | name of the person property used to create the groups. There will be one line for each distinct value |
| predicate | an optional expression to filter data; if NULL, all data is used |
| model | "IM" (default) or "RM", where "IM" is the interaction model and "RM" the Rasch model. The interaction model is the default as it fits the data at least as well as the Rasch model |
| x | which category of the item_property to draw on the x axis; if NULL, one is chosen automatically |
| col | vector of colors to use for plotting |
| col.diagonal | color of the diagonal lines representing the test scores |
| ... | further graphical arguments to plot; graphical parameters for the legend can be postfixed with '.legend' |
Profile plots can be used to investigate whether two (or more) groups of respondents attain the same test score in the same way. The user must provide a (meaningful) classification of the items in two non-overlapping subsets such that the test score is the sum of the scores on the subsets. The plot shows the probabilities to obtain any combinations of subset scores with thin gray lines indicating the combinations that give the same test score. The thick lines connect the most likely combination for each test score in each group. When applied to educational test data, the plots can be used to detect differences in the relative difficulty of (sets of) items for respondents that belong to different groups and are matched on the test score. This provides a content-driven way to investigate differential item functioning.
db = start_new_project(verbAggrRules, ":memory:", person_properties=list(gender="unknown")) add_booklet(db, verbAggrData, "agg") add_item_properties(db, verbAggrProperties) profile_plot(db, item_property='mode', covariate='gender') close_project(db)
db = start_new_project(verbAggrRules, ":memory:", person_properties=list(gender="unknown")) add_booklet(db, verbAggrData, "agg") add_item_properties(db, verbAggrProperties) profile_plot(db, item_property='mode', covariate='gender') close_project(db)
Expected and observed domain scores, conditional on the test score, per person or test score. Domains are specified as categories of items using item_properties.
profile_tables(parms, domains, item_property, design = NULL)

profiles(dataSrc, parms, item_property, predicate = NULL,
         merge_within_persons = FALSE)
| Argument | Description |
|---|---|
| parms | an object returned by fit_enorm |
| domains | data.frame with a column item_id and a column with name equal to item_property |
| item_property | the name of the item property used to define the domains. If dataSrc is a dexter database, the item property must be defined in the project; if dataSrc is a data.frame, it must be the name of one of its columns |
| design | data.frame with columns item_id and optionally booklet_id |
| dataSrc | a connection to a dexter database or a data.frame with columns: person_id, item_id, item_score, an arbitrarily named column containing an item property and optionally booklet_id |
| predicate | an optional expression to subset data in dataSrc; if NULL, all data is used |
| merge_within_persons | whether to merge different booklets administered to the same person |
When using a unidimensional IRT model like the extended nominal response model in dexter (see: fit_enorm), the model is, as a rule, too simple to capture all the relevant dimensions in a test. Nevertheless, a simple model is quite useful in practice. Profile analysis can complement the model in this case by indicating how a test-taker, conditional on her/his test score, performs on a number of pre-specified domains: e.g. in case of a mathematics test the domains could be numbers, algebra and geometry, or in case of a digital test the domains could be animated versus non-animated items. This is done by comparing the achieved score on a domain with the expected score, given the test score.
For profiles: a data.frame with columns person_id, booklet_id, booklet_score, <item_property>, domain_score, expected_domain_score.

For profile_tables: a data.frame with columns booklet_id, booklet_score, <item_property>, expected_domain_score.
Verhelst, N. D. (2012). Profile analysis: a closer look at the PISA 2000 reading data. Scandinavian Journal of Educational Research, 56 (3), 315-332.
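A minimal example using the 'mode' item property of the verbal aggression data:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
add_item_properties(db, verbAggrProperties)
f = fit_enorm(db)
head(profiles(db, f, item_property = "mode"))
head(profile_tables(f, domains = verbAggrProperties[, c("item_id", "mode")],
                    item_property = "mode"))
close_project(db)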
Simulate item scores conditional on test scores using the interaction model
r_score_IM(m, scores)
| Argument | Description |
|---|---|
| m | an object produced by the function fit_inter |
| scores | vector of test scores |
a matrix of simulated item scores, with one column per item and one row per test score; the row order equals scores
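A minimal sketch; by construction, the simulated item scores in each row sum to the given test score:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
m = fit_inter(db)
sim = r_score_IM(m, scores = c(5, 10, 15))
rowSums(sim)    # equals the scores passed in
close_project(db)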
A data set with rated data. A number of student performances are rated twice on several aspects by independent judges. The ratings are binary and have been summed following the theory discussed by Maris and Bechger (2006, Handbook of Statistics). Data are a small subset of data collected on the State Exam Dutch as a second language for Speaking.
A data set with 75 rows and 15 columns.
A data set of item properties related to the rated data. These are the aspects: IH = content, WZ = word choice and phrasing, and WK = vocabulary.
A data set with 14 rows and 2 columns: item_id and aspect
A set of (trivial) scoring rules for the rated data set
A data set with 42 rows and 3 columns (item_id, response, item_score).
Read item parameters from oplm PAR or CML files
read_oplm_par(par_path)
| Argument | Description |
|---|---|
| par_path | path to a file in the (binary) OPLM PAR format or the human-readable CML format |
It is very occasionally useful to calibrate new items on an existing scale. This function offers the possibility to read parameters from the proprietary oplm format so that they can be used to fix a new calibration in Dexter on an existing scale of items that were calibrated in oplm.
Depends on the input. For .PAR files, a data.frame with columns: item_id, item_score, beta, nbr; for .CML files, also several statistics columns that OPLM outputs as part of the calibration.
## Not run:
par = read_oplm_par('/parameters.PAR')
f = fit_enorm(db, fixed_params = par)
## End(Not run)
Set performance standards on one or more test forms using the data driven direct consensus (3DC) method
standards_3dc(parms, design)

## S3 method for class 'sts_par'
coef(object, ...)

## S3 method for class 'sts_par'
plot(x, booklet_id = NULL, ...)
| Argument | Description |
|---|---|
| parms | parameters object returned from fit_enorm |
| design | a data.frame with columns 'cluster_id', 'item_id' and optionally 'booklet_id' to specify multiple test forms for standard setting, and/or columns 'cluster_nbr' and 'item_nbr' to specify the ordering of clusters and items in the forms and the application |
| object | an object containing parameters for the 3DC standard setting procedure |
| x | an object containing parameters for the 3DC standard setting procedure |
| booklet_id | which test form to plot |
| ... | ignored |
The data driven direct consensus (3DC) method of standard setting was invented by Gunter Maris and described in Keuning et. al. (2017). To easily apply this procedure, we advise to use the free digital 3DC application. This application can be downloaded from the Cito website, see the 3DC application download page. If you want to apply the 3DC method using paper forms instead, you can use the plot method to generate the forms from the sts_par object.
Although the 3DC method is used as explained in Keuning et al. (2017), the method we use for computing the forms is a simple maximum likelihood scaling from an IRT model, described in Moe and Verhelst (2017).
an object of type 'sts_par'
Keuning J., Straat J.H., Feskens R.C.W. (2017) The Data-Driven Direct Consensus (3DC) Procedure: A New Approach to Standard Setting. In: Blomeke S., Gustafsson J.E. (eds) Standard Setting in Education. Methodology of Educational Measurement and Assessment. Springer, Cham
Moe E., Verhelst N. (2017) Setting Standards for Multistage Tests of Norwegian for Adult Immigrants. In: Blomeke S., Gustafsson J.E. (eds) Standard Setting in Education. Methodology of Educational Measurement and Assessment. Springer, Cham
how to make a database for the 3DC standard setting application: standards_db
library(dplyr)
db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")
add_item_properties(db, verbAggrProperties)
design = get_items(db) |> rename(cluster_id='behavior')
f = fit_enorm(db)
sts_par = standards_3dc(f, design)
plot(sts_par)
# db_sts = standards_db(sts_par,'test.db',c('mildly aggressive','dangerously aggressive'))
This function creates an export (an sqlite database file) which can be used by the 3DC application. This is a free application with which a standard setting session can be facilitated over a LAN network using the Chrome browser. The 3DC application can be downloaded from the 3DC application download page.
standards_db( par.sts, file_name, standards, population = NULL, group_leader = "admin" )
par.sts |
an object containing parameters for the 3DC standard setting procedure, produced by standards_3dc |
file_name |
name of the exported database file |
standards |
vector of 1 or more standards. In case there are multiple test forms and they should use different performance standards, a list of such vectors. The names of this list should correspond to the names of the test forms |
population |
optional, a data.frame with three columns: 'booklet_id','booklet_score','n' (where n is a count) |
group_leader |
login name of the group leader. The login password will always be 'admin' but can be changed in the 3DC application |
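For illustration, a hedged sketch of a typical export, continuing from the standards_3dc example above; the file name and the two standard labels are illustrative assumptions:

# 'sts_par' is the object returned by standards_3dc() in the example above;
# the database file name and the standard labels are made up for illustration
db_sts = standards_db(sts_par, 'standards.db',
  standards = c('mildly aggressive', 'dangerously aggressive'))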
Imports a complete set of scoring rules and starts a new project (database)
start_new_project(rules, db_name = "dexter.db", person_properties = NULL)
rules |
A data frame with columns item_id, response and item_score |
db_name |
A string specifying a filename for a new sqlite database to be created. If this name does not contain a path, the file will be created in the working directory. Any existing file with the same name will be overwritten. For an in-memory database you can use the string ":memory:" |
person_properties |
An optional list of person properties. Names should correspond to person_properties intended to be used in the project. Values are used as default (missing) values. The datatype will also be inferred from the values. Known person_properties will be automatically imported when adding response data with add_booklet |
This package only works with closed items (e.g. Likert, multiple choice or possibly short answer items); it does not score any open items. The first step in creating a project is to import an exhaustive list of all items and all admissible responses, along with the score that each response will be given. Responses may be integers or strings but they will always be treated as strings. Scores must be integers, and the minimum score for an item must be 0. When inputting data, all responses not specified in the rules can optionally be treated as missing and ultimately scored 0, but it is good style to include the missing responses in the list. NA values will be treated as the string "NA".
a database connection object.
head(verbAggrRules)
db_name = tempfile(fileext='.db')
db = start_new_project(verbAggrRules, db_name,
  person_properties = list(gender = "unknown"))
Creates a dexter project database and fills it with response data based on a .dat and .scr file
start_new_project_from_oplm( dbname, scr_path, dat_path, booklet_position = NULL, responses_start = NULL, response_length = 1, person_id = NULL, missing_character = c(" ", "9"), use_discrim = FALSE, format = "compressed" )
dbname |
filename/path of new dexter project database (will be overwritten if already exists) |
scr_path |
path to the .scr file |
dat_path |
path to the .dat file |
booklet_position |
vector of start and end of the booklet position in the .dat file, e.g. c(1,4); all positions are counted from 1, and start and end are both inclusive. If NULL, this is read from the scr file. |
responses_start |
start position of responses in the .dat file. If NULL, this is read from the scr file. |
response_length |
length of individual responses, default=1 |
person_id |
optionally, a vector of start and end position of person_id in the .dat file. If NULL, person IDs will be auto-generated. |
missing_character |
vector of character(s) used to indicate missing in .dat file, default is to use both a space and a 9 as missing characters. |
use_discrim |
if TRUE, the scores for the responses will be multiplied by the discrimination parameters of the items |
format |
not used, at the moment only the compressed format is supported. |
start_new_project_from_oplm builds a complete dexter database from a .dat and .scr file in the proprietary oplm format. Four custom variables are added to the database: booklet_on_off, oplm_marginal, item_local_on_off, item_global_on_off. These are taken from the .scr file and can be used in predicates in the various dexter functions.
booklet_position and responses_start are usually inferred from the .scr file, but since they are sometimes misspecified there, they can be overridden. response_length is not inferred from the .scr file, since anything other than 1 is most often a mistake.
a database connection object.
## Not run: 
db = start_new_project_from_oplm('test.db', 'path_to_scr_file', 'path_to_dat_file',
  booklet_position=c(1,3), responses_start=101, person_id=c(50,62))
prms = fit_enorm(db, item_global_on_off==1 & item_local_on_off==1 & booklet_on_off==1)
## End(Not run)
Show simple Classical Test Analysis statistics at item and test level
tia_tables( dataSrc, predicate = NULL, type = c("raw", "averaged", "compared"), max_scores = c("observed", "theoretical"), distractor = FALSE )
dataSrc |
a connection to a dexter database, a matrix, or a data.frame with columns: person_id, item_id, item_score |
predicate |
An optional expression to subset data, if NULL all data is used |
type |
How to present the item level statistics: 'raw' for each booklet separately, 'averaged' for item statistics averaged over booklets, or 'compared' for item statistics compared across booklets (returned as a list, see Value) |
max_scores |
use the observed maximum item score or the theoretical maximum item score according to the scoring rules in the database to determine pvalues and maximum scores |
distractor |
add a tia for distractors, only useful for selected response (MC) items |
A list containing:
booklets |
a data.frame of statistics at booklet level |
items |
a data.frame (or list if type='compared') of statistics at item level |
distractors |
a data.frame of statistics at the response level (if distractor==TRUE), i.e. rvalue (pvalue for response) and rar (rest-alternative correlation) |
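A minimal sketch, again assuming the verbal aggression example data documented at the end of this manual:

db = start_new_project(verbAggrRules, ":memory:")
add_booklet(db, verbAggrData, "agg")

tia = tia_tables(db, type = "raw")
tia$booklets     # statistics at booklet level
head(tia$items)  # statistics at item level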
It is occasionally necessary to alter or add a scoring rule, e.g. in the case of a key error. This function offers the possibility to do so and also allows you to add new items to your project.
touch_rules(db, rules)
db |
a connection to a dexter project database |
rules |
A data frame with columns item_id, response and item_score |
The rules should contain all rules that you want to change or add. This means that in case of a key error in a single multiple choice question, you typically have to change two rules.
If the scoring rules pass a sanity check, a small summary of changes is printed and nothing is returned. Otherwise this function returns a data frame listing the problems found, with 4 columns:
id of the problematic item
if TRUE, the item has only one distinct score
if TRUE, the item contains two or more identical response categories
if TRUE, the minimum score of the item was not 0
## Not run: 
# given that in your dexter project there is an mc item with id 'itm_01',
# which currently has key 'A' but you want to change it to 'C'
new_rules = data.frame(item_id='itm_01', response=c('A','C'), item_score=c(0,1))
touch_rules(db, new_rules)
## End(Not run)
A data set of self-reported verbal behaviour in different frustrating situations (Vansteelandt, 2000). The data set also contains participants' reported gender and their scores on the 'anger' questionnaire.
A data set with 316 rows and 26 columns.
A data set of item properties related to the verbal aggression data
A data set with 24 rows and 5 columns.
A set of (trivial) scoring rules for the verbal aggression data set
A data set with 72 rows and 3 columns (item_id, response, item_score).