Combining regularization and the logistic regression model to validate the Q‐matrix for cognitive diagnosis models

Abstract

The Q-matrix is an important component of most cognitive diagnosis models (CDMs); however, in empirical studies it mainly relies on subject matter experts' judgements, which introduces the possibility of misspecified q-entries. To address this, statistical Q-matrix validation methods have been proposed to aid experts' judgement. A few of these methods, including the multiple logistic regression-based (MLR-B) method and the Hull method, can be applied to general CDMs, but they are either time-consuming or lack accuracy under certain conditions. In this study, we combine L1 regularization with the MLR model to validate the Q-matrix. Specifically, an L1 penalty term is imposed on the log-likelihood of the MLR model to select the necessary attributes for each item. A simulation study with various factors was conducted to examine the performance of the new method against the two existing methods. The results show that the regularized MLR-B method (a) produces the highest Q-matrix recovery rate (QRR) and true positive rate (TPR) under most conditions, especially with a small sample size; (b) yields a slightly higher true negative rate (TNR) than either the MLR-B or the Hull method under most conditions; and (c) requires less computation time than the MLR-B method and a similar computation time to the Hull method. A real data set is analysed for illustration purposes.

Three new corrections for standardized person‐fit statistics for tests with polytomous items

Abstract

Recent years have seen a growing interest in the development of person-fit statistics for tests with polytomous items. Some of the most popular person-fit statistics for such tests belong to the class of standardized person-fit statistics, T, which is assumed to have a standard normal null distribution. However, this distribution only holds when (a) the true ability parameter is known and (b) an infinite number of items are available. In practice, both conditions are violated, and the quality of person-fit results is expected to deteriorate. In this paper, we propose three new corrections for T that simultaneously account for the use of an estimated ability parameter and the use of a finite number of items. The three new corrections are direct extensions of those that were developed by Gorney et al. (Psychometrika, 2024, https://doi.org/10.1007/s11336-024-09960-x) for tests with only dichotomous items. Our simulation study reveals that the three new corrections tend to outperform not only the original statistic T but also an existing correction for T proposed by Sinharay (Psychometrika, 2016, 81, 992). Therefore, the new corrections appear to be promising tools for assessing person fit in tests with polytomous items.
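For intuition about this class of statistics, here is a minimal sketch of the classic standardized person-fit statistic l_z for the simpler dichotomous case (the statistic T discussed above generalizes the same standardization idea to polytomous items); the probabilities and response patterns below are invented for illustration.

```python
import math

def l_z(p, u):
    """Standardized log-likelihood person-fit statistic (l_z) for
    dichotomous responses u given model-implied probabilities p."""
    l0 = sum(math.log(pi) if ui else math.log(1 - pi) for pi, ui in zip(p, u))
    mean = sum(pi * math.log(pi) + (1 - pi) * math.log(1 - pi) for pi in p)
    var = sum(pi * (1 - pi) * math.log(pi / (1 - pi)) ** 2 for pi in p)
    return (l0 - mean) / math.sqrt(var)

p = [0.9, 0.8, 0.7, 0.3, 0.2]        # model-implied success probabilities
fitting = l_z(p, [1, 1, 1, 0, 0])    # responses consistent with the model
aberrant = l_z(p, [0, 0, 0, 1, 1])   # reversed, model-inconsistent pattern
```

A strongly negative value flags misfit: the aberrant pattern yields a much lower l_z than the model-consistent one.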

Modelling motion energy in psychotherapy: A dynamical systems approach

Abstract

In this study we introduce an innovative mathematical and statistical framework for the analysis of motion energy dynamics in psychotherapy sessions. Our method combines motion energy dynamics with coupled linear ordinary differential equations and a measurement error model, contributing new clinical parameters to enhance psychotherapy research. Our approach transforms raw motion energy data into an interpretable account of therapist–patient interactions, providing novel insights into the dynamics of these interactions. A key aspect of our framework is the development of a new measure of synchrony between the motion energies of therapists and patients, which holds significant clinical and theoretical value in psychotherapy. The practical applicability and effectiveness of our modelling and estimation framework are demonstrated through the analysis of real session data. This work advances the quantitative analysis of motion dynamics in psychotherapy, offering important implications for future research and therapeutic practice.
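A toy sketch of the modelling idea, assuming a two-variable coupled linear ODE system integrated by Euler's method with additive measurement noise; the Pearson correlation used here is only a crude stand-in for the paper's synchrony measure, and all parameter values are invented.

```python
import numpy as np

def simulate_dyad(a, b, c, d, x0, y0, dt=0.01, n_steps=2000, noise_sd=0.02, seed=0):
    """Euler integration of the coupled linear system
        dx/dt = a*x + b*y,   dy/dt = c*x + d*y,
    with additive Gaussian measurement noise on the observed series."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for t in range(n_steps - 1):
        x[t + 1] = x[t] + dt * (a * x[t] + b * y[t])
        y[t + 1] = y[t] + dt * (c * x[t] + d * y[t])
    x_obs = x + noise_sd * rng.standard_normal(n_steps)
    y_obs = y + noise_sd * rng.standard_normal(n_steps)
    return x_obs, y_obs

# damped, mutually coupled dyad; one partner starts active, the other at rest
x, y = simulate_dyad(a=-0.5, b=0.4, c=0.4, d=-0.5, x0=1.0, y0=0.0)
sync = np.corrcoef(x, y)[0, 1]   # crude synchrony proxy, not the paper's measure
```

The off-diagonal coefficients b and c encode how strongly each partner's motion energy drives the other, which is the kind of clinically interpretable parameter the framework targets.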

Assessing quality of selection procedures: Lower bound of false positive rate as a function of inter‐rater reliability

Abstract

Inter-rater reliability (IRR) is one of the commonly used tools for assessing the quality of ratings from multiple raters. However, applicant selection procedures based on ratings from multiple raters usually result in a binary outcome; the applicant is either selected or not. This final outcome is not considered in IRR, which instead focuses on the ratings of the individual subjects or objects. We outline the connection between the ratings' measurement model (used for IRR) and a binary classification framework. We develop a simple way of approximating the probability of correctly selecting the best applicants, which allows us to compute error probabilities of the selection procedure (i.e., false positive and false negative rates) or their lower bounds. We draw connections between the IRR and the binary classification metrics, showing that binary classification metrics depend solely on the IRR coefficient and the proportion of selected applicants. We assess the performance of the approximation in a simulation study and apply it in an example comparing the reliability of multiple grant peer review selection procedures. We also discuss other possible uses of the explored connections in other contexts, such as educational testing, psychological assessment, and health-related measurement, and implement the computations in the R package IRR2FPR.
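The link between reliability and selection errors can be illustrated with a small Monte Carlo sketch under a classical true-score model (this is a simulation stand-in, not the paper's analytic approximation): applicants are ranked by a noisy rating of fixed reliability, and the false positive rate is the share of selected applicants who are not among the truly best.

```python
import numpy as np

def selection_fpr(reliability, prop_selected, n_applicants=200_000, seed=1):
    """Share of selected applicants who are not among the truly best,
    when selection keeps the top fraction by a noisy rating."""
    rng = np.random.default_rng(seed)
    true_score = rng.standard_normal(n_applicants)
    noise_sd = np.sqrt(1.0 / reliability - 1.0)  # so Var(true)/Var(rating) = reliability
    rating = true_score + noise_sd * rng.standard_normal(n_applicants)
    k = int(prop_selected * n_applicants)
    selected = np.argsort(-rating)[:k]           # top k by observed rating
    deserving = np.argsort(-true_score)[:k]      # top k by true score
    false_positives = np.sum(~np.isin(selected, deserving))
    return false_positives / k

fpr_unreliable = selection_fpr(reliability=0.3, prop_selected=0.2)
fpr_reliable = selection_fpr(reliability=0.9, prop_selected=0.2)
```

Raising the reliability of the ratings lowers the false positive rate of the selection, which is the qualitative relationship the paper quantifies analytically.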

Using cross‐validation methods to select time series models: Promises and pitfalls

Abstract

Vector autoregressive (VAR) modelling is widely employed in psychology for time series analyses of dynamic processes. However, the typically short time series in psychological studies can lead to overfitting of VAR models, impairing their predictive ability on unseen samples. Cross-validation (CV) methods are commonly recommended for assessing the predictive ability of statistical models. However, it is unclear how the performance of CV is affected by characteristics of time series data and the fitted models. In this simulation study, we examine the ability of two CV methods, namely 10-fold CV and blocked CV, to estimate the prediction errors of three time series models of increasing complexity (person-mean, AR, and VAR), and evaluate how their performance is affected by data characteristics. We then compare these CV methods to the traditional methods using the Akaike (AIC) and Bayesian (BIC) information criteria in terms of their accuracy in selecting the most predictive models. We find that CV methods tend to underestimate prediction errors of simpler models, but overestimate prediction errors of VAR models, particularly when the number of observations is small. Nonetheless, CV methods, especially blocked CV, generally outperform the AIC and BIC. We conclude our study with a discussion on the implications of the findings and provide helpful guidelines for practice.
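A minimal sketch of the blocked CV idea for an AR(1) model, with hypothetical helper names: the series is cut into contiguous blocks, each block serves once as the test set, and a least-squares AR(1) fit on the remaining data predicts it.

```python
import numpy as np

def blocked_cv_splits(n_obs, n_folds):
    """Yield (train, test) index arrays where each test set is one
    contiguous block of the series (unlike plain k-fold CV, which
    scatters test points across time)."""
    indices = np.arange(n_obs)
    for test in np.array_split(indices, n_folds):
        train = np.setdiff1d(indices, test)
        yield train, test

def blocked_cv_error_ar1(y, n_folds=5):
    """Blocked-CV mean squared one-step prediction error of an AR(1)
    model fitted by least squares."""
    errors = []
    for train, test in blocked_cv_splits(len(y) - 1, n_folds):
        coef = np.polyfit(y[train], y[train + 1], 1)   # regress y[t+1] on y[t]
        pred = np.polyval(coef, y[test])
        errors.append(np.mean((y[test + 1] - pred) ** 2))
    return float(np.mean(errors))

# simulate a short AR(1) series with phi = 0.5 and unit innovation variance
rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(299):
    y[t + 1] = 0.5 * y[t] + rng.standard_normal()

err = blocked_cv_error_ar1(y)
```

Because each test set is a contiguous stretch of time, blocked CV avoids interleaving training and test observations, which matters for dependent data.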

Fast estimation of generalized linear latent variable models for performance and process data with ordinal, continuous, and count observed variables

Abstract

Different data types often occur in psychological and educational measurement, for example in computer-based assessments that record performance and process data (e.g., response times and the number of actions). Modelling such data requires specific models for each data type and must accommodate complex dependencies between multiple variables. Generalized linear latent variable models are suitable for modelling mixed data simultaneously, but estimation can be computationally demanding. A fast solution is to use Laplace approximations, but existing implementations of joint modelling of mixed data types are limited to ordinal and continuous data. To address this limitation, we derive an efficient estimation method that uses first- or second-order Laplace approximations to simultaneously model ordinal, continuous, and count data. We illustrate the approach with an example and conduct simulations to evaluate the performance of the method in terms of estimation efficiency, convergence, and parameter recovery. The results suggest that the second-order Laplace approximation achieves a higher convergence rate and produces accurate yet fast parameter estimates compared to the first-order Laplace approximation, while the time cost increases with higher model complexity. Additionally, models that consider the dependence of variables from the same stimulus fit the empirical data substantially better than models that disregard the dependence.
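The Laplace idea itself can be shown on a one-dimensional toy integral (this is a generic illustration, not the paper's GLLVM estimator): the integrand exp(-f(x)) is replaced by a Gaussian matched to f at its mode, turning an intractable integral into a closed-form expression.

```python
import numpy as np

def laplace_1d(neg_log_f, hess, mode):
    """First-order Laplace approximation to the integral of exp(-neg_log_f(x)):
    exp(-neg_log_f(mode)) * sqrt(2*pi / neg_log_f''(mode))."""
    return np.exp(-neg_log_f(mode)) * np.sqrt(2.0 * np.pi / hess(mode))

# toy "marginal likelihood" integrand: Gaussian-like with slightly lighter tails
neg_log_f = lambda x: 0.5 * x**2 + 0.05 * x**4
hess = lambda x: 1.0 + 0.6 * x**2            # second derivative of neg_log_f

approx = laplace_1d(neg_log_f, hess, mode=0.0)

# numeric reference value via the trapezoidal rule on a wide, fine grid
x = np.linspace(-10.0, 10.0, 20001)
fx = np.exp(-neg_log_f(x))
reference = np.sum((fx[:-1] + fx[1:]) * np.diff(x)) / 2.0
```

Because the quartic term thins the tails relative to the matched Gaussian, the first-order approximation overshoots slightly; higher-order Laplace corrections reduce exactly this kind of error.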

Identifiability and estimability of Bayesian linear and nonlinear crossed random effects models

Abstract

Crossed random effects models (CREMs) are particularly useful in longitudinal data applications because they allow researchers to account for the impact of dynamic group membership on individual outcomes. However, no research has determined what data conditions need to be met to sufficiently identify these models, especially the group effects, in a longitudinal context. This is a significant gap in the current literature as future applications to real data may need to consider these conditions to yield accurate and precise model parameter estimates, specifically for the group effects on individual outcomes. Furthermore, there are no existing CREMs that can model intrinsically nonlinear growth. The goals of this study are to develop a Bayesian piecewise CREM to model intrinsically nonlinear growth and evaluate what data conditions are necessary to empirically identify both intrinsically linear and nonlinear longitudinal CREMs. This study includes an applied example that utilizes the piecewise CREM with real data and three simulation studies to assess the data conditions necessary to estimate linear, quadratic, and piecewise CREMs. Results show that the number of repeated measurements collected on groups impacts the ability to recover the group effects. Additionally, functional form complexity impacted data collection requirements for estimating longitudinal CREMs.

Statistical inference for agreement between multiple raters on a binary scale

Abstract

Agreement studies often involve more than two raters or repeated measurements. In the presence of two raters, the proportion of agreement and of positive agreement are simple and popular agreement measures for binary scales. These measures were generalized to agreement studies involving more than two raters with statistical inference procedures proposed on an empirical basis. We present two alternatives. The first is a Wald confidence interval using standard errors obtained by the delta method. The second involves Bayesian statistical inference not requiring any specific Bayesian software. These new procedures show better statistical behaviour than the confidence intervals initially proposed. In addition, we provide analytical formulas to determine the minimum number of persons needed for a given number of raters when planning an agreement study. All methods are implemented in the R package simpleagree and the Shiny app simpleagree.
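For two raters these measures have simple closed forms; one common pairwise generalization to several raters pools all rater pairs, as in this sketch (the Wald and Bayesian inference procedures proposed in the paper are not reproduced here).

```python
import numpy as np
from itertools import combinations

def pairwise_agreement(ratings):
    """Proportion of agreement and of positive agreement for binary ratings,
    pooling all pairs of raters. `ratings` is an (n_subjects, n_raters)
    array of 0/1 values."""
    both_pos = both_neg = discordant = 0
    for i, j in combinations(range(ratings.shape[1]), 2):
        a, b = ratings[:, i], ratings[:, j]
        both_pos += int(np.sum((a == 1) & (b == 1)))
        both_neg += int(np.sum((a == 0) & (b == 0)))
        discordant += int(np.sum(a != b))
    p_agree = (both_pos + both_neg) / (both_pos + both_neg + discordant)
    p_pos = 2 * both_pos / (2 * both_pos + discordant)  # generalizes 2a/(2a+b+c)
    return p_agree, p_pos

# 4 subjects rated by 3 raters on a binary scale
ratings = np.array([[1, 1, 1],
                    [1, 1, 0],
                    [0, 0, 0],
                    [0, 1, 0]])
p_agree, p_pos = pairwise_agreement(ratings)
```

For the small example above both measures equal 2/3: of the 12 rater pairs, 4 agree positively, 4 agree negatively, and 4 disagree.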

Correcting for measurement error under meta‐analysis of z‐transformed correlations

Abstract

This study mainly concerns correction for measurement error in the meta-analysis of Fisher's z-transformed correlations. The disattenuation formula of Spearman (American Journal of Psychology, 15, 1904, 72) is used to correct the individual raw correlations in primary studies. The corrected raw correlations are then used to obtain the corrected z-transformed correlations. What remains little studied, however, is how best to correct the within-study sampling error variances of corrected z-transformed correlations. We focused on three within-study sampling error variance estimators corrected for measurement error, two of which were proposed in earlier studies and one of which is proposed in the current study: (1) the formula given by Hedges (Test validity, Lawrence Erlbaum, 1988), which assumes a linear relationship between corrected and uncorrected z-transformed correlations (linear correction); (2) one derived by the first-order delta method based on the average of corrected z-transformed correlations (stabilized first-order correction); and (3) one derived by the second-order delta method based on the average of corrected z-transformed correlations (stabilized second-order correction). Via a simulation study, we compared the performance of these estimators and of the sampling error variance estimator uncorrected for measurement error in terms of estimation and inference accuracy for the mean correlation, as well as the homogeneity test of effect sizes. In obtaining the corrected z-transformed correlations and within-study sampling error variances, coefficient alpha was used as a common reliability coefficient estimate. The results showed that, in terms of the estimated mean correlation, sampling error variances with the linear correction, the stabilized first-order and second-order corrections, and no correction performed similarly in general.
Furthermore, in terms of the homogeneity test, given a relatively large average sample size and normal true scores, the stabilized first-order and second-order corrections had type I error rates that were generally controlled as well as or better than those of the other estimators. Overall, the stabilized first-order and second-order corrections are recommended when true scores are normal, reliabilities are acceptable, the number of items per psychological scale is relatively large, and the average sample size is relatively large.
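The first step of the correction pipeline, Spearman's disattenuation followed by Fisher's z transform, can be sketched directly (the within-study variance corrections compared in the study are not reproduced here); the numbers are illustrative only.

```python
import math

def corrected_fisher_z(r, rel_x, rel_y):
    """Disattenuate a raw correlation with Spearman's formula, then apply
    Fisher's z transform to the corrected correlation."""
    r_corrected = r / math.sqrt(rel_x * rel_y)   # Spearman (1904) disattenuation
    return r_corrected, math.atanh(r_corrected)  # Fisher z of the corrected r

# e.g. a raw r of .30 with coefficient alpha .80 for both measures
r_c, z_c = corrected_fisher_z(0.30, 0.80, 0.80)
```

Here the corrected correlation is .30/.80 = .375, and its Fisher z value is atanh(.375) ≈ .394; the open question the study addresses is how to attach a sampling error variance to such corrected z values.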

Mixtures of t factor analysers with censored responses and external covariates: An application to educational data from Peru

Abstract

Analysing data from educational tests allows governments to make decisions for improving the quality of life of individuals in a society. One of the key responsibilities of statisticians is to develop models that provide decision-makers with pertinent information about the latent process that educational tests seek to represent. Mixtures of t factor analysers (MtFA) have emerged as a powerful device for model-based clustering and classification of high-dimensional data containing one or several groups of observations with fatter tails or anomalous outliers. This paper considers an extension of MtFA for robust clustering of censored data, referred to as the MtFAC model, by incorporating external covariates. The enhanced flexibility of including covariates in MtFAC enables cluster-specific multivariate regression analysis of dependent variables with censored responses arising from upper and/or lower detection limits of experimental equipment. An alternating expectation conditional maximization (AECM) algorithm is developed for maximum likelihood estimation of the proposed model. Two simulation experiments are conducted to examine the effectiveness of the techniques presented. Furthermore, the proposed methodology is applied to Peruvian data from the 2007 Early Grade Reading Assessment, and the results obtained from the analysis provide new insights regarding the reading skills of Peruvian students.