Assessment of Vision-Related Quality of Life
Published in Ching-Yu Cheng, Tien Yin Wong, Ophthalmic Epidemiology, 2022
Eva K. Fenwick, Preeti Gupta, Ryan E. K. Man
PROMs developed using CTT methods are called first-generation instruments. Second-generation vision-related PROMs are those developed or subsequently validated using modern psychometric theory, particularly Rasch analysis. Rasch analysis is a form of Item Response Theory and a technique for building and/or evaluating measurement scales for patient-reported outcomes.100 Rasch analysis overcomes the limitations associated with the simple ordinal ‘summary’ scoring methods used in first-generation instruments,101 and provides a means of converting ordinal scores to estimates of interval measures. Additionally, Rasch analysis provides comprehensive psychometric evaluation not available in CTT;102 for example, how well each item in the questionnaire assesses the latent construct being measured (e.g., activity limitation); how well the items are able to discriminate between different strata of respondents; how well the items ‘target’ the respondents’ level of the construct; and whether the response options within the scale are working as intended (i.e., being selected logically by participants).103
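A minimal sketch of the ordinal-to-interval conversion the excerpt describes, assuming the dichotomous Rasch model and known item difficulties in logits (the difficulty values and function names below are illustrative, not from the chapter):

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def person_measure(raw_score, difficulties, tol=1e-8):
    """Convert an ordinal raw score to an interval (logit) measure by solving
    sum_j P_j(theta) = raw_score with Newton-Raphson, item difficulties assumed known."""
    theta = 0.0
    for _ in range(100):
        p = rasch_prob(theta, difficulties)
        expected = p.sum()             # expected raw score at current theta
        info = (p * (1.0 - p)).sum()   # test information = derivative of expected score
        step = (raw_score - expected) / info
        theta += step
        if abs(step) < tol:
            break
    return theta

# Hypothetical item difficulties (logits); extreme scores (0 or the maximum)
# have no finite estimate and need special handling in practice.
b = np.array([-1.5, -0.5, 0.0, 0.7, 1.8])
print(person_measure(raw_score=3, difficulties=b))
```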
A Synthetic Overview
Published in Trevor G. Bond, Zi Yan, Moritz Heene, Applying the Rasch Model, 2020
Trevor G. Bond, Zi Yan, Moritz Heene
A hypothesis that the latent trait is actually quantitative: We are challenged at every turn concerning the ‘assumptions of the Rasch model’, but almost all of us in the human sciences, right across TST, IRT, SEM, and G theory, remain completely blind to our far more fundamental assumption—that the latent traits we investigate are actually quantitative. As an absolute minimum requirement, we should report our results in terms of that hypothesis by detailing the extent and nature of our data’s fulfillment of the requirements of that hypothesis.
Reliability II: Advanced methods
Published in Claudio Violato, Assessing Competence in Medicine and Other Health Professions, 2018
IRT, also known as latent trait theory, is based on the relationship between performance on a test item and the ability or trait that the item was designed to measure. The items aim to measure the ability (or trait) that underlies the performance.
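A small illustration of that relationship, assuming a two-parameter logistic (2PL) item with hypothetical discrimination and difficulty values:

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct response as a
    function of the latent trait theta, with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item: fairly discriminating (a=1.2), moderately hard (b=0.5).
for theta in (-2.0, 0.0, 0.5, 2.0):
    print(f"theta={theta:+.1f}  P(correct)={icc_2pl(theta, a=1.2, b=0.5):.2f}")
```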
Imputation of missing values within WHODAS 2.0 data collected from low back pain patients using the response function approach
Published in Disability and Rehabilitation, 2023
Duygu Siddikoglu, Beyza Doganay Erdogan, Derya Gokmen, Sehim Kutlay
The RF imputation method was first proposed by Sijtsma and Van der Ark for data related to tests or scales [14]. In the one-parameter form of the IRT model, for a respondent with latent trait level θ, the probability of having a score x on item j is called the item response function, denoted P(Xj = x | θ). RF imputation uses the estimated item response function to impute item scores, and it has been shown to be an efficient imputation method for unidimensional scales in simulation studies [15–17]. Incomplete datasets with missing data (proportions of 10%, 30%, 50%, and 80%, under the MAR mechanism) were imputed. The imputation was carried out in three steps. First, the missing data were filled in 10 times, generating 10 unique complete datasets. Second, each dataset was analyzed separately. Third, the results from each analysis were combined to produce an overall mean and standard deviation for each missing value. The missing values were predicted from the estimated item response functions.
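A minimal sketch of this three-step procedure, simplified to dichotomous items and assuming person trait estimates and item difficulties are already available from a fitted one-parameter model (all names and values here are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b):
    """One-parameter item response function P(Xj = 1 | theta)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def rf_impute(data, theta, b, n_imputations=10):
    """Response-function imputation for dichotomous items.
    data: (persons x items) array with np.nan marking missing entries.
    theta: estimated trait level per person; b: estimated difficulty per item.
    Returns the pooled per-cell mean and SD across the imputed datasets."""
    miss = np.isnan(data)
    probs = p_correct(theta[:, None], b[None, :])
    draws = []
    for _ in range(n_imputations):
        filled = data.copy()
        # Steps 1-2: draw each missing score from its estimated response
        # function, then treat the filled dataset as complete.
        filled[miss] = rng.binomial(1, probs[miss])
        draws.append(filled)
    draws = np.stack(draws)
    # Step 3: combine the completed datasets into a pooled mean and SD per cell.
    return draws.mean(axis=0), draws.std(axis=0, ddof=1)

# Tiny illustrative dataset: 3 persons x 4 items, two missing responses.
data = np.array([[1, 0, np.nan, 1],
                 [0, 0, 0, np.nan],
                 [1, 1, 1, 1]], dtype=float)
mean, sd = rf_impute(data, theta=np.array([0.2, -1.0, 1.5]),
                     b=np.array([-0.5, 0.0, 0.5, 1.0]))
print(mean)
```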
Using item response theory to appraise key feature examinations for clinical reasoning
Published in Medical Teacher, 2022
Simon Zegota, Tim Becker, York Hagmayer, Tobias Raupach
Afterwards, the same data were analysed using an IRT approach. First, the aforementioned assumptions were tested. To test whether a one-dimensional construct underlay the exam, we used modified parallel analysis (MPA) and Confirmatory Factor Analysis (CFA). Modified parallel analysis (Drasgow and Lissak 1983), which is based on parallel analysis (Horn 1965), assesses the similarity of an empirical dataset to a dataset generated under the assumption of unidimensionality. CFA was performed to assess the fit of a model with the number of factors suggested by the MPA. The assumption of local independence was assessed using Yen’s Q3 (Yen 1984). Critical values for our sample size and number of variables were adopted from Christensen et al. (Christensen et al. 2017). After the assumptions of IRT were tested, a 1PLM, 2PLM, and 3PLM were fitted to the data. To determine the simplest and best-fitting model, the models were compared using a likelihood ratio test. Item response theory analyses were presented as item difficulty and item discrimination, as well as item characteristic curves, item information curves, and a test information function.
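A minimal sketch of the model-comparison step, assuming the nested models have already been fitted and their log-likelihoods extracted (the log-likelihood values and exam length below are hypothetical):

```python
from scipy.stats import chi2

def lr_test(ll_simple, ll_complex, df_diff):
    """Likelihood-ratio test between nested IRT models:
    2*(logL_complex - logL_simple) ~ chi-square with df_diff degrees of freedom."""
    stat = 2.0 * (ll_complex - ll_simple)
    return stat, chi2.sf(stat, df_diff)

n_items = 30  # hypothetical exam length

# Moving from 1PLM to 2PLM frees the item discriminations (roughly one
# parameter per item; the exact df depends on the parameterization used).
stat, p = lr_test(ll_simple=-4210.3, ll_complex=-4177.9, df_diff=n_items)
print(f"1PLM vs 2PLM: chi2={stat:.1f}, p={p:.4f}")
```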
Why Should Assessment Clinicians Care about Factor Analysis?
Published in Journal of Personality Assessment, 2022
Cooke et al. note that their findings stand in contrast with many published studies reporting a multi-dimensional structure for various measures of psychopathy, including the revised Hare Psychopathy Checklist (PCL-R; Hare, 2003). They account for this discrepancy by citing the use of linear factor analysis in the previous literature, noting further that some problematic assumptions about item linearity in factor analysis are avoided in the S and M Framework for test validation. A technical exposition on the differences between the item-response theory approach used by Cooke et al. and classical test theory approaches has been effectively presented elsewhere (Reise & Henson, 2003). Instead, I will use this occasion to discuss the place of factor analysis in construct validation of personality tests and how the findings of factor analytic studies might be helpful to practitioners of clinical assessment.