Assessment of Vision-Related Quality of Life
Published in Ching-Yu Cheng, Tien Yin Wong, Ophthalmic Epidemiology, 2022
Eva K. Fenwick, Preeti Gupta, Ryan E. K. Man
It is important to critically review the psychometric properties of vision-specific PROMs in order to ensure that the measurement they provide is robust and reliable. Most papers reporting on PROM validation will provide at least some Classical Test Theory (CTT) metrics, such as validity (i.e., the degree to which a PROM measures what it purports to measure), and reliability (i.e., the degree to which the measurement is free from measurement error).
Reliability, Validity, and the Measurement of Change in Serial Assessments of Athletes
Published in Mark R. Lovell, Ruben J. Echemendia, Jeffrey T. Barth, Michael W. Collins, Traumatic Brain Injury in Sports, 2020
Michael D. Franzen, Robert J. Frerichs, Grant L. Iverson
As previously noted, the regression methods described above are not appropriate when the assumptions of multiple regression are violated. The relation between the pretest and posttest scores should be linear and homoscedastic, and the predictor(s) should be measured without error (Pedhazur, 1982). The classical test theory assumption that all measurement is fallible is thus inconsistent with this regression assumption. McSweeney et al. (1993) recommended that this method not be used when the data for change are not normally distributed. Likewise, measures prone to floor or ceiling effects are not amenable to use with regression methods. Finally, one needs to consider the appropriateness of the regression equation for a specific individual: the accuracy of a regression equation may be compromised when it is applied to individuals whose scores or characteristics fall outside the range of the reference sample from which it was derived. It is not clear how robust regression methods are to violations of these assumptions.
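The regression approach described above can be sketched in a few lines: fit the pretest–posttest relation in a reference sample, then judge an individual's retest score by its standardized distance from the predicted score. This is a minimal illustration with hypothetical scores, not the authors' procedure; the final comment echoes the caveat in the text about examinees outside the reference range.

```python
import numpy as np

# Hypothetical reference-sample pretest and posttest scores
pre = np.array([85., 90., 100., 105., 110., 95., 100., 115.])
post = np.array([88., 92., 101., 107., 112., 96., 103., 118.])

# Fit posttest = b0 + b1 * pretest by least squares
b1, b0 = np.polyfit(pre, post, deg=1)
predicted = b0 + b1 * pre

# Standard error of estimate of the regression (n - 2 degrees of freedom)
see = np.sqrt(np.sum((post - predicted) ** 2) / (len(pre) - 2))

# For a new examinee, compare the observed retest score to the predicted one;
# a standardized residual well beyond +/-1.96 suggests reliable change.
obs_pre, obs_post = 102.0, 95.0
z = (obs_post - (b0 + b1 * obs_pre)) / see
print(round(z, 2))

# Caveat from the text: this z is only trustworthy if obs_pre lies within
# the range of the reference sample used to fit the equation.
```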
Reliability II: Advanced methods
Published in Claudio Violato, Assessing Competence in Medicine and Other Health Professions, 2018
It is called classical test theory because it was the first formal measurement theory developed in psychometrics: any observed score is composed of the true score plus error of measurement. Thus, X (observed score) = T (true score) + e (error of measurement). The early foundational work of scholars like Karl Pearson, Charles Spearman, E. L. Thorndike, and Frederick Kuder was based on this idea. This theory and its applications are detailed in Chapter 8.
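The decomposition X = T + e can be made concrete with a small simulation. In this sketch the true-score and error standard deviations (10 and 5) are arbitrary illustrative choices; because T and e are independent, the observed variance is the sum of the two component variances, and reliability is the true-score share of it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical true scores T and independent measurement error e
T = rng.normal(loc=50, scale=10, size=n)
e = rng.normal(loc=0, scale=5, size=n)

# Classical test theory: observed score = true score + error
X = T + e

# With independent T and e, var(X) ~= var(T) + var(e), and reliability
# is the proportion of observed-score variance that is true-score variance:
# 10^2 / (10^2 + 5^2) = 0.80 in this simulation.
reliability = T.var() / X.var()
print(round(reliability, 2))
```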
Using item response theory to appraise key feature examinations for clinical reasoning
Published in Medical Teacher, 2022
Simon Zegota, Tim Becker, York Hagmayer, Tobias Raupach
Classical test theory is the most frequently used measurement model for evaluating assessments in medical education (Downing 2003). Common item statistics based on this theory include item difficulty (i.e. the proportion of respondents answering an item correctly) and item-test correlation (i.e. the correlation between performance on the item and the total score across all items), while common overall test statistics include Cronbach’s alpha (i.e. an index of internal consistency derived from the mean inter-item correlation and the number of items), the test-score mean, and the test-score standard deviation (Hambleton and Jones 2005). These metrics allow researchers and practitioners to evaluate exams and their individual items quickly. In fact, measures of internal consistency are regularly used to judge the appropriateness of exams in medical education (Wass et al. 2001).
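The item and test statistics named above are straightforward to compute. This sketch uses a hypothetical 6-examinee, 4-item matrix of 0/1 scores; the Cronbach's alpha formula is the standard k/(k-1) * (1 - sum of item variances / total-score variance).

```python
import numpy as np

# Hypothetical 0/1 scored responses: 6 examinees x 4 items
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])

# Item difficulty: proportion of respondents answering each item correctly
difficulty = scores.mean(axis=0)

# Item-test correlation: correlation of each item with the total score
total = scores.sum(axis=1)
item_test_r = [np.corrcoef(scores[:, j], total)[0, 1]
               for j in range(scores.shape[1])]

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / var(total score))
k = scores.shape[1]
alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum() / total.var(ddof=1))

print(difficulty)                 # per-item proportion correct
print(np.round(item_test_r, 2))  # per-item correlation with the total
print(round(alpha, 2))           # internal consistency of the whole test
```

With so few items and examinees alpha will be low; in practice these statistics are computed on full cohorts and exam-length tests.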
Detecting mental health problems after paediatric acquired brain injury: A pilot Rasch analysis of the strengths and difficulties questionnaire
Published in Neuropsychological Rehabilitation, 2021
Robyn Henrietta McCarron, Fergus Gracey, Andrew Bateman
Traditionally, the psychometric properties of assessment measures, in terms of validity and reliability, have been investigated using methods based on Classical Test Theory (CTT). The Rasch Measurement Model (Rasch, 1960) is a modern psychometric technique that falls within the parameters of Item Response Theory (IRT) (Hambleton et al., 1991). Unlike CTT, the Rasch model has the advantage of not assuming equivalence between ordinal and interval scales (Hobart & Cano, 2009). Nor does it rely on the assumption that observed scores are composed of a true score and an error (neither of which can be determined directly) in order to estimate the reliability of the observed score. Instead, the Rasch model is based on assumptions that readily make sense within a real-world context. It tests the assumption that people respond in a probabilistic but ordered manner based on both their underlying trait (be it ability or disease severity) and the level of difficulty assessed by an item or question. It maintains that an assessment measure should not be biased towards individuals with certain characteristics or previous responses, and it holds that, for a total score to be meaningful, it needs to reflect a single unidimensional construct. Rasch analysis has been demonstrated to be an insightful method for examining the psychometric properties of rating scales in different populations, including people with ABI (Bateman et al., 2009; Simblett et al., 2015).
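The "probabilistic but ordered" response behaviour the Rasch model tests has a simple closed form for dichotomous items: the probability of a correct (or endorsed) response depends only on the difference between the person's trait level and the item's difficulty, both on a logit scale. A minimal sketch:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Rasch model for a dichotomous item: probability of a correct or
    endorsed response given person trait level theta (ability or disease
    severity) and item difficulty b, both in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When trait level equals item difficulty the response is a coin flip.
print(rasch_probability(0.0, 0.0))   # 0.5

# A person two logits above an item one logit below average difficulty
# almost certainly responds correctly.
print(rasch_probability(2.0, -1.0))
```

Because only the difference theta - b enters the model, total score is a sufficient statistic for the trait, which is what makes a raw sum meaningful when the data fit the model.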
Identification and Evaluation of Items for Vitreoretinal Diseases Quality of Life Item Banks
Published in Ophthalmic Epidemiology, 2019
Mallika Prem Senthil, Eva K Fenwick, Ecosse Lamoureux, Jyoti Khadka, Konrad Pesudovs
The impact of major blinding retinal diseases (age-related macular degeneration and diabetic retinopathy) on quality of life (QoL) has been extensively studied.1–7 However, the impact of other retinal and vitreoretinal diseases (e.g. hereditary degenerations and dystrophies, vascular occlusions, macular hole, epiretinal membrane, and other vitreoretinopathies) on people’s QoL is poorly understood due to a lack of appropriate patient reported outcome (PRO) instruments. There are currently 29 PRO instruments available for retinal diseases, 17 of which were developed for other retinal and vitreoretinal diseases.8 Of these 17 PRO instruments, 11 relate to hereditary retinal disorders (nine to retinitis pigmentosa, one to congenital stationary night blindness, one to Stargardt’s macular dystrophy), three relate to macular hole, and one relates to cytomegalovirus retinitis.9–23 These PRO instruments were mostly developed using traditional methods of psychometric assessment (i.e. Classical Test Theory). Classical Test Theory assumes that every item on a questionnaire has the same difficulty level and therefore weights all items equally in the total score. Also, the ordinal integer response scale used for each item assumes equal separation and uniform change between response categories.24 Both of these assumptions undermine the ability of instruments scored with Classical Test Theory to measure precisely and accurately.25
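The equal-weighting assumption criticized above can be shown in two lines: under sum-scoring, qualitatively different response patterns collapse to the same raw score. The item ordering here is hypothetical.

```python
# Hypothetical 0/1 responses to three items ordered easiest -> hardest
person_a = [1, 1, 0]   # correct on the two easiest items
person_b = [1, 0, 1]   # correct on the easiest and the hardest item

# Classical sum-scoring weights every item equally, so these two
# different ability patterns receive the identical raw score.
print(sum(person_a), sum(person_b))  # 2 2
```

Rasch/IRT scoring instead estimates a trait level from which pattern and item difficulty are not discarded.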