Self-esteem scale: Translation and validation in Malaysian adults living with asthma
Published in Elida Zairina, Junaidi Khotib, Chrismawan Ardianto, Syed Azhar Syed Sulaiman, Charles D. Sands, Timothy E. Welty, Unity in Diversity and the Standardisation of Clinical Pharmacy Services, 2017
S. Ahmad, M. Qamar, F.A. Shaikh, N.E. Ismail, A.I. Ismail, M.A.M. Zim
In the present study, the construct validity of the items was also examined with the Rasch model. The Rasch model is a mathematical model within modern item response theory. It posits that the probability of endorsing any response category of an item depends solely on the ability of the person and the difficulty of the item. The model uses the log-odds unit, or logit, scale, which is considered a better and more accurate scale for analysing ordinal raw data, including Likert-scale data (Baker, 2001). The output tables for fit statistics were generated using Bond and Fox software®. Infit/outfit mean-square values were used to verify the construct validity of each item, and PTMEA Corr values assessed the ability of each item to distinguish between respondents of different ability levels (Linacre 2002). All items in the RSES-M fitted the Rasch model, supporting the construct validity of the translated scale, consistent with the initial pilot study (Ahmad et al. 2016).
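The logit formulation described above can be sketched as follows (a minimal illustration; the function name and the ability/difficulty values are ours, not from the study):

```python
import math

def rasch_probability(ability, difficulty):
    """Probability of endorsing a dichotomous item under the Rasch model.

    Both arguments are on the logit (log-odds) scale; the probability
    depends only on their difference."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person whose ability equals the item difficulty has a 50% chance:
print(rasch_probability(1.0, 1.0))   # 0.5
# One extra logit of ability multiplies the odds of endorsement by e:
print(rasch_probability(2.0, 1.0))
```

Because only the difference of the two logit values enters the formula, equal gaps in ability correspond to equal changes in log-odds, which is why the logit scale is treated as an interval scale for ordinal raw data.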
Patient-Reported Outcomes: Development and Validation
Published in Demissie Alemayehu, Joseph C. Cappelleri, Birol Emir, Kelly H. Zou, Statistical Topics in Health Economics and Outcomes Research, 2017
Joseph C. Cappelleri, Andrew G. Bushmakin, Jose Ma. J. Alvir
Three assumptions underlie the successful application of the Rasch model (and IRT models in general): unidimensionality, local independence, and correct model specification. The assumption of unidimensionality requires that a scale consists of items that tap into only one dimension. Local independence means that, for a subsample of individuals who have the same level on the attribute, there should be no correlation among the items. Correct model specification, which is not unique to Rasch (or IRT) models, is necessary at both the item level and person level. For details on these assumptions and how to evaluate them, the reader is referred elsewhere (Bond and Fox, 2015; Embretson and Reise, 2000; Fischer and Molenaar, 1995; Hambleton et al., 1991).
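A small simulation can make the local-independence assumption concrete: if every simulated respondent shares the same level on the attribute, responses to two Rasch items should be essentially uncorrelated. All names and parameter values here are illustrative, not from the chapter:

```python
import math
import random

random.seed(42)

def rasch_prob(theta, b):
    """Rasch probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Fix the attribute level: every simulated person has theta = 0, so any
# remaining correlation between the two items would violate local independence.
n = 20000
x = [1 if random.random() < rasch_prob(0.0, -0.5) else 0 for _ in range(n)]
y = [1 if random.random() < rasch_prob(0.0, 0.5) else 0 for _ in range(n)]

mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
corr = cov / math.sqrt(mx * (1 - mx) * my * (1 - my))
print(round(corr, 3))  # close to 0
```

In real data, an analogous check correlates item residuals after conditioning on the estimated person measures; a markedly nonzero residual correlation flags a violation of local independence.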
A multidimensional Rasch model for multiple system estimation where the number of lists changes over time
Published in Dankmar Böhning, Peter G.M. van der Heijden, John Bunge, Capture-Recapture Methods for the Social and Medical Sciences, 2017
Elvira Pelle, David J. Hessen, Peter G. M. van der Heijden
The Rasch model is widely used by psychometricians to explain the characteristics and performance of a test; the basic idea is that the probability of a response of an individual to an item can be modelled as a function of the difficulty of the item and the latent ability of the individual. In a capture-recapture context, a correct or incorrect response to an item is replaced by presence or absence in a list, and heterogeneity among individuals is modelled in terms of constant apparent dependence between lists (see International Working Group for Disease Monitoring and Forecasting, IWGDMFa [154]), introducing into the model the first-order heterogeneity parameter H1 (all two-factor interaction terms are assumed to be equal and positive), the second-order heterogeneity parameter H2 (all three-factor interaction terms are assumed to be equal and positive), and so on.
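The analogy can be sketched in code: presence in a list plays the role of a correct response, and conditional on the individual's latent parameter the lists are treated as independent. Function names and values are illustrative, not from the chapter:

```python
import math

def presence_prob(theta, list_effect):
    """Rasch-style probability that an individual with latent 'catchability'
    theta appears in a list whose effect (analogous to item difficulty)
    is list_effect, both on the logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - list_effect)))

def history_prob(theta, list_effects, history):
    """Probability of a full presence/absence pattern across lists,
    assuming the lists are conditionally independent given theta."""
    p = 1.0
    for b, h in zip(list_effects, history):
        pj = presence_prob(theta, b)
        p *= pj if h else (1.0 - pj)
    return p

# Two equally 'difficult' lists, a median individual: each of the four
# presence/absence histories is equally likely.
print(history_prob(0.0, [0.0, 0.0], (1, 1)))  # 0.25
```

Marginalising such histories over a heterogeneous distribution of theta is what induces the apparent positive dependence between lists that the H1, H2, ... parameters capture.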
Utility of a multimodal computer-based assessment format for assessment with a higher degree of reliability and validity
Published in Medical Teacher, 2023
Johan Renes, Cees P.M. van der Vleuten, Carlos F. Collares
Limitations of this study include the small sample size, the single-center set-up, and the stability of the CBA infrastructure. The small sample size precludes the use of IRT models that estimate parameters for discrimination and pseudo-guessing, which may have augmented the disadvantage of MCQs in the Rasch model-based analyses of reliability, measurement error, and validity based on internal structure, as given by the infit and outfit values. Using the Rasch model with samples smaller than 50 might lead to paradoxical results in comparison with larger samples, which limits the generalizability of the findings (Chen et al. 2014). Still, despite the difference in methods, the results obtained in this study are aligned with those of Sam et al. (2018) in demonstrating lower performance of MCQs.
Feedback to support examiners’ understanding of the standard-setting process and the performance of students: AMEE Guide No. 145
Published in Medical Teacher, 2022
Mohsen Tavakol, Brigitte E. Scammell, Angela P. Wetzel
As previously stated, the correlation between p-values for entire groups of students and the ratings rendered by the standard setters may provide misleading feedback about the process used to calculate that test’s pass mark. Similarly, a small number of borderline students who scored close to the passing mark may produce a biased picture of the standard setters’ true judgment of the borderline students’ performance. Item response theory (IRT) models, e.g. the Rasch model, mitigate the issues attached to selecting borderline students close to the pass mark. The Rasch model gives the conditional probability that a student will answer a question correctly, given the student’s ability, and Angoff ratings can be conceptualised in these terms. Interested readers can find further information about the Rasch model in AMEE Guide No. 72 (Tavakol and Dennick 2012).
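This conceptualisation can be sketched as follows: at a hypothetical borderline (cut-score) ability, the Rasch model yields, for each item, the conditional probability of a correct answer, which is the quantity an Angoff-style rating estimates. The ability and difficulty values below are illustrative, not from the guide:

```python
import math

def p_correct(theta, difficulty):
    """Rasch probability that a student of ability theta answers
    a dichotomous item of the given difficulty correctly (logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Hypothetical borderline ability (cut score) and item difficulties in logits:
theta_borderline = 0.2
difficulties = [-1.0, 0.2, 1.5]

# Each conditional probability at the borderline ability corresponds to an
# Angoff-style rating for that item:
for b in difficulties:
    print(round(p_correct(theta_borderline, b), 2))  # prints 0.77, 0.5, 0.21
```

Summing these probabilities over all items gives the expected test score of a borderline student, i.e. a model-based pass mark that does not depend on happening to observe students near the cut.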
Examining the validity of the drivers of COVID-19 vaccination acceptance scale using Rasch analysis
Published in Expert Review of Vaccines, 2022
Chia-Wei Fan, Jung-Sheng Chen, Frimpong-Manso Addo, Emma Sethina Adjaottor, Gifty Boakye Amankwaah, Cheng-Fang Yen, Daniel Kwasi Ahorsu, Chung-Ying Lin
There are two fundamental assumptions when applying the Rasch model. First, a person with more ability always has a higher probability of passing any test item than a person with less ability. Second, a test item considered harder is harder for every person than a test item considered easier. Based on these assumptions, the Rasch model linearly transforms the raw scores into interval measures, and the enrolled subjects and test items can be scaled along a single linear latent continuum, so that the person’s characteristics (in this case, participants’ attitudes toward and consideration of COVID-19 vaccination) and item difficulty can be compared. As a result, Rasch analysis is increasingly recognized as a more powerful examination of item and scale performance, which can inform clinical decision-making [21]. Moreover, the Rasch model is sample-free [22], because the outcome of a Rasch analysis is governed only by the person’s ability and the item’s difficulty. In this regard, even though the current study examined the DrVac-COVID19S only in university students, the findings can be generalized to nonstudents due to the sample-free principle.
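The raw-score-to-interval transformation can be sketched as follows: for a dichotomous Rasch scale, the logit measure corresponding to a raw score is the ability at which the expected score equals that raw score. The item difficulties and the simple bisection solver are illustrative, not from the study:

```python
import math

def expected_score(theta, difficulties):
    """Expected raw score at ability theta: sum of item probabilities."""
    return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties)

def measure_from_raw(raw, difficulties, lo=-6.0, hi=6.0, tol=1e-9):
    """Bisection: find the logit measure whose expected score equals the
    raw score (valid for raw scores strictly between 0 and the maximum)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_score(mid, difficulties) < raw:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical item difficulties (logits) and a raw score of 3 out of 5:
b = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(round(measure_from_raw(3, b), 2))  # ≈ 0.59 logits
```

Because the mapping depends only on the item difficulties and the person's raw score, the resulting measure does not change with the composition of the calibrating sample, which is the sense in which the model is sample-free.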