Assessment of Vision-Related Quality of Life
Published in Ching-Yu Cheng, Tien Yin Wong, Ophthalmic Epidemiology, 2022
Eva K. Fenwick, Preeti Gupta, Ryan E. K. Man
A nimble solution to these issues is computerized adaptive testing, which is a method for administering items (questions) from a calibrated item bank.111 CAT iteratively administers items from the bank that are selected according to the respondent’s level of impairment (i.e., it chooses the items that will provide the greatest amount of information).112 Subsequent items are selected based on the examinee’s previous responses, and selection proceeds until a pre-defined stopping criterion (e.g., a target measurement precision or number of items) is reached.113 This ensures that items are tailored to the individual’s level of impairment.111 As such, compared to PROMs with a fixed set of items, CATs require fewer items and less time to arrive at equally precise scores, and they reduce test-taker burden by asking only relevant questions tailored to the test-taker’s ability level. CAT also facilitates automated data entry and scoring.
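As a rough illustration of the CAT loop described here, the Python sketch below selects, from a simulated calibrated bank, the unused item with the greatest Fisher information at the current ability estimate, updates the estimate after each response, and stops once a precision target or item limit is reached. The Rasch (1PL) model, the 50-item bank, and the stopping values are illustrative assumptions, not taken from any instrument discussed above.

```python
import numpy as np

# Minimal sketch of a CAT administration loop under a Rasch (1PL) model.
# Item bank, difficulties, and stopping rules are illustrative assumptions.

rng = np.random.default_rng(0)
bank_difficulty = rng.normal(0.0, 1.0, size=50)   # calibrated item difficulties (logits)
true_theta = 0.8                                   # respondent's (unknown) latent level
theta_grid = np.linspace(-4, 4, 161)               # grid for the posterior over theta
prior = np.exp(-0.5 * theta_grid**2)               # standard-normal prior (unnormalised)

def p_correct(theta, b):
    """Rasch probability of a positive response to an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

posterior = prior.copy()
administered, max_items, se_target = [], 15, 0.4

for _ in range(max_items):
    theta_hat = np.sum(theta_grid * posterior) / np.sum(posterior)   # EAP estimate
    # Select the unused item with maximum information at the current estimate.
    candidates = [i for i in range(len(bank_difficulty)) if i not in administered]
    next_item = max(candidates, key=lambda i: item_information(theta_hat, bank_difficulty[i]))
    administered.append(next_item)

    # Simulate the response and update the posterior over theta.
    b = bank_difficulty[next_item]
    response = rng.random() < p_correct(true_theta, b)
    likelihood = p_correct(theta_grid, b) if response else 1.0 - p_correct(theta_grid, b)
    posterior *= likelihood

    # Stop once the posterior standard error falls below the target precision.
    theta_hat = np.sum(theta_grid * posterior) / np.sum(posterior)
    se = np.sqrt(np.sum((theta_grid - theta_hat) ** 2 * posterior) / np.sum(posterior))
    if se < se_target:
        break

print(f"Administered {len(administered)} items; estimate {theta_hat:.2f} (SE {se:.2f})")
```

In a typical run the loop reaches the precision target well before exhausting the item limit, which is the efficiency gain relative to fixed-length PROMs noted above.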
Reliability II: Advanced methods
Published in Claudio Violato, Assessing Competence in Medicine and Other Health Professions, 2018
Advantages of IRT compared to both CTT and G-theory: IRT provides several improvements in assessing items and people. The difficulty of items and the ability of people are scaled on the same metric, so the difficulty of an item and the ability of a person can be compared. IRT models can be test and sample independent, thus providing greater flexibility where different samples or test forms are used. These qualities of IRT are the basis for computerized adaptive testing.
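A minimal sketch of the same-metric point under a Rasch model, with invented values: because ability and difficulty are both expressed in logits, their difference can be read directly as how likely the person is to succeed on the item.

```python
import numpy as np

# Ability and difficulty share one logit scale in IRT, so they can be compared
# directly. The values below are illustrative, not from any real instrument.

def rasch_p(theta, b):
    """Probability of a correct/positive response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

person_ability = 0.5          # logits
item_difficulty = -0.3        # logits, same scale

gap = person_ability - item_difficulty   # > 0: success more likely than not
print(f"Ability - difficulty = {gap:+.1f} logits, "
      f"P(success) = {rasch_p(person_ability, item_difficulty):.2f}")
```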
Tests
Published in Louis Cohen, Lawrence Manion, Keith Morrison, Research Methods in Education, 2017
Louis Cohen, Lawrence Manion, Keith Morrison
Computerized adaptive testing (Wainer and Dorans, 2000; Aiken, 2003; Wainer, 2015) focuses on which particular test items to give to participants, based on their responses to previous items. It is particularly useful for large-scale testing, where a wide range of ability can be expected. Here a test is devised that enables the tester to cover this wide range of ability, hence it must include items ranging from easy to difficult: if the test is too easy it does not enable a range of high ability to be charted (testees simply get all the answers right), and if it is too difficult it does not enable a range of low ability to be charted (testees simply get all the answers wrong). We find out very little about a testee if we ask a battery of questions which are too easy or too difficult. Further, it is more efficient and reliable if a test can spare high-ability testees from having to work through a mass of easy items in order to reach the more difficult ones, and spare low-ability testees from having to guess the answers to items that are too difficult for them. Hence it is useful to have a test that is flexible and can be adapted to the testees. For example, if a testee found an item too hard the next item could adapt to this and be easier; conversely, if a testee was successful on an item the next item could be harder.
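The up-and-down adaptation described in this passage can be sketched very simply: present a harder item after a correct response and an easier one after an incorrect response, so that the difficulty homes in on the testee’s level. The ability value, step sizes and response model below are illustrative assumptions, not drawn from the cited texts.

```python
import numpy as np

# Toy staircase version of the adaptive idea: harder item after a correct
# answer, easier item after a wrong one. Values are purely illustrative.

rng = np.random.default_rng(1)
true_ability = 1.2
difficulty, step = 0.0, 0.8          # start with a medium-difficulty item

for trial in range(10):
    p = 1.0 / (1.0 + np.exp(-(true_ability - difficulty)))   # logistic response model
    correct = rng.random() < p
    print(f"item difficulty {difficulty:+.2f} -> {'correct' if correct else 'wrong'}")
    difficulty += step if correct else -step
    step *= 0.8                       # shrink the step so the difficulty settles near the testee's level
```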
Item reduction of the patient-rated wrist evaluation using decision tree modelling
Published in Disability and Rehabilitation, 2020
Mark J.W. van der Oest, Jarry T. Porsius, Joy C. MacDermid, Harm P. Slijper, Ruud W. Selles
The decision tree approach used in the present study has a number of advantages. A first advantage is that we were able to maintain both subscores (pain and disability) and thus the multidimensionality of the original PRWE; the QuickDASH and Brief MHQ, for example, did not maintain subscores. In addition, maintaining this multidimensionality makes it possible to combine and compare data from the DT-PRWE with data from the full questionnaire, since previously completed questionnaires of the full version can be converted to a DT-PRWE score. Since a decision tree questionnaire can only be administered electronically, we made an electronic version of the questionnaire, available for download, in the open-source LimeSurvey software to facilitate use of the DT-PRWE. This questionnaire can be administered over an internet connection or completed offline. In contrast, computerized adaptive testing (CAT) based on item response theory requires a continuous connection to a server to administer the questionnaire. Another advantage of CHAID over CAT is its potential efficiency in reducing items, as has been shown in previous research [21].
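For readers unfamiliar with the general approach, the following Python sketch shows the item-reduction idea in miniature: a shallow decision tree is trained to predict the full-questionnaire score from a handful of items. scikit-learn’s CART algorithm stands in for the CHAID used in the study, and the responses are simulated, so this illustrates the technique rather than the DT-PRWE itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Sketch of decision-tree item reduction: predict the full-scale score from a
# small subset of items. CART stands in for CHAID; the data are simulated.

rng = np.random.default_rng(0)
n_patients, n_items = 500, 15
responses = rng.integers(0, 11, size=(n_patients, n_items))   # 0-10 item ratings
full_score = responses.sum(axis=1)                            # full-questionnaire total

subset = responses[:, :4]                                     # a few candidate items
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(subset, full_score)
print("R^2 of short-form tree vs full score:", round(tree.score(subset, full_score), 2))
```

In practice the tree’s branching structure also dictates which item is asked next, which is why such a questionnaire is administered electronically.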
Global rating of change: perspectives of patients with lumbar impairments and of their physical therapists
Published in Physiotherapy Theory and Practice, 2019
Ying-Chih Wang, Bhagwant S. Sindhu, Jay Kapellusch, Sheng-Che Yen, Leigh Lehman
Development, simulation, validation, use, and clinical interpretation of the 25-item LCAT survey have been described (Hart, Mioduski, Werneke, and Stratford, 2006; Hart et al., 2012, 2010; Wang et al., 2010). Briefly, the item bank for the LCAT was developed using items from the Back Pain Functional Scale (Stratford and Binkley, 2000; Stratford, Binkley, and Riddle, 2000), the physical functioning (PF) scale of the SF-36 (Ware and Sherbourne, 1992), and selected PF items from other scales. The LCAT (Hart, Mioduski, Werneke, and Stratford, 2006; Hart et al., 2012, 2010; Wang et al., 2010) is a body part-specific survey administered using a computerized adaptive testing application. It was designed to efficiently evaluate the functional status (FS) of patients with lumbar impairments seeking rehabilitation in outpatient physical therapy clinics. Unlike a traditional fixed-length survey, in which each patient answers all questions, computerized adaptive testing is a form of computer-based test administration in which each patient takes a customized test, with the computer administering items tailored to the current estimate of the patient’s ability. FS, as measured by the LCAT, represents the activity domain of the International Classification of Functioning, Disability, and Health framework (World Health Organization, 2008).
Comprehensive clinical sitting balance measures for individuals following stroke: a systematic review on the methodological quality
Published in Disability and Rehabilitation, 2018
Melissa Birnbaum, Keith Hill, Rita Kinsella, Susan Black, Ross Clark, Kim Brock
Four measures containing sitting items as part of balance scales were identified: the Balance Computerized Adaptive Testing,[26] the Brunel Balance Assessment,[27] the Hierarchical Balance Short Forms [28] and the revised Postural Control and Balance for Stroke measure.[29] Whilst some aspects of reliability and validity have been investigated for the sitting components of these scales, the overall quality of the measurement properties evaluated for the sitting items of these balance measures was predominantly rated as poor (37%) to fair (42%) using the COSMIN guidelines. The exceptions were two articles that utilised item response theory to investigate the internal consistency and structural validity of the Balance Computerized Adaptive Testing,[26] and the content and structural validity of the subsequently developed Hierarchical Balance Short Forms.[28]