Multicultural Assessment for the Twenty-First Century
Published in Walter J. Lonner, Dale L. Dinnel, Deborah K. Forgays, Susanna A. Hayes, Merging Past, Present, and Future in Cross-Cultural Psychology, 2020
Ronald J. Samuda, John E. Lewis
These individually administered tests, originally standardized on norming populations that focused largely on white middle-class individuals, have been subjected to heated debate and criticism precisely because the homogeneous standardization samples excluded minorities. Both tests have since been restandardized, as the Stanford-Binet (SB-LM and SB-R) and the Wechsler Scales (WAIS-R and WISC-R), to include minorities. However, the restandardized versions are still subject to critical scrutiny because of the paucity of sociodemographic variables in their sampling procedures. Thus, as Dana (1993, p. 186) demonstrates, “these tests apparently measure the construct of intelligence somewhat differently across groups.” In other words, they lack cross-cultural construct validity. Despite these limitations, the standardized tests can be useful when the care and protocol recommended by Jerome Sattler (1992) are followed.
Designing and Implementing Research on the Development Editions of the Test
Published in Lucy Jane Miller, Developing Norm-Referenced Standardized Tests, 2020
Standardized tests may be defined simply as tests that use “standardization procedures for administering and scoring.”6 These standardized procedures allow the test developer to control the testing conditions and thereby minimize the differential effects of factors that might influence test results, such as examiners, settings, time of day, and the subject’s motivation. When such factors are controlled, the examiner can have more confidence that the results obtained are in fact comparable to the normative data reported for the test.
Tailoring Teaching to the Elderly in Home Care
Published in Barbara J. Horn, Facilitating Self Care Practices in the Elderly, 2019
Martha Iles Worcester, Ann Loustau, Kathleen O’Connor
Intelligence testing is based on standardized tests designed to forecast academic and professional success in younger populations, and intelligence has in effect been defined as performance on those standardized tests. When educational level is taken into account, there are few differences in test results through age 60. After age 70 test results do decline, but most researchers do not consider this decline a valid reflection of the older person’s intellectual ability. The standardized tests are inadequate for three reasons: (a) they are designed for the younger person’s context, (b) they are usually administered to elderly people living in environments very different from those of the younger age group, and (c) they are given without consideration of cohort differences (Willis & Baltes, 1980).
Assessments of Functional Cognition Used with Patients following Traumatic Brain Injury in Acute Care: A Survey of Australian Occupational Therapists
Published in Occupational Therapy In Health Care, 2023
Katherine Goodchild, Jennifer Fleming, Jodie A. Copley
When occupational therapy respondents described how they assess cognitive function in acute care TBI, the two dominant assessment methods were non-standardized observation of functional tasks and non-standardized carer-report/self-report interviews. Despite being aware of some of the available standardized performance-based assessments and being generally supportive of their use, most occupational therapists indicated that they still chose non-standardized options. Regular use of non-standardized assessments by occupational therapists is consistent with the findings of a Norwegian study by Stigen et al. (2018) and an Australian study by Sansonetti and Hoffman (2013). Bottari and Dawson (2011) suggested that, while non-standardized tests have the advantage of being quick and easy to administer, they lack a consistent means of administration and scoring and have no normative data, which widens differences in test interpretation across clinicians and settings. As a consequence, non-standardized test results can be difficult to interpret, may contain errors, and may lack credibility (Bottari & Dawson, 2011).
Domains and measures of social cognition in acquired brain injury: A scoping review
Published in Neuropsychological Rehabilitation, 2022
Kimberley Wallis, Michelle Kelly, Sarah E. McRae, Skye McDonald, Linda E. Campbell
There is consistency between the measures we explored and those used in research, recommended as research outcome instruments, and used by clinicians. The PoFA, FPRT and IRI comprise three of the top four measures reported in neuropsychiatric reviews (Eddy, 2019). The use of the LCQ, discourse tasks, TASIT and IRI aligns with the outcome-instrument recommendations for communication and social cognition in moderate-to-severe TBI, which cite sound psychometric properties, well-established normative data and ease of administration (Honan et al., 2019). Of these, the LCQ and TASIT were reported by clinicians as the most likely to identify social cognition impairments (Kelly et al., 2017a). Despite the availability of these measures, clinicians relied on informal assessments (e.g., interviews with family/client) and reported that they infrequently or never used standardized tests, identifying a lack of reliable and appropriate tools as the greatest barrier (Kelly et al., 2017b). However, this review has identified not only several psychometrically appropriate measures, but also measures that are readily available to clinicians and can be used to reduce the gap between research and clinical practice.
Modified script training for nonfluent/agrammatic primary progressive aphasia with significant hearing loss: A single-case experimental design
Published in Neuropsychological Rehabilitation, 2022
Kristin M. Schaffer, Lisa Wauters, Karinne Berstis, Stephanie M. Grasso, Maya L. Henry
The goal of this study was to assess the impact of VISTA-R on treatment outcomes (script production accuracy, speech intelligibility, grammatical complexity, mean length of utterance, and speech rate) for an individual with nfvPPA and severe-to-profound hearing loss that was not fully corrected. Specifically, our research questions and hypotheses were: Question 1) Will an individual with residual, uncorrected hearing loss and nfvPPA benefit from VISTA with multimodality input to compensate for hearing loss? We predicted that the participant would demonstrate a positive treatment response, supported by a large effect size and significant improvement on the primary outcome measure, with maintenance up to one year post-treatment. Additionally, we predicted that the participant’s standardised test performance at post-treatment and follow-up time points would be relatively stable, as was observed in the previous VISTA study. Question 2) Will this participant’s treatment response on the primary outcome measure and other discourse measures be comparable to a cohort of individuals with functional hearing who participated in VISTA previously (Henry et al., 2018)? We predicted that pre- to post-treatment change on the primary and secondary outcome measures would be commensurate with outcomes from the original VISTA cohort.