Documenting the Impact of Hospice
Published in Inge B. Corless, Zelda Foster, The Hospice Heritage: Celebrating Our Future, 2020
Another challenge is the limited number of well-crafted measures. It is difficult to construct instruments that balance clinical relevance and research rigor. Tools that are appealing to clinicians, because they capture the essence of hospice care and provide useful data for care planning, may not meet the researchers’ rigorous tests for reliability and validity needed in population studies.
Evidence-based practice
Published in Jeremy Jolley, Introducing Research and Evidence-Based Practice for Nursing and Healthcare Professionals, 2020
Most of us have experienced the implementation of something new in our practice. We will have felt the excitement of it or, if we are a little older and set in our ways, that dreaded feeling of another ‘initiative’ wafting towards us on the wind of research or, more likely, ‘policy change’. Most of us know how difficult it is to change practice; practice resists change, just as people do. We have felt the resistance, the sense that routine is ‘safe’, safe for everyone. So we know how difficult it is to facilitate change. The implementation frameworks described briefly above deal somewhat awkwardly with the hugely complex nature of healthcare practice; indeed, how could it be possible to define such a varied thing in a few constructs or with a simple diagram? To be fair, all of those who have struggled to develop the frameworks discussed here have been at pains to point out the complexity of healthcare practice and the challenges inherent in formulating an approach to implementation. Indeed, many theories, models and frameworks have tried to deal with the issue of implementing change (see Nilsen, 2015). However, the very complexity of implementing evidence-based practice begs for a strategy. These ‘implementation frameworks’ are still developing, but they are already proving themselves to be useful.
Validity II: Correlational-based
Published in Claudio Violato, Assessing Competence in Medicine and Other Health Professions, 2018
Construct validity focuses on the truth or correctness of a construct and the instruments that measure it. What is a construct? A construct is an “entity, process, or event which is itself not observed” but which is proposed to summarize and explain facts, empirical laws, and other data.3 In the physical sciences, gravity and energy are two examples of hypothetical constructs. Gravity has been proposed to explain and summarize facts such as planets in orbit, the weight of objects, and mutual attraction between masses. Gravity, defined as a process or force, cannot be directly observed or measured; only its effects can be identified and quantified. Energy, also an abstraction or construct, is used to explicate such disparate phenomena as photosynthesis in plants, illumination from a light bulb, and the engine that propels a jet liner.
Exploring the measurement of pediatric cognitive-communication disorders in traumatic brain injury research: A scoping review
Published in Brain Injury, 2022
Lauren Crumlish, Sarah J. Wallace, Anna Copley, Tanya A. Rose
A multitude of constructs (n = 2134) were identified (see Table 4) in this scoping review, prompting the question – are we always measuring what matters? For researchers, a lack of consensus with regard to the selection of constructs results in different constructs being measured, and a variety of instruments being used to measure the same construct. This heterogeneity causes difficulties in comparing constructs in systematic reviews and meta-analyses (86). For clinicians, a lack of consensus on important constructs for measurement may make it difficult to make an informed decision on key outcomes to measure. Existing work to develop a core outcome set for pediatric TBI research has begun (29), yet the heterogeneity identified within this scoping review may suggest that uptake has been poor. While CCDs are complex (30), and it may be important to measure multiple constructs in complex health conditions (87), the vast number and inconsistent selection of constructs may reflect inconsistencies that exist in construct definitions and a lack of understanding of the most important constructs to measure.
Systematic Review of the Psychometric Properties of the Saint Louis University Mental Status (SLUMS) Examination
Published in Clinical Gerontologist, 2022
Robert J. Spencer, Emily T. Noyes, Jessica L. Bair, Michael T. Ransom
Finally, cognitive screening scores must demonstrate sufficient validity to justify their use. To illustrate their validity, tests should include items that adequately cover the domain of interest (i.e., cognitive functioning), correlate with other measures of cognitive ability (construct validity), and allow users to confidently assess test-takers as being at risk for having compromised cognitive abilities (criterion validity). See Strauss, Sherman, and Spreen (2006) for a thorough review of validity considerations for neuropsychological tests. Regarding domain coverage, also known as content validity, 13 (43%) of the 30 points for the SLUMS pertain to verbal memory. By comparison, memory items comprise 10% of the MMSE and 17% of the MoCA. Other SLUMS items address attention/concentration (5 points), visuospatial/constructional abilities (5 points), language (4 points), and orientation (3 points). Construct and criterion validity are investigated through carefully designed empirical studies.
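The domain-coverage figures above can be checked with simple arithmetic. The following minimal sketch (the dictionary of point allocations is transcribed from the review, not an official SLUMS data structure) sums the per-domain points and expresses verbal memory's share of the 30-point total:

```python
# Point allocations per cognitive domain, as reported in the review
# (illustrative transcription, not an official SLUMS specification).
slums_points = {
    "verbal memory": 13,
    "attention/concentration": 5,
    "visuospatial/constructional": 5,
    "language": 4,
    "orientation": 3,
}

# Total should equal the SLUMS's 30 points.
total = sum(slums_points.values())

# Percentage of the total devoted to each domain.
coverage = {domain: round(100 * pts / total)
            for domain, pts in slums_points.items()}

print(total)                      # 30
print(coverage["verbal memory"])  # 43, matching the 43% cited
```

Note that the five listed domains account for all 30 points, so verbal memory's 13 points work out to roughly 43% of the instrument, against the 10% and 17% the review reports for the MMSE and MoCA respectively.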
A Qualitative Study of Consumers’ Experiences of the Quality of Mental Health Services in Ghana
Published in Issues in Mental Health Nursing, 2022
Eric Badu, Anthony Paul O’Brien, Rebecca Mitchell, Akwasi Osei
In recent years, studies have highlighted several frameworks and concepts for measuring the quality of services, including mental health services (Kilbourne et al., 2018; Pincus et al., 2016). The Donabedian model has emerged over the years as a widely adopted framework for assessing the quality of care. This model measures quality using three interrelated constructs: health system structure, process and outcome. Studies have suggested that measuring the quality of mental health care requires the views of several actors and stakeholders, including consumers and family caregivers, providers, the public and the health care system (Badu et al., 2019a; Kilbourne et al., 2018). Specifically, Kilbourne et al. (2018) indicated that these actors need to provide input on the choice of measures that constitute quality and on their implementation. Although the views of all these stakeholders are relevant, researchers and stakeholders have increasingly advocated the need to focus on consumers’ perceptions or experiences of the quality of mental health services, through their active participation in those services (Biringer et al., 2017; Millar et al., 2016).