Digital Health and New Technologies
Published in Connie White Delaney, Charlotte A. Weaver, Joyce Sensmeier, Lisiane Pruinelli, Patrick Weber, Deborah Trautman, Kedar Mate, Howard Catton, Nursing and Informatics for the 21st Century – Embracing a Digital World, 3rd Edition, Book 1, 2022
Despite their growing use in digital health, new technologies pose many challenges, including ethics, trust, and information security. Because digital health is growing rapidly, complications with digital health tools are of great concern at the social and governmental levels: they can introduce human bias, inequity, and inaccessibility, and can lead to unsolicited sharing of data. These complexities can in turn discourage adoption by clinicians and patients. Intelligent, automated digital health solutions can encode human bias when the data used to build the models represent only subsets of the population, defined, for example, by sex and gender, race, social status, or geographic location, or when they exclude those without access to healthcare (Howard & Borenstein, 2018). Doctors, for instance, heuristically diagnose heart attacks based on symptoms that men experience more commonly than women, so women are underdiagnosed for heart disease (Rowe, 2021). If this bias exists in the data behind automated models that predict and offer suggestions to clinicians, the output that augments decision-making can perpetuate inequality, produce wrong diagnoses and ineffective treatments, and cause harm, amounting to low-quality, unsafe patient care. Diverse and larger data sets are vital to advancing fairness in the use of new technology solutions (Kaushal et al., 2020).
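To make the mechanism concrete, here is a minimal sketch (entirely synthetic data and hypothetical symptom features, not drawn from the chapter) of how a classifier trained on a sex-skewed sample can underdiagnose the underrepresented group:

```python
# Illustrative sketch only: synthetic patients, hypothetical features.
# Classic chest pain dominates in the simulated men with heart attacks;
# atypical symptoms (fatigue, nausea) dominate in the simulated women.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_patients(n, is_female):
    """Simulate binary symptom features and true heart-attack labels."""
    mi = rng.random(n) < 0.5                              # true labels
    p_chest = np.where(mi, 0.2 if is_female else 0.9, 0.1)
    p_atyp  = np.where(mi, 0.9 if is_female else 0.2, 0.1)
    X = np.column_stack([
        rng.random(n) < p_chest,   # "classic chest pain" present?
        rng.random(n) < p_atyp,    # "atypical symptoms" present?
    ]).astype(float)
    return X, mi.astype(int)

# Training pool is 95% male: the model mostly learns the male presentation.
X_m, y_m = make_patients(9500, is_female=False)
X_f, y_f = make_patients(500, is_female=True)
model = LogisticRegression().fit(np.vstack([X_m, X_f]), np.hstack([y_m, y_f]))

# Evaluate sensitivity (recall) separately on balanced held-out groups.
for label, female in [("men", False), ("women", True)]:
    X_te, y_te = make_patients(2000, is_female=female)
    print(label, "sensitivity:", round(recall_score(y_te, model.predict(X_te)), 2))
```

In runs of this sketch, sensitivity for the simulated women falls far below that for the simulated men, purely because the training pool underrepresents their presentation; the model has not learned the atypical symptom pattern.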
Invariance
Published in Trevor G. Bond, Zi Yan, Moritz Heene, Applying the Rasch Model, 2020
Trevor G. Bond, Zi Yan, Moritz Heene
We take care to use the more modest term ‘linking’ rather than ‘equating’ when we use Rasch principles to examine the invariance of person or item estimates. The term ‘test equating’ is ubiquitous, most often as shorthand for ‘test score equating’: what would Bill’s result be on Form A if he were given Form B? If Forms A and B were actually constructed as equivalent tests (same number and distribution of items, same means, same SDs), then post hoc ‘test score equating’ would merely be a quality-control procedure. So, although our Rasch-based procedures focus on the key measurement principle of invariance, we call the technique ‘linking’ so that it will not be mistaken for the often confused uses of ‘equating’. Separate samples of persons can be linked across common items, thereby checking item invariance; separate samples of items can be linked across common persons, thereby checking person invariance.
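As an illustration only (not the authors' procedure or software), the following sketch simulates two person samples answering the same common items, estimates item difficulties separately in each sample with a crude centred-logit approximation (rather than full joint or conditional maximum likelihood), and compares the estimates. Under invariance, the two sets of estimates should agree up to sampling error:

```python
# Hypothetical linking check: two samples, common items, compared estimates.
import numpy as np

rng = np.random.default_rng(1)
true_difficulty = np.linspace(-2, 2, 10)          # 10 common items

def simulate(abilities):
    """Dichotomous Rasch responses: P(correct) = logistic(ability - difficulty)."""
    p = 1 / (1 + np.exp(-(abilities[:, None] - true_difficulty[None, :])))
    return (rng.random(p.shape) < p).astype(int)

def estimate_difficulty(responses):
    """Crude logit estimate, centred so each sample sets its own origin."""
    p_correct = responses.mean(axis=0).clip(0.01, 0.99)
    d = -np.log(p_correct / (1 - p_correct))
    return d - d.mean()

# Two person samples with different ability distributions.
sample_a = simulate(rng.normal(-0.5, 1.0, 500))
sample_b = simulate(rng.normal(+0.5, 1.0, 500))

d_a, d_b = estimate_difficulty(sample_a), estimate_difficulty(sample_b)
print("correlation of item estimates:", round(np.corrcoef(d_a, d_b)[0, 1], 3))
print("max discrepancy (logits):", round(np.abs(d_a - d_b).max(), 2))
```

In practice, an item whose estimate drifts well outside the expected error band across samples is flagged as violating invariance, which is the substantive point of linking rather than score equating.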
Designing Rater-Mediated Assessment Systems
Published in George Engelhard, Stefanie A. Wind, Invariant Measurement with Raters and Rating Scales, 2017
George Engelhard, Stefanie A. Wind
In the context of rater-mediated assessments, equating procedures are based on data collection designs that involve creating links between raters in order to perform transformations that control for differences in rater severity. These data collection designs can be considered within the framework of experimental design and analysis of variance (ANOVA; Kirk, 1995), and can be viewed as block designs in which ratings are replications within each cell of a rater-by-task design. The next section describes three major types of rating designs that are commonly applied in research and practice (Wind & Peterson, 2017), along with their implications: (1) fully crossed designs, (2) linked designs, and (3) disconnected designs.
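A linked design differs from a disconnected one precisely in whether every rater can be reached from every other through ratings of common examinees or tasks. As a hypothetical illustration (the rater and examinee labels are invented, and this is not the authors' notation), the following sketch counts connected components of the bipartite rater-by-examinee graph:

```python
# One connected component = fully crossed or linked design;
# more than one = a disconnected design that cannot support equating.
from collections import defaultdict

def components(ratings):
    """ratings: iterable of (rater, examinee) pairs. Returns the number of
    connected components of the bipartite rater-examinee graph."""
    graph = defaultdict(set)
    for rater, examinee in ratings:
        graph[("r", rater)].add(("e", examinee))
        graph[("e", examinee)].add(("r", rater))
    seen, n = set(), 0
    for node in graph:
        if node in seen:
            continue
        n += 1
        stack = [node]                      # depth-first flood fill
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(graph[cur])
    return n

linked = [("R1", "E1"), ("R1", "E2"), ("R2", "E2"), ("R2", "E3"), ("R3", "E3")]
disconnected = [("R1", "E1"), ("R2", "E2")]
print(components(linked))        # 1 -> rater severities share a common scale
print(components(disconnected))  # 2 -> severity differences are not estimable
```

The design choice matters because severity adjustments are only identified within a connected component; across components there is no common frame of reference.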
The pursuit of fairness in assessment: Looking beyond the objective
Published in Medical Teacher, 2022
Nyoli Valentine, Steven J. Durning, Ernst Michael Shanahan, Cees van der Vleuten, Lambert Schuwirth
McNamara was blindsided by the data, convinced the USA was winning the war despite his commanders telling him the exact opposite (Carmody 2019). Similarly, within clinical practice, equating ‘quality’ with strict adherence to guidelines or protocols is to overlook the evidence on the more sophisticated process of expertise (Greenhalgh et al. 2014). With regard to assessment, any effort to prioritise and quantify one aspect of trainees’ qualities, such as knowledge, will inevitably reduce the emphasis on other aspects which might be deemed important (Eva 2015). It could be argued that objectivity can actually reduce fairness because it measures only what can be captured by a quantitative value. This is unfair to learners with broader skills and unfair to a society that highly values unquantifiable competencies, such as compassion, kindness, and courage, in its health care professionals (O'Mahony 2017; Wayne et al. 2020). These competencies, along with other not easily quantified skills such as communication, collaboration, and professionalism, are often the ones needed within our health care systems (Frank et al. 2010). Such reductionist approaches may also risk negatively affecting student learning behaviour. Cilliers et al. demonstrated that the effects of assessment on student learning are complex: overreliance on ‘objectivity’ and quantitative results was perceived as punitive and unfair, and it encouraged students to direct their learning activities toward passing assessments rather than toward becoming good clinicians (Cilliers et al. 2010; Cilliers et al. 2012).
Biomarkers in temporomandibular disorder and trigeminal neuralgia: A conceptual framework for understanding chronic pain
Published in Canadian Journal of Pain, 2020
Tina L. Doshi, Donald R. Nixdorf, Claudia M. Campbell, Srinivasa N. Raja
Recently, exploration of the human genome, epigenome, transcriptome, proteome, and metabolome has become possible with the availability of reliable high-throughput technologies, sparking increased interest in so-called omics biomarkers. Researchers can now extract prodigious quantities of information from a single patient, or even a single cell, to develop a comprehensive, personalized biomarker profile. This approach allows many potential biomarkers to be studied simultaneously from very small sample quantities. Data on these RNAs, proteins, or metabolites provide information about the function of entire pathways, giving investigators a perspective on disease that is both broad and detailed. However, the information obtained is only as valid as its source; poor patient selection, poor sample selection, and poor sample collection may all yield misleading results. Researchers must also guard against the trap of equating statistical significance with clinical significance. Analyzing the sheer volume of data produced by these assays requires advanced statistical and computational skills, yet even an excellent statistical analysis can fail to produce useful biomarkers. Consequently, the identification of valid, practical biomarkers requires an approach that balances statistical rigor with expert knowledge of the scientific underpinnings of disease [94].
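The distinction between statistical and clinical significance can be made concrete with a small synthetic illustration (all numbers invented, not from the article): at omics-scale sample sizes, a biomarker shift far too small to matter clinically will typically still yield a "significant" p value:

```python
# Synthetic demonstration: large n makes a clinically negligible shift
# statistically significant. Units and values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100_000
patients = rng.normal(loc=10.03, scale=2.0, size=n)   # mean shift of 0.03 units
controls = rng.normal(loc=10.00, scale=2.0, size=n)

t, p = stats.ttest_ind(patients, controls)
cohens_d = (patients.mean() - controls.mean()) / 2.0   # effect size (true SD = 2)
print(f"p = {p:.3g}, Cohen's d = {cohens_d:.3f}")
# Typically p falls well below 0.05 while d stays near 0.015,
# an effect no clinician would act on.
```

This is why effect sizes, confidence intervals, and domain knowledge, rather than p values alone, must drive judgments about whether a candidate biomarker is practically useful.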
Response to Open Peer Commentaries on “Responding to Those Who Hope for a Miracle: Practices for Clinical Bioethicists”
Published in The American Journal of Bioethics, 2018
Trevor M. Bibler, Myrick C. Shinall, Devan Stahl
DeLisser (2018) makes the point that our taxonomic analysis has not been validated by empirical studies, and DeBaets states that our work “lacks justification” because we “offer no specific data.” DeLisser's point is not intended to show that our taxonomy has little positive value; indeed, he commends the framework and states that future qualitative studies would benefit our taxonomic analysis. We agree wholeheartedly. When we write that we hope our analysis “acts as a catalyst for continued conversation,” this includes expanding the conversation not only to others who use miracle language but also to the use of empirical methods (Bibler et al. 2018). Second, we disagree that a project must be anchored in some general idea of “data” in order to be justified. Equating a justified position with one found in qualitative or quantitative data reduces ethics to an endeavor that leaves little room for work centered on experience, narrative, conceptual analysis, and philosophical considerations. We should hope that the empirical turn in bioethics has not driven us into a ravine where all work anchored in experience or conceptual analysis is labeled “unjustified.”