The ultimate challenge for risk technologies
Published in Jane Summerton, Boel Berner, Constructing Risk and Safety in Technological Practice, 2003
So if the first effect of epidemiological constructions is that risk management is individualized, the second effect is that dangers are separated from the uncertain and fluid life-world in which they are experienced, and fragmented and reified as specific “risks” to be calculated, catalogued and monitored. Within the BMA Guide to Living with Risk (BMA 1990), one can look up the risk of death from flying in different kinds of aircraft, the risk of serious injury from playing different sports or the risk of adverse drug reactions from different kinds of drugs. Once reified as individual risks rather than the diffuse hazards one might face throughout the normal business of everyday life, behaviors and events can then be subject to studies of potential interventions. In public health medicine, the most prized knowledge is that from the randomized controlled trial (RCT), an experiment in which interventions are tested in conditions that “control for” the other features of everyday life that would potentially interfere with the findings. Educational interventions or artifacts designed to reduce risk can be tested to see if they do indeed reduce the number of accidental injuries. In accident research, the body of evidence from experimental studies is growing, and there have been calls for increasing the number of interventions that are “tested” in this way for their potential in reducing injuries.
Designing AI Systems for Clinical Practice
Published in Lia Morra, Silvia Delsanto, Loredana Correale, Artificial Intelligence in Medical Imaging, 2019
Lia Morra, Silvia Delsanto, Loredana Correale
Randomized controlled trials (RCTs) are designed as experiments with high internal validity, that is, the ability to determine cause-effect relationships. These experiments employ comprehensive designs to control for most, if not all, sources of bias (systematic errors) by means of randomization, blinding, allocation concealment, etc. Usually, extensive inclusion and exclusion criteria are used to identify a clearly defined population of participants who would benefit from the intervention under investigation. Although this experimental design, correctly applied, leads to well-controlled trials with statistically credible results, the applicability of such trials to real-life practice may be questionable.
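The allocation step mentioned above can be made concrete with a short sketch. This is a minimal illustration of permuted-block randomisation, one common way trials keep arm sizes balanced while making the next assignment unpredictable; the function name, block size, and arm labels are illustrative, not from the chapter:

```python
import random

def block_randomise(n_participants, block_size=4, seed=None):
    """Assign participants to 'treatment'/'control' in shuffled blocks.

    Each block contains an equal number of each arm, so group sizes
    stay balanced throughout recruitment (permuted-block randomisation).
    """
    rng = random.Random(seed)
    assert block_size % 2 == 0, "block size must split evenly between two arms"
    allocations = []
    while len(allocations) < n_participants:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)  # order within the block is unpredictable
        allocations.extend(block)
    return allocations[:n_participants]

arms = block_randomise(12, block_size=4, seed=42)
print(arms.count("treatment"), arms.count("control"))  # → 6 6
```

In practice the generated sequence would be held by a third party or an opaque system so that recruiters cannot foresee the next assignment, which is what allocation concealment refers to.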
Preparing healthcare, academic institutions, and notified bodies for their involvement in the innovation of medical devices under the new European regulation
Published in Expert Review of Medical Devices, 2022
Francesco Garzotto, Rosanna Irene Comoretto, Lorenzo Dorigo, Dario Gregori, Alessandro Zotti, Gaudenzio Meneghesso, Gino Gerosa, Mauro Bonin
One aspect that could have a significant impact on the evidence generated under MDR is the replacement of the term ‘effectiveness,’ used by the FDA and in pharmaceuticals, with ‘performance’: ‘the ability of a device […] to achieve its intended purpose as claimed by the manufacturer […] when used as intended by the manufacturer’ [2]. This may restrict observations to selected patients under specific conditions and users, with a loss of generalizability to real-world practice. It is beyond our scope to elaborate on this sensitive issue, but it is important to clarify what these terminologies mean in practice. The main measures for assessing the clinical and economic value of a device are efficacy and effectiveness. Efficacy is the capacity of an intervention to produce a desired result or effect under ideal circumstances (‘Can it work?’), while performance is even more restrictive. Randomized controlled trials (RCTs) represent the gold standard for explanatory studies and efficacy assessment; they are conducted under strict and specific criteria to control for several sources of bias, which limits their generalizability to a broader population in real-world practice settings. In addition to clinical trials, registries containing real-world evidence can also be useful if of sufficient quality [16] or if systematically characterized and aggregated. Effectiveness, on the other hand, describes how well an intervention works in real healthcare/clinical practice (‘Does it work in practice?’) [5]. Pragmatic and observational trials are usually used for effectiveness assessment, as they allow interventions to be tested across the full spectrum of everyday clinical settings to maximize applicability and generalizability [17]. A general rule for how to evaluate devices during the various stages of their development is difficult to define, especially in this era of personalized medicine.
Any field of medicine should hence establish specific rules, as in the case of IDEAL-D [18], a recently proposed framework for regulating the introduction, evaluation, and use of implantable devices. Initially developed for surgical procedures, it has been extended to the MD sector.
Variability in exercise physiology: Can capturing intra-individual variation help better understand true inter-individual responses?
Published in European Journal of Sport Science, 2020
Oliver J. Chrzanowski-Smith, Eva Piatrikova, James A. Betts, Sean Williams, Javier T. Gonzalez
The ideal method is to conduct a replicated randomised controlled trial in the same participants, together with repeated testing within each treatment period (Hecksteden et al., 2015; Senn et al., 2011; Voisin et al., 2018). Here, participants are randomly allocated to the intervention or control (or to the order of receiving these conditions, if a crossover design) as in a typical randomised controlled trial (RCT). However, upon completion and after an adequate washout period, the study is essentially repeated in the same participants to examine whether individuals demonstrate a consistent response to the intervention relative to control. Clearly, this poses considerable logistical and feasibility challenges for both participants and researchers. An alternative is to implement one of these approaches alone, i.e. either replicating the intervention or repeating testing pre- and/or post-trial. While such approaches present similar challenges, several studies have adopted replicated designs (Goltz et al., 2018, 2019; Lindholm et al., 2016; Senn et al., 2011). For example, Goltz et al. (2018) found, in a replicated, randomised crossover experimental design, that true inter-individual differences in subjective appetite and blood hormonal responses to acute exercise were apparent in fifteen healthy males, exceeding measurement error and biological error. Similarly, a more recent randomised replicated crossover study by Goltz et al. (2019) also found true inter-individual differences in postprandial appetite responses to a standardised breakfast in eighteen healthy males. Moreover, a similarly elegant design was employed in a knee-extension training programme in which subjects served as their own control, initially training one leg, then, after a washout period, training both legs (Lindholm et al., 2016). While Lindholm et al. (2016) found that the response of a large fraction of genes changed in only one training period, indicating intra-individual variation, inter-individual response differences were unfortunately not explored. Nevertheless, the appearance of such study designs shows a move towards measuring intra-individual variation to determine whether true inter-individual response differences exist.
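The logic of a replicated design can be sketched numerically: with two replicates per participant, the replicate-to-replicate differences estimate within-subject error, and any between-subject spread beyond that is a candidate for true inter-individual response variation. A minimal simulation sketch follows; all numbers are hypothetical, not data from Goltz et al. or Lindholm et al.:

```python
import random
import statistics

random.seed(1)

N = 15            # participants (illustrative)
TRUE_SD = 2.0     # hypothetical true inter-individual response SD
ERROR_SD = 1.0    # hypothetical within-subject error SD (measurement + biological)
MEAN_EFFECT = 5.0

# Each participant has a fixed true response; each replicate adds random error.
true_resp = [random.gauss(MEAN_EFFECT, TRUE_SD) for _ in range(N)]
rep1 = [r + random.gauss(0, ERROR_SD) for r in true_resp]
rep2 = [r + random.gauss(0, ERROR_SD) for r in true_resp]

# Replicate differences cancel the true response, so
# Var(rep1 - rep2) = 2 * error variance.
diffs = [a - b for a, b in zip(rep1, rep2)]
error_var = statistics.variance(diffs) / 2

# The variance of each participant's mean observed response contains the
# true inter-individual variance plus half the error variance.
means = [(a + b) / 2 for a, b in zip(rep1, rep2)]
between_var = statistics.variance(means)
true_var = max(between_var - error_var / 2, 0.0)

print(f"estimated within-subject error SD: {error_var ** 0.5:.2f}")
print(f"estimated true inter-individual response SD: {true_var ** 0.5:.2f}")
```

A single unreplicated trial only yields the between-subject spread, which conflates the two sources; replication is what makes the subtraction in the last step possible.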