Concluding Remarks
Published in Song S. Qian, Mark R. DuFour, Ibrahim Alameddine, Bayesian Applications in Environmental and Ecological Studies with R and Stan, 2023
Song S. Qian, Mark R. DuFour, Ibrahim Alameddine
The development of biostatistics started from individual studies, for example, Pierre C.A. Louis's numerical analysis of the cause of child-bed fever and of the effects of bloodletting (who can forget his exasperated response to his critics: “Quels faits! Quelle logique!”). These studies, which shared common features within their disciplines and data, led to case studies, new theories, and the development of common methods. The process is exemplified by Fisher's examination of agricultural experimental data, which led to randomized experiments and ANOVA. Ultimately, statistical theories and methods became part of the life science disciplines. For example, the concept of the randomized experiment is embedded in many biological studies and in the modern clinical trials underpinning the development of all new medicines and treatments. Modern medicine and biology cannot be separated from biostatistics.
Abortion's Causal Role in Trauma and Suicide
Published in Nicholas Colgrove, Bruce P. Blackshaw, Daniel Rodger, Agency, Pregnancy and Persons, 2023
The review initially claims that the question of whether abortion harms mental health is “not scientifically testable” (p. 87). This is a strange claim, one that appears to render the conclusion of no association inevitable. The reasoning is that randomized experiments are not ethically possible. This is true, but it does not mean that the question is untestable. It simply means that the very best kind of evidence is not available for this question. We can still have good evidence.
Causal Inference for Observational Studies/Real-World Data
Published in Harry Yang, Binbing Yu, Real-World Evidence in Drug Development and Evaluation, 2021
The randomized experiment is seemingly a perfect tool for causal inference, but its utility is rather limited for the following reasons. First, because it is conducted in a restrictive manner with selected participants, it is hard to generalize the findings to the target population. Second, it may be time-consuming and resource-intensive, especially for large randomized clinical trials. Third, there is no guarantee that the study will be executed perfectly; any violation of the protocol, such as loss to follow-up or noncompliance, may make the randomized study more or less like an observational one. Last, it is not possible to run randomized studies to answer every causal question, e.g., evaluation of a healthcare system or health policy, the impact of off-label drug use, and so forth. The lack of depth and breadth of causal inference with randomized studies in many situations brings the use of observational data to the front stage.
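To make the last point concrete, the sketch below illustrates, with entirely synthetic data, one standard way causal effects are estimated from observational data: inverse probability weighting based on a propensity score. The variable names and effect sizes are illustrative assumptions, not drawn from the chapter.

```python
# Minimal sketch: estimating a treatment effect from observational data with
# inverse probability weighting (IPW). Data are synthetic; variable names are
# illustrative, not taken from any study cited above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

# A confounder that drives both treatment uptake and the outcome.
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.8 * x - 0.2)))       # treatment uptake depends on x
treat = rng.binomial(1, p_treat)
y = 1.0 * treat + 1.5 * x + rng.normal(size=n)      # true treatment effect = 1.0

# A naive comparison is biased because treated units have higher x on average.
naive = y[treat == 1].mean() - y[treat == 0].mean()

# Step 1: model the probability of treatment given the confounder (propensity score).
ps_model = sm.Logit(treat, sm.add_constant(x)).fit(disp=False)
ps = ps_model.predict(sm.add_constant(x))

# Step 2: weight each unit by the inverse probability of the treatment it received.
w = treat / ps + (1 - treat) / (1 - ps)

# Step 3: the weighted difference in means approximately recovers the true effect.
ipw = (np.average(y[treat == 1], weights=w[treat == 1])
       - np.average(y[treat == 0], weights=w[treat == 0]))

print(f"naive difference: {naive:.2f}, IPW estimate: {ipw:.2f}")
```

Weighting rebalances the confounder across the two groups, mimicking what randomization would have achieved by design, but only for confounders that are measured and correctly modeled.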
Teacher victimization, turnover, and contextual factors promoting resilience
Published in Journal of School Violence, 2019
F. Chris Curran, Samantha L. Viano, Benjamin W. Fisher
In an ideal study, we would be able to isolate the effects of experiencing victimization on turnover by randomly assigning teachers to experience victimization or not. In other words, we would be able to estimate the effects of victimization holding all else constant. Given, however, that such a study is not logistically or ethically possible, we instead employ a series of statistical models using secondary data to approximate the ideal of a randomized experiment as nearly as possible. We recognize that teachers who experience victimization and schools where victimization occurs may differ from those without victimization in a number of ways. The goal of our analytic approach is to account for as many of these differences as possible in order to more accurately identify the portion of turnover that is driven by victimization. In this section, we outline such an analytic approach, one that accounts for a number of both observable and unobservable potential confounders.
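The kind of adjustment described above can be sketched in a few lines. The example below uses synthetic data and a generic covariate-plus-fixed-effects regression; it illustrates adjusting for observed and school-level unobserved confounders and is not the authors' actual specification.

```python
# Hedged sketch of adjusting for observed and group-level unobserved confounders
# with covariates plus school fixed effects. Data are synthetic and the variable
# names are hypothetical; this is not the model estimated by Curran et al.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, per_school = 50, 40
school = np.repeat(np.arange(n_schools), per_school)

school_climate = rng.normal(0, 1, n_schools)[school]      # unobserved, confounds both
experience = rng.normal(10, 5, n_schools * per_school)    # observed covariate
victimized = rng.binomial(1, 1 / (1 + np.exp(-school_climate)))
turnover = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.8 * victimized
                                             - 0.05 * experience + school_climate))))

df = pd.DataFrame({"school": school, "experience": experience,
                   "victimized": victimized, "turnover": turnover})

# Linear probability model: the covariate absorbs observed differences, and
# C(school) absorbs anything constant within a school (a fixed-effects strategy).
fit = smf.ols("turnover ~ victimized + experience + C(school)", data=df).fit()
print(round(fit.params["victimized"], 3))
```

The fixed effects only remove confounding that is constant within a school; unobserved teacher-level differences would still require other strategies.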
itMatters: Optimization of an online intervention to prevent sexually transmitted infections in college students
Published in Journal of American College Health, 2022
David L. Wyrick, Amanda E. Tanner, Jeffrey J. Milroy, Kate Guastaferro, Sandesh Bhandari, Kari C. Kugler, Shemeka Thorpe, Samuella Ware, Alicia M. Miller, Linda M. Collins
MOST is an engineering-inspired framework for use in intervention science (see details in Collins30). In the classical treatment package approach that has formed the basis of much of intervention science, an investigator proceeds directly from the preparation phase to the evaluation phase (a randomized controlled trial (RCT)), with the implicit assumption that all of the components identified in the preparation phase will be included in the intervention. In contrast, MOST comprises three phases: preparation, optimization, and evaluation. The preparation phase involves establishing a detailed conceptual model that provides the basis for the intervention under development; identifying the components that are candidates for inclusion in the intervention; and pilot testing the components. In the optimization phase of MOST, one or more randomized experiments, called optimization trials, are conducted. The purpose of the optimization trials is to assess the effect of individual intervention components and possibly, depending on the experimental design used, to examine whether the presence, absence, or level of one component affects the performance of others. In MOST, intervention components are often referred to as candidate components, because it is not a foregone conclusion that any component will be part of the intervention package. Instead, the eligibility of candidate components for inclusion in the intervention is determined by their performance in the optimization trial(s). Once the optimized intervention has been identified, it can be evaluated by means of a standard RCT in the evaluation phase of MOST.
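As a rough illustration of what an optimization trial can look like, the sketch below simulates a 2^3 factorial experiment in which three hypothetical candidate components are each crossed on/off and their main effects (plus one interaction) are estimated by regression. The component names, effect sizes, and outcome are invented for illustration and do not describe the itMatters trial.

```python
# Illustrative sketch of an optimization trial as a 2^3 factorial experiment:
# three candidate components, each either on (1) or off (0), with main effects
# and one interaction estimated by regression. All names and effects are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800

# Randomly assign each participant to one of the 2^3 = 8 factorial conditions.
df = pd.DataFrame(rng.integers(0, 2, size=(n, 3)),
                  columns=["knowledge", "norms", "skills"])

# Hypothetical outcome: "skills" works on its own, "norms" works only together
# with "knowledge".
df["outcome"] = (0.5 * df["skills"] + 0.3 * df["knowledge"] * df["norms"]
                 + rng.normal(0, 1, n))

# Main effects and the knowledge x norms interaction in a single model; candidate
# components with negligible effects would be dropped from the optimized package.
fit = smf.ols("outcome ~ knowledge * norms + skills", data=df).fit()
print(fit.params.round(2))
```

Because every component is randomized, each coefficient can be read as that component's causal contribution, which is the information the optimization phase is designed to produce.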
Fundamentals of randomized designs: AMEE Guide No. 128
Published in Medical Teacher, 2020
Tanya Horsley, Eugene J. F. M. Custers, Martin G. Tolsgaard
Randomized experiments serve a particular design purpose and should be aligned thoughtfully with the appropriate question and intent. Findings from randomized studies traditionally produce knowledge to answer questions about ‘what works’ and in whom; questions of how and in what circumstances may require alternative methodologies (e.g. qualitative approaches). The strength of random allocation is that it reduces bias from both known and unknown variables. However, the approach requires special consideration when applied in medical education research, particularly regarding the generalizability of study results and the use of theory to inform the research question.
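Random allocation itself is straightforward to implement; the snippet below sketches permuted-block randomization, one common scheme for keeping study arms balanced over time. The block size and arm labels are illustrative assumptions.

```python
# Small sketch of permuted-block random allocation (block size 4). The arm
# labels, block size, and seed are illustrative assumptions, not a prescription.
import random

def block_randomize(n_participants, arms=("intervention", "control"),
                    block_size=4, seed=2024):
    """Return an allocation list balanced within each block of size block_size."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * per_arm   # e.g. two of each arm per block of four
        rng.shuffle(block)             # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

print(block_randomize(10))
```

Blocking keeps the arms close to equal in size throughout recruitment while preserving the unpredictability of each individual assignment.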