The Basics of Statistical Tests
Published in Mitchell G. Maltenfort, Camilo Restrepo, Antonia F. Chen, Statistical Reasoning for Surgeons, 2020
Mitchell G. Maltenfort, Camilo Restrepo, Antonia F. Chen
The p-value tells us how likely we would be to observe differences at least as large as those we did, if random variation were truly the only explanation. (This is one reason not to use the p-value as a target – if you keep sifting through analyses until you get an acceptable p-value, the act of searching takes the probability out of the p-value.) The lower the p-value, the less credible the null hypothesis. But a high p-value cannot be taken as proof that the null hypothesis is true, only that the data are insufficient to reject it. There is an additional assumption: that the statistical test was appropriate for the problem at hand. You may remember from statistics class that a Type I error is rejecting the null hypothesis when it is actually true, and a Type II error is not rejecting the null hypothesis when it is in fact false. A Type III error, it may be said, is having had the wrong null hypothesis in the first place – the study began with the wrong research question.
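The parenthetical warning about p-value sifting can be made concrete with a small simulation. The following sketch (in Python with NumPy and SciPy, tools not used in the chapter itself) generates data in which the null hypothesis is true by construction, then compares one pre-specified test against "keep searching until something works":

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_analyses, n = 2000, 10, 30
alpha = 0.05

single, searched = 0, 0
for _ in range(n_sims):
    # Both groups are drawn from the same distribution, so the null
    # hypothesis is true and any "difference" is random variation.
    pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(n_analyses)]
    single += pvals[0] < alpha       # one pre-specified analysis
    searched += min(pvals) < alpha   # sift until an analysis "works"

print(f"Type I error rate, single pre-specified test: {single / n_sims:.3f}")    # ~0.05
print(f"Type I error rate, best of {n_analyses} analyses:        {searched / n_sims:.3f}")  # ~0.40
```

With ten independent looks at null data, the chance of at least one p < 0.05 is roughly 1 − 0.95¹⁰ ≈ 0.40, which is what the simulation reports: the nominal 5% error rate no longer describes the search procedure actually used.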
Introduction
Published in Shein-Chung Chow, Innovative Statistics in Regulatory Science, 2019
When performing hypothesis testing, basically two kinds of errors (i.e., type I error and type II error) can occur. Table 1.3 summarizes the relationship between type I and type II errors when testing hypotheses.
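Table 1.3 itself is not reproduced in this excerpt; the standard decision table it refers to has this form:

Decision            H0 is true              H0 is false
Reject H0           Type I error (α)        Correct (power, 1 − β)
Do not reject H0    Correct (1 − α)         Type II error (β)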
Decision Analysis from the Perspectives of Single and Multiple Stakeholders
Published in Zoran Antonijevic, Robert A. Beckman, Platform Trial Designs in Drug Development, 2018
Robert A. Beckman, Carl-Fredrik Burman, Cong Chen, Sebastian Jobjörnsson, Franz König, Nigel Stallard, Martin Posch
Decision analysis could also, in principle, be used to quantify the utility advantage of master protocols compared to individual trials. In order to quantify the benefits of master protocols, let's look at them in the context of a portfolio. A basket trial is a portfolio of indications with a common treatment and biomarker, while an umbrella trial is a portfolio of treatments and associated biomarkers in one broader indication. As we have seen in the context of POC trials, it may be more cost-effective to test more hypotheses within the portfolio if they are of approximately equal merit. One must consider the Type III error, or the missed opportunity from not investing in trials (or arms of a master protocol) that might have identified good indications/treatments. It is possible, although not yet investigated, that similar principles apply to Phase 3 approval studies and to master protocols in both the Phase 2 and Phase 3 settings.
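A minimal sketch of the missed-opportunity arithmetic, using invented numbers that are not from the chapter, illustrates how the Type III error cost of an unfunded arm can be expressed in expected-value terms:

```python
# Hypothetical numbers purely for illustration (not from the chapter):
p = 0.15   # assumed prior probability that one arm succeeds
V = 40.0   # assumed payoff of a success (e.g., $M in portfolio value)
C = 3.0    # assumed cost of testing one arm

# Expected net value of testing one approximately equal-merit arm.
ev_per_arm = p * V - C
print(f"Expected net value per arm: {ev_per_arm:+.1f}")   # +3.0

# If the portfolio holds k such arms but only m are funded, the expected
# value foregone by the unfunded arms is the Type III error cost.
k, m = 8, 5
print(f"Expected value foregone by skipping {k - m} arms: {(k - m) * ev_per_arm:.1f}")  # 9.0
```

Under these assumptions, every equal-merit arm left untested forfeits the same positive expected value, which is the sense in which not investing is itself an error with a quantifiable cost.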
The effects of multimodal rehabilitation on pain-related sickness absence – an observational study
Published in Disability and Rehabilitation, 2018
Hillevi Busch, Elisabeth Björk Brämberg, Jan Hagberg, Lennart Bodin, Irene Jensen
In conclusion, the national roll-out of multimodal rehabilitation did not fulfill the goal of reducing sickness absence. The conclusion that rehabilitation does not affect sickness absence may, however, be over-hasty and may amount to a so-called type-III error. The current study was carried out while the rehabilitation guarantee was being implemented, and it is known that implementation is a slow and unpredictable process [32]. The implementation of the rehabilitation guarantee was a major challenge mainly for primary care, which became responsible for rehabilitating persons who were previously referred to specialized rehabilitation units. The implementation involved new working constellations and routines: a shift from individual work to multi-professional co-operation, competition with private actors, and a shift from reducing symptoms and improving quality of life to rehabilitating with the primary goal of return to work. Hence, the lack of results may also be due to how the rehabilitation was implemented in the county councils and at the health care units. Unfortunately, questions of program fidelity could not be addressed in the current study. However, a process evaluation of the rehabilitation guarantee indicates that few instructions about how to apply a vocational approach were offered to the health care units. Limited contact between the rehabilitation setting and the workplace, as well as a lack of guidelines and experience of vocational rehabilitation, may partly explain why the sickness absence of rehabilitants was not reduced more rapidly [15].
Process Evaluation for Stressor Reduction Interventions in Sport
Published in Journal of Applied Sport Psychology, 2019
Raymond Randall, Karina Nielsen, Jonathan Houdmont
As already mentioned, process evaluation can be used to examine a wide range of events occurring both within and around an intervention. At its most basic level, process evaluation resembles a manipulation check used in experimental psychology, often expressed as a dichotomous variable of intervention exposure versus nonexposure (Randall et al., 2005). Measuring intervention exposure can be particularly important when interventions are delivered by third parties or across a wide range of locations, or in any other circumstances that leave the psychologist with limited control over the delivery of the intervention (Cox et al., 2007). Process evaluation data at this level may come from administrative records of participant attendance during intervention delivery and from audits of adherence to the planned intervention activities, such as researchers' observations (Nielsen & Randall, 2013; Rumbold, Fletcher, & Daniels, 2018). These data can then be used to avoid drawing the incorrect conclusion that an intervention is ineffective (i.e., theory failure) when implementation failure undermined its impact: this is a Type III error (Cook & Campbell, 1979; Dobson & Cook, 1980). Low levels of intervention fidelity (i.e., large differences between the intervention as delivered and the intervention as planned) and low exposure are likely to be symptoms of other problems with intervention processes that also need to be resolved if the intervention is to have a chance of success (von Thiele Schwarz, Lundmark, & Hasson, 2016). Process evaluation can be used to identify the reasons for low fidelity (e.g., low levels of management support for the intervention, or a lack of knowledge, skills, or confidence among those involved in its design and implementation). This information can then be used to resolve these issues before intervention effects are undermined (see Methodological and Practical Implications section).