Preparing Studies for Statistical Analysis
Published in Lynne M. Bianchi, Research during Medical Residency, 2022
Luke J. Rosielle, Lynne M. Bianchi
Effect sizes are a critical factor in assessing the clinical significance of a study. An intervention may show a statistically significant effect, but that effect, as indicated by the effect size, might be so small that it is clinically insignificant. In clinical research, the effect size should represent a clinically meaningful difference. For example, a treatment that reduces the length of hospital stay by 30 minutes is unlikely to be considered clinically meaningful, even if it is statistically significant; a treatment that reduces the length of stay by 30 hours may be. It is up to the investigators and those to whom the results are communicated to decide whether an effect size is sufficiently large to warrant a change in policy or procedure. Box 7.2 includes some standard guidelines for “small,” “medium,” and “large” effects.
Common Statistical Issues in Ophthalmic Research
Published in Ching-Yu Cheng, Tien Yin Wong, Ophthalmic Epidemiology, 2022
The chance of making a type II error is called beta, and it depends on the effect size and the sample size. The effect size is the value used to indicate the difference between groups. For example, in a study of glaucoma treatments with a primary outcome of intraocular pressure, the effect size may be the mean difference in intraocular pressure between treated and untreated patients at 12 months after randomization. A trial comparing treatments for age-related macular degeneration may count the number of patients who gain vision, and the effect size may be the odds ratio comparing the odds of sight gain in the treated group versus the untreated group. The sample size is the number of patients (or eyes, depending on the unit of analysis) in the analysis. Because the P-value depends on sample size, even very small effect sizes may be declared statistically significant when analyzing large datasets; conversely, in a very small dataset, statistical significance is unlikely to be achieved even where there is a large effect size. Some effect sizes matter clinically, while others do not: a difference between groups in intraocular pressure of 1 mmHg may not matter clinically, whilst one of 10 mmHg may. It is therefore important to clearly distinguish statistical significance from clinical significance, and yet in ophthalmic research this is not always done. The mantra is that statistical significance is not the same as clinical significance.
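To make the sample-size dependence concrete, here is a minimal simulation sketch (not from the chapter). It assumes an illustrative trial with a mean intraocular pressure of 15 mmHg, a true treatment effect of 1 mmHg, and an SD of 4 mmHg — all made-up numbers — and shows how the same small effect drifts toward statistical significance as the number of patients per arm grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

def simulated_iop_trial(n_per_group, mean_diff=1.0, sd=4.0):
    """Simulate one two-arm IOP trial and return the two-sample t-test P-value."""
    treated = rng.normal(15.0 - mean_diff, sd, n_per_group)  # treated arm: lower mean IOP
    untreated = rng.normal(15.0, sd, n_per_group)
    return stats.ttest_ind(treated, untreated).pvalue

for n in (20, 200, 2000):
    print(f"n per group = {n:5d}, P = {simulated_iop_trial(n):.4f}")
```

Under these assumptions the P-value typically moves from non-significant at 20 patients per arm to highly significant at 2,000, even though the underlying 1 mmHg difference — the one that “may not matter clinically” — never changes.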
How Much Data Is Enough?
Published in Mitchell G. Maltenfort, Camilo Restrepo, Antonia F. Chen, Statistical Reasoning for Surgeons, 2020
Mitchell G. Maltenfort, Camilo Restrepo, Antonia F. Chen
A statistician named Jacob Cohen suggested that, in the absence of relevant data, each test could use a pre-defined “small,” “medium,” or “large” effect size [24, 25]. This is controversial because it is a cookbook approach that does not consider the actual data; it can be a useful way to jump-start a study, but the researcher should consider whether plausible and meaningful measurable effects are consistent with Cohen’s suggested effect sizes. There is also a surprising argument for the use of Cohen’s “medium” effect size of 0.5 for (mean difference)/SD: a review of patient-reported outcomes [26] found that the minimally important difference divided by the standard deviation tended to be about 0.495 per test (with an SD of 0.15 across tests); the authors suggested an explanation might be that half a standard deviation is the lower limit of human discrimination. Cohen stated [24] that he selected 0.5 as a “medium” effect size because it was “likely to be visible to the naked eye of a careful observer.”
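The (mean difference)/SD statistic discussed here is Cohen’s d. As a minimal sketch — not from the book, and with made-up outcome scores — it can be computed with a pooled standard deviation like this:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a)
                  + (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

# Hypothetical outcome scores, for illustration only
treated = [72, 75, 78, 80, 74, 77]
control = [68, 70, 73, 71, 69, 74]
print(f"d = {cohens_d(treated, control):.2f}")  # compare to Cohen's 0.2 / 0.5 / 0.8 benchmarks
```

A result near 0.5 would sit at Cohen’s “medium” benchmark; the hypothetical data above are chosen only to show the mechanics, not to represent a realistic effect.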
The impact of social influence on perceived usefulness and behavioral intentions in the usage of non-pharmaceutical interventions (NPIs)
Published in International Journal of Healthcare Management, 2023
Matti J. Haverila, Caitlin McLaughlin, Kai Haverila
Extant research has indicated that statistical significance is not enough when reporting results and that effect size should also be reported [56,57]. The effect size may, in fact, be the most important finding in a statistical analysis because, with a sufficiently large sample size, statistical testing can find significant differences that are meaningless in practice. For that reason, reporting p-values alone is insufficient [58]. Furthermore, the effect size is not influenced by sample size and is therefore comparable across different research papers [59]. Previous literature denotes that values of 0.02, 0.15, and 0.35 indicate that an exogenous construct has a small, medium, or large effect size, respectively [68]. In addition to the path coefficients and effect sizes, the total and indirect effects were also examined (Table 7).
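The 0.02/0.15/0.35 benchmarks cited here are Cohen’s conventions for the f² statistic commonly reported in PLS-SEM, where f² measures how much the explained variance R² of an endogenous construct drops when one exogenous construct is removed from the model. A minimal sketch, with hypothetical R² values (the paper’s actual values are not shown in this excerpt):

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f² for a construct: the change in R² when the construct is dropped,
    scaled by the variance left unexplained by the full model."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Hypothetical R² values, for illustration only
f2 = f_squared(r2_included=0.55, r2_excluded=0.48)
print(f"f² = {f2:.3f}")  # ~0.156: a medium effect by the 0.02 / 0.15 / 0.35 convention
```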
The effects of psychosocial factors on occupational accidents: a cross-sectional study in the manufacturing industry in Turkey
Published in International Journal of Occupational Safety and Ergonomics, 2022
Süleyman Kocatepe, Zeki Parlak
Positive test statistic values indicate that the measured value is higher than the expected value. The hypothesis test results tell us whether there is a statistically significant difference between the mean ranks of the two groups, but they do not give information about the size of this difference. The effect size (r) supplies this information: it gives us an objective measure of the magnitude of the effect [38]. The effect size is a statistical value that should be considered in the interpretation of the results, and calculating and interpreting effect size values alongside hypothesis tests increases the intelligibility of the results. When a hypothesis test shows a statistically significant difference, the effect size is also checked for reporting. The effect size [39] is found with the formula r = Z/√n, where n = number of samples (participants) and Z = the standardized test statistic in the SPSS analysis output. This formula was used to find the effect size for KFAK1.
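As a minimal sketch of that formula (the Z and n values below are hypothetical, not the study’s KFAK1 figures):

```python
import math

def effect_size_r(z, n):
    """Effect size for a rank-based test: r = Z / sqrt(n)."""
    return abs(z) / math.sqrt(n)

# Hypothetical values, for illustration only: Z from the SPSS test output,
# n = total number of participants in the comparison
r = effect_size_r(z=-2.80, n=196)
print(f"r = {r:.2f}")  # 0.20: a small-to-medium effect by Cohen's benchmarks
```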
Responsiveness, minimal detectable change, and minimal clinically important difference of the sitting balance scale and function in sitting test in people with stroke
Published in Physiotherapy Theory and Practice, 2022
Jehad Alzyoud, Ann Medley, Mary Thompson, Linda Csiza
Cohen’s criteria define a large effect size as 0.8 (Cohen, 1988). The effect size and standardized response mean values in this study (1.11 to 2.29) exceeded this value; we therefore interpreted responsiveness using Sawilowsky’s expanded criteria (Sawilowsky, 2009). The FIST demonstrated a large to very large effect size (1.11) and a very large to huge standardized response mean (1.49). Gorman, Harro, Platko, and Greenwald (2014) found the FIST less responsive in their sample (a large to very large effect size of 0.83 and a standardized response mean of 1.04). Given that the FIST was designed as a stroke-specific sitting balance measure, these values suggest that the FIST is more sensitive to change when used exclusively in people with stroke, as in our study. Furthermore, applying Sawilowsky’s criteria makes a clear distinction between the SBS and the FIST in our study: the internal responsiveness of the SBS was very large to huge by both the effect size and the standardized response mean methods. These results imply that the SBS is more responsive than the FIST when used in people with stroke receiving rehabilitation in skilled nursing facilities.
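The excerpt does not show how these statistics were computed, but assuming the common definitions — effect size as mean change divided by the SD of the baseline scores, and standardized response mean (SRM) as mean change divided by the SD of the change scores — a minimal sketch with made-up admission and discharge scores looks like this:

```python
import statistics

def responsiveness(baseline, followup):
    """Effect size = mean change / SD of baseline scores;
    standardized response mean (SRM) = mean change / SD of change scores."""
    change = [f - b for b, f in zip(baseline, followup)]
    es = statistics.mean(change) / statistics.stdev(baseline)
    srm = statistics.mean(change) / statistics.stdev(change)
    return es, srm

# Hypothetical admission/discharge balance scores, for illustration only
baseline = [28, 35, 31, 40, 26, 33]
followup = [44, 43, 41, 60, 31, 47]
es, srm = responsiveness(baseline, followup)
print(f"ES = {es:.2f}, SRM = {srm:.2f}")  # judge against Sawilowsky's expanded benchmarks
```

Under Sawilowsky’s expanded benchmarks, values above 1.2 would be “very large” and values above 2.0 “huge,” matching the ranges reported in the study.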