Re-examination of Traditional Statistics
Published in Chong Ho Alex Yu, Data Mining and Exploration, 2022
To remediate this problem, researchers are encouraged to conduct power analyses in order to determine proper sample sizes. Usually the power level is set to .8 and the alpha level is fixed at .05. While the directionality (one-tailed vs. two-tailed) is driven by the research questions and hypotheses, the key determining factor is the effect size. Ideally, effect sizes should be derived from prior research (Wilkinson and Task Force 1999), but in practice most researchers simply adopt the conventional values (small, medium, and large) suggested by Jacob Cohen. It is important to point out that Cohen defined .50 as the medium effect size (in terms of d) because it was close to the average observed effect size in his review of the literature published in the Journal of Abnormal and Social Psychology during the 1960s. The so-called small, medium, and large effect sizes are therefore specific to a particular domain (abnormal and social psychology) and should not be treated as a universal guideline (Aguinis and Harden 2009). Because different subject matters might yield different effect sizes, Welkowitz et al. (1982) explicitly stated that one should not use the conventional values if one can specify an effect size appropriate to the specific problem.
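The a priori power analysis described above (power = .8, alpha = .05, two-tailed, medium effect) can be sketched numerically. The excerpt does not name a tool, so the following is a minimal illustration using SciPy's noncentral t distribution for an independent-samples t-test; the search loop and function name are this sketch's own.

```python
from scipy import stats

def power_two_sample_t(n_per_group, d=0.5, alpha=0.05):
    # Power of a two-tailed independent-samples t-test, computed from
    # the noncentral t distribution.
    df = 2 * n_per_group - 2
    nc = d * (n_per_group / 2) ** 0.5        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-tailed critical value
    return (1 - stats.nct.cdf(t_crit, df, nc)
            + stats.nct.cdf(-t_crit, df, nc))

# Smallest n per group reaching 80% power for a medium effect (d = .5).
n = 2
while power_two_sample_t(n) < 0.80:
    n += 1
print(n)  # 64 per group, matching the conventional power tables
```

This reproduces the familiar result that detecting a medium standardized difference at the conventional .8/.05 levels requires about 64 participants per group.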
Let’s Find Out
Published in S. Kanimozhi Suguna, M. Dhivya, Sara Paiva, Artificial Intelligence (AI), 2021
Jayden Khakurel, Indu Manimaran, Jari Porras
Quantitative data collected from the two sessions were analyzed using the statistical data analysis language R, with the descriptive statistical analysis functions available in the R core (R Core Team 2017) and the psych library (Revelle 2017). We first used the Mann–Whitney U test (Wohlin et al. 2012) to analyze the difference in distributions between the data sets; a continuity correction was enabled to compensate for non-continuous variables (Bergmann and Ludbrook 2000). The Bonferroni correction was used to adjust the p-values to control the family-wise error rate in multiple comparisons (Abdi 2007). We calculated the effect size r for the Mann–Whitney U test using the guidelines by Tofan et al. (2016), and interpreted it against the thresholds proposed by Cohen (1994): for r, a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1.
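The pipeline described above — Mann–Whitney U with continuity correction, Bonferroni adjustment, and the effect size r = |Z|/√N — can be sketched as follows. The authors worked in R; this Python version with invented scores and a hypothetical family of three comparisons is only an illustration of the same steps.

```python
import numpy as np
from scipy import stats

group_a = [3, 4, 2, 5, 4, 3, 5]    # illustrative Likert-style scores
group_b = [2, 1, 3, 2, 2, 1, 3]

# Mann-Whitney U test with the continuity correction enabled.
u, p = stats.mannwhitneyu(group_a, group_b, alternative='two-sided',
                          use_continuity=True)

# Bonferroni: multiply each raw p-value by the number of comparisons.
n_comparisons = 3                   # hypothetical family size
p_adjusted = min(p * n_comparisons, 1.0)

# Effect size r from the normal approximation of U (tie correction
# omitted here for brevity).
n1, n2 = len(group_a), len(group_b)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu_u) / sigma_u
r = abs(z) / np.sqrt(n1 + n2)       # 0.1 small, 0.3 medium, 0.5 large
print(f"U={u}, adjusted p={p_adjusted:.3f}, r={r:.2f}")
```

With these made-up data the effect lands above Cohen's 0.5 threshold, i.e. "large" under the interpretation the excerpt cites.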
AN APPLICATION OF META-ANALYSIS TO STUDIES OF RULE VIOLATION
Published in Paul T. McCabe, Contemporary Ergonomics 2004, 2018
Gary Munley, Joyce Lindsay, Elaine Ridsdale
Meta-analysis was developed as a method of research synthesis in the 1970s in order to extract meaningful conclusions in research areas where individual primary studies appeared to produce contradictory or inconsistent findings (Lipsey & Wilson, 2001). It provides a method for quantitative statistical review of a body of literature in order to identify the importance or size of the relationship between any two variables. The primary metric of meta-analysis is the effect size statistic. Effect size statistics represent the strength and direction of the relationship between any two variables of interest, e.g. the frequency of rule violation and gender. This report describes an attempt to derive effect size statistics for each of the VPCs suggested by Williams in the extension to HEART.
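As a concrete instance of the kind of effect size statistic pooled in such a synthesis, Cohen's d can be computed from two group summaries and converted to a correlation-type r. All numbers below are invented purely for illustration (they are not from the report).

```python
import math

# Hypothetical group summaries: violation frequency by gender.
mean_m, sd_m, n_m = 4.2, 1.1, 40
mean_f, sd_f, n_f = 3.5, 1.0, 45

# Pooled standard deviation across the two groups.
sp = math.sqrt(((n_m - 1) * sd_m**2 + (n_f - 1) * sd_f**2)
               / (n_m + n_f - 2))
d = (mean_m - mean_f) / sp        # sign encodes the direction
r = d / math.sqrt(d**2 + 4)       # standard d-to-r conversion
print(f"d = {d:.2f}, r = {r:.2f}")
```

The sign of d (or r) carries the direction of the relationship, and its magnitude the strength — the two properties the excerpt attributes to effect size statistics.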
Effective interventions and features for coronary heart disease: a meta-analysis
Published in Behaviour & Information Technology, 2023
Eunice Agyei, Jouko Miettunen, Harri Oinas-Kukkonen
The mean and standard deviation (SD) of changes in 11 clinical outcomes were used for the meta-analysis. The meta-analysis pooled the effect size of the interventions using a random-effects model, which takes between-study variation into account when calculating the effect size. Between-study heterogeneity was assessed using Tau², H², and I². Subgroup analysis and meta-regression were then used to identify possible sources of heterogeneity. To assess the presence of publication bias, Begg's funnel plot and Egger's test were used. Statistical analyses were conducted using SPSS 28.0.1.0 (142). Subgroup analyses were used to examine the association between persuasive systems design principles and the effect sizes pooled by the interventions. Effect sizes (measured by Cohen's d) (Cohen 1998) were classified as very small (0.01–0.19), small (0.20–0.49), and medium (0.50–0.79) based on the boundaries defined by Sawilowsky (2009).
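The random-effects pooling and the heterogeneity statistics named above (Tau², H², I²) can be sketched with the classic DerSimonian–Laird estimator. The authors used SPSS; this NumPy version with five invented study results only illustrates the arithmetic.

```python
import numpy as np

d = np.array([0.10, 0.60, 0.05, 0.75, 0.30])   # per-study Cohen's d (invented)
v = np.array([0.02, 0.03, 0.02, 0.04, 0.03])   # per-study variances (invented)

w = 1 / v                                      # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - d_fixed) ** 2)             # Cochran's Q
df = len(d) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                  # Tau^2: between-study variance
h2 = q / df                                    # H^2
i2 = max(0.0, (q - df) / q) * 100              # I^2 as a percentage

# Random-effects weights add Tau^2 to each within-study variance.
w_star = 1 / (v + tau2)
d_pooled = np.sum(w_star * d) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))
print(f"pooled d = {d_pooled:.2f} (SE {se_pooled:.2f}), I^2 = {i2:.0f}%")
```

Adding Tau² to every study's variance is exactly how the random-effects model "takes between-study variation into account": heterogeneous studies are down-weighted less unevenly than under a fixed-effect analysis.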
Improving IS Practical Significance through Effect Size Measures
Published in Journal of Computer Information Systems, 2022
Nik Thompson, Xuequn Wang, Richard Baskerville
Effect size is one useful approach to supplement null hypothesis significance testing and address these limitations.15 First, the effect size directly shows the magnitude of certain effects. For example, based upon Cohen’s16 guidelines, researchers can know whether certain results are “negligible” (around .20), “moderate” (around .50), or “important” (around .80) by using Cohen’s d. Therefore, effect size can convey the practical significance of the results. Second, effect size is independent of sample size and scale-free.21 While significance can be directly influenced by the investigators’ setting of N, effect size is not influenced in this way. Note that we do not propose that null hypothesis significance testing should be abandoned. Instead, we suggest supplementing null hypothesis significance testing with effect size data to show the practical significance of results.
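The second point — that significance is driven by the investigators' choice of N while the effect size is not — is easy to demonstrate: holding Cohen's d fixed at the "negligible" .20, the p-value of an independent t-test is determined entirely by the per-group sample size. A small sketch (not from the article):

```python
import math
from scipy import stats

d = 0.2                                  # fixed standardized mean difference
for n in (20, 200, 2000):                # per-group sample sizes
    t = d * math.sqrt(n / 2)             # t statistic implied by d and n
    p = 2 * stats.t.sf(t, 2 * n - 2)     # two-tailed p-value
    print(f"n={n:4d}  d={d}  p={p:.4f}")
```

The same negligible effect is non-significant at n = 20 per group but highly significant at n = 2000 — which is why effect size data are needed alongside p-values to convey practical significance.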
Does anodal tDCS improve basketball performance? A randomized controlled trial
Published in European Journal of Sport Science, 2022
Jitka Veldema, Arne Engelhardt, Petra Jansen
The data gathered were analysed using SPSS version 25 (International Business Machines Corporation). Independent-samples t-tests compared both conditions at baseline. Repeated-measures ANOVAs with the factors “intervention” (1 mA tDCS, sham tDCS) and “time” (pre, post) were applied to evaluate the effects of 1 mA tDCS on both shooting accuracy and ball dribbling. Additionally, a repeated-measures ANOVA with the factors “intervention” (1 mA tDCS, sham tDCS), “time” (pre, post), and “position” (P1, P2, P3, P4, P5) evaluated shooting-position-dependent effects of 1 mA tDCS in the shooting accuracy test. A p-value of ≤0.05 was considered statistically significant. Effect sizes were calculated using an effect size calculator and interpreted using Cohen's definitions (d = 0.2 “small”, d = 0.5 “medium”, d = 0.8 “large”) (Campbell, Machin, & Walters, 2007). A post-hoc power analysis for the sample size used, the effect sizes detected, and an alpha-error probability of 0.05 was conducted using G*Power 3.1 (Faul, Erdfelder, Lang, & Buchner, 2007).
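The post-hoc step — achieved power given the group sizes, the observed effect size, and alpha = .05 — was done here in G*Power; a comparable computation for an independent-samples design can be sketched with SciPy's noncentral t distribution. The function name and the example numbers below are this sketch's own, not values from the study.

```python
from scipy import stats

def achieved_power(d, n1, n2, alpha=0.05):
    # Post-hoc power of a two-tailed independent-samples t-test,
    # given the observed effect size d and the group sizes.
    df = n1 + n2 - 2
    nc = d * (n1 * n2 / (n1 + n2)) ** 0.5    # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)
            + stats.nct.cdf(-t_crit, df, nc))

# e.g. a "medium" d = 0.5 observed with 15 participants per condition
print(f"power = {achieved_power(0.5, 15, 15):.2f}")
```

With small groups, even a medium effect yields modest achieved power (here roughly a quarter), which is the kind of diagnostic such a post-hoc analysis reports.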