Let’s Find Out
Published in S. Kanimozhi Suguna, M. Dhivya, Sara Paiva, Artificial Intelligence (AI), 2021
Jayden Khakurel, Indu Manimaran, Jari Porras
Quantitative data collected from the two sessions were analyzed using the statistical data analysis language R and the descriptive statistical analysis functions available in R core (R Core Team 2017) and the psych library (Revelle 2017). We first used the Mann–Whitney U test (Wohlin et al. 2012) to analyze the difference in distributions between the data sets. A continuity correction was enabled to compensate for non-continuous variables (Bergmann and Ludbrook 2000). The Bonferroni correction was used to adjust the p-value to compensate for the family-wise error rate in multiple comparisons (Abdi 2007). We calculated the effect size r for the Mann–Whitney U test following the guidelines of Tofan et al. (2016) and interpreted it using the thresholds proposed by Cohen (1994): for r, a large effect is 0.5, a medium effect is 0.3, and a small effect is 0.1.
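The pipeline described above can be sketched as follows. This is not the authors' R code but a Python stand-in using SciPy: a Mann–Whitney U test with continuity correction, a Bonferroni-adjusted p-value, and the effect size r obtained as Z/√N from the normal approximation of U (tie correction omitted for brevity). The session scores and the number of comparisons are hypothetical.

```python
# Sketch (assumptions: hypothetical data, 3-comparison family) of the analysis:
# Mann-Whitney U with continuity correction, Bonferroni adjustment, r = Z / sqrt(N).
import math
from scipy import stats

def mann_whitney_with_effect_size(x, y, n_comparisons=1):
    n1, n2 = len(x), len(y)
    u, p = stats.mannwhitneyu(x, y, use_continuity=True, alternative="two-sided")
    # Normal approximation of U to obtain Z, then effect size r = |Z| / sqrt(N)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    r = abs(z) / math.sqrt(n1 + n2)
    p_adj = min(1.0, p * n_comparisons)  # Bonferroni: scale p by the family size
    return {"U": u, "p": p, "p_bonferroni": p_adj, "r": r}

# Hypothetical per-participant scores from two sessions
session_a = [3, 4, 4, 5, 5, 5, 6, 7]
session_b = [1, 2, 2, 3, 3, 4, 4, 5]
res = mann_whitney_with_effect_size(session_a, session_b, n_comparisons=3)
print(res)
```

The Bonferroni adjustment here multiplies each raw p-value by the number of tests in the family, which is equivalent to testing each hypothesis at α divided by the family size.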
The Need of External Validation for Metabolomics Predictive Models
Published in Raquel Cumeras, Xavier Correig, Volatile organic compound analysis in biomedical diagnosis applications, 2018
Raquel Rodríguez-Pérez, Marta Padilla, Santiago Marco
Applying the same statistical test to each of the metabolites in a data set would result in a high number of false positives (also known as false discoveries or type I errors). Therefore, to reduce the probability of this type of error, methods for multiple testing have been developed. These methods apply hypothesis testing considering the whole set (or a subset) of the measured metabolites, i.e., the whole set of tests or hypotheses. For this, several strategies have been followed, such as controlling the familywise error rate (FWER) or the false discovery rate (FDR) (Benjamini and Hochberg, 1995). FWER is the probability of having at least one false positive among all the tests, whereas FDR is the expected proportion of false positives among all the significant tests. The latter was conceived especially for data sets with a large number of tests and a small sample size, as is the case in genomics. Bonferroni correction (Bland and Altman, 1995) is a method that follows the FWER strategy, while the p-value step-up method proposed by Benjamini and Hochberg (Benjamini and Hochberg, 1995) controls FDR. Moreover, other quantities derived from the mentioned strategies, such as the q-value (Storey, 2003) or the local FDR (LFDR) (Efron et al., 2001), can be used analogously to the p-value for multiple comparisons, even for data sets with a small number of tests (Bickel, 2013; Padilla and Bickel, 2012).
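The contrast between the two strategies can be seen on a small example. The sketch below, using the statsmodels `multipletests` helper on a hypothetical vector of metabolite p-values, applies the Bonferroni (FWER) and Benjamini–Hochberg (FDR) procedures to the same inputs; BH typically rejects more hypotheses because it controls the less stringent criterion.

```python
# Sketch: FWER control (Bonferroni) vs. FDR control (Benjamini-Hochberg)
# on a hypothetical vector of metabolite p-values at alpha = 0.05.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.008, 0.012, 0.041, 0.049, 0.201, 0.440, 0.730]

rej_bonf, p_bonf, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
rej_bh, p_bh, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

print("Bonferroni rejections:", rej_bonf.sum())  # conservative: bounds P(any false positive)
print("BH rejections:", rej_bh.sum())            # bounds expected share of false discoveries
```

On these inputs Bonferroni keeps only p-values below 0.05/8 = 0.00625, while the BH step-up rule walks down the sorted p-values comparing each p(i) against i·α/m.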
General Linear Modeling of Magnetoencephalography Data
Published in Hualou Liang, Joseph D. Bronzino, Donald R. Peterson, Biosignal Processing, 2012
Dimitrios Pantazis, Juan Luis Poletti Soto, Richard M. Leahy
Thresholding statistical maps should control some measure of the false-positive rate that takes into account the multiple hypothesis tests. Several measures of false positives have been proposed, the most popular of which is the familywise error rate (FWER), that is, the probability of making at least one false positive under the null hypothesis that there is no experimental effect. The Bonferroni method and two approaches based on the maximum statistic distribution, RFT and permutation tests, control the FWER. Another measure that is becoming increasingly popular is FDR, which controls the expected proportion of errors among the rejected hypotheses. Other measures of false positives exist, such as the positive false discovery rate, false discovery rate confidence, and per-family error rate confidence (Nichols and Hayasaka, 2003), but they are not as common and are not covered in this chapter.
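The maximum-statistic permutation idea mentioned above can be sketched in a few lines: the FWER-controlling threshold is the (1 − α) quantile of the distribution of the maximum statistic over the whole map under resampling of the null. The sketch below uses hypothetical sensor data and subject-level sign-flipping (a common choice for paired/one-sample designs); it is an illustration of the principle, not the chapter's implementation.

```python
# Sketch: FWER control via the maximum-statistic permutation distribution.
# A sensor is declared significant only if its |t| exceeds the 95th percentile
# of the map-wide maximum |t| under sign-flipping permutations.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sensors = 20, 50
data = rng.normal(0.0, 1.0, size=(n_subjects, n_sensors))  # hypothetical effect maps

def t_map(x):
    # One-sample t-statistic at every sensor
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

observed = t_map(data)
max_null = []
for _ in range(1000):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))  # flip each subject's map
    max_null.append(np.abs(t_map(signs * data)).max())

threshold = np.quantile(max_null, 0.95)   # FWER-controlling threshold, alpha = 0.05
significant = np.abs(observed) > threshold
print(threshold, significant.sum())
```

Because the threshold is taken from the maximum over all sensors, the probability that any null sensor crosses it is at most α, which is exactly the FWER guarantee.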
Bilateral deficit in strength but not rapid force during maximal handgrip contractions
Published in European Journal of Sport Science, 2021
Joshua C. Carr, Michael G. Bemben, Christopher D. Black, Xin Ye, Jason M. Defreitas
All data are presented as means ± standard deviations. The mean bilateral indices (%) were compared against zero with a one-sample t-test for both hands. An independent samples t-test was used to compare the bilateral indices between males and females. The bilateral indices for the dependent variables were of primary interest as the indices are normalized and ignore sex-based differences in absolute force and EMG values. The Shapiro–Wilk test was used to assess normality. If the data were not normally distributed, non-parametric tests (Mann–Whitney and Wilcoxon Signed-Rank) were used. Cohen’s d values and 95% confidence intervals (CI) are reported for mean comparisons. Statistical analyses were performed with SPSS software (version 26.0, IBM SPSS Inc., Chicago, IL, USA). Alpha was set at 0.05. To control the familywise error rate, Bonferroni corrections were applied based on the total number of comparisons. The bilateral index (%) was computed for each dependent variable as the ratio of the bilateral value to the sum of the two unilateral values, expressed as a percentage deviation from 100% (Howard & Enoka, 1991); positive and negative values reflect bilateral facilitation and a bilateral deficit, respectively.
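A minimal sketch of this analysis, as a Python stand-in for the SPSS workflow, is shown below. The bilateral index follows the standard Howard & Enoka (1991) form, BI (%) = 100 × BL / (UL_left + UL_right) − 100; the force values and the size of the comparison family are hypothetical.

```python
# Sketch (hypothetical data): bilateral index, one-sample t-test against zero,
# one-sample Cohen's d, and a Bonferroni-adjusted p-value.
import numpy as np
from scipy import stats

# Hypothetical handgrip forces (N): bilateral and unilateral conditions
bilateral = np.array([820., 790., 760., 845., 805., 775., 830., 760.])
left_uni  = np.array([420., 405., 400., 430., 415., 400., 425., 395.])
right_uni = np.array([440., 425., 410., 450., 430., 420., 445., 410.])

# Bilateral index: negative values indicate a bilateral deficit
bi = 100.0 * bilateral / (left_uni + right_uni) - 100.0

# One-sample t-test of the mean bilateral index against zero
t, p = stats.ttest_1samp(bi, 0.0)

# Cohen's d for a one-sample comparison against zero
d = bi.mean() / bi.std(ddof=1)

# Bonferroni correction across a hypothetical family of 4 comparisons
p_adj = min(1.0, p * 4)
print(round(bi.mean(), 2), round(d, 2), p, p_adj)
```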
A two-stage online monitoring procedure for high-dimensional data streams
Published in Journal of Quality Technology, 2019
When testing a single hypothesis, the Type-I error rate is simply the probability of a Type-I error. When testing the multiple hypotheses in Eq. [1], there is a Type-I error associated with each individual test. Therefore, there are many ways to define the overall Type-I error rate when testing those multiple hypotheses simultaneously. Besides the FDR, some of the standard Type-I error rates used in the multiple hypothesis testing literature are the following (see, for example, Dudoit et al. 2003): the per-comparison error rate (PCER), defined as the expected number of Type-I errors divided by the number of hypotheses, and the familywise error rate (FWER), defined as the probability of at least one Type-I error.
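The two definitions can be made concrete by simulation. Under the global null with m independent tests at level α, the PCER equals α while the FWER grows to 1 − (1 − α)^m; the sketch below (hypothetical m and α) estimates both by Monte Carlo.

```python
# Sketch: estimating PCER and FWER under the global null for m independent
# tests at per-test level alpha. Null p-values are uniform on [0, 1].
import numpy as np

rng = np.random.default_rng(1)
m, alpha, n_rep = 10, 0.05, 20000

p = rng.uniform(size=(n_rep, m))
errors = p < alpha                        # a Type-I error on each rejecting test

pcer = errors.sum(axis=1).mean() / m      # expected #errors / #hypotheses
fwer = errors.any(axis=1).mean()          # P(at least one Type-I error)
print(pcer, fwer, 1 - (1 - alpha) ** m)   # analytic FWER for independent tests
```

With m = 10 and α = 0.05 the analytic FWER is 1 − 0.95¹⁰ ≈ 0.40, far above the per-comparison rate of 0.05, which is why uncorrected testing of many hypotheses is unreliable.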
A note on Group Selection with multiple quality characteristics: power comparison of two methods
Published in International Journal of Production Research, 2019
W.L. Pearn, Chen-ju Lin, Y.H. Chen, J.Y. Huang
The Modified Bonferroni method (Lin et al. 2017) selects a subset of processes containing the best among k processes, k > 2. The suppliers associated with the selected processes are identified as the best suppliers. Denote Ci as the index of the ith process, i = 1, …, k. The process that has the maximum index value among the k processes is the best, whereas the other processes are inferior. The actual number of best processes, kB, ranges from 1 to k. Denote Ĉi as the estimated index of the ith process, i = 1, …, k. Let Ĉ(1) ≤ Ĉ(2) ≤ … ≤ Ĉ(k) be the ordered estimated indices. The Modified Bonferroni method selects process i into the subset if the subtraction statistic Wi = Ĉ(k) − Ĉi is less than the critical value cα, cα > 0, i = 1, …, k. The method applies the Bonferroni adjustment to modify the significance level of the (k − 1) tests, so that the familywise error rate (FWER) is controlled below α.
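The selection rule itself reduces to a simple gap comparison, sketched below. This is a heavily simplified illustration: the index estimates are hypothetical, and the critical value cα is treated as a given constant, whereas in Lin et al. (2017) it is derived from the sampling distribution of the estimated indices at the Bonferroni-adjusted level α/(k − 1).

```python
# Simplified sketch of the Modified Bonferroni selection rule: keep process i
# if its gap W_i = max_j(estimate_j) - estimate_i is below the critical value.
def modified_bonferroni_select(index_estimates, c_alpha):
    """Return the indices of processes selected into the 'best' subset."""
    best = max(index_estimates)
    return [i for i, est in enumerate(index_estimates)
            if best - est < c_alpha]

# Hypothetical estimated capability indices for k = 4 processes,
# with a placeholder critical value c_alpha
estimates = [1.33, 1.45, 1.02, 1.41]
selected = modified_bonferroni_select(estimates, c_alpha=0.10)
print(selected)
```

Processes whose estimated index sits within cα of the largest estimate survive, so the subset always contains the empirically best process and any close competitors.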