The Basics of Statistical Tests
Published in Mitchell G. Maltenfort, Camilo Restrepo, Antonia F. Chen, Statistical Reasoning for Surgeons, 2020
Mitchell G. Maltenfort, Camilo Restrepo, Antonia F. Chen
There is an alternative approach to p-values we will discuss here to help clarify how they represent the probability of the observed results arising from random chance when the null hypothesis is true. The permutation test can be computationally expensive and is practical only in simple study designs, but the underlying concept is straightforward: for many iterations, repeat the statistical test after scrambling the order of the outcome, so that you are performing repeated tests under the assumption that the outcome was generated purely at random. The p-value in this context is the proportion of randomized iterations that outscore the original test. For example, consider comparing the means between two groups. In the permutation test, use the difference between group means as the test statistic. If there is a systematic difference between groups, then few, if any, of the randomized datasets should show a larger difference than the original data, and the permutation p-value will be low. If no such systematic difference exists, the p-value will be much higher. If we are only interested in changes in one direction, say showing that group B minus group A is greater than 0, then we ask how often the permuted difference in means (which may be positive or negative) is greater than that in the original data. If we want a two-sided test, where we want to show that A and B differ in either direction, then we use the absolute value of the differences and ask how often the permuted difference is larger in magnitude than the original difference.
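The procedure described above can be sketched in a few lines. This is a minimal illustration, not code from the chapter; the function name and defaults are hypothetical, and it uses the simple convention of counting the proportion of permuted statistics at least as large as the observed one.

```python
import random

def permutation_test(group_a, group_b, n_iter=2000, two_sided=True, seed=0):
    """Permutation p-value for the difference in means between two groups.

    Pools the two groups, repeatedly shuffles the pooled values, and
    counts how often the permuted difference in means is at least as
    extreme as the observed one. Hypothetical helper for illustration.
    """
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)

    def mean_diff(values):
        a, b = values[:n_a], values[n_a:]
        return sum(b) / len(b) - sum(a) / len(a)

    observed = mean_diff(pooled)
    if two_sided:
        observed = abs(observed)

    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # scramble group membership
        d = mean_diff(pooled)
        if two_sided:
            d = abs(d)
        if d >= observed:
            hits += 1
    return hits / n_iter
```

With two clearly separated groups the permuted differences almost never reach the observed one, so the p-value is small; with identical groups every permutation matches or beats the observed difference of zero, so the p-value is near one.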
Meta-Regression
Published in Christopher H. Schmid, Theo Stijnen, Ian R. White, Handbook of Meta-Analysis, 2020
Julian P.T. Higgins, Jose A. López-López, Ariel M. Aloe
To carry out the test for a particular model covariate, xj, we first obtain the test statistic (such as a Wald statistic). Then, for each of the k! possible permutations of the xij values (from studies i = 1, …, k), we refit the model and re-compute the value of the test statistic. Because permuting the covariate values breaks any real link with the effect sizes, any association found in a permuted dataset must be purely a result of chance. The two-sided p-value for the permutation test is then twice the proportion of cases in which the test statistic under the permuted data is as extreme as or more extreme than under the observed data. In practice, since k! grows quickly, a randomly selected sample of the permutations is typically used. The permutation test has been found to provide adequate control of the type I error rate in a wide range of scenarios for random-effects meta-regression (Higgins and Thompson, 2004; Viechtbauer et al., 2015). The idea can be extended to sets of covariates (Higgins and Thompson, 2004).
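As a rough sketch of the permute-and-refit idea, the snippet below uses a simple linear regression as a stand-in for the meta-regression model (the actual model in the text is a random-effects meta-regression, which this does not implement), with the slope's t-type Wald statistic as the test statistic and the absolute-value convention for the two-sided p-value; all names are hypothetical.

```python
import math
import random

def wald_stat(x, y):
    """t-type Wald statistic (slope / SE) for a simple linear regression,
    standing in for the covariate test in the meta-regression model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    return beta / math.sqrt(s2 / sxx)

def permutation_p_value(x, y, n_perm=2000, seed=0):
    """Shuffle the covariate, refit, and report how often the permuted
    |Wald statistic| is as extreme as the observed one (a random sample
    of the k! permutations rather than a full enumeration)."""
    rng = random.Random(seed)
    observed = abs(wald_stat(x, y))
    xp = list(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(xp)
        if abs(wald_stat(xp, y)) >= observed:
            hits += 1
    return hits / n_perm
```

A covariate strongly associated with the outcome yields a small permutation p-value; an unrelated covariate yields a large one.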
Important Yet Unheeded
Published in Rens van de Schoot, Milica Miočević, Small Sample Size Solutions, 2020
The example of a permutation test described above tests the null hypothesis of exchangeability (see also Chapter 2 by Miočević, Levy, and Savord), which states that the correlation between the two variables can be fully explained by the random sampling (or random assignment) process that generated the data. Exchangeability can be violated, for example, if the data contain clusters, as in multilevel data (more about these later). Permutation tests are typically used to test a null hypothesis; they can also be used to construct a confidence interval, but this is more convoluted and computationally intensive. See Garthwaite (1996) for more details on permutation tests.
Predictive value of oxidative stress biomarkers in drug-free patients with bipolar disorder
Published in Nordic Journal of Psychiatry, 2022
Wassim Guidara, Meriam Messedi, Manel Naifar, Nada Charfi, Sahar Grayaa, Mohamed Maalej, Manel Maalej, Fatma Ayadi
The combination of four markers (PC + AOPP + MDA + Hcys) in the two sample types by CombiROC increased the AUC to 93.2% and 99.8%. The combination of all five markers (PC + AOPP + MDA + GSH-Px + Hcys) increased the AUC further to 94.9% and 100%, which can collectively be described as an acceptable diagnostic or discriminating marker panel (Table 5 and Figure 3). A ten-fold cross-validation (CV) provided a reliable estimate of overall panel performance for clinical validation; however, since CV can yield over-optimistic results, a permutation test was subsequently carried out. A permutation test, in simple terms, rebuilds the null distribution of a statistic by reshuffling the observed data and recomputing the statistic on each shuffled dataset. The results obtained (Table 6) suggested that the panel was acceptable, as overall accuracy, sensitivity and specificity were the least affected by the imposed permutations (Figure 4(A, B)). In the permuted models, the 'real' AUC values fell outside the reference density distribution in both groups, indicating models with high validity (Figure 4). In addition, the violin plot shows the probability density of the data for the two compared classes (training sample versus controls, validation sample versus controls), based on the previously obtained optimal cutoff on the corresponding ROC curve (Combo IV), and a pie chart shows the fractions of true and false predictions for the two classes; both are presented in the Supplementary data (Figures 5 and 6).
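The label-permutation check used here can be illustrated generically: shuffle the diagnostic labels, recompute the AUC each time, and see whether the real AUC sits outside the permuted (null) distribution. This is a minimal sketch of that idea, not the CombiROC implementation; the AUC is computed via its Mann-Whitney interpretation, and all function names are hypothetical.

```python
import random

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a randomly chosen
    positive case scores higher than a negative one (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def auc_permutation_p(scores, labels, n_perm=2000, seed=0):
    """Shuffle the case/control labels, recompute the AUC each time,
    and report how often the permuted AUC reaches the observed one."""
    rng = random.Random(seed)

    def split(lbls):
        pos = [s for s, l in zip(scores, lbls) if l == 1]
        neg = [s for s, l in zip(scores, lbls) if l == 0]
        return pos, neg

    observed = auc(*split(labels))
    lp = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(lp)
        if auc(*split(lp)) >= observed:
            hits += 1
    return observed, hits / n_perm
```

If the panel genuinely discriminates, the permuted AUCs cluster around 0.5 and rarely reach the observed value, which is what placing the real AUC outside the reference density distribution means.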
Multivariate nonparametric methods in two-way balanced designs: performances and limitations in small samples
Published in Journal of Applied Statistics, 2022
Fabrizio Ronchi, Solomon W. Harrar, Luigi Salmaso
In this study we compare the performance of several nonparametric and semi-parametric methods developed in recent years. One permutation approach is based on the NonParametric Combination (NPC) [20] applied to Synchronized Permutation (SP) tests [2,3]. Permutation tests are, in general, computationally intensive but distribution-free; this approach overcomes the shortcomings of MANOVA under the only mild condition that the data come from non-degenerate probability distributions. Other resampling methods, which do not require exchangeability under the null hypothesis, are the parametric and wild bootstrap methods of Konietschke et al. [13] and the nonparametric bootstrap methods of Pauly et al. [9]. The resampling is based on a Wald-type statistic (WTS) in Konietschke et al. and on multivariate ANOVA-type statistics (MATS) in Pauly et al. From a completely different perspective, Harrar and Bathke [10,11] developed small-sample tests based on modified versions of the Wilks' Lambda, Lawley-Hotelling and Bartlett-Nanda-Pillai MANOVA tests, hereinafter referred to as MWL, MLH and MBNP, respectively. The modifications were aimed at making the tests robust against non-normality and unequal covariance matrices. These tests assume a linear additive model and are designed to address the same hypotheses.
A non-parametric statistic for testing conditional heteroscedasticity for unobserved component models
Published in Journal of Applied Statistics, 2021
Alejandro Rodriguez, Gabriel Pino, Rodrigo Herrera
When both error terms are conditionally heteroscedastic, as in the fourth model, the power of the statistic proposed by Broto and Ruiz [15] is greater than that of the permutation and nonparametric tests. However, the empirical sizes of the Broto and Ruiz [15] statistic are clearly different from the nominal value; it is therefore not clear whether the greater power of the former reflects a real capacity to capture heteroscedasticity, or whether the test is simply oversized because it fails to mimic the distribution under the null hypothesis correctly. Meanwhile, as has already been shown, the proposed nonparametric test statistic accurately approximates the null hypothesis distribution. The permutation test statistic has power similar to that of the proposed nonparametric test statistic when larger sample sizes and lag orders are considered (Table 6).