Inference on Proportions
Published in Marcello Pagano, Kimberlee Gauvreau, Heather Mattie, Principles of Biostatistics, 2022
When designing a study, investigators often wish to determine the sample size necessary to provide a specified power for the hypothesis test they plan to conduct. Recall that the power of a test is the probability of rejecting the null hypothesis when it is false. When dealing with proportions, power calculations are a little more complex than they were for tests based on means; however, the reasoning is quite similar.
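The sample-size reasoning described above can be sketched with the usual normal-approximation formula for a one-sample test of a proportion: the required n balances the critical value for the significance level against the quantile for the desired power. This is a generic illustration, not the book's own code, and the values of p0, p1, α, and power below are made-up examples.

```python
import math
from scipy.stats import norm

def sample_size_one_proportion(p0, p1, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-sided test of
    H0: p = p0 against the specific alternative p = p1."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value at significance level alpha
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    numerator = (z_alpha * math.sqrt(p0 * (1 - p0))
                 + z_beta * math.sqrt(p1 * (1 - p1)))
    # Round up: sample sizes must be whole numbers
    return math.ceil((numerator / (p1 - p0)) ** 2)

# Example: detect a shift from p0 = 0.50 to p1 = 0.60 with 80% power at alpha = 0.05
print(sample_size_one_proportion(0.50, 0.60))  # 194
```

As the formula suggests, demanding higher power or trying to detect a smaller difference p1 − p0 both drive the required sample size up.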
Knowledge Area 2: Teaching and Research
Published in Rekha Wuntakal, Ziena Abdullah, Tony Hollingworth, Get Through MRCOG Part 1, 2020
With regard to hypothesis testing, which of the following is correct?
A. Null hypothesis specifies a hypothesized real value for a parameter
B. Type I error occurs when the null hypothesis is not rejected when it is false
C. Type II error occurs when the null hypothesis is rejected when it is true
D. The power of the test is the probability of accepting the null hypothesis when it is false
E. An alternative hypothesis specifies a real value for a parameter which will be considered when the null hypothesis is not rejected
Statistics, Research and Governance
Published in Manit Arya, Taimur T. Shah, Jas S. Kalsi, Herman S. Fernando, Iqbal S. Shergill, Asif Muneer, Hashim U. Ahmed, MCQs for the FRCS(Urol) and Postgraduate Urology Examinations, 2020
Hamid Abboudi, Erik Mayer, Justin Vale
A type I error (or false positive error) is said to have occurred when a null hypothesis that is true is inappropriately rejected. The probability of making a type I error is equal to the chosen significance level (α); this should not be confused with the p-value computed from the data. Conversely, when a null hypothesis that is false fails to be rejected, a type II error (or false negative error) is said to have occurred. The probability of not making a type II error (1 − β) equals the power of the test. Publication bias is not related to hypothesis testing and statistical error.
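The link between the significance level and the type I error rate can be checked by simulation: repeatedly testing a true null hypothesis should lead to rejection in roughly α of the runs. The sketch below uses a simple z-test with known variance; the sample size and replication count are illustrative choices.

```python
import random
from statistics import mean

random.seed(1)
Z_CRIT = 1.959964  # two-sided 5% critical value of the standard normal
n, sims, rejections = 30, 20_000, 0

for _ in range(sims):
    # Draw from N(0, 1), so the null hypothesis H0: mu = 0 is TRUE in every run
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = mean(sample) / (1.0 / n ** 0.5)   # z statistic with known sigma = 1
    if abs(z) > Z_CRIT:                   # type I error: rejecting a true H0
        rejections += 1

print(rejections / sims)  # close to the nominal 0.05
```

The empirical rejection rate fluctuates around 0.05 from seed to seed, which is exactly what "the probability of a type I error equals the significance level" means in the long run.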
Multivariate nonparametric methods in two-way balanced designs: performances and limitations in small samples
Published in Journal of Applied Statistics, 2022
Fabrizio Ronchi, Solomon W. Harrar, Luigi Salmaso
Starting from the objective of an experiment, the practitioner should define the maximum acceptable type I and type II error rates and plan the experiment accordingly. The expected power of a test should therefore be taken into account from the design phase of the experiment onward. The power of common parametric tests has been widely investigated, whereas nonparametric tests have received far less attention in the literature. The performance of the methods compared in this study has been assessed in recent publications with respect to the nominal α level [2,7,9,11,13,20,25] and to the power of the test [2,4,7,9,13,25]. Nevertheless, to the best of our knowledge, there is no comparison of the power of the aforementioned nonparametric tests under the same conditions. In fact, the simulation designs in previous studies vary significantly, so a fair comparison between the methods based on existing results is not possible. Furthermore, the possible impact that the level of a non-investigated factor could have on the power of the test on a main factor has not been considered for all the methods in two-way designs.
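Since closed-form power expressions are rarely available for nonparametric tests, power is typically estimated by Monte Carlo simulation, as in this sketch comparing the two-sample t test with the Wilcoxon–Mann–Whitney test under a normal location shift. The group sizes, shift, and replication count are illustrative choices, not those used in the paper.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(42)
n, shift, sims, alpha = 30, 0.8, 2000, 0.05
reject_t = reject_w = 0

for _ in range(sims):
    x = rng.normal(0.0, 1.0, n)      # control group
    y = rng.normal(shift, 1.0, n)    # shifted group: the alternative is true
    if ttest_ind(x, y).pvalue < alpha:
        reject_t += 1
    if mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        reject_w += 1

print(f"estimated power, t test:        {reject_t / sims:.3f}")
print(f"estimated power, Wilcoxon test: {reject_w / sims:.3f}")
```

Under normal data the t test is the benchmark, and the Wilcoxon test typically achieves slightly lower but comparable power; repeating the simulation under heavier-tailed distributions would show the ranking reverse, which is the kind of comparison the authors argue is missing under a common design.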
Significance test for linear regression: how to test without P-values?
Published in Journal of Applied Statistics, 2021
Paravee Maneejuk, Woraphon Yamaka
Considering the cases corresponding to null hypotheses (4)–(6), there is only a small variation in the probability values across all methods. When the sample size is greater than 10, all methods show evidence supporting the alternative hypothesis. However, with a small sample size, say N = 10, there are a number of times that our testing methods lead to misinterpretation. Among the 1,000 simulated datasets, the results favor the p-value and plausibility approaches, except for N = 10. This indicates that the power of any test depends on the sample size: if the sample size is large enough, the test will be more reliable, especially when the null hypothesis is false.
Effects of guided mindfulness meditation on anxiety and stress in a pre-healthcare college student population: a pilot study
Published in Journal of American College Health, 2020
Matthew S. Burgstahler, Mary C. Stenson
The study was a repeated measures experimental design. Statistical Package for the Social Sciences (SPSS) version 24 was used to analyze the data. We measured stress (PSS), anxiety (STAI state and trait), mindfulness (FFMQ), and heart rate variability (HRV) before and after 8 weeks of a mindfulness meditation intervention in 33 pre-health college students. We used paired samples t tests to examine pre- and post-test differences and computed Cohen's d for effect size. The effect sizes in this study were considered to be medium to large using Cohen's criteria.25 Statistical power for each test was calculated using G*Power v.3.1 by entering the effect size for each test, n, and alpha (0.05).26 The results for power (1 − β) ranged from .654 (d = .423) to .994 (d = .804). Our sample size of 33 is more than adequate for the main objective of this study.
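Post hoc power figures like those quoted above can be approximated directly from the noncentral t distribution: for a paired t test the noncentrality parameter is d√n, and power is the probability that the test statistic falls beyond the critical value. This is a sketch assuming a two-sided test at α = 0.05, with d and n taken from the excerpt; it is not the authors' G*Power workflow itself.

```python
from scipy.stats import t, nct

def paired_t_power(d, n, alpha=0.05):
    """Two-sided paired t-test power via the noncentral t distribution."""
    df = n - 1
    t_crit = t.ppf(1 - alpha / 2, df)   # two-sided critical value
    ncp = d * n ** 0.5                  # noncentrality parameter for effect size d
    # Under H1 the statistic follows a noncentral t; reject when |T| > t_crit
    return 1 - nct.cdf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

for d in (0.423, 0.804):
    print(f"d = {d}: power = {paired_t_power(d, 33):.3f}")
```

With n = 33, the computed powers land close to the reported .654 and .994, since G*Power's paired t-test routine is also based on the noncentral t distribution.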