Fundamentals I: Bayes' Theorem, Knowledge Distributions, Prediction
Published in Gary L. Rosner, Purushottam W. Laud, Wesley O. Johnson, Bayesian Thinking in Biostatistics, 2021
Bayesian statistics starts by using (knowledge-based or prior) probabilities to describe current states of knowledge. It then incorporates information through data collection. Updating the knowledge given the data, according to the rules of probability calculus, results in new (posterior) probabilities to describe your state of knowledge after combining the prior information with the data.
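To make the updating rule concrete, here is the underlying identity together with a small conjugate example; the worked example is ours (illustrative notation, not taken from the chapter).

```latex
% Bayes' theorem: the posterior is proportional to the likelihood times the prior.
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)} \;\propto\; p(y \mid \theta)\, p(\theta)

% Illustrative conjugate update: a Beta(a, b) prior for a proportion \theta,
% combined with k successes in n Bernoulli trials, gives the posterior
\theta \mid y \;\sim\; \mathrm{Beta}(a + k,\; b + n - k)
```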
Bayesian Analysis
Published in Lyle D. Broemeling, Bayesian Analysis of Infectious Diseases, 2021
The last 20 years have been characterized by the rediscovery and development of sampling techniques, such as Gibbs sampling, in which samples are generated from the posterior distribution via Markov chain Monte Carlo (MCMC) methods. Large samples from the posterior make it possible to draw statistical inferences and to employ multi-level hierarchical models to solve complex but practical problems. See Leonard and Hsu [13], Gelman et al. [14], Congdon [15–17], Carlin and Louis [18], and Gilks, Richardson, and Spiegelhalter [19], who demonstrate the utility of MCMC techniques in Bayesian statistics.
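As a minimal sketch of the Gibbs sampling idea mentioned above (our illustration in Python, not an example from the book), the sampler below draws from a bivariate normal target by alternating between the two full conditional distributions:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    The full conditionals are x | y ~ N(rho*y, 1 - rho^2) and
    y | x ~ N(rho*x, 1 - rho^2), so each iteration updates one
    coordinate given the current value of the other.
    """
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0                          # arbitrary starting values
    draws = []
    for i in range(n_samples + burn_in):
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))
        if i >= burn_in:                     # discard warm-up iterations
            draws.append((x, y))
    return np.array(draws)

samples = gibbs_bivariate_normal(rho=0.8)
print(samples.mean(axis=0))                  # close to (0, 0)
print(np.corrcoef(samples.T)[0, 1])          # close to 0.8
```

The same alternating-conditional logic is what MCMC software applies to the posterior distributions of the hierarchical models cited above.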
The Basics of Statistical Tests
Published in Mitchell G. Maltenfort, Camilo Restrepo, Antonia F. Chen, Statistical Reasoning for Surgeons, 2020
A useful relative of the p-value is the confidence interval (CI, Figure 4.2), which describes both the degree of uncertainty in an estimate and the size of the effect [21]. Figure 4.2 does not show concrete numbers because the line of “no change” would be 0 if we were looking at net differences between quantities such as length of stay or cost, but 1 if we were talking about odds ratios or hazard ratios. A CI can be drawn for any parameter – net difference, odds ratio, hazard ratio, etc. Note that Bayesian statistics, the “competitor” of frequentist statistics, does not use p-values at all but does have an analogue of the CI (the credible interval). As we would expect from the law of large numbers, larger sample sizes yield narrower CIs as well as lower p-values, reflecting better estimates, up to the point where the improvement in precision becomes vanishingly small. Knowing either the CI or the p-value, you can calculate the other [22, 23].
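To make the last point concrete, here is a small sketch (ours, using a Wald/normal approximation rather than anything from the book) of how a 95% CI and a two-sided p-value can each be recovered from the other for an estimate on an additive scale; ratio measures such as odds or hazard ratios would be handled on the log scale, where “no change” becomes log(1) = 0.

```python
from scipy import stats

def ci_from_estimate(estimate, se, level=0.95):
    """Wald-type CI from a point estimate and its standard error."""
    z_crit = stats.norm.ppf(1 - (1 - level) / 2)    # 1.96 for a 95% CI
    return estimate - z_crit * se, estimate + z_crit * se

def p_from_ci(lower, upper, null_value=0.0, level=0.95):
    """Approximate two-sided p-value back-calculated from a Wald-type CI."""
    z_crit = stats.norm.ppf(1 - (1 - level) / 2)
    estimate = (lower + upper) / 2                   # midpoint of the interval
    se = (upper - lower) / (2 * z_crit)              # implied standard error
    z = (estimate - null_value) / se
    return 2 * stats.norm.sf(abs(z))

lo, hi = ci_from_estimate(estimate=1.2, se=0.4)      # e.g. a net difference in days
print(round(lo, 2), round(hi, 2), round(p_from_ci(lo, hi), 4))
```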
Mental health and well-being during COVID-19 lockdown: A survey case report of high-level male and female players of an Italian Serie A football club
Published in Science and Medicine in Football, 2021
Andreas Ivarsson, Alan McCall, Stephen Mutch, Alessia Giuliani, Rita Bassetto, Maurizio Fanchini
Despite the prospective nature of our study, there are some limitations we would like to acknowledge. First, the use of self-report measures to collect data on mental health and well-being carries several potential limitations: self-report measures can be associated with biases, including common method bias (Podsakoff et al., 2012) and social desirability (van de Mortel, 2008), and with weak relationships to ‘real-world’ behaviours (Blanton and Jaccard, 2006). Second, a potential shortcoming of using Bayesian statistics is the reliance on priors (Gelman, 2008): the subjective selection of priors can have a strong impact on the results. Given the lack of previous results on this specific research question, we decided to use weakly informative priors for all analyses. Finally, because this is a case report based on a single football club, the findings cannot be generalized to other teams. However, we do provide an insight into the potential for mental-health issues in high-level footballers.
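To illustrate the prior-sensitivity point in isolation (this is our own sketch with made-up numbers, not the authors' actual analysis), compare the posterior for a reported-symptom proportion under a weakly informative prior and a strongly informative one:

```python
from scipy import stats

k, n = 6, 20   # hypothetical data: 6 of 20 players report an issue

# Weakly informative prior: Beta(1, 1), i.e. uniform over the proportion.
weak_post = stats.beta(1 + k, 1 + (n - k))
# Strongly informative prior centred on 0.05: Beta(2, 38).
strong_post = stats.beta(2 + k, 38 + (n - k))

for name, post in [("weak prior", weak_post), ("strong prior", strong_post)]:
    lo, hi = post.ppf([0.025, 0.975])
    print(f"{name}: mean={post.mean():.2f}, 95% credible interval=({lo:.2f}, {hi:.2f})")
```

Even with the same data, the two posteriors differ noticeably, which is exactly the sensitivity this limitation refers to.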
Machine learning in asthma research: moving toward a more integrated approach
Published in Expert Review of Respiratory Medicine, 2021
Sara Fontanella, Alex Cucco, Adnan Custovic
We systematically searched the literature up to 2020 using the Scopus database to capture papers relevant to the application of MS, BS, and ML&AI in asthma research. Scopus is an online bibliometric database developed by Elsevier that offers broad coverage of scientific resources and high accuracy [59]. We built three separate queries to investigate the impact of each discipline in asthma research. The term ‘asthma’ was included in all searches, combined with different search terms for the three individual categories:
Multivariate statistics (MS): ‘principal component’ OR ‘discriminant analysis’ OR ‘correspondence analysis’ OR ‘canonical correlation’ OR ‘Markov models’ OR ‘factor analysis’ OR ‘structural equation’ OR ‘latent variable’ OR ‘multidimensional scaling’ OR ‘clustering’ OR ‘latent class’ OR ‘cluster analysis’ OR ‘latent profile’ OR ‘profile regression’ OR ‘mixture models’.
Machine learning and artificial intelligence (ML&AI): ‘artificial neural networks’ OR ‘deep learning’ OR ‘supervised learning’ OR ‘unsupervised learning’ OR ‘support vector machine’ OR ‘SVM’ OR ‘decision trees’ OR ‘classification trees’ OR ‘regression trees’ OR ‘random forest’ OR ‘machine learning’ OR ‘artificial intelligence’.
Bayesian statistics (BS): ‘Bayesian’ OR ‘Bayes’.
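As a sketch of how such queries can be assembled programmatically (our illustration; the Scopus field syntax shown, TITLE-ABS-KEY, is an assumption and not something the authors report), each term group is combined with the anchor term ‘asthma’:

```python
# Search-term groups as listed above (our reconstruction as Python data).
CATEGORIES = {
    "MS": ["principal component", "discriminant analysis", "correspondence analysis",
           "canonical correlation", "Markov models", "factor analysis",
           "structural equation", "latent variable", "multidimensional scaling",
           "clustering", "latent class", "cluster analysis", "latent profile",
           "profile regression", "mixture models"],
    "ML&AI": ["artificial neural networks", "deep learning", "supervised learning",
              "unsupervised learning", "support vector machine", "SVM",
              "decision trees", "classification trees", "regression trees",
              "random forest", "machine learning", "artificial intelligence"],
    "BS": ["Bayesian", "Bayes"],
}

def scopus_query(terms, anchor="asthma"):
    """Combine one term group with the anchor term using assumed Scopus syntax."""
    block = " OR ".join(f'"{t}"' for t in terms)
    return f'TITLE-ABS-KEY("{anchor}") AND TITLE-ABS-KEY({block})'

for name, terms in CATEGORIES.items():
    print(f"{name}: {scopus_query(terms)[:70]}...")
```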
Do Temporal Regularities during Maintenance Benefit Short-term Memory in the Elderly? Inhibition Capacities Matter
Published in Experimental Aging Research, 2020
Lison Fanuel, Sophie Portrat, Simone Dalla Bella, Barbara Tillmann, Gaën Plancher
Statistical analyses were computed using JASP 0.11.1 (JASP Team, 2019) with the default settings. In addition to classical frequentist statistics, Bayesian statistics were computed. The Bayes factor (BF) associated with an effect is the statistic resulting from the comparison between all the models including this particular effect and all the models not including it (Etz & Wagenmakers, 2017). Thus, it reflects the probability of the inclusion of this effect averaged across all candidate models. When applicable, we reported the Bayes factor associated with an effect (BF), as well as the most probable model (i.e., the model that best fits the data) and its associated Bayes factor (BFM). Importantly, a Bayes factor can give evidence toward the alternative hypothesis (H1) or toward the null hypothesis (H0). Following Lee and Wagenmakers (2014), a BF10 between 3 and 10 is considered moderate support for the alternative hypothesis, and a BF10 above 10 is considered strong support for the alternative hypothesis. BF10 values between 1/10 and 1/3, and below 1/10, are considered moderate and strong support for the null hypothesis, respectively. BF10 values between 1/3 and 3 are considered ambiguous (Etz, Gronau, Dablander, Edelsbrunner, & Baribault, 2017; Lee & Wagenmakers, 2014; Wagenmakers, 2007).
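The evidence thresholds described above translate directly into a small helper function; this is our sketch of the Lee and Wagenmakers (2014) labels as reported here, not part of the JASP output.

```python
def interpret_bf10(bf10):
    """Map a Bayes factor BF10 onto the evidence categories described above."""
    if bf10 > 10:
        return "strong evidence for H1"
    if bf10 >= 3:
        return "moderate evidence for H1"
    if bf10 > 1 / 3:
        return "ambiguous evidence"
    if bf10 >= 1 / 10:
        return "moderate evidence for H0"
    return "strong evidence for H0"

for bf in (15, 4.2, 1.0, 0.2, 0.05):
    print(bf, "->", interpret_bf10(bf))
```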