Introduction to Bayesian Inference
Published in Virgilio Gómez-Rubio, Bayesian Inference with INLA, 2020
This illustrates Bayesian learning quite well. First of all, the posterior mean is a compromise between the prior mean μ0 and the mean of the observed data. When the number of observations n is large, the posterior mean is close to that of the data; when n is small, more weight is given to our prior belief about the mean. Similarly, the posterior precision is a function of the prior precision and the likelihood precision, and it tends to infinity with n, which means that the variance of μ tends to zero as the number of observations increases.
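The compromise described above can be sketched with the standard conjugate normal update for a mean with known likelihood precision. The prior parameters and data below are illustrative, not taken from the book:

```python
import numpy as np

def normal_posterior(y, mu0, tau0, tau):
    """Conjugate normal update for the mean mu with known likelihood precision tau.

    Prior: mu ~ N(mu0, 1/tau0).  Returns the posterior mean and posterior precision.
    """
    n, ybar = len(y), float(np.mean(y))
    tau_n = tau0 + n * tau                         # precisions add
    mu_n = (tau0 * mu0 + n * tau * ybar) / tau_n   # precision-weighted compromise
    return mu_n, tau_n

rng = np.random.default_rng(0)
data = rng.normal(5.0, 1.0, size=1000)

# Few observations: the prior mean mu0 = 0 pulls the estimate down.
mu_small, tau_small = normal_posterior(data[:5], mu0=0.0, tau0=1.0, tau=1.0)
# Many observations: the posterior mean sits near the sample mean, and the
# posterior variance 1/tau_n shrinks towards zero as n grows.
mu_large, tau_large = normal_posterior(data, mu0=0.0, tau0=1.0, tau=1.0)
```

The posterior precision `tau_n = tau0 + n * tau` grows linearly in n, which is exactly why the posterior variance of μ vanishes as the dataset grows.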
Bayesian Learning for EEG Analysis
Published in Chang S. Nam, Anton Nijholt, Fabien Lotte, Brain–Computer Interfaces Handbook, 2018
In this chapter, we presented a tutorial introducing how to implement Bayesian learning, including BLDA and SBL, for EEG feature optimization and classification in BCI applications. After describing the relationships among discriminant analysis, LSR, maximum likelihood estimation, and regularization, we explained the algorithmic principles and features of BLDA and SBL in detail, covering prior design, posterior inference, and hyperparameter estimation. Two examples were provided to explicitly illustrate the use of BLDA and SBL for EEG analysis on two data sets recorded from ERP and SMR BCI paradigms, respectively. Extensive experimental comparisons were carried out between the Bayesian learning algorithms and other state-of-the-art methods. The experimental results confirmed the superiority of Bayesian learning for discriminative feature extraction and robust classifier calibration to improve BCI performance. We concluded with a discussion of the wider applications of Bayesian learning in brain signal analysis and some potential extensions for extracting more complex but important information from EEG to further improve BCI systems.
The Phase I–II Paradigm
Published in Ying Yuan, Hoang Q. Nguyen, Peter F. Thall, Bayesian Designs for Phase I–II Clinical Trials, 2017
Ying Yuan, Hoang Q. Nguyen, Peter F. Thall
The process whereby the expanding dataset is used by applying Bayes’ Theorem repeatedly to turn each posterior into the prior for the next update is an example of “sequential Bayesian learning.” It is a formalism for learning about the model parameter vector θ and making decisions on that basis repeatedly as additional data are observed during the trial. It relies on the fact that, if at each stage the posterior is used as the prior for the next stage with the new likelihood p(Yn+1 | ρ[n+1], θ), these n + 1 iterations of Bayes’ Law give the same posterior as if it were applied once with the joint likelihood of all n + 1 observations and the original prior. This can be seen by expanding the nth posterior and using the fact that the successive observations are independent.
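The sequential-equals-batch property can be checked numerically in any conjugate model. The sketch below uses a Beta–Bernoulli model purely for illustration (it is not the dose-outcome model of the trial design): updating one observation at a time, with each posterior serving as the next prior, lands on exactly the same posterior as a single application of Bayes' Theorem to the full dataset.

```python
from math import isclose

def update(a, b, y):
    """One application of Bayes' Theorem for a Beta(a, b) prior and a
    Bernoulli observation y in {0, 1}: posterior is Beta(a + y, b + 1 - y)."""
    return a + y, b + (1 - y)

ys = [1, 0, 1, 1, 0, 1]  # hypothetical sequence of binary outcomes

# Sequential Bayesian learning: today's posterior is tomorrow's prior.
a, b = 1.0, 1.0
for y in ys:
    a, b = update(a, b, y)

# One-shot update with the likelihood of all observations at once.
a_batch = 1.0 + sum(ys)
b_batch = 1.0 + len(ys) - sum(ys)

assert isclose(a, a_batch) and isclose(b, b_batch)  # identical posteriors
```

This is the property that lets a trial re-estimate θ after every cohort without ever revisiting earlier patients' likelihood contributions.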
Bayesian modelling of nonlinear Poisson regression with artificial neural networks
Published in Journal of Applied Statistics, 2020
Hansapani Rodrigo, Chris Tsokos
The training of an ANN can be done using either maximum likelihood or Bayesian methods. Bayesian neural networks provide a more intuitive approach to network training; a significant amount of research in this area was conducted by David MacKay in 1992 [12–14]. In the ML method, we find a single set of weight parameters by minimising the error function. In contrast, the Bayesian approach uses a probability distribution to capture the uncertainties associated with the weight parameters [2]. The use of Bayesian learning in ANNs provides several advantages over the ML method. It allows a relatively large number of regularisation parameters to be used and optimised during the training process, and these regularisation parameters have a natural interpretation in the Bayesian setting. Moreover, the ARD prior [12,15,19] helps to identify the relative importance of each covariate. Improved prediction accuracies can be obtained by creating network committees, i.e. by combining several ANN models, and error bars can be used to visualise the variation associated with the predictions.
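The Bayesian reading of regularisation parameters can be seen already in the linear case, which stands in here for a network's output layer (a simplified sketch, not the authors' ANN implementation): a Gaussian prior with precision α over the weights makes the MAP estimate identical to weight-decay (ridge) training, and the posterior over weights yields error bars on predictions. The function names and values below are illustrative.

```python
import numpy as np

def map_weights(X, t, alpha, beta):
    """MAP weights for a linear model under a Gaussian prior N(0, (1/alpha) I)
    and Gaussian noise with precision beta.  Equivalent to ridge regression
    with regularisation strength alpha / beta."""
    d = X.shape[1]
    A = alpha * np.eye(d) + beta * X.T @ X     # posterior precision matrix
    w = beta * np.linalg.solve(A, X.T @ t)     # posterior mean = MAP estimate
    return w, A

def predictive_variance(x, A, beta):
    """Error bar at input x: noise variance plus weight-uncertainty term."""
    return 1.0 / beta + x @ np.linalg.solve(A, x)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
t = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
w, A = map_weights(X, t, alpha=1e-2, beta=100.0)
```

MacKay's evidence framework additionally re-estimates α and β during training; this sketch keeps them fixed for brevity.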
Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints
Published in Journal of Applied Statistics, 2018
Adam L. Smith, Sofía S. Villar
The majority of the response-adaptive randomisation methods proposed in the literature use Bayesian learning and a binary endpoint, with information on the effectiveness of the treatments gained throughout the trial deployed immediately, to increase the chances of patients in the trial receiving a better-performing treatment (see e.g. [22]). A limitation of these approaches is that they are myopic (they only make use of past information to alter treatment allocation probabilities) and hence they are not influenced at all by the number of patients that remain to be treated in the trial (nor by the expected number of patients outside the trial). An approach recently proposed and modified to address this limitation and develop ‘forward-looking algorithms’ is to consider clinical trial design within the framework of the Multi-Armed Bandit Problem (MABP). The optimal solution to the classic MABP has been known since the 1970s [8], and those responsible for its solution saw clinical trials as the ‘chief practical motivation’ for their work [9, p. 561]; despite this, it has never been applied to a real-life clinical trial. Villar et al. [23, pp. 2–3]
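A concrete instance of the myopic Bayesian approach described above is Thompson sampling with a binary endpoint (a generic sketch with hypothetical response rates, not the Gittins-index design of this paper): each arm keeps a Beta posterior over its response probability, and the next patient is allocated by sampling once from each posterior.

```python
import random

def thompson_allocate(successes, failures, rng):
    """Myopic Bayesian response-adaptive allocation: draw one sample from each
    arm's Beta posterior (uniform Beta(1, 1) priors) and allocate the next
    patient to the arm with the largest draw."""
    draws = [rng.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return draws.index(max(draws))

rng = random.Random(42)
true_p = [0.3, 0.6]                 # hypothetical response rates, two arms
succ, fail = [0, 0], [0, 0]
allocations = [0, 0]
for _ in range(500):                # 500 hypothetical patients
    arm = thompson_allocate(succ, fail, rng)
    allocations[arm] += 1
    if rng.random() < true_p[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1
# The better arm ends up treating most patients, but the rule only looks
# backwards at accumulated data; it never accounts for how many patients
# remain, which is the limitation the Gittins-index framework addresses.
```

The allocation rule uses only past outcomes, which is precisely what makes it myopic in the sense criticised above.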
Are college campuses superspreaders? A data-driven modeling study
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2021
Hannah Lu, Cortney Weintz, Joseph Pace, Dhiraj Indana, Kevin Linka, Ellen Kuhl
After the summer break, most colleges had implemented a mandatory 14-day quarantine period after the move-in date, followed by an aggressive weekly, or even twice-weekly, surveillance testing program to minimize the spread of COVID-19 (The Chronicle of Higher Education 2020). To increase transparency, many institutions have shared their test results on public COVID-19 dashboards, most of them updated weekly, some even daily (The Chronicle Crisis Initiative 2020). Despite best efforts, the reported data are sparse, noisy, fluctuating, and often inconsistent. Interpreting the data with a purely machine-learning-based approach would likely result in ill-posed problems and overfitting (Alber et al. 2019). To constrain the parameter space, we propose a data-driven modeling approach in which we combine a classical mathematical epidemiology model with Bayesian learning (Linka, Peirlinck, Sahli Costabal, Kuhl 2020). Specifically, we use a susceptible-exposed-infectious-recovered compartment model and learn the dynamics of the effective reproduction number for 30 college campuses from the daily case reports using Bayesian inference (Linka, Peirlinck, Kuhl 2020). Figure 1 illustrates the 30 institutions of our analysis and their reported total case numbers since the beginning of the outbreak, ranging from 5,806 at the Ohio State University to 141 at Carnegie Mellon University (New York Times 2020). From the learnt reproduction dynamics, we identify trends in campus-wide outbreak dynamics, discuss the effects of online, hybrid, and in-person instruction, and make direct comparisons with the case data of each institution’s home county. Our objective is to identify universal features of a campus outbreak, learn patterns of infection and reproduction, perform direct comparisons with each institution’s home county, and make informed recommendations about campus reopening after the winter break.
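The forward model underlying this approach, the SEIR compartment model, can be sketched with a simple forward-Euler integration. The campus size, initial cases, and rate parameters below are hypothetical placeholders, and the Bayesian inference of the reproduction dynamics from case reports is not shown here, only the deterministic forward model:

```python
def seir(S0, E0, I0, R0, beta, sigma, gamma, days, dt=0.1):
    """Forward-Euler integration of the SEIR compartment model.
    beta: transmission rate; sigma: 1/latent period; gamma: 1/infectious period.
    The effective reproduction number is R_t = (beta / gamma) * S / N."""
    N = S0 + E0 + I0 + R0
    S, E, I, R = float(S0), float(E0), float(I0), float(R0)
    traj = []
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I / N * dt
        new_infectious = sigma * E * dt
        new_recovered = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        traj.append((S, E, I, R))
    return traj

# Hypothetical campus of 10,000 students with 10 initial infectious cases.
traj = seir(S0=9990, E0=0, I0=10, R0=0,
            beta=0.5, sigma=1 / 2.5, gamma=1 / 6.5, days=120)
```

In the study itself, the transmission rate (and hence the effective reproduction number) is not fixed but learnt from the daily case reports via Bayesian inference; the forward model above is what that inference is wrapped around.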