Component Reliability Analysis
Published in Mohammad Modarres, Mark P. Kaminskiy, Vasiliy Krivtsov, Reliability Engineering and Risk Analysis, 2016
Mohammad Modarres, Mark P. Kaminskiy, Vasiliy Krivtsov
In the framework of Bayesian linear regression analysis, prior information can be added to one or to several regression parameters. Let us begin by including prior information about a single regression parameter, say β1. It is supposed that this information can be expressed as a normal distribution with known mean β1pr and variance σβ1pr2 [39]; that is, β1 ~ N(β1pr, σβ1pr2).
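For the simplest case of a single coefficient with known noise variance, the normal prior above is conjugate, so the posterior of β1 is available in closed form. The sketch below illustrates this update on simulated data; the numerical values (true slope, noise variance, prior mean and variance) are illustrative assumptions, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 2.0 * x + noise (illustrative values; noise variance known)
n = 50
x = rng.normal(size=n)
noise_var = 1.0
y = 2.0 * x + rng.normal(scale=np.sqrt(noise_var), size=n)

# Prior on beta1: beta1 ~ N(beta1_pr, var_pr), as in the excerpt's notation
beta1_pr = 1.5   # prior mean (assumed for illustration)
var_pr = 0.25    # prior variance (assumed for illustration)

# Conjugate update: the posterior precision is the sum of the prior
# precision and the data precision x'x / sigma^2, and the posterior mean
# is a precision-weighted average of the prior mean and the data estimate.
post_prec = 1.0 / var_pr + x @ x / noise_var
post_var = 1.0 / post_prec
post_mean = post_var * (beta1_pr / var_pr + x @ y / noise_var)

print("posterior mean:", post_mean)
print("posterior variance:", post_var)
```

Note how the posterior variance is always smaller than the prior variance: the data can only sharpen, never blur, the prior belief in this conjugate setting.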
A Review on the Different Regression Analysis in Supervised Learning
Published in K Hemachandran, Shubham Tayal, Preetha Mary George, Parveen Singla, Utku Kose, Bayesian Reasoning and Gaussian Processes for Machine Learning Applications, 2022
K Sudhaman, Mahesh Akuthota, Sandip Kumar Chaurasiya
The Bayesian method is a strategy for defining and estimating statistical models. Bayesian regression is used when the data set is small, and it is particularly useful when the available data are poorly distributed. As opposed to standard regression methods, the output of a Bayesian regression model is derived from a probability distribution, whereas the output of a regular regression model is a single value estimated from the attributes. The output y is assumed to be drawn from a normal distribution, characterized by its mean and variance. Bayesian linear regression aims to find the "posterior" distribution of the model parameters, rather than point estimates of the parameters themselves. Both the model parameters and the output y are treated as draws from a distribution. The posterior expression is:

Posterior = (Likelihood × Prior) / Normalization

Posterior: P(B | y, X); the distribution of the model parameters B conditional on the responses y and the predictor features X, computed from the data-driven likelihood together with prior expert knowledge and belief. As n → ∞, the posterior of the model parameters B converges to the ordinary least squares linear regression solution.
Prior: P(B); our initial belief about the model parameters, based on expert knowledge or a noninformative prior.
Likelihood: P(y, X | B); the likelihood increasingly dominates the prior as the amount of data grows.
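The distinction drawn above, a distribution over parameters rather than a single value, can be seen with scikit-learn's BayesianRidge, which exposes both a posterior mean and covariance for the coefficients and a predictive standard deviation for new inputs. The data here are simulated for illustration; the source does not prescribe a particular library.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)

# Small, noisy data set -- the regime where the excerpt says
# Bayesian regression is most useful.
X = rng.normal(size=(20, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=20)

model = BayesianRidge()
model.fit(X, y)

# The fitted model carries a posterior over the coefficients:
# a mean vector and a covariance matrix, not a single point value.
print("posterior mean of coef:", model.coef_)
print("posterior covariance:", model.sigma_)

# Predictions likewise come with a standard deviation drawn from the
# predictive distribution, not just a point forecast.
y_mean, y_std = model.predict(np.array([[1.0]]), return_std=True)
print("prediction:", y_mean, "+/-", y_std)
```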
Ethnic groups chaos game optimization algorithm for optimal design of steel structures considering seismic risk
Published in Engineering Optimization, 2023
Pouya Motamedi, Mehdi Banazadeh, Siamak Talatahari
Bayesian linear regression uses Bayesian inference for statistical analysis (Box and Tiao 1973). For a probabilistic model using standard linear regression for a data set of observed pairs of regressors and responses, the general format of the model is

y = β0 + β1x1 + β2x2 + … + βkxk + ε

where y is the output or response of the model, called the regressand; β0, β1, …, βk are regression coefficients designated as the model parameters; and x1, …, xk are independent variables, called regressors. Here, ε is a random variable with a mean equal to zero, and a normal distribution describes the model error. Bayesian linear regression obtains the probability distribution function of the coefficients β and the probability distribution function of the standard deviation of ε.
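Under the noninformative prior used in the Box and Tiao treatment, the joint posterior of the coefficients and the error variance has a well-known closed form: σ² follows a scaled inverse chi-square distribution, and β given σ² is multivariate normal around the least-squares estimate. The sketch below draws from that joint posterior on simulated data; the data-generating values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data for y = b0 + b1*x1 + b2*x2 + eps, eps ~ N(0, sigma^2)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, -2.0, 0.5])   # illustrative
sigma_true = 0.8                          # illustrative
y = X @ beta_true + rng.normal(scale=sigma_true, size=n)

# With the noninformative prior p(beta, sigma^2) proportional to 1/sigma^2:
#   sigma^2 | y        ~ (n-k) * s^2 / chi^2_{n-k}   (scaled inverse chi-square)
#   beta | sigma^2, y  ~ N(beta_hat, sigma^2 (X'X)^-1)
k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)

# Draw from the joint posterior of (sigma^2, beta).
draws = 5000
sigma2_post = (n - k) * s2 / rng.chisquare(n - k, size=draws)
beta_post = np.array([
    rng.multivariate_normal(beta_hat, s2_i * XtX_inv) for s2_i in sigma2_post
])

print("posterior mean of beta:", beta_post.mean(axis=0))
print("posterior mean of sigma:", np.sqrt(sigma2_post).mean())
```

This is exactly what the excerpt describes: distributions over both the regression coefficients and the standard deviation of the error term, rather than point estimates.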
Quantifying post-disaster business recovery through Bayesian methods
Published in Structure and Infrastructure Engineering, 2021
Mohammad Aghababaei, Maria Koliou, Maria Watson, Yu Xiao
One of the advantages of the proposed approach for developing Bayesian linear regression models is that it identifies the most significant factors in the model prediction. By comparing the initial and upgraded model forms in Equations (4) and (6), it can be concluded that the 13 eliminated explanatory functions were not significant enough to be included in the model form for the prediction/quantification of the cease-operation-days recovery measure. Three terms remained in the upgraded model form: the intercept, the summation of damage to the building, contents, and machinery, and the summation of the different utility disruptions. This indicates that business cease-operation days are governed mainly by the physical damage to the business and to the infrastructure systems of the community.
Comparison of Performance of Data Imputation Methods for Numeric Dataset
Published in Applied Artificial Intelligence, 2019
Anil Jadhav, Dhanya Pramod, Krishnan Ramanathan
MICE is an increasingly popular method for performing multiple imputation (Patric and White 2011; Sterne et al. 2009; van Buuren and Groothuis-Oudshoorn 2011; White, Royston, and Wood 2011). Therefore, we have used the MICE package to analyze the performance of the following multiple imputation methods: (a) Predictive Mean Matching (PMM): imputes missing values of a continuous variable "z" such that the imputed values are sampled only from the observed values of "z", by matching predicted values as closely as possible. (b) Bayesian Linear Regression: imputes univariate missing data using Bayesian linear regression analysis. (c) Linear Regression (non-Bayesian): creates imputations using the spread around the linear regression line of "y" given "x" as fitted on the observed data, ignoring model error. (d) Sample: takes a simple random sample from the observed data and imputes these values into the missing cells. The mathematical details of how these methods work are described by White, Royston, and Wood (2011).
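The MICE package itself is an R library, but the Bayesian-linear-regression imputation of method (b) and the random-sample baseline of method (d) can be sketched in Python with scikit-learn's MICE-like IterativeImputer, whose default estimator is BayesianRidge. The data and missingness pattern below are simulated assumptions for illustration only.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(3)

# Two correlated numeric columns; knock out some values in the second.
n = 200
x = rng.normal(size=n)
z = 2.0 * x + rng.normal(scale=0.5, size=n)
data = np.column_stack([x, z])
missing = rng.choice(n, size=40, replace=False)
data[missing, 1] = np.nan

# (b)-style imputation: each missing z is filled via a Bayesian linear
# regression of z on x; sample_posterior=True draws from the posterior
# predictive distribution rather than using the regression mean.
imp = IterativeImputer(estimator=BayesianRidge(), sample_posterior=True,
                       random_state=0)
filled = imp.fit_transform(data)

# (d)-style baseline: a simple random sample from the observed values
# of z, imputed into the missing cells.
observed = data[~np.isnan(data[:, 1]), 1]
sampled = data.copy()
sampled[missing, 1] = rng.choice(observed, size=len(missing))

print("regression-imputed mean:", filled[missing, 1].mean())
print("sample-imputed mean:", sampled[missing, 1].mean())
```

Unlike the random-sample baseline, the regression-based imputations track the observed relationship between the two columns, which is why MICE-style methods generally preserve correlations better.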