Multiple regression
Published in Andrew Metcalfe, David Green, Tony Greenfield, Mahayaudin Mansor, Andrew Smith, Jonathan Tuke, Statistics in Engineering, 2019
to indicate that the n-dimensional random vector Y has the n-dimensional multivariate normal distribution with mean vector μ and variance–covariance matrix Σ. The pdf of the multivariate normal distribution is: f(Y) = (2π)^(−n/2) |Σ|^(−1/2) exp(−(1/2)(Y − μ)′Σ^(−1)(Y − μ)).
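As a minimal sketch of the density formula above (assuming NumPy; the function name `mvn_pdf` is illustrative, not from the source), the pdf can be evaluated directly term by term:

```python
import numpy as np

def mvn_pdf(y, mu, sigma):
    """Density of the n-dimensional multivariate normal N(mu, Sigma):
    f(y) = (2*pi)^(-n/2) |Sigma|^(-1/2) exp(-0.5 (y-mu)' Sigma^{-1} (y-mu))."""
    n = len(mu)
    diff = y - mu
    # Quadratic form (y - mu)' Sigma^{-1} (y - mu), via a linear solve
    # rather than an explicit inverse.
    quad = diff @ np.linalg.solve(sigma, diff)
    norm_const = (2 * np.pi) ** (-n / 2) * np.linalg.det(sigma) ** (-0.5)
    return norm_const * np.exp(-0.5 * quad)

# At the mean of a standard bivariate normal the density is 1/(2*pi).
mu = np.zeros(2)
sigma = np.eye(2)
print(mvn_pdf(mu, mu, sigma))  # 1/(2*pi) ≈ 0.15915
```

In practice `scipy.stats.multivariate_normal` provides a numerically hardened version of the same density; the sketch is only a transcription of the formula.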
The Probabilistic Structure of Time Series
Published in Tucker S. McElroy, Dimitris N. Politis, Time Series, 2019
Tucker S. McElroy, Dimitris N. Politis
Concept 2.1. Random Vector. A random vector is a vector whose components are random variables. The mean vector is defined as the vector of means of each random variable. The covariance matrix is given by (2.1.1). Theoretical results: Propositions 2.1.4 and 2.1.5. Applications: simulation (Remark 2.1.9), decorrelation (Remark 2.1.15), Gaussian conditional expectation (Fact 2.1.14). Exercises 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.10, 2.11.
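The two applications named above, simulation and decorrelation, can be illustrated with a short sketch (assuming NumPy; the specific μ and Σ values are made up for the example). Simulation uses the Cholesky factor L of Σ to color i.i.d. noise; decorrelation applies L⁻¹ to undo it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulation: Y = mu + L @ eps has mean vector mu and covariance matrix
# Sigma when eps has i.i.d. standard-normal components.
mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
L = np.linalg.cholesky(sigma)
eps = rng.standard_normal((100_000, 2))
y = mu + eps @ L.T

print(y.mean(axis=0))           # sample mean vector, close to [1, -2]
print(np.cov(y, rowvar=False))  # sample covariance matrix, close to Sigma

# Decorrelation: Z = L^{-1}(Y - mu) has (approximately) identity covariance.
z = np.linalg.solve(L, (y - mu).T).T
print(np.cov(z, rowvar=False))  # close to the 2x2 identity
```

The sample mean vector and sample covariance matrix converge to μ and Σ as the number of simulated draws grows, which is why the printed estimates match the inputs only approximately.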
Probability, Random Variables, and Stochastic Processes
Published in Erchin Serpedin, Thomas Chen, Dinesh Rajan, Mathematical Foundations for SIGNAL PROCESSING, COMMUNICATIONS, AND NETWORKING, 2012
where x = [x1, x2, …, xn]^T, μ_X = [μ_{X1}, μ_{X2}, …, μ_{Xn}]^T is the vector of means of the individual random variables, and R_X is the covariance matrix whose ith-row, jth-column element is Cov(Xi, Xj). Gaussian random vectors are frequently used in signal processing applications, for instance when estimating a vector parameter in the presence of additive noise. The reasons for the popularity of these Gaussian vector models are: i) by the central limit theorem, the noise density is well approximated as Gaussian; ii) several closed-form analytical results can be derived using the Gaussian model; and iii) results derived using a Gaussian approximation serve as a bound on the true performance.
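A minimal sketch of the estimation setting mentioned above (the observation model x = θ + w and all numeric values are assumptions for illustration, not from the source): a fixed vector parameter is observed repeatedly in additive Gaussian noise, and the sample mean recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: each snapshot is x = theta + w, with w ~ N(0, R).
theta = np.array([2.0, -1.0, 0.5])   # unknown vector parameter
R = np.diag([0.5, 1.0, 0.25])        # noise covariance matrix R_X
n_obs = 50_000
w = rng.multivariate_normal(np.zeros(3), R, size=n_obs)
x = theta + w

# Averaging the snapshots estimates theta; under Gaussian noise this
# sample mean is also the maximum-likelihood estimator.
theta_hat = x.mean(axis=0)
print(theta_hat)  # close to [2, -1, 0.5]
```

The estimator's covariance shrinks as R/n_obs, which is the kind of closed-form result (reason ii above) that makes the Gaussian model convenient.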
Quasidifferentiabilities of the expectation functions of random quasidifferentiable functions
Published in Optimization, 2020
Sida Lin, Ming Huang, Zunquan Xia, Dan Li
The expectation of a random variable Z is defined by the integral E[Z] = ∫ Z dP; it is well defined if it does not happen that both ∫ Z⁺ dP and ∫ Z⁻ dP are +∞, where Z⁺ = max(Z, 0) and Z⁻ = max(−Z, 0). Z is called P-integrable if the expectation is well defined and finite, i.e. E|Z| < +∞. The expectation of a random vector is defined componentwise. The expectation of a multifunction F is defined as the set of all points of the form E[G], where G is an integrable selection of F, i.e. G is measurable, G(ω) ∈ F(ω) for a.e. ω, and E|G| is finite. If the multifunction is convex valued, i.e. the set F(ω) is convex for a.e. ω, then E[F] is a convex set.
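The componentwise definition of the expectation of a random vector can be illustrated with a small Monte Carlo sketch (assuming NumPy; the distribution choice is an assumption for the example): for Z = (Z₁, Z₂), E[Z] = (E[Z₁], E[Z₂]).

```python
import numpy as np

rng = np.random.default_rng(2)

# Example random vector: Z1 ~ Exp(1), Z2 = Z1**2.  For the Exp(1)
# distribution E[Z1] = 1 and E[Z1**2] = 2, so E[Z] = (1, 2).
z1 = rng.exponential(1.0, size=1_000_000)
zvec = np.column_stack([z1, z1 ** 2])

# The expectation of the vector is just the vector of expectations.
print(zvec.mean(axis=0))  # close to [1, 2]
```

Each component's sample mean converges to that component's expectation by the law of large numbers, which is exactly the componentwise definition in action.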
Online robust parameter design considering observable noise factors
Published in Engineering Optimization, 2021
Shijuan Yang, Jianjun Wang, Yan Ma
The DR approach, first introduced by Myers and Carter (1973) and revitalized by Vining and Myers (1990), establishes two separate models for the process mean and the process variance. Based on these two models, an optimization strategy is then constructed to bring the process mean close to the target while minimizing performance variability, so as to achieve robustness of the process (Ouyang et al. 2018; Ozdemir and Cho 2019). This article assumes that the noise variables form a random vector with a known mean vector and variance–covariance matrix. The mean and variance models of the response are obtained by taking the conditional expectation and variance of the response in Equation (1). The fitted mean model is an unbiased estimator of the process mean; the plug-in variance model, however, is not an unbiased estimator of the process variance. This article therefore adjusts the variance model according to the method adopted by Miró-Quesada and Del Castillo (2004), in which the model coefficients are OLS estimators and the mean vector and variance–covariance matrix of the noise variables are replaced by their respective estimators.
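The mean/variance decomposition underlying the DR approach can be sketched with a toy response model (assuming NumPy; the model y = b0 + b1·x + c(x)·z + e and all coefficients are hypothetical, not the models from the article). Conditioning on the controllable factor x and taking expectation and variance over the noise factor z gives the two separate models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical response with one noise factor z ~ N(mu_z, s2_z) and
# experimental error e ~ N(0, s2_e):
#   y = b0 + b1*x + c(x)*z + e
# Conditioning on x:
#   mean model:     E[y|x]   = b0 + b1*x + c(x)*mu_z
#   variance model: Var[y|x] = c(x)**2 * s2_z + s2_e
b0, b1 = 1.0, 2.0
mu_z, s2_z, s2_e = 0.5, 0.04, 0.01
c = lambda x: 0.3 + 0.1 * x   # noise-factor coefficient depends on x

x = 2.0
n = 1_000_000
z = rng.normal(mu_z, np.sqrt(s2_z), size=n)
e = rng.normal(0.0, np.sqrt(s2_e), size=n)
y = b0 + b1 * x + c(x) * z + e

print(y.mean())  # close to b0 + b1*x + c(x)*mu_z = 5.25
print(y.var())   # close to c(x)**2 * s2_z + s2_e = 0.02
```

Because c(x) depends on x, the variance model also depends on x, which is what the optimization step exploits: choose x to put the mean on target while keeping Var[y|x] small.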