The Probabilistic Structure of Time Series
Published in Time Series, 2019
Tucker S. McElroy, Dimitris N. Politis
Concept 2.5. Autocovariance. Autocovariance is the covariance of a time series with itself at various lags. Definition 2.4.1 provides the general case, and Proposition 2.4.18 describes the properties of the lag autocovariance, which pertains to weakly stationary processes. A random vector obtained as a finite sub-span of a weakly stationary process has a Toeplitz covariance matrix (2.4.3). Examples 2.5.1, 2.5.2, 2.5.5, 2.5.6. Figure 2.5. Exercises 2.30, 2.31, 2.32, 2.33, 2.34, 2.36, 2.37, 2.38.
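To make the Toeplitz structure concrete, here is a minimal sketch (not from the book; the function name toeplitz_covariance and the example autocovariance values are illustrative assumptions) that builds the covariance matrix of a finite sub-span (X1, ..., Xn) of a weakly stationary process from its lag autocovariances γ(0), γ(1), ..., γ(n − 1):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_covariance(gamma):
    """Covariance matrix of (X_1, ..., X_n) for a weakly stationary process,
    given the autocovariances gamma = [gamma(0), gamma(1), ..., gamma(n-1)].
    Entry (s, t) equals gamma(|s - t|), which is exactly the Toeplitz pattern."""
    return toeplitz(np.asarray(gamma, dtype=float))

# Illustrative autocovariance sequence (hypothetical values, MA(1)-like shape)
gamma = [2.0, 0.8, 0.0, 0.0]
print(toeplitz_covariance(gamma))
# [[2.  0.8 0.  0. ]
#  [0.8 2.  0.8 0. ]
#  [0.  0.8 2.  0.8]
#  [0.  0.  0.8 2. ]]
```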
Correlation and Stationary Time Series
Published in Time Series: A Data Analysis Approach Using R, 2019
Robert H. Shumway, David S. Stoffer
Note that γx(s, t) = γx(t, s) for all time points s and t. The autocovariance measures the linear dependence between two points on the same series observed at different times. Recall from classical statistics that if γx(s, t) = 0, then xs and xt are not linearly related, but there still may be some dependence structure between them. If, however, xs and xt are bivariate normal, γx(s, t) = 0 ensures their independence. It is clear that, for s = t, the autocovariance reduces to the (assumed finite) variance, because γx(t, t) = E[(xt − μt)²] = var(xt).
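A quick numerical check of two of these facts, the symmetry of the autocovariance in its arguments and its reduction to the variance at lag zero, is sketched below (a simulated series and the helper name sample_autocov are choices made here, not taken from the text; the divisor-n convention is assumed):

```python
import numpy as np

def sample_autocov(x, h):
    """Sample autocovariance at lag h, with the usual divisor-n convention."""
    x = np.asarray(x, dtype=float)
    n, h = len(x), abs(h)            # symmetric in the lag: gamma(s, t) = gamma(t, s)
    xc = x - x.mean()
    return np.sum(xc[:n - h] * xc[h:]) / n

rng = np.random.default_rng(0)
x = rng.normal(size=500)             # simulated series, purely for illustration

print(sample_autocov(x, 0), np.var(x))              # lag 0 reduces to the variance
print(sample_autocov(x, 3), sample_autocov(x, -3))  # same value at lags +3 and -3
```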
Variability – implications for estimating radiation
Published in Lucien Wald, Fundamentals of Solar Radiation, 2021
There are many mathematical tools dedicated to the analysis of the temporal and spatial variability of meteorological and other variables. They generally apply to fluctuations around the mean. The best known are likely the autocovariance function and its equivalent in spectral analysis: the power spectral density, calculated using the Fourier transform.
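A minimal sketch of the spectral counterpart described here, assuming a simulated signal with one periodic component and using the raw periodogram of the fluctuations around the mean as a basic power spectral density estimate (the signal and variable names are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 1024, 1.0                                  # samples and sampling step
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 0.05 * t) + rng.normal(scale=0.5, size=n)

# Work with the fluctuations around the mean, then take the raw periodogram:
# I(f) = |DFT of the centered series|^2 / n at the Fourier frequencies.
xc = x - x.mean()
freqs = np.fft.rfftfreq(n, d=dt)
periodogram = np.abs(np.fft.rfft(xc)) ** 2 / n

peak = freqs[np.argmax(periodogram[1:]) + 1]       # skip the zero frequency
print(f"dominant frequency ~ {peak:.3f} cycles per unit time")  # close to 0.05
```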
Adaptive Reweighted Variance Estimation for Monte Carlo Eigenvalue Simulations
Published in Nuclear Science and Engineering, 2020
Typically, a random process in the time domain also has a spectral representation in the frequency domain via a Fourier-Stieltjes integral. A time domain representation describes how a process changes over time, whereas a frequency domain representation shows how its components lie within each given frequency band. Representation in the time and frequency domains provides properties that are difficult to uncover in other representations. The autocovariances are the properties in the time domain that show the linear intercycle relationships over time. The autocovariance function can be converted into a spectral density in the frequency domain via the discrete Fourier transform.
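The conversion formula itself is not reproduced in the excerpt, so the sketch below uses the standard textbook form f(ω) = (1/2π) Σk γ(k) e^(−iωk), truncated at a maximum lag; the 1/(2π) normalization and the function names are assumptions, not necessarily those of the article.

```python
import numpy as np

def autocov(x, max_lag):
    """Sample autocovariances gamma(0), ..., gamma(max_lag), divisor n."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    n = len(xc)
    return np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(max_lag + 1)])

def spectral_density(gamma, omegas):
    """f(w) = (1 / (2*pi)) * sum_{k=-m}^{m} gamma(|k|) * exp(-i*w*k);
    the result is real because gamma is even in the lag."""
    m = len(gamma) - 1
    ks = np.arange(-m, m + 1)
    g = gamma[np.abs(ks)]
    vals = [np.real(np.sum(g * np.exp(-1j * w * ks))) for w in omegas]
    return np.array(vals) / (2 * np.pi)

rng = np.random.default_rng(2)
x = rng.normal(size=2000)                    # white noise, for illustration only
f = spectral_density(autocov(x, max_lag=20), np.linspace(0, np.pi, 5))
print(f)                                     # roughly flat, near var(x) / (2*pi)
```

Truncating the sum at a finite lag without a lag window gives only a rough estimate and can dip below zero at some frequencies; the sketch shows the mechanics of the conversion rather than a production estimator.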
An integer-valued bilinear time series model via two random operators
Published in Mathematical and Computer Modelling of Dynamical Systems, 2019
M. Mohammadpour, Hassan S. Bakouch, S. Ramzani
The autocovariance function is one of the important tools used in the analysis of time series in the time domain, while for the frequency domain the spectral density function is used. Theorem 2.1 gives the autocovariance function of the MPTBL(1,0,1,1) process; the proof proceeds by algebraic calculation and uses the stationarity of the process.
Vertical spatial correlation length based on standard penetration tests
Published in Marine Georesources & Geotechnology, 2019
Emir Ahmet Oguz, Nejan Huvaj, D. V. Griffiths
The autocorrelation coefficient is defined in Eq. 3, where Ni is the measured value and the trend (mean) is evaluated at depth i, and Ni+k is the measurement at depth i+k. The autocorrelation coefficient is constrained to [−1.0, 1.0]. If the coefficient is positive, the two variables tend to be high or low together; if it is negative, a high value of one variable tends to be associated with a low value of the other (Kottegoda and Rosso 2008). In the literature, the same autocorrelation function is also defined in terms of the autocovariance function: the autocovariance of the SPT-N blow counts may be calculated by the method of moments (Eq. 4), and the autocorrelation coefficients may be obtained by normalizing with the data variance (Eq. 5). Combining Eqs. 4 and 5 yields Eq. 3.
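A minimal sketch of this computation (not the authors' code; the linear depth trend, the divisor-n convention, and the hypothetical blow-count values are assumptions): the lag-k autocovariance is the method-of-moments average of products of detrended values, and dividing by the lag-0 autocovariance, the variance about the trend, gives the autocorrelation coefficient.

```python
import numpy as np

def autocorrelation(values, depths, k):
    """Autocorrelation coefficient at a lag of k sampling intervals.
    Residuals are taken about a fitted linear depth trend; the lag-k
    autocovariance (method of moments) is normalized by the lag-0 value."""
    values = np.asarray(values, dtype=float)
    trend = np.polyval(np.polyfit(depths, values, deg=1), depths)
    r = values - trend                              # detrended measurements
    n = len(r)
    autocov_k = np.sum(r[:n - k] * r[k:]) / n       # method-of-moments autocovariance
    autocov_0 = np.sum(r * r) / n                   # variance about the trend
    return autocov_k / autocov_0                    # always within [-1.0, 1.0]

# Hypothetical SPT-N blow counts at 1 m depth spacing (illustrative numbers)
depths = np.arange(1.0, 16.0)
blow_counts = np.array([4, 5, 6, 6, 7, 9, 8, 10, 11, 10, 12, 13, 12, 14, 15.0])
print([round(autocorrelation(blow_counts, depths, k), 2) for k in range(4)])
```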