Sliced Inverse Regression
Published in Bing Li, Sufficient Dimension Reduction, 2018
Sliced Inverse Regression (SIR), introduced by Li (1991), is the first and most widely known sufficient dimension reduction estimator. The term "inverse regression" refers to the conditional expectation E(X|Y). The word "inverse" is used because, in ordinary regression analysis, the quantity of interest is the conditional mean E(Y|X). The word "slice" refers to the fact that the conditional mean E(X|Y) is estimated by averaging X within slices, that is, intervals of Y.
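To make the slicing idea concrete, here is a minimal Python sketch of an SIR-style estimator: standardize X, average it within quantile slices of Y, and take the leading eigenvectors of the weighted covariance of the slice means. The function name, slicing scheme, and toy model below are illustrative assumptions, not Li's (1991) exact algorithm.

```python
# Minimal SIR sketch: slice Y, average standardized X within slices,
# eigendecompose the covariance of the slice means.
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    n, p = X.shape
    # Standardize: Z = (X - mean) @ Sigma^{-1/2}
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Slice Y by quantiles and compute the mean of Z within each slice
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    slice_ids = np.clip(np.searchsorted(edges, y, side="right") - 1,
                        0, n_slices - 1)
    M = np.zeros((p, p))
    for s in range(n_slices):
        idx = slice_ids == s
        if idx.sum() == 0:
            continue
        m = Z[idx].mean(axis=0)
        M += idx.mean() * np.outer(m, m)  # weight by slice proportion
    # Leading eigenvectors of M, mapped back to the original X scale
    w, v = np.linalg.eigh(M)
    return Sigma_inv_sqrt @ v[:, ::-1][:, :n_dirs]

# Toy usage: Y depends on X only through one linear combination
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
b = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
y = np.tanh(X @ b) + 0.1 * rng.standard_normal(500)
beta = sir_directions(X, y).ravel()
print(beta / np.linalg.norm(beta))  # roughly proportional to b, up to sign
```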
Efficient Integration of Sufficient Dimension Reduction and Prediction in Discriminant Analysis
Published in Technometrics, 2019
Perhaps the largest class of SDR methods consists of nonparametric estimators based on the first two conditional moments E(X∣Y) and cov(X∣Y). This includes SIR (sliced inverse regression; Li 1991), SAVE (sliced average variance estimation; Cook and Weisberg 1991), and DR (directional regression; Li and Wang 2007), among others (e.g., Gannoun and Saracco 2003; Ye and Weiss 2003; Zhu, Ohtaki, and Li 2007; Zhu, Zhu, and Feng 2010; Cook and Zhang 2014). Several studies have revealed the equivalence between SIR and Fisher's linear discriminant analysis (LDA), and between SAVE and quadratic discriminant analysis (QDA) (e.g., Schott 1993; Cook and Yin 2001; Pardoe, Yin, and Cook 2007). In particular, under the assumption that the conditional distribution of X∣Y is multivariate normal, the subspace found by SIR is the same as the subspace spanned by the LDA directions. This means that LDA classification based on the SIR-reduced predictor is exactly the same as that based on the original predictor. Similarly, coupling QDA with SAVE cannot improve classification, because plugging the SAVE-reduced predictor into classification by Mahalanobis distance leads to exactly the QDA classification with all predictors.
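The SIR/LDA equivalence can be checked empirically. The following hedged sketch simulates a three-class Gaussian model with common covariance, computes SIR directions (with a categorical response, the slices are simply the classes), and verifies that LDA predictions from the SIR-reduced predictor match those from the full predictor. The simulation design and all variable names are assumptions for illustration.

```python
# Empirical check: LDA on the SIR-reduced predictor vs. LDA on all predictors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
means = np.array([[0.0, 0, 0, 0], [2, 0, 0, 0], [0, 2, 0, 0]])
y = np.repeat([0, 1, 2], 200)
X = means[y] + rng.standard_normal((600, 4))

# SIR with a categorical response: slice means are the class means of Z
mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(Sigma)
Sig_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = (X - mu) @ Sig_inv_half
M = sum((y == k).mean() * np.outer(Z[y == k].mean(0), Z[y == k].mean(0))
        for k in range(3))
w, v = np.linalg.eigh(M)
B = Sig_inv_half @ v[:, -2:]  # K = 3 classes -> at most K - 1 = 2 directions

full = LinearDiscriminantAnalysis().fit(X, y).predict(X)
reduced = LinearDiscriminantAnalysis().fit(X @ B, y).predict(X @ B)
print((full == reduced).mean())  # typically ~1.0: the reduction loses nothing
```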
Aggregate Inverse Mean Estimation for Sufficient Dimension Reduction
Published in Technometrics, 2021
Subplot (c) demonstrates the idea behind a popular SDR method, sliced inverse regression (SIR). The response surface was divided into two parts, Y > 0 versus Y < 0, and a linear separating hyperplane between the two parts was identified. A scatterplot of Y against the reduced variable is shown in subplot (d), where the points correspond to the points in subplot (c) with the same symbols (and colors). Clearly, the summary plot (d) not only provides a much better view of the data structure, but also greatly facilitates subsequent statistical modeling and inference.
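The two-slice construction described here can be sketched in a few lines: split the sample at Y = 0, fit a separating hyperplane to the two slices, and plot Y against the resulting one-dimensional reduction, a rough analogue of subplots (c) and (d). The logistic-regression hyperplane and the data-generating model below are illustrative assumptions, not the paper's estimator.

```python
# Two-slice sketch: the normal vector of a separating hyperplane between
# the slices {Y > 0} and {Y < 0} serves as the estimated direction.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 3))
y = X @ np.array([1.0, 1.0, 0.0]) + 0.2 * rng.standard_normal(400)

clf = LogisticRegression().fit(X, y > 0)  # hyperplane separating the slices
beta = clf.coef_.ravel()
beta /= np.linalg.norm(beta)

# Summary plot: Y against the one-dimensional reduction beta' X
plt.scatter(X @ beta, y, c=(y > 0), cmap="coolwarm", s=10)
plt.xlabel(r"reduced variable $\beta^\top X$")
plt.ylabel("Y")
plt.title("Summary plot of Y vs. the one-dimensional reduction")
plt.show()
```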