Problems with the Error
Published in Julian J. Faraway, Linear Models with Python, 2021
M-estimation is related to weighted least squares. The normal equations tell us that: $X^T(y - X\hat{\beta}) = 0$
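To make the connection concrete, here is a minimal numpy sketch (synthetic data; all names are illustrative) showing that weighted least squares solves the weighted form of these normal equations, $X^T W (y - X\hat{\beta}) = 0$, and reduces to ordinary least squares under unit weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one predictor
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

# Ordinary least squares solves the normal equations X^T (y - X beta) = 0.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Weighted least squares solves X^T W (y - X beta) = 0 for a diagonal W.
# M-estimation arises when W is made to depend on the residuals themselves.
w = np.ones(n)                     # unit weights reduce WLS to OLS
Xw = X * w[:, None]                # equivalent to W @ X for diagonal W
beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)
```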
Robust fitting of a precise planar network to unstable control points using M-estimation with a modified Huber function
Published in Journal of Spatial Science, 2018
Edward Osada, Andrzej Borkowski, Krzysztof Sośnica, Grzegorz Kurpiński, Marcin Oleksy, Mateusz Seta
Although GNSS positioning has recently become the dominant observational technique in geodesy and surveying, classic measurements of precise planar networks are still employed for various high-accuracy surveying purposes. The adjustment of measured distances, angles and directions in precise planar networks is typically performed with the classic least squares (LS) adjustment method. LS minimizes the sum of weighted squared residuals under the assumption that all observation errors are normally distributed, which may not hold for real measurements. The classic adjustment may become erroneous when the measurements include gross errors (outliers). A similar effect can appear when the coordinates of control points are erroneous: using such control points as fitting points for an established network can cause a global deformation of the network. Robust methods are therefore applied, as they minimize the impact of outliers on the estimated parameters. Huber (1981) developed the so-called M-estimation method, a variant of maximum likelihood estimation. Robust M-estimation is an iterative process that modifies the weights in the LS estimation: in each iteration, the weights of observations that do not fulfil the threshold criteria are reduced. M-estimation is widely applied to geodetic and surveying robust adjustment problems because of its straightforward interpretation and simplicity. Examples of applications can be found in many publications, e.g. Koch (1996), Wicki (1999) and Valero and Moreno (2005).
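The iterative reweighting process described above can be sketched as follows. This is a minimal illustration using the Huber weight function with a MAD-based scale estimate, not the authors' geodetic adjustment code; the function names and tuning constant are conventional but chosen here for illustration:

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight function: 1 inside the threshold k, k/|r| outside."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / a)

def irls(X, y, k=1.345, tol=1e-8, max_iter=50):
    """Iteratively reweighted least squares with Huber downweighting.

    A minimal sketch of robust M-estimation: start from the plain LS fit,
    then repeatedly downweight observations with large standardized
    residuals. A real geodetic adjustment would also carry a priori
    observation weights and a more careful scale treatment.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # initial LS estimate
    for _ in range(max_iter):
        r = y - X @ beta
        mad = np.median(np.abs(r - np.median(r)))
        s = mad / 0.6745 if mad > 0 else 1.0      # robust scale (MAD)
        w = huber_weights(r / s, k)
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```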
Robust AFT-based monitoring procedures for reliability data
Published in Quality Technology & Quantitative Management, 2020
Shervin Asadzadeh, Arash Baghaei
Also, the values of the quality characteristic in the second stage are prone to outliers. To adjust for the effect of outlying data on the regression model, a robust regression method has been implemented. Robust regression is a technique for decreasing the detrimental effect of outlying observations. No single robust regression method is best in general: different methods show different strengths and weaknesses depending on the percentage of outliers and on whether the outliers occur in the explanatory or the response variables. The differences among the existing methods concern three features: the breakdown point, the efficiency and the bounded influence. M-estimators are the most common robust regression technique and are preferable when outliers occur only in the response observations. When the data set is plotted, outlying points lie far from the other values, so the simplest check is a scatter plot of the data. Another useful check is the interquartile range (IQR) rule: compute the IQR, multiply it by 1.5, and flag observations above the upper quartile plus 1.5 × IQR or below the lower quartile minus 1.5 × IQR in either the input or output quality variables. This should be done before implementing the M-estimator, to be sure that only the response observations are prone to outliers. Otherwise, if there are outliers in both variables, compound estimators should be applied; these are not discussed in this paper. M-estimation minimizes a function ρ of the residuals (r) and tempers influential observations by applying appropriate weights (W) to each observation (Maronna, Martin, & Yohai, 2006). The two most common M-estimator functions are the Huber and the Tukey bisquare, which diminish the effect of outliers on the regression coefficients.
Their objective and weight functions are given in Table 1.
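As a rough illustration of the two functions referenced in Table 1, and of the 1.5 × IQR screening rule mentioned above, the following sketch uses the conventional tuning constants k = 1.345 (Huber) and c = 4.685 (bisquare); the function names are illustrative, not from the paper:

```python
import numpy as np

def huber_rho(r, k=1.345):
    """Huber objective: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r**2, k * a - 0.5 * k**2)

def huber_w(r, k=1.345):
    """Huber weight: full weight inside k, downweighted as k/|r| outside."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / a)

def bisquare_rho(r, c=4.685):
    """Tukey bisquare objective: bounded, so gross outliers add no further loss."""
    a = np.minimum(np.abs(r) / c, 1.0)
    return (c**2 / 6) * (1 - (1 - a**2)**3)

def bisquare_w(r, c=4.685):
    """Bisquare weight: redescends to exactly zero beyond |r| = c."""
    u = r / c
    return np.where(np.abs(u) <= 1, (1 - u**2)**2, 0.0)

def iqr_outliers(x):
    """Flag points beyond 1.5 * IQR outside the quartiles (the rule in the text)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)
```

Note the qualitative difference: the Huber weight never reaches zero, while the bisquare weight vanishes for sufficiently large residuals, discarding gross outliers entirely.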
Productivity enhancement strategies in North American automotive industry
Published in International Journal of Production Research, 2018
Amir Abolhassani, James Harner, Majid Jaridi, Bhaskaran Gopalakrishnan
There are different methods of robust regression, such as M-estimators, MM-estimators and SMDM estimators, that are applied to improve the fitted statistical model.

M-estimation: The class of M-estimators includes all models derived as maximum likelihood estimators (Alma 2011). M-estimators are useful when the environment is noisy and outliers are probable (Kutner et al. 2005). The most widely used M-estimators are the Huber and Tukey bisquare weight functions.

MM-estimation: Yohai (1987) developed MM-estimation, a special type of M-estimation. MM-estimation combines efficient estimation, S-estimation (the generalisation of least median of squares based on estimators of scale) and high-breakdown-value estimation. It was the first estimate with high efficiency under normal errors and a high breakdown point (Alma 2011). The MM-estimation procedure uses three steps: S-estimation is used to estimate the regression parameters, the scale of the residuals from that fit is estimated and held fixed, and M-estimation of the regression parameters is then carried out with a proper redescending score function (Susanti and Pratiwi 2014).

SMDM estimates: Koller and Stahel (2011) identified three main issues with the MM-estimation method: (i) the S-scale estimate might be biased; (ii) the estimated parameters may lose efficiency; and (iii) the levels of tests may not be at the desired value. As a remedy, they proposed two additional steps extending the standard MM-estimate: after the MM-estimation step, a design-adaptive scale estimate is calculated, and the regression parameters are then re-estimated using the previous fit as the initial estimate (Koller and Stahel 2011; Yohai 1987).

Principal component regression (PCR): PCR is a traditional multivariate method used to estimate the response variable from selected principal components (PCs) of the predictors (Filzmoser 2001).
PCR is used for two reasons: to remove potential multicollinearity and to reduce dimensionality, both of which are of interest in this research. Here, principal component analysis (PCA) was used to find linear combinations of the explanatory variables (excluding categorical variables such as ownership and car segment) that summarise the data without losing too much information in the process (Maitra and Yan 2008).
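A minimal numpy sketch of the PCR idea described here: centre the predictors, take the leading principal components via an SVD, regress the response on the component scores, and map the coefficients back to the original predictor space. Names and the component count are illustrative, not from the paper:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression sketch.

    Returns an intercept and a coefficient vector expressed in the
    original predictor space, so predictions are intercept + X @ beta.
    """
    x_mean = X.mean(axis=0)
    Xc = X - x_mean                        # centre the predictors
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                # loadings of the leading PCs
    T = Xc @ V                             # component scores
    gamma = np.linalg.lstsq(T, y - y.mean(), rcond=None)[0]
    beta = V @ gamma                       # back to original predictor space
    intercept = y.mean() - x_mean @ beta
    return intercept, beta
```

Using fewer components than predictors is what removes multicollinearity; with all components retained, PCR reproduces the ordinary least squares fit.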