Solutions Using Machine Learning for Diabetes
Published in Punit Gupta, Dinesh Kumar Saini, Rohit Verma, Healthcare Solutions Using Machine Learning and Informatics, 2023
Jabar H. Yousif, Kashif Zia, Durgesh Srivastava
Table 3.3 presents the results for the independent variables, each with a tolerance value > 0.1 and a variance inflation factor (VIF) < 10, which shows that the regression model has no multicollinearity problem. The results of the multiple linear regression tests are presented in Tables 3.4, 3.5, and 3.6.
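The tolerance and VIF screen described above can be reproduced with standard tools. The sketch below is illustrative rather than the authors' code: the data and column names are made-up placeholders, and it assumes the statsmodels implementation of VIF (tolerance is simply 1/VIF).

```python
# Hedged sketch: tolerance and VIF screening for a set of predictors.
# The data frame and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=["x1", "x2", "x3"])

# statsmodels expects a design matrix with an explicit intercept column.
design = sm.add_constant(X)
rows = []
for i, name in enumerate(design.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(design.values, i)
    rows.append({"variable": name, "VIF": vif, "tolerance": 1.0 / vif})

diag = pd.DataFrame(rows)
# The chapter's rule of thumb: tolerance > 0.1 and VIF < 10 -> no multicollinearity problem.
print(diag.assign(ok=(diag["tolerance"] > 0.1) & (diag["VIF"] < 10)))
```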
Neighbourhood Recovery and Community Wellbeing in Cities Following Natural Disasters
Published in Igor Vojnovic, Amber L. Pearson, Gershim Asiki, Geoffrey DeVerteuil, Adriana Allen, Handbook of Global Urban Health, 2019
Vivienne Ivory, Chris Bowie, Clare Robertson, Amber L. Pearson
We fitted separate linear regression models for each year of interest, where the dependent variable (χ1) was gross domestic product (GDP), measured as the annual sum in NZ$ modelled at each CAU across Christchurch city. Independent variables were tested for multicollinearity by calculating the variance inflation factor (VIF). Residential dwelling and commercial building consents were shown to have a high level of multicollinearity, so these were combined into a single additive variable. All of the final independent variables in the model showed low levels of multicollinearity according to the VIF.
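A minimal sketch of that per-year workflow follows. The data frame, the column names, and the combined consents variable are assumptions made for illustration; the authors' actual covariates and modelling code are not given in the excerpt.

```python
# Hedged sketch of the per-year regression workflow; all column names
# (gdp, residential_consents, commercial_consents, population, deprivation)
# are hypothetical stand-ins for the study's variables.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

def fit_year(df: pd.DataFrame, year: int):
    d = df[df["year"] == year].copy()
    # Combine the two highly collinear consent counts into one additive variable.
    d["consents"] = d["residential_consents"] + d["commercial_consents"]
    # Check that the remaining predictors show low VIFs.
    X = sm.add_constant(d[["consents", "population", "deprivation"]])
    vifs = {c: variance_inflation_factor(X.values, i)
            for i, c in enumerate(X.columns) if c != "const"}
    model = smf.ols("gdp ~ consents + population + deprivation", data=d).fit()
    return model, vifs
```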
Potentials and Limits of Phenomenological Models
Published in Tiziana Rancati, Claudio Fiorino, Modelling Radiotherapy Side Effects, 2019
Model instability is enhanced by discrete steps or bifurcations in the model development process, such as variable selection, particularly near critical decision boundaries where small variations result in very different models. Collinearity of the data also enhances model instability. Collinearity is known to increase the variance of the model parameters, as described by the ‘variance inflation factor’. But collinearity also impedes variable selection, because it is more difficult to recognize the truly best predictor from a set of similar (correlated) variables than from a group of dissimilar (independent) ones. Instability is partly related to the sampling of the data (the composition of the patient cohort) and partly to the choice of modeling method. With small datasets and data-driven analysis, model instability is usually dominated by sampling effects, but the choice of modeling method is rarely negligible.
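The variance-inflation effect can be made concrete with a small simulation (not taken from the chapter): for two standardized predictors with correlation ρ, the VIF is 1/(1 − ρ²), and the empirical spread of the fitted coefficient grows accordingly.

```python
# Illustrative simulation: correlated predictors inflate the sampling
# variance of OLS coefficients, which is exactly what the VIF quantifies.
import numpy as np

rng = np.random.default_rng(42)

def coef_sd(rho: float, n: int = 100, reps: int = 2000) -> float:
    """Empirical standard deviation of the first slope estimate."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    betas = []
    for _ in range(reps):
        X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
        Xd = np.column_stack([np.ones(n), X])  # intercept + two predictors
        beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
        betas.append(beta[1])
    return float(np.std(betas))

for rho in (0.0, 0.5, 0.9):
    # For two predictors, VIF = 1 / (1 - rho^2).
    print(f"rho={rho:.1f}  VIF={1 / (1 - rho**2):.1f}  sd(beta1)={coef_sd(rho):.3f}")
```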
Intrinsic Relationships between Learning Conceptions, Preferences for Teaching and Approaches to Studying among Occupational Therapy Students in the United States
Published in Occupational Therapy In Health Care, 2023
Tore Bonsaksen, Adele Breen-Franklin
Although all outcome variables deviated from the normal distribution (p < 0.05 for all tests), the variables were only moderately skewed (deep approach = −0.46, strategic approach = −0.55, surface approach = 0.04) and well within the recommended skewness interval (between −2 and 2) (George & Mallery, 2010). Moreover, for all outcomes, the range of the standardized residuals was within the recommended interval (between −3 and 3) (Field, 2013). We visually inspected the scatterplots for all significant associations to verify linear relationships between the variables. Multicollinearity was checked with the variance inflation factor (VIF), and all VIFs were below 1.35, indicating very little collinearity between the independent variables. Auto-correlation (in this case indicating a degree of dependency between observations due to the participants coming from the same cohort and/or education institution) was checked with the Durbin-Watson test, and all test measures were considered within an acceptable range (between 1.77 and 2.13). Homoscedasticity was interpreted from the scatterplots of the predicted values (ZPRED) against the residuals (ZRESID). No patterns were observed, indicating that the regression models functioned equally well across levels of the dependent variables. As a result, we considered the criteria for conducting the linear regression analyses to be fulfilled.
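The battery of checks reported above (skewness, standardized residuals, VIF, Durbin-Watson) maps onto routine library calls. The sketch below is a generic template under assumed variable names, not the study's analysis script.

```python
# Hedged sketch of a regression-diagnostics pass; the outcome and
# predictor names passed in are placeholders, not the study's instruments.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import skew
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

def regression_diagnostics(df: pd.DataFrame, outcome: str, predictors: list[str]):
    X = sm.add_constant(df[predictors])
    fit = sm.OLS(df[outcome], X).fit()
    std_resid = fit.get_influence().resid_studentized_internal
    report = {
        "skewness_outcome": skew(df[outcome]),                  # within (-2, 2)?
        "std_resid_range": (std_resid.min(), std_resid.max()),  # within (-3, 3)?
        "VIF": {c: variance_inflation_factor(X.values, i)
                for i, c in enumerate(X.columns) if c != "const"},
        "durbin_watson": durbin_watson(fit.resid),              # near 2 is acceptable
    }
    return fit, report
```

Homoscedasticity would still be judged visually, e.g. by plotting `fit.fittedvalues` against `std_resid` and looking for patterns, as the authors describe.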
Risk Model Development and Validation in Clinical Oncology: Lessons Learned
Published in Cancer Investigation, 2023
Gary H. Lyman, Pavlos Msaouel, Nicole M. Kuderer
Multicollinearity may result in overfitting, with models performing well on the training data but less well on the validation data. It will also lead to less precise estimates for the independent variables, with increased standard errors of the coefficients, resulting in an increase in type II error and failure to reject the null hypothesis of no effect. While multicollinearity may be suspected if the regression coefficients change substantially when covariates are deleted or added, it is best assessed more formally on the basis of the variance inflation factor, VIF = 1/(1 − R²). Equivalently, the coefficient of determination can be written as R² = 1 − 1/VIF. A VIF of 5 thus equates to an R² of 0.8, or an absolute correlation of approximately 0.9, between two or more covariates.
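As a quick check of the arithmetic in that last sentence (a worked illustration, not code from the article):

```python
# Verify the quoted identities: VIF = 1 / (1 - R^2), so R^2 = 1 - 1/VIF,
# and for a pair of covariates |r| = sqrt(R^2).
import math

vif = 5.0
r_squared = 1 - 1 / vif          # = 0.8
abs_corr = math.sqrt(r_squared)  # ~0.894, i.e. roughly 0.9
print(f"VIF={vif} -> R^2={r_squared:.2f}, |r|~{abs_corr:.2f}")
```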
Nursing management challenges: Effect of quality of work life on depersonalization
Published in International Journal of Healthcare Management, 2021
P. Yukthamarani Permarupan, Abdullah Al Mamun, Naeem Hayat, Roselina Ahmad Saufi, Naresh Kumar Samy
Partial least squares structural equation modelling (PLS-SEM) is a prevalent analytical method that works with latent constructs to explore causal effects on endogenous variables; it suits exploratory research and non-normal data sets. Hair, Ringle, and Sarstedt [38] recommended that PLS-SEM results be reported in two stages. Cronbach’s alpha (α) and composite reliability (CR) are used to report the internal consistency of the constructs, with a recommended score of 0.70 or above for both α and CR [39]. The average variance extracted (AVE) value must be 0.50 or above for every construct [38]. The variance inflation factor (VIF) represents the inflation of variance due to the presence of multicollinearity among the study constructs [39].
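The reliability and validity thresholds cited above (α and CR ≥ 0.70, AVE ≥ 0.50) can be computed directly from standardized indicator loadings and item scores. The sketch below is a generic illustration with made-up loadings, not the study's measurement model.

```python
# Hedged sketch of the reliability/validity metrics named above; the
# loading values are invented for illustration only.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(error variances)),
    # with error variance 1 - lambda^2 for standardized loadings.
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of squared standardized loadings.
    return (loadings**2).mean()

def cronbach_alpha(items: np.ndarray) -> float:
    # items: n_observations x k_items matrix of indicator scores.
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

loadings = np.array([0.72, 0.80, 0.76, 0.69])  # hypothetical loadings
print(f"CR  = {composite_reliability(loadings):.2f}  (target >= 0.70)")
print(f"AVE = {average_variance_extracted(loadings):.2f}  (target >= 0.50)")
```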