Exploratory Factor Analysis
Published in Douglas D. Gunzler, Adam T. Perzynski, Adam C. Carle, Structural Equation Modeling for Health and Medicine, 2021
Douglas D. Gunzler, Adam T. Perzynski, Adam C. Carle
There is sampling error involved in analyzing eigenvalues; sampling variability may produce eigenvalues greater than one even if all eigenvalues of the population correlation matrix are exactly one and no large components exist [6]. A simulation-based procedure termed parallel analysis can be performed to help account for this sampling error [6,7]. In parallel analysis, eigenvalues are computed from a random data set with the same number of indicators and observations as the original data set. This process is repeated across a number of simulated random data sets, and the average eigenvalues are recorded. Since eigenvalues are relatively robust from data set to data set, the number of random data sets can be small (e.g., 50). Factors whose reported EFA eigenvalues exceed the corresponding average eigenvalues from the random data should be retained; factors whose eigenvalues fall below the random-data averages can be attributed to random noise. Mplus can provide parallel analysis for continuous data. Mplus does not yet provide parallel analysis for ordered-categorical data, as exploration suggested poor performance with tetrachoric and polychoric correlations [8].
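The procedure above can be sketched in a few lines of code. This is a minimal illustration of Horn's method for continuous data, not the Mplus implementation; the function name and default of 50 simulations follow the example in the text.

```python
import numpy as np

def parallel_analysis(data, n_sims=50, seed=0):
    """Compare observed correlation-matrix eigenvalues against the average
    eigenvalues of random normal data with the same shape (a sketch of
    Horn's parallel analysis for continuous indicators)."""
    rng = np.random.default_rng(seed)
    n_obs, n_vars = np.asarray(data).shape
    sims = np.empty((n_sims, n_vars))
    for i in range(n_sims):
        random_data = rng.standard_normal((n_obs, n_vars))
        eigvals = np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False))
        sims[i] = np.sort(eigvals)[::-1]  # sort descending
    random_means = sims.mean(axis=0)
    observed = np.sort(
        np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    )[::-1]
    # Retain factors whose observed eigenvalue exceeds the random average.
    n_retain = int(np.sum(observed > random_means))
    return observed, random_means, n_retain
```

With one strong common factor in the data, only the first observed eigenvalue typically exceeds its random-data counterpart, so a single factor is retained.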
Patient-Reported Outcomes: Development and Validation
Published in Demissie Alemayehu, Joseph C. Cappelleri, Birol Emir, Kelly H. Zou, Statistical Topics in Health Economics and Outcomes Research, 2017
Joseph C. Cappelleri, Andrew G. Bushmakin, Jose Ma. J. Alvir
Parallel analysis is another, more objective way to determine the number of factors to retain (Hayton et al., 2004; O’Connor, 2000). This method allows for the identification of factors that are beyond chance. Parallel analysis can be described as a series of three steps. The first step involves the generation of a random dataset with the same number of observations and variables as the real data being analyzed; this dataset is randomly populated with values spanning the possible response values of each item from the real dataset. In the second step, the newly generated random dataset is analyzed to extract its eigenvalues, with all eigenvalues saved for every simulation. In the third step, the eigenvalues from the real data are compared with those from the simulated datasets to determine how many factors exceed chance.
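The first two steps above differ from parallel analysis with normal random data in that each simulated item is restricted to the response values actually observed for that item. A simple way to achieve this, sketched below under the assumption that independently permuting each column of the real data is an acceptable way to generate such datasets, is:

```python
import numpy as np

def permuted_eigenvalues(data, n_sims=100, seed=0):
    """Eigenvalues from datasets built by independently shuffling each
    column of the real data, so every simulated item keeps that item's
    observed response values while between-item structure is destroyed.
    A sketch of the resampling variant described in the text."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    n_obs, n_vars = data.shape
    all_eigs = np.empty((n_sims, n_vars))
    for i in range(n_sims):
        shuffled = np.column_stack(
            [rng.permutation(data[:, j]) for j in range(n_vars)]
        )
        corr = np.corrcoef(shuffled, rowvar=False)
        all_eigs[i] = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return all_eigs  # one row of descending eigenvalues per simulation
```

The saved eigenvalues can then be averaged (or a high percentile taken) and compared with the eigenvalues of the real data, as in the third step.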
Multidimensional Test Linking
Published in Steven P. Reise, Dennis A. Revicki, Handbook of Item Response Theory Modeling, 2014
For this illustration, the data were examined using both exploratory and confirmatory multidimensional factor structures (as well as a unidimensional structure). With respect to the exploratory structure, an analysis of the dimensional structure of the mathematics data was conducted using a combination of parallel analysis (Horn, 1965) and a vector approach developed by Reckase, Martineau, and Kim (2000). Parallel analysis is an extension of principal components analysis that compares the magnitude of principal components in the empirical data to those from randomly generated data with the same number of variables. The vector approach, on the other hand, considers changes in the angles between item vectors—characterized by factor loadings or MIRT slopes—as the number of modeled dimensions increases. Parallel analysis has been shown to perform well when the underlying factors in the data are orthogonal; however, the number of dimensions may be underestimated if the factors are correlated. In this case, the vector approach may more accurately identify the number of underlying factors. The results from the parallel analysis and vector approach were consistent and suggest the presence of three to four underlying factors in each grade-level test (two of which are major factors); however, a comparison of model fit suggests that the data are better characterized by two dimensions. The exploratory models were each applied using two factors.
Focusing Narrowly on Model Fit in Factor Analysis Can Mask Construct Heterogeneity and Model Misspecification: Applied Demonstrations across Sample and Assessment Types
Published in Journal of Personality Assessment, 2023
Kasey Stanton, Ashley L. Watts, Holly F. Levin-Aspenson, Ryan W. Carpenter, Noah N. Emery, Mark Zimmerman
Although a single-factor CFA model was viable, we conducted a follow-up EFA to show that distinct dimensions could be identified in these data. Prior studies have not examined the factor structure and multidimensionality of the nine specific PD ratings used here to our knowledge. Therefore, we conducted a parallel analysis to help inform how many dimensions should be extracted in these data. Parallel analysis indicated that up to two factors could be extracted (sample eigenvalues = 2.82, 1.11, 1.02; random data eigenvalues = 1.10, 1.07, 1.04). Consistent with this, two interpretable dimensions emerged when extracting two factors, as shown in the middle columns of Table 1. Items assessing a negative, unclear self-image (e.g., unstable sense of self) and needing validation from others (e.g., fearing abandonment) loaded strongly on Factor I, which we labeled Identity Disturbance. Items assessing mistrust of others (e.g., suspicious of being exploited) loaded strongly onto Factor II, which we labeled Suspiciousness (correlation between Factor I and II = .65).
Development and Validation of the Barriers to Care Scale: Assessing Access to Care among Canadian Armed Forces Health Care Providers
Published in Military Behavioral Health, 2022
Christine Frank, Jennifer Born
First, a preliminary EFA was conducted on the 52 items using maximum likelihood, with the number of factors extracted based on eigenvalues > 1. Parallel analysis was then used to determine the number of factors to retain. As discussed in Phase I, some barrier items were related to several domains, so the factors were expected to be non-orthogonal; an Oblimin rotation (delta = 0) was therefore used. The reliability of the scale was assessed with Cronbach’s alpha, and alphas above .70 were considered acceptable (Bland & Altman, 1997). To be retained in the EFA, items had to meet two criteria: (a) they had to load sufficiently onto a factor (loading > .32), and (b) they had to increase the overall alpha of their factor (i.e., increase the reliability of the subscale; Tabachnick & Fidell, 2007).
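The second retention criterion—whether an item increases its factor's alpha—can be checked by comparing the subscale's alpha with the "alpha if item deleted" values. The sketch below is an illustrative computation, not the authors' software output; the function names are ours.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def alpha_if_deleted(items):
    """Alpha of the subscale after dropping each item in turn. An item
    whose deletion lowers alpha is contributing to reliability and would
    meet retention criterion (b)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return np.array(
        [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]
    )
```

An item meets criterion (b) when its entry in `alpha_if_deleted` is below the full subscale's `cronbach_alpha`.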
Application of the FACE-Q rhinoplasty module in a mixed reconstructive and corrective rhinoplasty population in Finland
Published in Journal of Plastic Surgery and Hand Surgery, 2021
S. Pauliina Homsy, Mikko M. Uimonen, Andrew J. Lindford, Jussi P. Repo, Patrik A. Lassus
In the Appearance-Related Psychosocial Distress scale, the scores were concentrated toward the upper end of the scale. No floor or ceiling effects were observed. Parallel analysis suggested that one factor be included in the factor analysis. Loading values of all items were high, whereas the communality values of items g and h (0.40 and 0.31, respectively) suggested other underlying factors accounting for variance in these items, which address avoidance of others and interest in doing things (Table 2). A Cronbach’s alpha of 0.92 indicated high internal consistency of the scale (Table 3). There was no significant change between the mean baseline and repeated-administration scores, and the ICC (0.89), SEM (7.39), and R (0.00) values supported excellent reliability of the scale (Figure 1). Appearance-Related Psychosocial Distress scores correlated weakly with the Depression, Distress, and Vitality dimensions of the 15D. In addition, there was a moderate correlation with the item ‘How normal do you think your nose is?’ (Table 4).