The Profile of Emotional Designs: A Tool for the Measurement of Affective and Cognitive Responses to In-Vehicle Innovations
Published in Michael A. Regan, Tim Horberry, Alan Stevens, Driver Acceptance of New Technology, 2018
Robert Edmunds, Lisa Dorn, Lee Skrypchuk
The responses across the three innovations were subjected to principal components analysis to identify the constructs underlying each of the four scales (Technology Acceptance, Moderating Factors, Affective Appraisal and Emotional Valence). The number of factors to extract was determined by considering a parallel analysis of 1,000 random correlation matrices, using the program written by O'Connor (2000), together with the scree plot and the eigenvalue-one rule (factors with an eigenvalue ≥ 1 are accepted as salient). Principal axis factoring was used to extract the relevant number of factors, and these were submitted to oblique rotation using a quartimin procedure (Direct Oblimin) to achieve simple structure. Item loadings greater than 0.30 were regarded as important for interpreting the factors, so as to retain as many items as possible at this early stage of the PED's development. The final instrument will apply higher loading cut-offs of 0.40 or 0.50, so as to reduce the number of items for each of the factors.
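The retention decision described here (eigenvalue-one rule plus parallel analysis against random correlation matrices) can be sketched as follows. This is an illustrative Python reconstruction, not the O'Connor (2000) program the authors used, and the data matrix `X` is a hypothetical stand-in for the questionnaire responses.

```python
import numpy as np

def kaiser_count(data):
    """Eigenvalue-one rule: count eigenvalues of the correlation matrix that are >= 1."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    return int(np.sum(eigvals >= 1.0))

def parallel_count(data, n_random=1000, seed=0):
    """Retain factors whose observed eigenvalues exceed the mean eigenvalues
    of correlation matrices computed from random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.empty((n_random, p))
    for i in range(n_random):
        sim = rng.standard_normal((n, p))
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    return int(np.sum(observed > random_eigs.mean(axis=0)))

# Hypothetical data standing in for the questionnaire responses (250 respondents, 20 items).
X = np.random.default_rng(1).standard_normal((250, 20))
print("Kaiser rule:", kaiser_count(X), "| Parallel analysis:", parallel_count(X))
```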
Dimension Reduction Breaking the Curse of Dimensionality
Published in Chong Ho Alex Yu, Data Mining and Exploration, 2022
In this example, the sample size is large and therefore the component structure tends to be stable. However, if the sample size is smaller and there are many variables, parallel analysis (Horn 1965) is recommended for verifying the result of PCA. Indeed, numerous studies have confirmed that parallel analysis is by far the most accurate method for deciding how many components or factors to extract (Buja and Eyuboglu 1992; Glorfeld 1995; Humphreys and Montanelli 1975; Zwick and Velicer 1986). The logic of parallel analysis resembles that of resampling, which was discussed in Chapter 7. In parallel analysis, a component is retained only if its eigenvalue is greater than the corresponding eigenvalue from a random matrix. To be more specific, the algorithm generates a set of random-data correlation matrices by bootstrapping the data set (resampling with replacement), and then the average eigenvalues and the 95th-percentile eigenvalues are computed. Next, the observed eigenvalues are compared against the resampled eigenvalues, and only components with observed eigenvalues greater than those from the resampling are retained. The resampled result functions as an empirical sampling distribution against which the observed eigenvalues are compared. The rationale for using the 95th percentile of the resampled eigenvalues is that it is analogous to setting alpha to .05 in hypothesis testing (Cho et al. 2009). The scree plot in Figure 8.4 shows an example. A scree plot is a line plot of the eigenvalues on the y-axis against the number of factors or principal components on the x-axis. In this example, the line with diamond points shows the eigenvalue associated with each component yielded by PCA, whereas the lines with squares and triangles result from parallel analysis. This author suggests that only two or three components should be retained because they are above the random results. Parallel analysis can be run in SAS, SPSS, Matlab, or R using the programs developed by O'Connor (2000), which can be downloaded from http://oconnor-psych.ok.ubc.ca/nfactors/nfactors.html.
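A minimal sketch of the resampling logic described above, assuming Pearson correlations and independent column-wise resampling with replacement so that the correlation structure is destroyed (O'Connor's programs offer further options, such as normally distributed random data):

```python
import numpy as np

def bootstrap_parallel_analysis(data, n_boot=1000, seed=0):
    """Compare observed eigenvalues with the mean and 95th-percentile eigenvalues
    of data resampled with replacement (independently per column, so that the
    correlation structure is destroyed)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    resampled_eigs = np.empty((n_boot, p))
    for b in range(n_boot):
        resampled = np.column_stack(
            [rng.choice(data[:, j], size=n, replace=True) for j in range(p)]
        )
        resampled_eigs[b] = np.linalg.eigvalsh(np.corrcoef(resampled, rowvar=False))[::-1]
    mean_eigs = resampled_eigs.mean(axis=0)
    p95_eigs = np.percentile(resampled_eigs, 95, axis=0)
    # Keep only components whose observed eigenvalue exceeds the 95th percentile,
    # analogous to setting alpha to .05 in hypothesis testing.
    n_retained = int(np.sum(observed > p95_eigs))
    return n_retained, observed, mean_eigs, p95_eigs
```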
The development, validation and use of an interprofessional project management questionnaire in engineering education
Published in European Journal of Engineering Education, 2023
Roland Tormey, Marc Laperrouza
A number of different procedures are recommended in the literature for extracting the optimal number of factors from a dataset, including Kaiser's or Jolliffe's criterion for eigenvalues and a qualitative review of a scree plot (Field, Miles, and Field 2012, 762). Horn's parallel analysis method has been found to be among the most accurate, especially since sample size can have a considerable impact on the reliability of other methods (Warne and Larsen 2014). Parallel analysis indicated that four factors were most appropriate. Since the intended theoretical model was a five-factor model, and since the emergent four factors largely aligned with it (albeit with two of the five factors combined into one), this suggested that the four-factor model was worthy of exploration.
The Finnish Version of the Affinity for Technology Interaction (ATI) Scale: Psychometric Properties and an Examination of Gender Differences
Published in International Journal of Human–Computer Interaction, 2023
Ville Heilala, Riitta Kelly, Mirka Saarela, Päivikki Jääskelä, Tommi Kärkkäinen
The first step in factor analysis is to assess the dimensionality of the data and decide how many factors to retain. A suggested approach is to use multiple methods to assess the dimensionality of the data and compare their results (Lubbe, 2019). To assess the dimensionality and the number of factors to retain, we used parallel analysis (PA) and minimum average partials (MAP). Parallel analysis compares the structure in the collected data to the structure of randomly sampled data; dimensions whose eigenvalues in the actual data exceed those obtained from the random data are retained. PA is often referred to as one of the most accurate and robust rules for determining the dimensionality of data (Lubbe, 2019), and it performs well in a wide variety of scenarios (e.g., Golino et al., 2020). PA with PCA extraction (PA-PCA, a.k.a. Horn's PA; Horn, 1965) using polychoric correlation has been suggested to be suitable for all types of data (Garrido et al., 2013). For PA-PCA, we used a non-parametric version of parallel analysis with column permutation (500 random data sets), polychoric correlations, and quantile thresholds of 50% (median, PA-PCA-m) and 95% (PA-PCA-95) (Auerswald & Moshagen, 2019; Buja & Eyuboglu, 1992).
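The column-permutation variant of PA-PCA described above can be sketched as follows. This illustration substitutes Pearson correlations for the polychoric correlations used in the study (which require a dedicated estimation routine); the permutation scheme, the 500 random data sets, and the 50%/95% quantile thresholds follow the description in the text.

```python
import numpy as np

def permutation_parallel_analysis(data, n_datasets=500, quantiles=(50, 95), seed=0):
    """Non-parametric PA: permute each column independently so inter-item
    correlations are broken while each item's marginal distribution is kept,
    then compare observed eigenvalues with the chosen quantiles of the
    permuted-data eigenvalues."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    permuted_eigs = np.empty((n_datasets, p))
    for k in range(n_datasets):
        permuted = np.column_stack([rng.permutation(data[:, j]) for j in range(p)])
        permuted_eigs[k] = np.linalg.eigvalsh(np.corrcoef(permuted, rowvar=False))[::-1]
    # Number of dimensions retained at each quantile threshold (e.g., median and 95%).
    return {q: int(np.sum(observed > np.percentile(permuted_eigs, q, axis=0)))
            for q in quantiles}
```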
Selfish or Utilitarian Automated Vehicles? Deontological Evaluation and Public Acceptance
Published in International Journal of Human–Computer Interaction, 2021
We examined the dimensional structure of the rating items for the four psychological factors in an exploratory factor analysis. We used parallel analysis (Horn, 1965) to determine the number of factors to extract. The "fa.parallel" function of the R package "psych" was used, which indicated four latent factors; thus, we used this number in the exploratory factor analysis. The Kaiser–Meyer–Olkin (KMO) factor adequacy index (KMO = .92) and Bartlett's test of sphericity (χ² = 7671.94, p < .001) supported the suitability of the data for exploratory factor analysis. Second, to properly estimate the percentage of explained common variance in the item scores, we used a minimum rank exploratory factor analysis (Shapiro & Ten Berge, 2002). All items except the three perceived benefit items had low cross-loadings (< .40) on other factors. Considering that the cross-loadings of the three perceived benefit items on another factor (i.e., behavioral intention) were equal to or very close to the cut-off value of .40, we decided to keep these three items. As shown in Table 3, Factor I was interpreted as behavioral intention (Cronbach's α = .77); Factor II as deontological evaluation (α = .83); Factor III as perceived benefit (α = .74); and Factor IV as perceived risk (α = .84). All α values were greater than .70, supporting internal consistency reliability (Kankanhalli et al., 2005). The four factors cumulatively explained 74% of the variance and were significantly correlated with each other (see Table 4).
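The adequacy checks reported above were run with the R package "psych"; the sketch below is a rough Python equivalent based on the standard formulas for Bartlett's test of sphericity and the overall KMO index, not the code used in the study.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi-square statistic and p-value for the
    null hypothesis that the correlation matrix is an identity matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)

def kmo_overall(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy, computed from
    the squared correlations and squared partial correlations of the items."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Partial correlations obtained from the inverse of the correlation matrix.
    partial = -inv_R / np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    off_diag = ~np.eye(R.shape[0], dtype=bool)
    r2 = np.sum(R[off_diag] ** 2)
    q2 = np.sum(partial[off_diag] ** 2)
    return r2 / (r2 + q2)
```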