Work Ability Index as a Tool of Assessment of the Possibilities to Perform Work
Published in Joanna Bugajska, Teresa Makowiec-Dąbrowska, Tomasz Kostka, Individual and Occupational Determinants, 2020
Teresa Makowiec-Dąbrowska, Joanna Bugajska
In research conducted in Thailand among 2,744 participants, the Cronbach's alpha value obtained for the WAI was 0.66 (Kaewboonchoo and Ratanasiripong 2015). Three factors were determined based on factor analysis. The first factor was related to the self-rating of current work ability compared with the lifetime best and of work ability in relation to the physical and mental demands of the job; the second was related to the self-rating of four elements, namely the number of current diseases diagnosed by a physician, estimated work impairment due to disease, sick leave during the past year and own prognosis of work ability 2 years from now; the third factor referred to the rating of own prognosis of work ability 2 years from now and three questions assessing the worker’s mental resources.
A Survey of Chemometric Techniques for Exploratory Data Analysis and Pattern Recognition
Published in Iain Thornton, Hazel Doyle, Ann Moir, Geochemistry and Health, 2017
Since each measured parameter adds a dimension to the data representation, measuring 30 variables requires the ability to depict relationships in a 30-dimensional space. This is well beyond the two or three dimensions which humans conceptualise comfortably, and beyond the graphical representations in common use. Factor analysis is a pattern recognition technique that uses all of the measured variables (features) to examine the interrelationships in the data. It accomplishes dimension reduction by minimising minor variations so that the major variations may be summarised; thus the maximum information from the original variables is retained in a few derived variables, or factors. Once the dimensionality of the problem has been reduced, the data can be depicted in a few selected two- or three-dimensional plots. We shall see how these plots highlight the significant features of the underlying data structure.
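The kind of dimension reduction described above can be sketched with scikit-learn's FactorAnalysis; the data set here is synthetic (the sample size, factor count and noise level are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic data: 200 samples of 30 variables driven by 3 hidden factors
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))         # the underlying factors
loadings = rng.normal(size=(3, 30))        # how factors map onto variables
X = latent @ loadings + 0.5 * rng.normal(size=(200, 30))

# Summarise the 30 measured variables as 3 derived factors
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(X)               # data expressed in factor space

print(X.shape, "->", scores.shape)
```

The three columns of `scores` can then be plotted pairwise in ordinary two-dimensional scatter plots, which is the step the excerpt alludes to.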
Overview on Structural Equation Modeling
Published in Sergey V. Samoilenko, Kweku-Muata Osei-Bryson, Creating Theoretical Research Frameworks Using Multiple Methods, 2017
Sergey V. Samoilenko, Kweku-Muata Osei-Bryson
There are two types of FA—exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The basic difference between the two approaches is their purpose. EFA aims to explore whether the number of items (variables) in the data set could be reduced to a smaller number of meaningful factors—latent constructs. In this sense, EFA is a data-analytic tool that allows common themes in the data set to be discovered. While EFA is not performed with a preconceived structure in mind, an investigator may interpret the extracted factors in the light of a theory or a framework. CFA, on the other hand, is driven by the goal of testing the hypothesis that the relationships between the items and the factors accord with an established framework or theory.
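The exploratory side of this distinction can be illustrated with a small EFA sketch: six synthetic items, three driven by each of two latent constructs, recovered without any preconceived structure. The item design and noise level are invented for illustration; varimax rotation (available in scikit-learn 0.24+) is used to make the loadings interpretable.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)

# Items 0-2 measure the first construct, items 3-5 the second
X = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)] +
                    [f2 + 0.3 * rng.normal(size=n) for _ in range(3)])

efa = FactorAnalysis(n_components=2, rotation="varimax",
                     random_state=0).fit(X)

# One row per item: its loadings on the two extracted factors
for i, row in enumerate(efa.components_.T):
    print(f"item {i}: {row.round(2)}")
```

Each item loads strongly on exactly one factor, so the two common themes emerge from the data themselves; a CFA would instead start from that two-factor structure and test how well it fits.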
High-rise apartment quality evaluation and related demographic factors: lesson from RentSafeTO programme
Published in Building Research & Information, 2023
Factor analysis is an interdependence statistical technique designed to determine the number and nature of latent variables or factors that explain the variation and covariance in a set of practical measures (Brown, 2015). There are two types of common factor analysis: confirmatory factor analysis (CFA) and exploratory factor analysis (EFA). EFA is used when researchers have no clear or relatively complete expectations of the relationship structures (Rogers, 2022). There are several commonly used methods for determining the number of EFA factors: the Kaiser–Guttman rule, the scree test and parallel analysis (Fabrigar et al., 1999). Since no method is fail-safe, it seems reasonable to apply multiple criteria. Fabrigar et al. (1999) recommended the combination of the scree test and parallel analysis to identify the appropriate number of factors. If the common factors are extracted via the maximum likelihood method, the number of latent variables can be determined more naturally using model test results and additional fitting indicators (Fabrigar et al., 1999; Schulze et al., 2015).
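Two of the factor-retention criteria named above can be sketched in a few lines: the Kaiser–Guttman rule keeps eigenvalues of the correlation matrix above 1, while parallel analysis keeps those exceeding the mean eigenvalues from random data of the same size. This is a simplified, eigenvalue-based version of Horn's parallel analysis on synthetic data; the data set and iteration count are invented for illustration.

```python
import numpy as np

def parallel_analysis(X, n_iter=100, seed=0):
    """Count factors whose correlation-matrix eigenvalues exceed the
    mean eigenvalues obtained from same-sized random normal data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        R = rng.normal(size=(n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand)), obs

# Demo: 10 variables driven by 2 latent factors plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))
X = latent @ rng.normal(size=(2, 10)) + rng.normal(size=(300, 10))

k, eigvals = parallel_analysis(X)
print("Kaiser-Guttman rule keeps:", int(np.sum(eigvals > 1)), "factors")
print("Parallel analysis keeps:  ", k, "factors")
```

Applying both criteria to the same data, as the excerpt recommends, makes it easier to spot cases where the Kaiser–Guttman rule over-retains factors.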
A semantic similarity analysis of Internet of Things
Published in Enterprise Information Systems, 2018
Chun Kit Ng, Chun Ho Wu, Kai Leung Yung, Wai Hung Ip, Tommy Cheung
Factor analysis is a well-known statistical technique for examining interrelationships among a large set of variables such as authors, journals or articles. It divides the large set of variables into smaller groups of factors and explains the maximum amount of the observations with the minimum number of explanatory factors (Field 2013). There are two main types of factor analysis, called confirmatory factor analysis (CFA) and exploratory factor analysis (EFA). Generally, CFA expects the number of factors to be confirmed before the analysis is performed; EFA, in contrast, does not make this assumption. In this study, a ‘factor’ labels an interrelated group of variables, performing data reduction and summarisation among similar articles. Every factor comprises influential papers which are highly co-cited by other papers within a particular field (McCain 1990), and a factor can also be treated as a subfield of an academic area. Different factors provide the foundations of the subfields, and together they portray the intellectual core of an academic area. Therefore, the factor analysis technique used in this study is EFA, which is commonly used in document analysis such as co-citation analysis (Leydesdorff and Vaughan 2006).
Alcohol and substance misuse in the construction industry
Published in International Journal of Occupational Safety and Ergonomics, 2021
Joseph Flannery, Saheed O. Ajayi, Adekunle S. Oyegoke
When using factor analysis, factor loadings reflect how strongly each specific factor affects a variable, ranging between −1 and 1; the closer a loading is to ±1, the stronger the factor's influence on that variable. An eigenvalue is a number summarising the variance in the data explained by a factor; the factor with the highest eigenvalue in a group accounts for the most variation in responses and is deemed the most important factor. The percentage of variance expresses the contribution of the group to the model. The results from the factor analysis showed that the seven contributing factors accounted for 74.83% of the total variance, and the three mitigating strategies for 72.899%; these are the percentages of variance attributable to each group.
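The relationship between eigenvalues and the percentage of variance reported above can be shown with a short calculation; the eigenvalues below are invented for illustration, not the values from this study.

```python
import numpy as np

# Hypothetical eigenvalues from a factor analysis of 10 survey items
eigenvalues = np.array([4.2, 2.1, 1.2, 0.8, 0.6, 0.4, 0.3, 0.2, 0.1, 0.1])

# Each factor's share of the total variance, plus the cumulative share
pct = 100 * eigenvalues / eigenvalues.sum()
cum = np.cumsum(pct)
for i, (p, c) in enumerate(zip(pct, cum), 1):
    print(f"factor {i}: {p:5.1f}%   cumulative {c:5.1f}%")
```

A figure such as the 74.83% quoted for the seven contributing factors is obtained exactly this way: the cumulative share of the retained factors' eigenvalues in the total variance.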