Assessing Psychometric Scale Properties of Patient Safety Culture
Published in Patrick Waterson, Patient Safety Culture, 2018
Jeanette Jackson, Theresa Kline
This chapter provides an overview of the psychometric properties of the HSOPSC using both classical test theory (CTT) and the modern approach, often referred to as Item Response Theory (IRT). To enhance understanding of IRT and its importance, its basic principles will first be introduced. In particular, three fundamental outcomes of the IRT approach will be highlighted: (1) item characteristic curves, (2) measurement information, and (3) invariance. Moreover, this chapter will present data previously analysed and published by Waterson and colleagues (2010) using the classical approaches of exploratory and confirmatory factor analysis, in order to contrast and discuss the IRT results with findings based on the factor-analytic approach. Finally, practical implications will be highlighted to encourage future healthcare research and healthcare service evaluations to apply IRT when measuring patient safety culture.
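For readers unfamiliar with the first of these outcomes, the minimal sketch below evaluates item characteristic curves under a two-parameter logistic (2PL) IRT model for two hypothetical items; the discrimination and difficulty values are purely illustrative and are not taken from the HSOPSC analysis.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Item characteristic curve under the 2PL model:
    P(endorse | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative (made-up) parameters: a = discrimination, b = difficulty.
theta = np.linspace(-3, 3, 7)            # latent safety-culture scores
p_item1 = icc_2pl(theta, a=1.2, b=-1.0)  # relatively "easy" item
p_item2 = icc_2pl(theta, a=0.8, b=1.0)   # relatively "hard" item

for t, p1, p2 in zip(theta, p_item1, p_item2):
    print(f"theta = {t:+.1f}   P(item 1) = {p1:.2f}   P(item 2) = {p2:.2f}")
```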
Research Methods and Statistics
Published in Monica Martinussen, David R. Hunter, Aviation Psychology and Human Factors, 2017
Monica Martinussen, David R. Hunter
In psychology, we usually assume that a person's test score consists of two components that together constitute what is called the observed score. One component is the person's true score, while the other is the error term. This can be expressed as follows: observed score = true score + error term. If the same person is tested several times under the same conditions, we would expect a similar, but not identical, test score every time. The observed scores will vary slightly from time to time. If the error term is small, these variations will be smaller than if the error term is large. In addition, we assume that the error is unsystematic, for example, that the error does not depend on a person's true score. This model of measurement is called classical test theory (Magnusson 2003) and is the starting point for many of the psychological tests used today. It is possible to have more advanced models for test scores and the error component (e.g., assuming that not all the error is random but that part of it is systematic).
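The small simulation below, with arbitrary choices for the true-score and error variances, illustrates this model: repeated testing of the same people produces similar but not identical observed scores, and the correlation between two parallel testings estimates the reliability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical test theory: observed score = true score + error,
# with unsystematic error that does not depend on the true score.
n_people = 1000
true_scores = rng.normal(loc=50, scale=10, size=n_people)   # SD of true scores = 10
errors = rng.normal(loc=0, scale=4, size=(2, n_people))     # SD of error = 4, two occasions
observed = true_scores + errors                              # one row per testing occasion

# Repeated testing gives similar but not identical scores; the correlation
# between two parallel testings estimates the reliability,
# var(true) / (var(true) + var(error)).
test_retest_r = np.corrcoef(observed[0], observed[1])[0, 1]
print(f"estimated reliability: {test_retest_r:.2f}")
print(f"theoretical reliability: {10**2 / (10**2 + 4**2):.2f}")
```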
Measurement Models for Psychological Attributes
Published in Technometrics, 2022
Chapter 1, “Measurement in the Social, Behavioral, and Health Sciences,” overviews the main concepts of psychological attributes, such as intelligence and personality traits, and the methodological principles of psychometric measurement. The relation of the models to one or more attribute scales is discussed; for example, item response theory (IRT) models mostly imply an ordinal scale for individual locations, whereas latent class models (LCMs) represent unordered categories. Measurement tools in behavioral studies are called questionnaires or inventories, and in achievement assessment they are called tests. These tools usually consist of a set of problems, statements, or questions, known as items, and respondents' answers to the items produce the data from which the attributes are judged. These psychological attributes can be abilities (e.g., arithmetic or verbal), knowledge (history or vocabulary), skills (typing or attention), attitudes (toward democracy or parents), and personality traits (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism, together known as the Big Five). Dichotomous and polytomous scores are typical in social, behavioral, and health measurement models.

Classical test theory (CTT) is mostly focused on the repeatability and reliability of test performance. One-factor analysis and unidimensional IRT models assume a single unobservable, or latent, variable, an assumption that can be checked against the model fit. Measurement models include parameters for people's performance and parameters for item properties (e.g., item difficulty), and a higher score on an item corresponds to a higher position on the attribute scale. Adding item scores yields a test score, or sum score, which is commonly transformed to standard Z-scores and IQ scores. Principal component analysis (PCA) and factor analysis (FA) combined with CTT are the techniques most frequently used to build scores for attribute scaling, and the reliability coefficient is the main CTT characteristic of test repeatability reported in the social, behavioral, and health sciences. The next most frequently applied models for scaling are IRT models, which measure people by sum scores in nonparametric IRT and by latent-variable scores in parametric IRT. The LCM uses the pattern of scores on all items in the test to classify people into a few nominal or ordinal classes, and within a class people are not ordered. The chapter also discusses causes of messy data, the use of LCMs to scale transitive reasoning, and the cycle of instrument construction: identification of the attribute and its theory, operationalization of the attribute, quantification of item responses, psychometric analysis, and feedback to the theory.
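As a concrete illustration of the sum-score and reliability calculations described above, the short sketch below uses made-up dichotomous item responses (not data from the book) to compute sum scores, their Z-score and IQ-metric transformations, and Cronbach's alpha as a reliability coefficient.

```python
import numpy as np

# Made-up dichotomous responses: rows = respondents, columns = items (1 = agree/correct).
X = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
])

sum_scores = X.sum(axis=1)                      # CTT test score per respondent
z_scores = (sum_scores - sum_scores.mean()) / sum_scores.std(ddof=1)
iq_metric = 100 + 15 * z_scores                 # conventional IQ metric (mean 100, SD 15)

# Cronbach's alpha: a commonly reported CTT reliability coefficient.
k = X.shape[1]
item_vars = X.var(axis=0, ddof=1).sum()
total_var = sum_scores.var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars / total_var)

print("sum scores:", sum_scores)
print("IQ-metric scores:", np.round(iq_metric, 1))
print(f"Cronbach's alpha: {alpha:.2f}")
```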