Performance Ratings
Published in Herman Koren, Alma Mary Anderson, Management and Supervisory Practices for Environmental Professionals, 2021
The “Fundamental Management Information” section sets forth information on performance evaluation. Everybody wants a report card. To most individuals, the performance rating is equivalent to a report card. It tells them whether or not they are doing good work and whether the good work will result in some type of reward. The supervisor has an extremely important role in this performance rating area, since he or she is close to the employee and best knows what the employee can and cannot do. The supervisor is the one who initially rates the employee and is the one who will have to train the individual when deficiencies have been noted. The performance rating, in effect, is a tool which is used not only to measure performance, but also to help improve the entire organization.
Principles of Assessing Non-Technical Skills
Published in Matthew J. W. Thomas, Training and Assessing Non-Technical Skills, 2017
While a set of behavioural markers is used to define and describe specific aspects of each domain of non-technical skills, actual assessments are usually made using some form of rating scale built into the system. Performance ratings are the most common form of assessment tool used in many settings and have a long history of use in the measurement of job performance.10 Performance rating in its most basic form involves a rater evaluating an individual’s or team’s performance against some pre-determined criteria. It does not rely solely on objective metrics, but rather, involves a degree of judgement on the part of the rater.
What do subjective workload scales really measure? Operational and representational solutions to divergence of workload measures
Published in Theoretical Issues in Ergonomics Science, 2020
Gerald Matthews, Joost De Winter, P. A. Hancock
There are also issues specific to the design of many extant subjective workload scales. For example, administration of the NASA-TLX scale (Hart and Staveland 1988) requires the provision to respondents of rating scale descriptions. This procedure recognizes, but does not necessarily solve, the problem of variation in interpretations of ‘workload.’ De Winter (2014) has pointed out persistent issues with the NASA-TLX whose impact on validity has not been either fully aired or fully resolved as of the present time. These include, but are not limited to, the nature of the response scales, whether or not the raw ratings are weighted, and how people interpret the specific ‘own performance’ rating element, which uses a response scale that is different from, and inverted relative to, all of the others. Another difficulty is that task demand commonly varies on a moment-to-moment basis, requiring the person to estimate workload for what is often conceived as a unitary ‘task.’ Thus, they need to integrate multiple memories of their recent experience; sometimes this can be done fairly accurately, at other times not. Of course, this integration is also a constructive process that can be disproportionately influenced by workload peaks or deviations from expected workload (see Hancock 2017; Jansen et al. 2016). This further suggests that the validity of workload measures may be superior in individuals with better episodic memory, and for tasks that contain relatively few stimuli, relative to those for which many events happen during any particular epoch of interest.
Associations among workload dimensions, performance, and situational characteristics: a meta-analytic review of the Task Load Index
Published in Behaviour & Information Technology, 2022
Each subscale is measured with a single item. Hence, the entire instrument consists of six items and is, therefore, easy to administer. The items are rated by marking a response between the endpoints ‘Low’ and ‘High’, with the exception that the performance subscale has the endpoints ‘Good’ and ‘Poor’ (note that a numerically higher performance rating indicates poorer performance). Often, the composite TLX score is simply the average of the six item ratings. This way of calculating the composite score is known as raw TLX (Hart 2006). To help interpret TLX values, Hertzum (2021) provides reference values for the subscales and for raw TLX.
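The raw TLX calculation described above can be sketched in a few lines of code. The following Python function is illustrative only (the function and subscale names are assumptions, not part of any official TLX implementation); it simply averages the six subscale ratings on an assumed 0–100 scale, with no inversion of the performance subscale, since its ‘Good’-to-‘Poor’ endpoints already mean that a higher rating indicates poorer performance.

```python
def raw_tlx(ratings):
    """Average the six NASA-TLX subscale ratings into a raw TLX composite.

    `ratings` maps subscale name -> rating on a 0-100 scale. The
    'performance' subscale runs from 'Good' (0) to 'Poor' (100), so a
    higher value already indicates poorer performance and is averaged
    in without inversion.
    """
    subscales = ("mental_demand", "physical_demand", "temporal_demand",
                 "performance", "effort", "frustration")
    missing = [s for s in subscales if s not in ratings]
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    return sum(ratings[s] for s in subscales) / len(subscales)

# Hypothetical ratings for a single task episode:
example = {"mental_demand": 70, "physical_demand": 20,
           "temporal_demand": 55, "performance": 30,
           "effort": 60, "frustration": 40}
print(raw_tlx(example))  # 45.833...
```

The weighted variant of the TLX (not shown) would instead combine the subscales using pairwise-comparison weights elicited from the respondent; the raw TLX simply gives every subscale equal weight.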