Beyond Gut Feelings…
Published in Douglas A. Wiegmann, Scott A. Shappell, A Human Error Approach to Aviation Accident Analysis, 2017
Douglas A. Wiegmann, Scott A. Shappell
To control for this, a more conservative statistical measure of inter-rater reliability, known as Cohen’s Kappa, is typically used (Primavera et al., 1996). Cohen’s Kappa measures the level of agreement between raters in excess of the agreement that would have been obtained simply by chance. The value of the kappa coefficient ranges from one, if there is perfect agreement, to zero, if all agreements occurred by chance alone. In general, a Kappa value of 0.60 to 0.74 is considered “good,” with values in excess of 0.75 viewed as “excellent” levels of agreement (Fleiss, 1981). At a minimum, then, the goal of any classification system should be a Kappa of 0.60 or better.
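A minimal sketch of how such a Kappa value can be computed from a two-rater agreement table; the function follows the chance-corrected definition given above, and the counts below are hypothetical rather than taken from the chapter.

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Chance-corrected agreement between two raters.

    confusion[i, j] counts cases that rater A placed in category i
    and rater B placed in category j.
    """
    n = confusion.sum()
    p_observed = np.trace(confusion) / n                               # raw agreement (diagonal)
    p_chance = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / n**2  # agreement expected by chance
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts: two analysts classifying 100 accident reports
# into three causal categories (rows = rater A, columns = rater B).
table = np.array([[30, 4, 1],
                  [3, 35, 2],
                  [2, 3, 20]])
print(round(cohens_kappa(table), 2))  # 0.77 -> "excellent" by the 0.75 cut-off quoted above
```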
Cognitive Task Analysis Methods
Published in Neville A. Stanton, Paul M. Salmon, Laura A. Rafferty, Guy H. Walker, Chris Baber, Daniel P. Jenkins, Human Factors Methods, 2017
Neville A. Stanton, Paul M. Salmon, Laura A. Rafferty, Guy H. Walker, Chris Baber, Daniel P. Jenkins
Once the coding scheme has been finalised, it should be applied to all the transcripts. One or more codes may be assigned to each segment, again based on the needs of the analysis. Once all transcripts have been coded, a randomly selected section (typically around 10 per cent of the whole data set) should be passed to at least one other analyst for independent coding. Traditionally, Cohen’s Kappa is calculated to statistically assess inter-rater reliability. If the reliability is not sufficient, the coding scheme may require amendment.
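As an illustration of this reliability check, the sketch below compares the codes assigned to the same sample of segments by two analysts using scikit-learn’s cohen_kappa_score; the segment codes are invented for the example.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned to the same 10 per cent sample of transcript
# segments by the primary analyst and a second, independent analyst.
primary_analyst = ["plan", "monitor", "decide", "plan", "act", "monitor", "decide", "act"]
second_analyst  = ["plan", "monitor", "decide", "act",  "act", "monitor", "plan",   "act"]

kappa = cohen_kappa_score(primary_analyst, second_analyst)
print(f"kappa = {kappa:.2f}")  # ~0.67 here; a low value suggests amending the coding scheme
```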
Will the use of different prevalence rates influence the development of a primary prevention programme for low-back problems?
Published in Thomas Reilly, Julie Greeves, Advances in Sport, Leisure and Ergonomics, 2003
E. Zinzen, D. Caboor, M. Verlinden, E. Cattrysse, W. Duquet, P. Van Roy, Jan Pieter Clarys
According to Main and Waddell (1991), a Cohen’s Kappa >0.60 indicates a substantial concordance between the variables, 0.41–0.59 can be considered an average concordance, and 0.21–0.40 indicates a moderate concordance. A Kappa <0.21 indicates no concordance between the test–retest values. Referring to the results of the questionnaire used in the present study, most of the questions showed an average to substantial concordance.
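A small sketch of these interpretation bands as a lookup, following the Main and Waddell (1991) cut-offs quoted above (boundary values are assigned to the nearest band):

```python
def concordance_band(kappa: float) -> str:
    """Interpretation bands for test-retest Kappa, per Main and Waddell (1991)."""
    if kappa > 0.60:
        return "substantial concordance"
    if kappa >= 0.41:
        return "average concordance"
    if kappa >= 0.21:
        return "moderate concordance"
    return "no concordance"

print(concordance_band(0.55))  # "average concordance"
```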
Exploring the potential of blockchain-enabled lean automation in supply chain management: a systematic literature review, classification taxonomy, and future research agenda
Published in Production Planning & Control, 2023
Aaron Jackson, Virginia L. M. Spiegler, Kathy Kotiadis
Following this eligibility screening procedure, two researchers thoroughly analysed the full text of the 202 papers. The purpose of this was threefold: (i) to measure the degree of inter-rater agreement between the authors; (ii) to determine which papers should be included for analysis; and (iii) to reduce potential bias in the paper selection process (Thomé, Scavarda, and Scavarda 2016). To measure the degree of agreement, we applied the Cohen’s Kappa coefficient (as suggested by Durach, Kembro, and Wieland 2017). The Cohen’s Kappa statistic varies from 0 to 1. A value of 1 indicates that the researchers are in complete agreement and that the agreement was not achieved by chance; a value of 0 indicates no agreement amongst the researchers beyond what would be expected by chance. The Cohen’s Kappa value obtained for the quality evaluation procedure was 0.9, which indicates an almost perfect agreement (Pérez et al. 2020). We therefore retained all 202 papers for analysis.
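To illustrate why the chance correction matters in this kind of screening, the sketch below uses invented include/exclude decisions (not the data behind the 0.9 reported above): raw agreement can look high simply because most papers are included, while Kappa discounts that.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical include/exclude decisions by two reviewers (1 = include, 0 = exclude).
reviewer_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
reviewer_b = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]

raw_agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(raw_agreement, round(kappa, 2))  # 0.9 raw agreement but kappa ~0.62, since the
                                       # dominant "include" class inflates chance agreement
```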
EEG-based emotion recognition using dual tree complex wavelet transform and random subspace ensemble classifier
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2022
Emrah Hancer, Abdulhamit Subasi
Kappa: Cohen’s Kappa considers both the number of agreements (TP and TN) and the number of disagreements (FP and FN) between the raters, so it can be defined as a metric that measures the performance of a classification model by assessing agreement beyond chance between the two raters (the real-world observer and the classification model). It thus indicates how much better the model performs than a random classifier that guesses according to the class frequencies.
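A sketch of the chance-corrected form implied by this description, written in terms of the confusion-matrix counts (with N = TP + TN + FP + FN); the exact equation in the source paper is not reproduced in the excerpt, so this is the standard binary-classification form:

```latex
p_o = \frac{TP + TN}{N}, \qquad
p_e = \frac{(TP + FP)(TP + FN) + (FN + TN)(FP + TN)}{N^2}, \qquad
\kappa = \frac{p_o - p_e}{1 - p_e}
```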
A weighted pattern matching approach for classification of imbalanced data with a fireworks-based algorithm for feature selection
Published in Connection Science, 2019
Cohen’s Kappa rate (Ben David, 2008a, 2008b) is an alternative measure to classification accuracy since it compensates for random hits. It measures the hits that can be attributed to the classifier itself rather than to mere chance, and so reflects the reliability of a classification model. A low or negative Kappa value indicates that the classifier fails to classify imbalanced data (Li, Fong, & Zhuang, 2015). Cohen’s Kappa statistic ranges between –1 and 1: a value of –1 indicates total disagreement, 0 indicates random classification, and 1 indicates total agreement. The Cohen’s Kappa rate may be obtained from the confusion matrix using Equation (13), where $x_{ii}$ denotes the count of instances along the main diagonal of the confusion matrix, $K$ is the total number of instances, $Q$ is the number of classes, and $x_{.i}$ and $x_{i.}$ are the column and row total counts, respectively.
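Equation (13) itself is not included in the excerpt; a plausible reconstruction, consistent with the symbol definitions above and with the usual multi-class form of Cohen’s Kappa, is:

```latex
\kappa = \frac{K \sum_{i=1}^{Q} x_{ii} \;-\; \sum_{i=1}^{Q} x_{i.}\, x_{.i}}
              {K^{2} \;-\; \sum_{i=1}^{Q} x_{i.}\, x_{.i}}
```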