Computer-Aided Diagnosis of Prostate Magnetic Resonance Imaging
Published in Ayman El-Baz, Gyan Pareek, Jasjit S. Suri, Prostate Cancer Imaging, 2018
Valentina Giannini, Simone Mazzetti, Filippo Russo, Daniele Regge
Per-patient and per-lesion assessments were compared using the McNemar test; other comparisons were performed with the Mann-Whitney test. The AUROC was generated for each reader using the confidence scores and pathology results. Statistical significance was set at p ≤ 0.05. Inter-observer agreement between reviewers was evaluated using Fleiss' kappa statistics on StatsToDo© [https://www.statstodo.com/CohenKappa_Pgm.php]. All other tests were conducted using MedCalc version 15.6.1.
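As an illustration of this kind of analysis pipeline, the following is a minimal Python sketch on hypothetical data (the authors used MedCalc and StatsToDo, not Python): it computes a per-reader AUROC from confidence scores against pathology and a McNemar test on paired per-lesion calls. The variable names, data, and the 0.5 decision threshold are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): per-reader AUROC from confidence
# scores and a McNemar comparison of paired per-lesion calls, on hypothetical data.
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
pathology = rng.integers(0, 2, size=60)                    # 1 = cancer at pathology (hypothetical)
scores_reader1 = pathology * 0.5 + rng.random(60) * 0.5    # confidence scores in [0, 1]
scores_reader2 = pathology * 0.3 + rng.random(60) * 0.7

# AUROC per reader: confidence scores vs. pathology ground truth
for name, scores in [("reader 1", scores_reader1), ("reader 2", scores_reader2)]:
    print(name, "AUROC =", round(roc_auc_score(pathology, scores), 3))

# McNemar test on paired correct/incorrect calls (0.5 threshold assumed)
correct1 = (scores_reader1 >= 0.5).astype(int) == pathology
correct2 = (scores_reader2 >= 0.5).astype(int) == pathology
table = np.array([[np.sum(correct1 & correct2),  np.sum(correct1 & ~correct2)],
                  [np.sum(~correct1 & correct2), np.sum(~correct1 & ~correct2)]])
print(mcnemar(table, exact=True))
```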
Inter- and Intra-rater Reliability of the Head Control Scale: Brief Report
Published in Developmental Neurorehabilitation, 2022
Helene M. Dumas, Elaine L. Rosen, Damara Viray, Colleen Sutherland, Morgan Seifert, Pengsheng Ni
The ICC is preferred to kappa to assess agreement for the HCS overall score because it is a continuous scale. Levels of agreement for the ICCs were set as follows: >0.75 Good to Excellent, 0.5–0.75 Moderate to Good, 0.25–0.5 Fair, and 0.00–0.24 Little or None.11 Fleiss' kappa (κ) is a chance-corrected measure of agreement used when there are more than 2 raters. The kappa statistic was preferred to the ICC for the discrete ordinal HCS position scores and was interpreted as follows: 1.00–0.81 Almost perfect agreement; 0.80–0.61 Substantial agreement; 0.60–0.41 Moderate agreement; 0.40–0.21 Fair agreement; 0.20–0.00 Slight agreement; and <0 Poor or No agreement.12 In addition, we analyzed the precision of the estimated ICC by calculating the half-width of its 95% confidence interval from the number of rater observations, the number of raters, and the estimated ICCs, to confirm sufficient power for this sample.13 The analyses were conducted using the "magree" macro in SAS (SAS Institute, Cary, North Carolina) and the "irr" package in the R statistical programming environment (R Foundation for Statistical Computing, Vienna, Austria).
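A minimal sketch of how Fleiss' kappa for more than two raters can be computed (the authors used the SAS "magree" macro and the R "irr" package; the Python/statsmodels route below and the ratings matrix are assumptions for illustration):

```python
# Minimal sketch (not the authors' analysis): Fleiss' kappa for discrete
# ordinal scores from more than two raters, via statsmodels.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([  # 6 subjects x 4 raters, hypothetical HCS-style position scores
    [3, 3, 3, 2],
    [1, 1, 2, 1],
    [4, 4, 4, 4],
    [2, 2, 3, 2],
    [1, 1, 1, 1],
    [3, 2, 3, 3],
])

# aggregate_raters turns a subjects x raters matrix into subjects x categories counts
table, categories = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method='fleiss')
print("Fleiss' kappa =", round(kappa, 3))  # interpret against the thresholds above
```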
Hospital discharge register data on non-ST-elevation and ST-elevation myocardial infarction in Finland; terminology and statistical issues on validity and agreement to avoid misinterpretation
Published in Scandinavian Cardiovascular Journal, 2020
Second, what is critically important is agreement (precision, repeatability, reliability), which is conceptually different from validity (accuracy); consequently, our methodological and statistical approach to assess agreement should be different. Agreement refers to the refinement in a measurement, calculation, or specification, especially as represented by the number of digits given. For validity, a global average approach is usually considered; however, for agreement, our approach should be individual-based. Applying Cohen's kappa coefficient is not appropriate to assess agreement, because Cohen's kappa coefficient depends on the prevalence in each category. It is possible to have the prevalence of concordant cells equal to 90% and of discordant cells equal to 10%, yet obtain different kappa coefficient values [0.44 as moderate vs. 0.81 as very good] (Table 1). Cohen's kappa coefficient value also depends on the number of categories [2,6,7]. I should mention that applying the weighted kappa would be a good choice to assess intra-rater agreement; however, Fleiss' kappa is suggested to assess inter-rater agreement [8]. The authors concluded that the division of MI diagnoses into STEMI and NSTEMI is not reliable in the Finnish HDR. Such a conclusion can be a misleading message due to the inappropriate use of statistical tests to assess validity and agreement. In brief, any conclusion on validity and agreement should take into account the above-mentioned methodological issues; otherwise, misinterpretation may occur.
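The prevalence dependence can be reproduced with a small worked example. The two hypothetical 2×2 tables below (not the article's Table 1) both have 90% concordant and 10% discordant cells, yet Cohen's kappa differs markedly because the expected chance agreement computed from the marginals differs.

```python
# Minimal worked example (hypothetical counts, not the article's Table 1):
# identical observed agreement (90%) but different marginal prevalences
# give very different Cohen's kappa values.
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a 2x2 contingency table of two raters."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n                         # concordant proportion
    p_expected = (table.sum(0) * table.sum(1)).sum() / n**2  # chance agreement from marginals
    return (p_observed - p_expected) / (1 - p_expected)

balanced = [[45, 5], [5, 45]]   # 90% agreement, balanced prevalence -> kappa ~ 0.80
skewed   = [[85, 5], [5,  5]]   # 90% agreement, skewed prevalence   -> kappa ~ 0.44
print("balanced:", round(cohens_kappa(balanced), 2))
print("skewed:  ", round(cohens_kappa(skewed), 2))
```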
The development and implementation of the Nottingham early cognitive and listening links (Early CaLL); A framework designed to support expectation counselling and to monitor the progress, post cochlear implantation, of deaf children with severe (SLD) and profound and multiple learning difficulties (PMLD) and associated complex needs
Published in Cochlear Implants International, 2020
Gill Datta, Karen Durbin, Amanda Odell, Jayne Ramirez-Inscoe, Tracey Twomey
The materials were initially trialled in-house in 2011–2012, with inter-observer reliability studies overseen by the team's clinical psychologist. Five professionals from the team, with backgrounds in auditory-verbal therapy, speech and language therapy or deaf education, used a range of written reports, alongside video records of therapy sessions of four children, to evaluate their progress against the Framework statements at four different time points across the first five years post cochlear implantation. Using Fleiss' kappa, an overall agreement of 79.19% was identified between the raters. The Framework was then piloted by the team for 24 months, with an on-going review process in consultation with families and local professionals, after which revisions were made to improve the clarity of the wording and to strengthen aspects of the strands, particularly those which focussed on Social Engagement and Alternative and Augmentative Communication.