Meta-Analysis of Diagnostic Tests
Published in Christopher H. Schmid, Theo Stijnen, Ian R. White, Handbook of Meta-Analysis, 2020
Yulun Liu, Xiaoye Ma, Yong Chen, Theo Stijnen, Haitao Chu
Second, in addition to partial verification bias, other potential sources of bias, such as publication bias, also threaten accurate evaluation of test performance. Simple methods such as funnel plots (Song et al., 2002) and the trim-and-fill method (Duval and Tweedie, 2000) have been used to detect and correct for publication bias. However, as pointed out in Chapter 13, these are inappropriate diagnostics for detecting publication bias in this setting. Moreover, separate funnel plots for sensitivity and specificity are difficult to interpret and to test simultaneously. Further research efforts are needed to address publication bias in diagnostic test meta-analysis.
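To make the conventional approach concrete, a funnel-plot asymmetry check is often implemented as an Egger-style regression of the standardized effect on precision. The sketch below uses invented per-study log diagnostic odds ratios and standard errors (not data from any cited source); `scipy.stats.linregress` is one convenient tool for this, not the method of any particular chapter:

```python
import numpy as np
from scipy import stats

# Hypothetical log diagnostic odds ratios and standard errors
# from 10 studies (illustrative numbers only)
log_dor = np.array([1.8, 2.1, 1.5, 2.4, 1.9, 2.6, 1.2, 2.8, 2.0, 1.7])
se = np.array([0.30, 0.45, 0.25, 0.55, 0.35, 0.60, 0.20, 0.70, 0.40, 0.28])

# Egger-style regression: standardized effect on precision;
# an intercept far from zero suggests funnel-plot asymmetry
res = stats.linregress(1.0 / se, log_dor / se)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.2f}, two-sided p = {p:.3f}")
```

A significant intercept is conventionally read as evidence of small-study asymmetry; as the excerpt notes, this diagnostic is questionable when the effect of interest is a pair of correlated quantities such as sensitivity and specificity.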
Acute Cholecystitis
Published in Stephen M. Cohn, Peter Rhee, 50 Landmark Papers, 2019
Most clinicians, including emergency room physicians and surgeons, still rely heavily on ultrasound as the first-line test to diagnose AC. It is easy to use, readily accessible, and avoids radiation exposure. Nonetheless, according to a meta-analysis (Kiewiet et al., 2011), the accuracy of ultrasound is at best 81% sensitivity and 83% specificity. This analysis, however, had a significant limitation: high heterogeneity among the studies under review. Clearly, “verification bias” is the most confounding limitation for any study of the accuracy of a diagnostic test. Verification bias exists when only patients with positive test results undergo surgery while those with negative test results do not. When the reference standard for diagnosing AC is a pathology report and patients with negative test results do not undergo surgery, the true incidence of false negatives is unknown. Interestingly, when the verification bias was removed, ultrasound sensitivity dropped to only 38% while specificity remained at 90% (Kulvatunyou et al., 2012). This low accuracy raises the question of whether one should rely on ultrasound to exclude the diagnosis of AC.
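The mechanism can be illustrated with a toy calculation. The counts below are invented, chosen only to echo the 38% sensitivity and 90% specificity figures quoted in the excerpt; they are not data from the cited studies:

```python
# Hypothetical full 2x2 table with disease status known for everyone
tp, fn = 38, 62   # diseased patients: test-positive / test-negative
fp, tn = 10, 90   # non-diseased patients: test-positive / test-negative

true_sens = tp / (tp + fn)   # 0.38
true_spec = tn / (tn + fp)   # 0.90

# Partial verification: only a small fraction (assumed 10%) of
# test-negatives receive the reference standard (surgery/pathology),
# so most false negatives are never observed
verified_fn = round(0.1 * fn)          # 6 of the 62 are verified
naive_sens = tp / (tp + verified_fn)   # inflated estimate
print(f"true sensitivity = {true_sens:.2f}, "
      f"naive (verified-only) sensitivity = {naive_sens:.2f}")
```

Restricting the denominator to verified patients hides most false negatives, which is why sensitivity estimates fall sharply once verification bias is removed.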
Case studies
Published in Louis Cohen, Lawrence Manion, Keith Morrison, Research Methods in Education, 2017
Shaughnessy et al. (2003, pp. 290–9) suggest that case studies often lack a high degree of control: treatments are rarely manipulated systematically, and there is little control over extraneous variables. This, they argue, makes it difficult to draw inferences and cause-and-effect conclusions from case studies; there is also potential for bias, since the researcher may be both participant and observer and may overstate or understate the case (verification bias). Case studies, they argue, may be impressionistic, and self-reporting may be biased (by the participant or the observer). Further, they argue that bias may be a problem if the case study relies on an individual’s (selective) memory.
Diagnostic accuracy of faecal calprotectin in a symptom-based algorithm for early diagnosis of inflammatory bowel disease adjusting for differential verification bias using a Bayesian approach
Published in Scandinavian Journal of Gastroenterology, 2020
Anna Viola, Andrea Fontana, Alessandra Belvedere, Riccardo Scoglio, Giuseppe Costantino, Aldo Sitibondo, Marco Muscianisi, Santi Inferrera, Lucia Maria Bruno, Angela Alibrandi, Gianluca Trifirò, Walter Fries
Since IBD status was established either by invasive colonoscopy (i.e., the gold standard) or by clinical follow-up assessment (i.e., the inferior reference standard), the two reference standards defined the disease condition differently because of their different quality (i.e., a reliable diagnosis of IBD was achieved only in patients who received the gold reference standard). Moreover, because the choice of reference standard was strongly related to the fCAL test result, the assessment of the diagnostic and prognostic accuracy of fCAL (the index test) was affected by so-called ‘differential verification bias’ [23]. In the presence of differential verification, the predictive values of the index test are valid and interpretable only with respect to each reference standard separately.
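As a sketch of why reporting must be stratified, the hypothetical counts below (invented, not from the study) compute predictive values separately under each reference standard; pooling the two tables would mix disease states that are defined differently:

```python
# Hypothetical test-by-disease counts, stratified by which reference
# standard was applied (numbers are illustrative only)
colonoscopy = {"tp": 40, "fp": 15, "fn": 3, "tn": 10}
follow_up   = {"tp": 5,  "fp": 8,  "fn": 7, "tn": 120}

def ppv(c):
    """Positive predictive value within one reference-standard stratum."""
    return c["tp"] / (c["tp"] + c["fp"])

def npv(c):
    """Negative predictive value within one reference-standard stratum."""
    return c["tn"] / (c["tn"] + c["fn"])

for name, counts in [("colonoscopy", colonoscopy), ("follow-up", follow_up)]:
    print(f"{name}: PPV = {ppv(counts):.2f}, NPV = {npv(counts):.2f}")
```

Because the choice of reference standard depends on the index test result, the two strata are not exchangeable, and a single pooled predictive value has no clean interpretation.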
Systematic Review of the Yield of Temporal Artery Biopsy for Suspected Giant Cell Arteritis
Published in Neuro-Ophthalmology, 2019
Edsel B. Ing, Dan Ni Wang, Abirami Kirubarajan, Etienne Benard-Seguin, Jingyi Ma, James P. Farmer, Michel J. Belliveau, Galina Sholohov, Nurhan Torun
The quality analysis (see Figure 1) was performed from the perspective of the potential for bias in the TAB results. The major criteria for selection bias were non-consecutive TAB in the study group and verification bias. Although articles investigating ultrasound/magnetic resonance imaging (MRI) for GCA may have had little bias with respect to the imaging investigation, TAB may not have been obtained in all patients, or the decision to perform TAB may have been influenced by the result of the imaging study, leading to verification bias. In large TAB series of unilateral and bilateral biopsies, reporting only the results of the bilateral biopsies was considered selection bias. If only subjects with high ACR scores underwent TAB, this was considered a possible performance bias. If the pathologist was not blinded to the patient’s symptoms, bloodwork results, or ACR score, this was a possible detection bias. Withdrawals from TAB (e.g., patient refusal to undergo TAB, or submission of a vein or nerve specimen rather than the artery) were considered attrition bias. If the pathology results from all patients who underwent TAB were not listed, this was considered reporting bias. The main reason for “other bias” was that TAB series from the same city, author, or institution had partial overlap of patients that we could not completely eliminate.
Out-of-Hospital Research in the Era of Electronic Health Records
Published in Prehospital Emergency Care, 2018
Craig D. Newgard, Rochelle Fu, Susan Malveau, Tom Rea, Denise E. Griffiths, Eileen Bulger, Pat Klotz, Abbie Tirrell, Dana Zive
Regarding probability sampling and missing data, in the context of evaluating diagnostic accuracy, our probability sampling posed the problem of “verification bias” (24): injury severity was verified for all triage-positive patients but only for a subgroup (those sampled) of triage-negative patients. Relying only on the available data would have led to an inflated estimate of sensitivity (24). We initially used a Bayesian method to calculate sensitivity, specificity, and 95% confidence intervals after adjusting for verification bias (24). However, this approach is complex and tedious in stratified probability samples, so we switched to estimating these metrics as weighted proportions using inverse sampling probabilities and statistical procedures developed for complex survey designs. These methods allowed for the integration of multiple strata, the sampling structure, the corresponding correlations, and MI. This strategy also streamlined analyses for planned subgroups.
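A minimal sketch of the inverse-probability-weighting step described above, with invented records and an assumed sampling probability of 0.2 for triage-negatives (all names and numbers are hypothetical, not the study's data or code):

```python
import numpy as np

# Hypothetical verified records: triage result, true injury severity,
# and each record's sampling probability (1.0 for triage-positives,
# who are all verified; 0.2 for the sampled triage-negatives)
triage_pos = np.array([1, 1, 1, 0, 0, 0, 0])
severe     = np.array([1, 1, 0, 1, 0, 1, 0])
samp_prob  = np.array([1.0, 1.0, 1.0, 0.2, 0.2, 0.2, 0.2])

w = 1.0 / samp_prob  # inverse-probability weights

# Naive sensitivity from the verified records alone (inflated,
# because triage-negatives are under-represented)
naive_sens = ((triage_pos == 1) & (severe == 1)).sum() / (severe == 1).sum()

# Weighted sensitivity: each sampled triage-negative stands in for
# 1 / samp_prob similar unverified patients
ipw_sens = w[(triage_pos == 1) & (severe == 1)].sum() / w[severe == 1].sum()

print(f"naive = {naive_sens:.2f}, IPW-corrected = {ipw_sens:.2f}")
# -> naive = 0.50, IPW-corrected = 0.17
```

In practice the weights would come from the survey design, and variance estimation would use survey procedures that account for strata and clustering rather than the simple proportions shown here.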