Systemic Lupus Erythematosus
Published in Jason Liebowitz, Philip Seo, David Hellmann, Michael Zeide, Clinical Innovation in Rheumatology, 2023
Vaneet K. Sandhu, Neha V. Chiruvolu, Daniel J. Wallace
Despite the 2019 EULAR/ACR criteria heavily emphasizing ANA as the entry criterion for the diagnosis of SLE, a positive ANA can be found in up to 30% of the general population and in other autoimmune conditions such as scleroderma, rheumatoid arthritis, Sjögren’s syndrome, and mixed connective tissue disease. ANA has been heavily criticized for its poor specificity,10 and there is emerging investigation into autoantigen arrays. Proteome microarray-based technology has been used for years to identify biomarkers in many diseases. Autoantigen arrays screen and identify interactions between antigens and antibodies on a large scale.11 One benefit of this technology is that antibodies can be detected at levels below 1 ng/ml. Only small sample volumes, on the order of 1–2 microliters of serum, body fluid, or cell culture supernatant, are required. Antibodies that bind to their corresponding antigens on the array are detected using fluorophore-conjugated secondary antibodies against the different isotypes of autoantibodies (IgG, IgM, IgA, IgE). One of the marvels of autoantibody arrays is their capacity to detect hundreds of thousands of autoantibodies quantitatively, even prior to the clinical onset of disease, thereby serving as an early diagnostic tool. Furthermore, quantification of antibodies may be helpful in monitoring disease activity and response to treatment. Data obtained from these arrays have demonstrated greater sensitivity than enzyme-linked immunosorbent assay (ELISA).12
Preclinical Molecular Imaging Systems
Published in Michael Ljungberg, Handbook of Nuclear Medicine and Molecular Imaging for Physicists, 2022
Phantom imaging for uniformity and quantification should be performed on a regular basis. Quantification is of particular importance, to ensure that the system provides stable and quantitative values. A drift in quantification can have detrimental effects on research studies performed over longer periods. A co-registration test with other imaging modalities (e.g., CT, MRI) should also be performed regularly. This is especially important if a CT system is used for attenuation correction. This test can be performed with a set of point sources that are visualized on both imaging modalities.
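As one illustration of such a routine check, a simple integral-uniformity metric can be tracked over repeated phantom scans. This is only a sketch: the exact metric, region of interest, and acceptance limits depend on the modality and the applicable standard (e.g., NEMA protocols), and the function name and example values below are illustrative.

```python
import numpy as np

def integral_uniformity(roi):
    """Integral uniformity (%) over a phantom ROI: 100 * (max - min) / (max + min).

    Lower values indicate a more uniform response; an upward drift in this
    metric across repeated phantom scans flags a change in system performance.
    """
    p_max = float(roi.max())
    p_min = float(roi.min())
    return 100.0 * (p_max - p_min) / (p_max + p_min)

# Example: counts in a small ROI drawn on a uniform phantom image
roi = np.array([[100.0, 102.0],
                [98.0, 100.0]])
print(integral_uniformity(roi))  # 2.0
```

Logging this value (together with an absolute quantification factor from a source of known activity) at each QC session makes long-term drift visible before it contaminates a study.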
Primary Mitral Regurgitation
Published in Takahiro Shiota, 3D Echocardiography, 2020
However, in clinical practice, significant limitations remain for accurate and reproducible quantification.33 In particular, commonly used 2D-derived MR severity parameters, such as vena contracta (VC) width and proximal isovelocity surface area (PISA), rely on geometric assumptions (i.e., a circular regurgitant orifice [RO] or a hemispheric proximal flow region) that are not always valid in clinical practice.1,33
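The hemispheric assumption behind PISA can be made concrete: regurgitant flow rate is modeled as the flow through a hemispheric isovelocity shell, Q = 2πr²·Va, and the effective regurgitant orifice area (EROA) follows by dividing by the peak MR jet velocity. A minimal sketch of that arithmetic (the function name and input values are illustrative, not from the chapter; when the orifice is not circular or the proximal flow region is not hemispheric, as the text notes, this estimate breaks down):

```python
import math

def pisa_eroa(radius_cm, aliasing_velocity_cm_s, peak_mr_velocity_cm_s):
    """EROA (cm^2) via the PISA method, assuming a hemispheric isovelocity shell.

    Flow rate Q = 2 * pi * r^2 * Va (mL/s), then EROA = Q / peak MR velocity.
    """
    flow_rate = 2.0 * math.pi * radius_cm ** 2 * aliasing_velocity_cm_s
    return flow_rate / peak_mr_velocity_cm_s

# Example: PISA radius 1.0 cm, aliasing velocity 40 cm/s, peak MR jet 500 cm/s
eroa = pisa_eroa(1.0, 40.0, 500.0)
print(round(eroa, 2))  # 0.5
```

An EROA of roughly 0.5 cm² would exceed the commonly cited severe-MR threshold of 0.40 cm², which is precisely why errors introduced by the geometric assumptions matter clinically.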
Development of circulating microRNA-based biomarkers for medical decision-making: a friendly reminder of what should NOT be done
Published in Critical Reviews in Clinical Laboratory Sciences, 2023
Päivi Lakkisto, Louise Torp Dalgaard, Thalia Belmonte, Sara-Joan Pinto-Sietsma, Yvan Devaux, David de Gonzalo-Calvo
The clinical application of miRNAs requires additional efforts to reduce technical variability. Improved standardization of quantification is imperative. Unfortunately, we believe that it is not possible to provide a widely accepted standard protocol: different laboratories use divergent protocols for every step of miRNA quantification, from blood collection to data analysis. Nevertheless, some crucial and common pitfalls that add variability to assays and overall results should be avoided. Here, we have addressed what NOT to do in the quantification of circulating miRNAs (Figure 1) and highlighted the aspects that directly impact the reproducibility of miRNA analysis. In particular, we have focused on the gold standard and most widely used technique in the field: RT-qPCR.
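For context on where such variability bites, RT-qPCR results are often reported as relative expression via the standard Livak 2^(−ΔΔCt) calculation, in which any inconsistency in the reference (normalizer) Ct propagates exponentially into the fold change. A minimal sketch, assuming roughly 100% amplification efficiency (the function name and Ct values are illustrative):

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change by the 2^(-ddCt) method (assumes ~100% PCR efficiency).

    dCt normalizes the target miRNA to a reference in each sample;
    ddCt compares the sample of interest against the control condition.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: target Ct 25 vs reference Ct 20 in a patient sample,
# target Ct 27 vs reference Ct 20 in a control sample
print(relative_expression(25.0, 20.0, 27.0, 20.0))  # 4.0
```

Because each Ct unit corresponds to a twofold difference, a one-cycle shift in the normalizer alone doubles or halves the reported fold change, which is why the normalization pitfalls discussed in this review are so consequential.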
Elements of chaplaincy in Danish intensive care units: key-informant interviews with hospital chaplains
Published in Journal of Health Care Chaplaincy, 2022
It has been argued of late that chaplaincy research is needed to improve practice (Fitchett, 2017; Handzo et al., 2008). Enthusiasm for evidence-based chaplaincy grew in the early 2000s, after the explosion of evidence-based medicine (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). Chaplains and staff were encouraged to introduce checklists to identify patient needs (Fitchett, 2017; Handzo et al., 2008). There was a desire to professionalize spiritual care and modernize hospital chaplaincy (Proserpio, Piccinelli, & Clerici, 2011). Professionalization was described as improving relations with the scientific world, certifying chaplains, and evaluating the efficacy of spiritual care. This led to a debate on whether chaplains should adopt a scientific approach even at the risk of reductionism (Proserpio et al., 2011). The question is whether quantification of spiritual needs goes against the fundamental nature of spiritual care. Even the founders of evidence-based medicine warned against excessive quantification that could dehumanize medicine and threaten clinical vitality (Sepers & ter Meulen, 2005).
Coping strategies for developmental prosopagnosia
Published in Neuropsychological Rehabilitation, 2020
Amanda Adams, Peter J. Hills, Rachel J. Bennetts, Sarah Bate
Content analysis: Elo and Kyngas's (2008) approach to content analysis was adopted to explore issues related to the disclosure of the condition, in order to provide a systematic and objective means of describing and quantifying the data. Quantification allows the data to be characterized in a way that is potentially reliable and valid, making replicable and valid inferences from the data to their context, with the purpose of providing knowledge, new insights, a representation of facts, and a practical guide to action (Krippendorff, 1980). This method aims to attain a condensed and broad description of the phenomenon, with the outcome of analysis being the development of categories that describe it. These categories are then used to build a model or conceptual system (Elo & Kyngas, 2008), and content validation for the analytical process is achieved via the use of co-researchers who are responsible for supporting category production and resolving coding issues.