Quality Control and Quality Assurance
Niel T. Constantine, Johnny D. Callahan, Douglas M. Watts in Retroviral Testing, 2020
By examining the entire laboratory operation, it becomes apparent that human error is the primary source of variability in the HIV diagnostic system. Human error can be further categorized as either technical or system error. Technical error can be minimized by reinforcing education, training, and motivational approaches, and by rechecking all aspects of laboratory performance. System error, however, is more difficult to address, since it often manifests as subtle changes that may not become apparent until a failure occurs. System failures, although random, infrequent, and unintentional, can have major consequences. System error can be brought close to zero by incorporating a greater degree of redundancy into the system design. Redundancy in the system refers to overlapping measures that “foolproof” the system and consequently place a much greater degree of reliability in the quality of results. A more reliable system is inherently a more efficient system. Although many methods are available to increase redundancy in the HIV testing algorithm, the following example illustrates how the use of two different ELISA tests can increase the reliability, and thus the efficiency, of the HIV screening process.
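The redundancy argument above can be made concrete with a little probability. The sketch below is illustrative only, not taken from the chapter: it assumes the two ELISAs err independently and uses hypothetical per-test sensitivity and specificity values to show how the two standard ways of combining them trade off misses against false alarms.

```python
# Illustrative sketch: combining two independent ELISA screens.
# The performance figures below are hypothetical, chosen only to
# show the direction of the trade-off.

def parallel_screen(sens_a, spec_a, sens_b, spec_b):
    """A sample is flagged if EITHER test is reactive.
    Combined sensitivity rises; specificity falls slightly."""
    sens = 1 - (1 - sens_a) * (1 - sens_b)  # miss only if both tests miss
    spec = spec_a * spec_b                  # clear only if both tests clear
    return sens, spec

def serial_screen(sens_a, spec_a, sens_b, spec_b):
    """A sample is flagged only if BOTH tests are reactive.
    Combined specificity rises; sensitivity falls slightly."""
    sens = sens_a * sens_b                  # detect only if both detect
    spec = 1 - (1 - spec_a) * (1 - spec_b)  # false alarm only if both err
    return sens, spec

# Hypothetical performance of two different ELISA kits
s, p = parallel_screen(0.995, 0.990, 0.990, 0.985)
print(f"parallel: sensitivity={s:.5f}, specificity={p:.5f}")
s, p = serial_screen(0.995, 0.990, 0.990, 0.985)
print(f"serial:   sensitivity={s:.5f}, specificity={p:.5f}")
```

Either combination is more reliable on one axis than a single test alone, which is the sense in which overlapping measures "foolproof" the screening step; the independence assumption is, of course, an idealization.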
Data Sharing and Toxicity Modelling
Tiziana Rancati, Claudio Fiorino in Modelling Radiotherapy Side Effects, 2019
When data is shared within a centralized architecture, the infrastructure itself has complete control over all of the data. The data is not stored in the individual clinics but must be pooled in a centralized repository. In this situation, all operations occur at a central location, and no real-time communication occurs between participating institutions. Even though this type of architecture is conceptually simple, the ethical barriers mentioned above have to be managed at the central, meta-institutional level. Furthermore, as new data fields are added, or as existing data elements are periodically reorganized, the effort of centralized data management grows much more rapidly than either the size or the complexity of the data. Additionally, there is redundancy related to duplication of data, transformation of the local data into the central data model (usually entailing manual data entry and/or manual copying), and negotiation of intellectual property (IP) rights in the form of data ownership agreements. National cancer registries and health service quality monitoring databases commonly adopt a centralized data architecture.
Medical Tourism And Information Technology
Frederick J. DeMicco, Shirley Weis in Medical Tourism and Wellness, 2017
The second aspect of follow-up with guests is conducting medical checks every few months after the surgery. This can be implemented by having the guests answer a few questions about their health: Did the surgery help? Was it effective? Would they return again? These questions are important for monitoring the physical health of patients as well as for maintaining a positive relationship. Technology can implement this using a trigger, defined as something that sets off another event. The trigger reminds the hospital that it needs to send a report every 3 months after the visit, for up to a year or so depending on the procedure. The system automatically sends an email with the attached questionnaire to ensure no patient is forgotten. Connecting these services with the guest history is important as well; making sure all the systems are integrated helps reduce redundancy.
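A trigger of the kind described above can be sketched in a few lines. This is a minimal illustration, not the system discussed in the book: the function names (`schedule_followups`, `run_trigger`) are hypothetical, a month is approximated as 30 days, and the quarterly-for-one-year cadence is taken from the passage.

```python
# Minimal sketch of a quarterly follow-up trigger (hypothetical names).
from datetime import date, timedelta

def schedule_followups(visit_date, interval_months=3, duration_months=12):
    """Return the dates on which the questionnaire email should fire,
    every `interval_months` after the visit, up to `duration_months`.
    A month is approximated as 30 days for this illustration."""
    return [visit_date + timedelta(days=30 * m)
            for m in range(interval_months, duration_months + 1, interval_months)]

def run_trigger(today, visit_date, send_questionnaire):
    """Fire the trigger if today is one of the scheduled follow-up dates."""
    if today in schedule_followups(visit_date):
        send_questionnaire()

dates = schedule_followups(date(2024, 1, 15))
print(len(dates), "quarterly check-ins scheduled")
```

In a production system the same logic would typically live in a scheduler or a database trigger rather than application code, but the idea is identical: the calendar event, not a human, initiates the email, so no patient is forgotten.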
Multi disease-prediction framework using hybrid deep learning: an optimal prediction model
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2021
Anusha Ampavathi, T. Vijaya Saradhi
The main aim of data normalization is to bring the values in a dataset's numeric columns onto a common scale without distorting the relative variation among the values. In ML, normalization is not needed for every dataset; it is necessary when features span diverse ranges. In the database sense, normalization stores each piece of data only once, thereby avoiding data redundancy, which occurs when the same data is stored twice or more. Because collecting and storing data can require a massive amount of memory, normalization also reduces disk space and yields a more flexible database design. In the measurement sense, data normalization is defined as “adjusting values measured on different scales to a notionally common scale, often prior to averaging”. Consider the data attribute as Eq. (9).
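The feature-scaling sense of normalization quoted above can be shown with min–max scaling, one common way of mapping a numeric column onto a notionally common [0, 1] scale without changing how values are ordered within the column. This is a generic illustration, not the specific transformation of the paper's Eq. (9).

```python
# Min-max normalization: map each column onto [0, 1] independently,
# preserving the ordering of values within the column.

def min_max_normalize(column):
    lo, hi = min(column), max(column)
    if lo == hi:                       # constant column: map everything to 0
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

# Two features on very different raw scales...
ages = [18, 30, 45, 60]
incomes = [25_000, 40_000, 90_000, 120_000]

# ...land on the same [0, 1] scale after normalization.
print(min_max_normalize(ages))
print(min_max_normalize(incomes))
```

After scaling, both features contribute comparably to distance- or gradient-based learners, which is why normalization matters when features have diverse ranges.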
Life during Covid-19: An Explorative Qualitative Study of Occupational Therapists in South Africa
Published in Occupational Therapy in Mental Health, 2023
Henna Nathoo, Thavanesi Gurayah, Deshini Naidoo
Personal narratives were used as an additional source of data collection, gathered from participants who volunteered to submit a narrative upon completion of the semi-structured interview. Narratives were free-style writings shared by participants with the intention of creating a deeper sense of understanding and meaning around selected life experiences (Riessman, 2008). These subjective descriptions convey the unique perceptions and feelings that participants associate with life events, which adds depth to the account; the writing is done in the participants’ own time, allowing for deeper reflection (Gilbert, 2000). Participants were asked to reflect upon their occupational engagement in light of the Covid-19 pandemic. Eleven participants gave consent and emailed their personal narratives to the author. Data was collected until redundancy was achieved, that is, until similar trends or information began emerging across the interviews.
Measuring emotion perception following traumatic brain injury: The Complex Audio Visual Emotion Assessment Task (CAVEAT)
Published in Neuropsychological Rehabilitation, 2019
Hannah Rosenberg, Skye McDonald, Jacob Rosenberg, Reginald Frederick Westbrook
The high internal consistency of CAVEAT provided confidence in the reliability of the measure. It also suggests that CAVEAT measures a single uniform ability, i.e., emotion perception, rather than a set of inter-related abilities, such as the ability to process negatively versus positively valenced emotions, or social versus non-social emotions. The high reliability also points to redundancy between items. A short version of the CAVEAT might prove to have similar validity to the full version while providing a briefer, more clinically practical measure. There is also a need for test-retest reliability studies to be conducted on CAVEAT in order to provide a comprehensive overview of its reliability over time.
Related Knowledge Centers
- Depth Perception
- Fail-Safe
- Safety-Critical System
- Triple Modular Redundancy
- Dual Modular Redundancy
- Active Redundancy