AI-Driven Leadership
Published in Tom Lawry, Hacking Healthcare, 2022
Human in the loop (HITL): A human is assisted by a machine. In this model, the human is making the decision, and the machine provides only decision support or partial automation of some decisions or parts of decisions. This is often referred to as intelligence amplification (IA).
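A bare-bones sketch of this pattern is shown below (an illustration, not from the chapter): the machine proposes a recommendation with a confidence score, while the human retains final authority over the decision. All names and values are hypothetical.

```python
# Minimal sketch of the HITL / intelligence-amplification pattern:
# the machine only recommends; the human makes the final decision.
# All names and values here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str         # e.g., a suggested next step for the clinician
    confidence: float  # model's confidence in the suggestion

def machine_suggest(case_features: dict) -> Recommendation:
    """Stand-in for any decision-support model."""
    return Recommendation(label="order follow-up imaging", confidence=0.72)

def human_decide(case_features: dict) -> str:
    rec = machine_suggest(case_features)
    print(f"Machine suggests: {rec.label} (confidence {rec.confidence:.0%})")
    answer = input("Accept suggestion? [y/N]: ").strip().lower()
    if answer == "y":
        return rec.label
    # The human always has the final say; the machine only amplified it.
    return input("Enter your decision instead: ")

if __name__ == "__main__":
    print("Final decision:", human_decide({"age": 54}))
```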
Platform-Driven Pandemic Management
Published in Ram Shringar Raw, Vishal Jain, Sanjoy Das, Meenakshi Sharma, Pandemic Detection and Analysis Through Smart Computing Technologies, 2022
Jayachandran Kizhakoot Ramachandran, Puneet Sachdeva
Natural language processing (NLP) and natural language understanding (NLU) have been around for a while now. Their more advanced offshoot, Conversational AI, has garnered greater visibility. Conversational AI is a branch of AI that deals with human-like interactions; for example, voice assistants such as Siri, Alexa, and Google Assistant have their roots in Conversational AI. Its applications span healthcare, contact centers, and chatbots. In healthcare, it is used to equip patients with information about the CoVs, assess their risk by asking a series of questions to arrive at a risk profile, support preventive care, and guide them to local resources. In contact centers, applications powered by NLP, NLU, and Conversational AI are helping to meet the unprecedented inflow of queries, enquiries, and S.O.S. calls. These applications are supported by a human in the loop for interventions that require handling by human agents.
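As a rough sketch of the pattern just described, the following example walks a patient through a short series of questions to build a risk profile and escalates to a human agent above a threshold. The questions, weights, and escalation threshold are illustrative assumptions, not taken from any deployed system.

```python
# A hedged sketch of a conversational triage flow with human-in-the-loop
# escalation. Questions, weights, and the threshold are illustrative only.

RISK_QUESTIONS = [
    ("Do you have a fever above 38 °C?", 2),
    ("Do you have a persistent cough?", 2),
    ("Have you had contact with a confirmed case?", 3),
    ("Do you have difficulty breathing?", 5),
]

ESCALATION_THRESHOLD = 5  # scores at or above this go to a human agent

def ask(question: str) -> bool:
    return input(question + " [y/n]: ").strip().lower() == "y"

def triage() -> None:
    # Build a simple additive risk profile from the answers.
    score = sum(weight for question, weight in RISK_QUESTIONS if ask(question))
    if score >= ESCALATION_THRESHOLD:
        # Human in the loop: hand off cases the bot should not handle alone.
        print("Your answers suggest elevated risk; connecting you to an agent.")
    else:
        print(f"Risk score {score}: low risk. See local resources for "
              "preventive-care guidance.")

if __name__ == "__main__":
    triage()
```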
Artificial Intelligence Based COVID-19 Detection using Medical Imaging Methods: A Review
Published in S. Prabha, P. Karthikeyan, K. Kamalanand, N. Selvaganesan, Computational Modelling and Imaging for SARS-CoV-2 and COVID-19, 2021
M Murugappan, Ali K Bourisly, Palani Thanaraj Krishnan, Vasanthan Maruthapillai, Hariharan Muthusamy
Researchers used a 3-D CNN that combines the V-Net architecture with a bottleneck structure to enhance the quality of chest CT scan images for COVID-19 detection (Shan et al. 2020). Because raw chest CT scan images usually have low contrast, it is challenging to locate the GGO or mGGO in the scan images. In addition, they used the human-in-the-loop (HITL) strategy to reduce the radiologist effort required to locate the infected regions in the lung CT scan images used to train the proposed model. They divided the training images into a set of batches; for the first batch, a radiologist provides feedback by locating the infected regions in the lungs. These images are then used to train the model, which automatically locates the infected regions in the second batch, where the radiologist corrects any misinterpretations made by the model. This was the first work in COVID-19 detection to utilize the HITL model to develop an intelligent system using chest CT scan images. Two performance measures, the dice similarity coefficient (DSC) and the Pearson correlation coefficient (PCC), were used to classify the COVID-19 subjects into three classes: mild, moderate, and severe. Here, the DSC measures the agreement between the infection regions identified by the radiologists and those identified by the automated deep-learning method. The PCC is used to quantify the percentage of the lung region infected due to COVID-19 relative to the normal lung region. The average values of the DSC and PCC over the three cases are 91.6% and 86.7%, respectively.
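Schematically, the batch-wise HITL strategy can be sketched as below. This is an illustration of the loop described above, not the authors' code: every function is a hypothetical stub standing in for V-Net training, inference, and the radiologist's manual annotation or correction of infection masks.

```python
# Schematic sketch (not the authors' code) of batch-wise HITL training.
# All functions are hypothetical stubs for the steps named in the text.

def radiologist_annotate(scan):
    """Stub: full manual delineation of infected regions (first batch only)."""
    return {"mask": "manual"}

def radiologist_correct(scan, proposed_mask):
    """Stub: the radiologist only corrects the model's misinterpretations."""
    return proposed_mask

def train(model, labeled_pairs):
    """Stub: retrain the segmentation model on all corrected data so far."""
    return {"trained_on": len(labeled_pairs)}

def predict(model, scan):
    """Stub: the model proposes infection regions for the next batch."""
    return {"mask": "predicted"}

def hitl_training(batches, model=None):
    # First batch: annotated from scratch by the radiologist.
    labeled = [(scan, radiologist_annotate(scan)) for scan in batches[0]]
    model = train(model, labeled)
    # Later batches: the model pre-labels, the radiologist corrects,
    # and the growing corrected set is used to retrain the model.
    for batch in batches[1:]:
        for scan in batch:
            labeled.append((scan, radiologist_correct(scan, predict(model, scan))))
        model = train(model, labeled)
    return model

if __name__ == "__main__":
    print(hitl_training([["scan1", "scan2"], ["scan3"], ["scan4"]]))
```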
Research on the Clinical Translation of Health Care Machine Learning: Ethicists Experiences on Lessons Learned
Published in The American Journal of Bioethics, 2022
Jennifer Blumenthal-Barby, Benjamin Lang, Natalie Dorfman, Holland Kaplan, William B. Hooper, Kristin Kostick-Quenet
Implementing a fourth stage of continual reevaluation can help to:
- Detect potential creep of algorithmic biases into the ML model(s) over time, particularly as standards of care evolve and the populations served by the model may change.
- Account for potential alterations in relevant clinical inputs for the condition being addressed by the ML model.
- Adhere to the ethical principle of maintaining a human-in-the-loop (Braun et al. 2021).
- Serve as a stopgap for user cognitive and decisional biases, such as over-reliance and automation bias, that may develop over time without periodic reevaluation.
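What such a continual-reevaluation stage might look like operationally is sketched below. This is an illustration, not the authors' protocol: the accuracy metric, the subgroup field, and the thresholds are all assumptions. The idea is to recompute performance on recent cases, overall and per subgroup, and route the model to human review when it drifts.

```python
# A minimal sketch of periodic reevaluation: flag the model for human review
# when overall accuracy drops or subgroup performance diverges (a crude
# check for creeping algorithmic bias). Thresholds are illustrative.

from statistics import mean

def accuracy(records):
    return mean(1.0 if r["prediction"] == r["outcome"] else 0.0 for r in records)

def reevaluate(recent_records, baseline=0.85, max_gap=0.05):
    """Return a list of flags; a nonempty list routes to human review."""
    overall = accuracy(recent_records)
    groups = {r["group"] for r in recent_records}
    by_group = {g: accuracy([r for r in recent_records if r["group"] == g])
                for g in groups}
    flags = []
    if overall < baseline:
        flags.append(f"overall accuracy {overall:.2f} below baseline {baseline:.2f}")
    if max(by_group.values()) - min(by_group.values()) > max_gap:
        flags.append(f"subgroup gap exceeds {max_gap:.2f}: {by_group}")
    return flags

if __name__ == "__main__":
    records = [
        {"prediction": 1, "outcome": 1, "group": "A"},
        {"prediction": 0, "outcome": 1, "group": "B"},
        {"prediction": 1, "outcome": 1, "group": "B"},
        {"prediction": 1, "outcome": 0, "group": "A"},
    ]
    print(reevaluate(records))
```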
Changes in Muscle Activity in Response to Assistive Force during Isometric Elbow Flexion
Published in Journal of Motor Behavior, 2020
Ping Yeap Loh, Keisuke Hayashi, Nursalbiah Nasir, Satoshi Muraki
The effective use of PAS requires synergistic human-machine cooperation to leverage the adaptability and flexibility of human operators and the precision and power of PAS (Gams, Petric, Debevec, & Babic, 2013; Giuliani, Lenz, Müller, Rickert, & Knoll, 2010). Therefore, the human operator needs to estimate the task effort and then integrate it with the provided augmentative force. However, human operators may encounter a sudden acceleration or deceleration of the PAS if the required augmentative force is over-estimated or under-estimated, respectively. As a consequence, instability in PAS maneuverability may result in accidents or injuries. Thus, the agonist-antagonist muscles need to be fine-tuned to the overall intended movement to adapt to changes in workload as an adjunct to the applied assistive force. Additionally, human-in-the-loop approaches have been proposed to promote a desired human-robotic system (Walsh, 2018).
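To make the over-/under-estimation problem concrete, here is a minimal sketch (not from the study) of an assistive-force update rule in which saturation and rate limiting keep estimation errors from producing sudden acceleration or deceleration. The gain and limits are arbitrary illustrative values.

```python
# Illustrative assistive-force update: the assist tracks a fraction of the
# estimated human effort, but saturation and a per-cycle rate limit smooth
# out over- and under-estimates. All gains and limits are assumptions.

def assistive_force(effort_estimate: float,
                    previous_force: float,
                    gain: float = 0.6,        # fraction of effort to augment
                    max_force: float = 50.0,  # actuator saturation (N)
                    max_step: float = 5.0     # max change per cycle (N)
                    ) -> float:
    target = gain * effort_estimate
    # Saturate to the actuator's limits.
    target = max(0.0, min(max_force, target))
    # Rate-limit so estimation errors cannot jerk the operator.
    step = max(-max_step, min(max_step, target - previous_force))
    return previous_force + step

if __name__ == "__main__":
    force = 0.0
    # A spike in estimated effort (e.g., an over-estimate) is smoothed out.
    for effort in [10, 10, 80, 80, 10, 10]:
        force = assistive_force(effort, force)
        print(f"effort={effort:>3} N -> assist={force:5.1f} N")
```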
An AI Bill of Rights: Implications for Health Care AI and Machine Learning—A Bioethics Lens
Published in The American Journal of Bioethics, 2023
Presumably, this principle aims to achieve the idea of keeping a “human in the loop,” an idea embraced in AI technology and development. Human involvement in and governance of AI technologies is important for the ethics of use, but we must not lose sight of human biases and the existence of algorithm aversion (anti-AI bias). Patients might be quick to request a human alternative, but for bad reasons, and when the human alternative might be worse. Consider an AI chatbot psychiatrist vs. a human psychiatrist, where a patient opts out and requests a human due to algorithm aversion bias, but the human is less accurate at diagnosis and prescribes medicine or conducts therapy in a less evidence-based way than the bot would. In light of this, I like how this principle is framed (as an opt-out), where the default might be an AI system if it is safe and effective. Another caution with this principle concerns involving a human to “consider and remedy” problems with automated systems: given humans’ overconfidence biases and anti-AI biases, humans such as physicians might be too quick to go against or counter the judgments of the AI system, but be wrong. For example, an AI-based mortality calculator may predict that a patient has a 3-month survival chance of only 20%, but the physician dismisses the judgment in favor of a much higher survival chance and is wrong; now the patient has exercised their “human alternative” right, but this right has made them worse off, and they have not exercised it very autonomously because their preference was driven by a slew of biases rather than considered choice. Or, physicians may deploy or fail to deploy the results of the AI in biased or heterogeneous ways, e.g., overriding the AI for white patients but using it for black patients.