An Insight of Deep Learning Applications in the Healthcare Industry
Published in Rekh Ram Janghel, Rohit Raja, Korhan Cengiz, Hiral Raja, Next Generation Healthcare Systems Using Soft Computing Techniques, 2023
Deevesh Chaudhary, Prakash Chandra Sharma, Akhilesh Kumar Sharma, Rajesh Tiwari
Kbot is a customized informational chatbot developed for asthma patients to support self-management [25]. Nursebot [26] is a chatbot that can assist with the self-management of chronic diseases. It emulates autonomy (judgement) and exhibits self-organization, as well as cognitive and emotional responses, enabling communicative healthcare management. PARRY, a popular chatbot, was created to impersonate a paranoid patient [27]. Mental health chatbots have grown so popular that companies have dubbed them "the future of treatment." A digital chatbot designed to assist medical staff in reducing the chance of older persons being admitted to hospital has also been investigated, with encouraging results.
Psychiatric Chatbot for COVID-19 Using Machine Learning Approaches
Published in Roshani Raut, Salah-ddine Krit, Prasenjit Chatterjee, Machine Vision for Industry 4.0, 2022
Priyanka Jain, Subhash Tatale, Nivedita Bhirud, Sanket Sonje, Apurva Kirdatt, Mihir Gune, N. K. Jain
The idea of using chatbots for mental health problems dates back to 1972, when PARRY, a program that could mimic the behavior of a human with schizophrenia, was developed. In recent years, owing to steadily improving conversational algorithms, several mental health chatbots have come into use. Confidentiality remains a major concern, but a great deal of research is being done in this area. Some chatbots are also targeted at specific mental illnesses.
A Brief History of Artificial Intelligence
Published in Ron Fulbright, Democratization of Expertise, 2020
In 1972, Kenneth Colby implemented a chatbot called PARRY (Colby, 1972; Guzeldere, 1995). While ELIZA was a simulation of a Rogerian therapist, PARRY attempted to simulate a person with paranoid schizophrenia. PARRY was more advanced than ELIZA from a conversational standpoint.
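To ground this comparison, the following is a minimal, illustrative sketch of the keyword-and-template pattern matching that ELIZA-style programs relied on. The rules, patterns, and wording below are invented for illustration and are not Weizenbaum's or Colby's original scripts; PARRY went further by layering an affective model (variables for fear, anger, and mistrust) on top of this kind of mechanism.

```python
import random
import re

# Illustrative ELIZA-style rules mapping regex patterns to response
# templates. These rules are invented for illustration; they are not
# Weizenbaum's original DOCTOR script.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)", ["What makes you feel {0}?"]),
    (r".*\bmother\b.*", ["Tell me more about your family."]),
]
DEFAULT_RESPONSES = ["Please go on.", "Can you elaborate on that?"]

def respond(utterance: str) -> str:
    """Return a reflective response via simple keyword/pattern matching."""
    text = utterance.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT_RESPONSES)

print(respond("I am tired"))  # e.g. "Why do you say you are tired?"
```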
How Should My Chatbot Interact? A Survey on Social Characteristics in Human–Chatbot Interaction Design
Published in International Journal of Human–Computer Interaction, 2021
Ana Paula Chaves, Marco Aurelio Gerosa
Regarding the strategies, the surveyed literature suggests [S1] to design and elaborate on a persona. Chatbots should have a comprehensive persona and answer agent-oriented conversations with a consistent description of themselves (Neururer et al., 2018; Q. V. Liao et al., 2018). For example, De Angeli (2005) discusses that Eliza, the psychotherapist chatbot, and Parry, a paranoid chatbot, have behaviors consistent with the stereotypes associated with their professional and personal identities, respectively. Toxtli et al. (2018) suggest that designers should explicitly build signals of the chatbot's personification (either machine- or human-like), so that users can form the right expectations about the interaction. When identity aspects are not explicit, users try to establish common ground. In Q. V. Liao et al. (2018) and Silvervarg and Jönsson (2013), much of the small talk with the chatbot related to the chatbot's traits and status. In De Angeli, Johnson et al. (2001), the authors observed many instances of Alice's self-references to "her" artificial nature. These references triggered the users to reflect on their human condition (a self-categorization process), resulting in exchanges about their species (either informational or confrontational). Thies et al. (2017) observed similar results, as participants engaged in conversations about the artificial nature of the agent. Providing the chatbot with the ability to describe its personal identity helps to establish common ground and, hence, enriches the interpersonal relationship (De Angeli, Johnson et al., 2001).
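As a concrete illustration of strategy [S1], the sketch below keeps agent-oriented answers consistent by drawing every self-description from a single persona record, so small talk about the chatbot's traits and artificial nature always gets the same answer. The persona fields, trigger phrases, and wording are hypothetical and invented for this example; they do not come from the surveyed systems.

```python
# Strategy [S1] sketch: answer agent-oriented small talk from one persona
# record so self-descriptions stay consistent. All fields, trigger phrases,
# and wording below are hypothetical.
from typing import Optional

PERSONA = {
    "name": "Ada",
    "nature": "a chatbot (a computer program), not a person",
    "role": "helping you find health information",
}

SELF_TRIGGERS = {
    "what is your name": "My name is {name}.",
    "are you human": "No, I am {nature}.",
    "what do you do": "My job is {role}.",
}

def answer_small_talk(utterance: str) -> Optional[str]:
    """Answer persona-related small talk; return None for other topics."""
    text = utterance.lower().strip(" ?!.")
    template = SELF_TRIGGERS.get(text)
    return template.format(**PERSONA) if template else None

print(answer_small_talk("Are you human?"))
# -> "No, I am a chatbot (a computer program), not a person."
```

Centralizing the persona in one record, rather than scattering self-descriptions across handlers, is one simple way to honor the consistency requirement the survey highlights.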