Role of Computational Intelligence in Natural Language Processing
Published in Brojo Kishore Mishra, Raghvendra Kumar, Natural Language Processing in Artificial Intelligence, 2020
Bishwa Ranjan Das, Brojo Kishore Mishra
Natural language processing (NLP) is a subfield of computer science and artificial intelligence (AI) concerned with the interactions between computers and human (natural) languages, in particular how to program computers to understand, process, and analyze large amounts of natural language data. It has two major components: natural language understanding (NLU) and natural language generation (NLG). Approaches to NLP are broadly either grammar-based (rule-based) or statistical. In the early days, many language-processing systems were designed by hand-coding a set of rules, e.g., by writing grammars or devising heuristic rules for stemming. However, such rules are rarely robust to natural language variation. Since the so-called “statistical revolution” [11, 12] in the late 1980s and mid-1990s, much NLP research has relied heavily on machine learning.
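The hand-coded, rule-based approach described above can be illustrated with a minimal heuristic stemmer. This is a hypothetical sketch (the rules and function name are invented for illustration, not taken from any cited work): a handful of suffix-stripping rules handle common cases but quickly over-strip irregular forms, which is exactly the brittleness the passage refers to.

```python
# Minimal heuristic suffix-stripping stemmer: a hypothetical sketch of the
# hand-coded rule-based approach, not a production algorithm.
SUFFIX_RULES = [
    ("sses", "ss"),   # "classes" -> "class"
    ("ies", "y"),     # "studies" -> "study"
    ("ing", ""),      # "processing" -> "process"
    ("ed", ""),       # "analyzed" -> "analyz"  (over-stripping: rules are brittle)
    ("s", ""),        # "computers" -> "computer"
]

def heuristic_stem(word: str) -> str:
    """Apply the first matching suffix rule; leave the word alone otherwise."""
    for suffix, replacement in SUFFIX_RULES:
        # Require a reasonably long remaining stem before stripping.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + replacement
    return word

print(heuristic_stem("studies"))    # study
print(heuristic_stem("processing")) # process
print(heuristic_stem("analyzed"))   # analyz -- a failure case a statistical model could avoid
```

The failure on "analyzed" shows why such systems needed ever more hand-written exception rules, motivating the shift to machine-learned models described next.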
Speaking Naturally: Text and Natural Language Processing
Published in Jesús Rogel-Salazar, Advanced Data Science and Analytics with Python, 2020
The tasks carried out by our loyal androids, taking natural language as input and making sense of it, are referred to as natural language processing. Natural language processing (NLP) is an area of computer science and artificial intelligence concerned with the interactions between human (natural) languages and computers. In particular, the term refers to the programming of computers for processing and analysing large amounts of natural language data. Typical challenges in the area range from natural language understanding to speech recognition and even language generation.
Natural Language Processing Viewed from Semantics
Published in Masao Yokota, Natural Language Understanding and Cognitive Robotics, 2019
Natural language processing refers to all fields of research concerned with the interactions between computers and human (natural) languages. However, it is often associated specifically with programming computers to process and analyze large amounts of natural language data, where natural language generation is detached from it, or treated as its accessory without much sophistication. Historically, natural language processing started with machine translation (Hutchins, 2006), which necessarily requires natural language generation. Here, from the standpoint of natural language understanding, natural language processing means especially what involves a pair of processes: analyzing text into a certain intermediate representation and generating text from it (directly or indirectly), where the languages for analysis and generation may or may not be the same. Automatic paraphrasing (e.g., Schank and Abelson, 1977), text summarization within one language (e.g., Yatsko et al., 2010), and Kana/Kanji (Japanese syllables/Chinese characters) conversion in Japanese (www.pfu.fujitsu.com/hhkeyboard/kb_collection/#nec) are examples of the former. On the other hand, machine translation is the typical example of the latter, where cross-language operation is inevitably performed by computers. This chapter considers machine translation as representative of natural language processing and emphasizes the necessity of natural language understanding as genuine natural language processing.
“I Am Here to Assist Your Tourism”: Predicting Continuance Intention to Use AI-based Chatbots for Tourism. Does Gender Really Matter?
Published in International Journal of Human–Computer Interaction, 2023
Banghui Zhang, Yonghan Zhu, Jie Deng, Weiwei Zheng, Yang Liu, Chunshun Wang, Rongcan Zeng
AI will be at the heart of HCI technology and industry in the near future (Harper, 2019). Recently, an alternative form of HCI known as the CUI has gained traction. It allows users to interact with computer systems via spoken dialogue or text for information access (McDonnell & Baxter, 2019). Along with the emergence of CUI, considerable attention has been given to making HCI more natural and humanlike (Lee et al., 2020). As a type of CUI, AI-based chatbots have become increasingly popular in a variety of domains, such as task management (Aoki, 2020), mental healthcare (Zhu, Janssen, et al., 2022), and banking (Hari et al., 2022). An AI-based chatbot usually uses natural language processing, machine learning, and other AI technologies. Commonly, the interaction between users and chatbots is triggered by users’ inputs, such as wake-up calls and inquiries (McTear et al., 2016). Contextual-awareness technologies allow chatbots to wait until the system receives a message before taking their turn (Pearl, 2016).
Partnering with AI: the case of digital productivity assistants
Published in Journal of the Royal Society of New Zealand, 2023
Jocelyn Cranefield, Michael Winikoff, Yi-Te Chiu, Yevgeniya Li, Cathal Doyle, Alex Richter
MMA also provides actionable advice in the form of suggestions for how individuals can change their behaviour. For example, MMA makes individuals aware of when they spend too much time in meetings or multi-tasking, and suggests that they plan more ‘focus time’ (time that is free of meetings and other interruptions) in their calendar in advance. MMA analyses a range of information in the Office 365 ecosystem, including collaboration (meta)data (e.g. from emails, calendars and Microsoft Teams), employing a range of AI techniques (‘AI-powered suggestions’, Janardhan 2019). For example, natural language processing is used to analyse email content in order to identify assigned tasks. MMA applies a range of behavioural techniques, such as comparative metrics and evidence-based justifications for particular changes, in order to nudge users towards a desired behaviour. By acting as an analyst and adviser on productivity-related behaviours, MMA and other DPAs can be seen as emergent collaboration partners for knowledge workers: an application of the emergent phenomenon of machine–human collaboration (Seeber et al. 2018, 2020).
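The email-analysis step mentioned above (identifying assigned tasks) can be sketched with a naive pattern-matching extractor. This is purely an illustrative assumption: MMA's actual NLP pipeline is proprietary, and every pattern, function name, and example email here is invented.

```python
import re

# Hypothetical sketch of extracting task requests from email text via
# simple patterns; it does not reflect MMA's real (proprietary) design.
TASK_PATTERNS = [
    re.compile(r"\bplease\s+(\w+(?:\s+\w+){0,5})", re.IGNORECASE),
    re.compile(r"\bcould you\s+(\w+(?:\s+\w+){0,5})", re.IGNORECASE),
    re.compile(r"\baction item:\s*(.+)", re.IGNORECASE),
]

def extract_tasks(email_body: str) -> list[str]:
    """Return candidate task phrases matched by any pattern, line by line."""
    tasks = []
    for line in email_body.splitlines():
        for pattern in TASK_PATTERNS:
            match = pattern.search(line)
            if match:
                tasks.append(match.group(1).strip())
    return tasks

email = """Hi team,
Please send the quarterly report by Friday.
Action item: review the draft slides
Thanks!"""
print(extract_tasks(email))  # ['send the quarterly report by Friday', 'review the draft slides']
```

A production system would instead use trained sequence models to cope with the variability of natural language requests, but the sketch shows the shape of the task: mapping free text to structured to-do items.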
Mining typhoon victim information based on multi-source data fusion using social media data in China: a case study of the 2019 Super Typhoon Lekima
Published in Geomatics, Natural Hazards and Risk, 2022
Crowdsourcing through social media provides novel and additional disaster information (Simon et al. 2015), especially during tropical cyclone disaster management. Li et al. (2021) concluded that using social media in cyclone disasters serves three major purposes: emotional expression, situational updates, and disaster-related information exchange. Many studies on social media converge on the themes of sentiment analysis and emergency response. For example, Wu and Cui (2018) developed a data processing framework and analyzed Twitter (www.twitter.com) postings using multidimensional analysis, encompassing reverse geocoding, emotion analysis, subject tags, and high-frequency methods, to track public responses to Superstorm Sandy. Spruce et al. (2020) applied detailed sentiment analysis to the public's negative feelings about storms to measure their impact. Additionally, using sentiment analysis to calculate the emotional intensity of an event, Ragini et al. (2018) adopted a data-driven approach to disaster response, while Yuan and Liu (2020) assessed the damage caused by Hurricane Matthew. Sentiment analysis is a common natural language processing method that analyzes the words of a text to determine the specific sentiment and other markers it contains (Feldman 2013). The sentiment changes revealed in social media are thought to reflect temporal and spatial mood changes in society (Wu and Cui 2018). The above studies show that sentiment analysis is a common step when using social media for the disaster management of cyclones.
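Word-level sentiment analysis of the kind described by Feldman (2013) can be illustrated with a minimal lexicon-based scorer. This is a hypothetical sketch (the lexicon, weights, and function name are invented); the cited studies use far larger lexicons or trained classifiers, but the principle of summing per-word sentiment values is the same.

```python
# Minimal lexicon-based sentiment scorer: a hypothetical sketch of word-level
# sentiment analysis, not the method used in any of the cited studies.
SENTIMENT_LEXICON = {
    "safe": 1, "relief": 1, "rescued": 2, "thankful": 2,
    "flood": -1, "damage": -2, "destroyed": -3, "trapped": -2, "fear": -2,
}

def sentiment_score(text: str) -> int:
    """Sum lexicon values over lowercased, punctuation-stripped tokens.

    The sign gives polarity; the magnitude is a crude emotional intensity.
    """
    tokens = text.lower().split()
    return sum(SENTIMENT_LEXICON.get(t.strip(".,!?"), 0) for t in tokens)

post = "Our house was destroyed by the flood, people trapped"
print(sentiment_score(post))  # -6: strongly negative
```

Aggregating such scores over geotagged posts through time is what lets studies like Wu and Cui (2018) map the temporal and spatial mood changes mentioned above.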