Concluding Remarks
Published in John Atkinson-Abutridy, Text Analytics, 2022
The process of text mining comprises several activities that enable users to uncover information from unstructured text data. Before any text mining technique can be applied, one must start with text preprocessing, the practice of cleaning and transforming text data into a usable format. This practice is a core aspect of NLP and usually involves techniques such as language identification, tokenization, part-of-speech tagging, chunking, and syntactic parsing to format data appropriately for analysis. Once text preprocessing is complete, text mining algorithms can be applied to derive insights from the data. Common text mining techniques include information retrieval (e.g., tokenization, stemming), natural language processing (e.g., part-of-speech tagging, summarization, categorization, sentiment analysis), and information extraction (e.g., named-entity recognition, feature selection and extraction).
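As a concrete illustration of these preprocessing steps, the sketch below (a hypothetical example using the NLTK library, not code from the chapter) tokenizes a sentence, tags parts of speech, and stems each token:

```python
# Minimal preprocessing sketch with NLTK (hypothetical example; any
# comparable toolkit, e.g. spaCy, would serve equally well).
import nltk
from nltk import word_tokenize, pos_tag
from nltk.stem import PorterStemmer

# Resource names differ slightly across NLTK versions, so request both the
# classic and the newer variants; unavailable names are simply skipped.
for resource in ("punkt", "punkt_tab",
                 "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

text = "Text mining uncovers useful patterns in unstructured documents."

tokens = word_tokenize(text)                        # tokenization
tagged = pos_tag(tokens)                            # part-of-speech tagging
stems = [PorterStemmer().stem(t) for t in tokens]   # stemming

print(tagged)
print(stems)
```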
Deep Probabilistic Machine Learning for Intelligent Control
Published in Alex Martynenko, Andreas Bück, Intelligent Control in Drying, 2018
Machine learning today is a large field of study that comprises many techniques. Deep learning is an area within machine learning that has recently received a lot of attention, as it has advanced major application areas such as computer vision and natural language processing. Much of the excitement is based on representational learning, which is not part of older techniques such as support vector machines (SVMs) or random forest classifiers (RFCs). However, this does not mean that SVMs, RFCs, or the more specific models mentioned in this book are obsolete for applications in the food industry. Quite the contrary: the limited amount of data often requires a more prudent modeling approach, and the robustness of these methods contributes considerably to their applicability to many industrial problems. Nevertheless, the new techniques add novel possibilities for solving problems and increasing efficiency, which can in turn give some industries an advantage. There are even approaches that go further than deep learning by considering what the data imply beyond a single point-estimate model. Such models, most generally framed within a Bayesian approach, are also outlined in this chapter in an attempt to provide a broad overview of the ideas behind modern modeling techniques.
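To make the contrast concrete, the following sketch (a hypothetical example, not taken from the chapter) compares a single point estimate of a success probability with the full Beta posterior that a Bayesian treatment would provide:

```python
# Hypothetical sketch: point estimate vs. Bayesian posterior for a success
# probability estimated from a small sample (7 successes in 10 trials).
from scipy import stats

successes, trials = 7, 10

# A single "best" model: the maximum-likelihood point estimate.
p_mle = successes / trials

# The Bayesian view: a full posterior over the parameter, here
# Beta(1 + successes, 1 + failures) under a uniform Beta(1, 1) prior.
posterior = stats.beta(1 + successes, 1 + (trials - successes))
lo, hi = posterior.interval(0.95)  # 95% credible interval

print(f"Point estimate: {p_mle:.2f}")
print(f"Posterior mean: {posterior.mean():.2f}, 95% interval: ({lo:.2f}, {hi:.2f})")
```

With so few observations the interval remains wide, which is exactly the information a single point estimate discards.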
Information Technology for Communication and Cognitive Support
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
Alan F. Newell, Alex Carmichael, Peter Gregor, Norman Alm, Annalu Waller, Vicki L. Hanson, Graham Pullin, Jesse Hoey
Natural-language generation (NLG) is an area of research within natural language processing. NLG systems have been harnessed to support language development for children with language disorders by providing opportunities to extend both their vocabulary and the type of conversation in which they can engage (Waller 2006). The STANDUP project (Waller et al. 2009) demonstrated the use of pun-generating technology for children with complex communication needs (CCN). A graphic-based interface was designed to provide children with independent access to novel puns. An evaluation of the system with nine children with cerebral palsy showed that all the children were able to generate increasingly sophisticated puns. Language tests administered before and after a 4-week intervention program indicated an increase in the children’s ability to categorize words into groups. Although no generalization can be made, these results suggest that such systems can have an impact on underlying cognitive and language abilities.
Managing demand volatility of pharmaceutical products in times of disruption through news sentiment analysis
Published in International Journal of Production Research, 2023
Angie Nguyen, Robert Pellerin, Samir Lamouri, Béranger Lekens
NLP is the subdomain of AI that uses linguistics, computer science, and analytics to make computers understand human language (e.g. in textual content or audio recordings). In particular, sentiment analysis refers to the NLP task of identifying sentiments, emotions, and connotations in texts by extracting a polarity (i.e. positive, negative, or neutral) (Sun et al. 2019). Recently, authors have also proposed more advanced techniques to provide finer-grained sentiment information. For example, Gaspar et al. (2016) leveraged Twitter data to identify specific emotions (e.g. hope, fear, confidence) during stressful events. Sentiment analysis has also been applied to extract aspect-based (e.g. food, atmosphere) polarity from restaurant customer reviews (Zuheros et al. 2021). In practice, two main approaches are usually adopted to perform this task: on the one hand, lexicon-based models compute a global score for each input text given a dictionary of terms and their polarity (i.e. positive, negative, or neutral); on the other hand, machine-learning-based algorithms typically fit classification models to annotated input data. In a literature review, Sun et al. (2019) pointed out that, as for most NLP tasks, although machine learning techniques often yield better performance, they also require large amounts of annotated data, which are usually lacking when dealing with specialised data (e.g. medicine-related texts) in languages other than English.
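As a concrete illustration of the lexicon-based approach described above (a hypothetical sketch, not the model used in the article), NLTK's VADER analyzer scores a text against a dictionary of terms with known polarity:

```python
# Hypothetical lexicon-based sentiment sketch using NLTK's VADER analyzer;
# the article itself does not prescribe this particular tool.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # dictionary of terms and polarities

analyzer = SentimentIntensityAnalyzer()
headline = "Regulator approves new drug, easing fears of a prolonged shortage."

scores = analyzer.polarity_scores(headline)
print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

# The compound score is commonly thresholded into a global polarity label.
label = ("positive" if scores["compound"] > 0.05
         else "negative" if scores["compound"] < -0.05
         else "neutral")
print(label)
```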
Story Analysis Using Natural Language Processing and Interactive Dashboards
Published in Journal of Computer Information Systems, 2022
NLP involves a blend of artificial intelligence, computer science, machine learning, and computational linguistics. NLP systems perform many tasks necessary for making sense of text or speech. Some of these are grammatically focused, such as part-of-speech (POS) tagging and syntactic parsing. Others involve recognizing when different mentions in a document refer to the same entity (coreference resolution), recognizing named entities, and interpreting temporal expressions. At a deeper level, NLP forms a venue for attempting to infer the underlying meaning of text; this has historically been termed “natural language understanding”.12
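As a brief illustration of several of these tasks (a hypothetical sketch using the spaCy library; the article does not specify a toolkit), the pipeline below performs POS tagging, dependency parsing, and named-entity recognition, with DATE entities serving as simple temporal expressions:

```python
# Hypothetical sketch of common NLP tasks using spaCy (not necessarily the
# toolkit used by the article's authors).
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Alice met the director in Chicago last Tuesday to discuss the merger.")

# Part-of-speech tagging and dependency (syntactic) parsing.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Named-entity recognition; DATE entities act as basic temporal expressions.
for ent in doc.ents:
    print(ent.text, ent.label_)
```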
Deep Learning-Based Model Using DensNet201 for Mobile User Interface Evaluation
Published in International Journal of Human–Computer Interaction, 2023
Deep learning (DL) is a branch of machine learning that has grown immensely and has been used to solve many complex problems in different fields, such as speech recognition, natural language processing, and computer vision (Canziani et al., 2016). Indeed, DL offers several algorithms designed to unveil features that are concealed in the original data. To this end, the Convolutional Neural Network (CNN) has been widely deployed to extract those features, a characteristic that has been applied extensively to images.
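As an illustration of using a CNN as a feature extractor, the sketch below (a hypothetical example with Keras' pre-trained DenseNet201, not the exact configuration evaluated in the article) removes the classification head and uses the pooled activations as image features:

```python
# Hypothetical sketch: a pre-trained DenseNet201 as an image feature extractor
# (not the exact model configuration or data from the article).
import numpy as np
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input

# Drop the classification head; global average pooling yields one
# 1920-dimensional feature vector per image.
extractor = DenseNet201(weights="imagenet", include_top=False, pooling="avg")

# A stand-in batch of one 224x224 RGB image (e.g. a mobile UI screenshot).
image = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0
features = extractor.predict(preprocess_input(image))

print(features.shape)  # (1, 1920)
```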