Natural Language Processing
Published in Subasish Das, Artificial Intelligence in Highway Safety, 2023
The goal of syntactic parsing is to determine whether an input sentence belongs to a given language and to assign a structure to the input text. Assigning that structure requires a grammar of the language. Since it is generally not possible to define rules that would produce a parse for every sentence, statistical and machine learning parsers are very important. Complete parsing is a very complicated problem because ambiguities often exist. In many situations, it is enough to identify only the unambiguous parts of a text. These parts are known as chunks, and they are found using a chunker, or shallow parser. Shallow parsing (chunking) is thus the process of finding non-overlapping groups of words in the text that have a clear structure. Figure 50 illustrates the steps of NLP analysis, and Figure 51 shows examples of stemming and lemmatization.
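A chunker of this kind can be sketched in a few lines: given part-of-speech-tagged tokens, collect maximal runs of determiner/adjective/noun tags as noun-phrase chunks. The tag set and example sentence below are assumptions for illustration, not the chapter's own tools.

```python
# Minimal noun-phrase chunker sketch: group maximal runs of NP-like tags.
# Tags follow Penn Treebank conventions (DT, JJ, NN, ...), chosen for illustration.

NP_TAGS = {"DT", "JJ", "NN", "NNS", "NNP"}

def chunk_noun_phrases(tagged):
    """Return noun-phrase chunks from a list of (word, tag) pairs."""
    chunks, current = [], []
    for word, tag in tagged:
        if tag in NP_TAGS:
            current.append(word)       # extend the current chunk
        elif current:
            chunks.append(" ".join(current))
            current = []
    if current:                        # flush a chunk ending the sentence
        chunks.append(" ".join(current))
    return chunks

sent = [("the", "DT"), ("quick", "JJ"), ("fox", "NN"),
        ("jumps", "VBZ"), ("over", "IN"),
        ("the", "DT"), ("lazy", "JJ"), ("dog", "NN")]
print(chunk_noun_phrases(sent))  # ['the quick fox', 'the lazy dog']
```

The chunks are non-overlapping by construction, matching the definition above: each word belongs to at most one run of NP-like tags.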
Conditional random fields, syntactic parsing, and more
Published in Jun Wu, Rachel Wu, Yuxi Candice Wang, The Beauty of Mathematics in Computer Science, 2018
Sentence parsing in natural languages generally refers to analyzing a sentence according to its syntax and constructing the sentence's parse tree, i.e., syntactic parsing. Sometimes it can also refer to analyzing the semantics of each sentence component, which results in a description of the sentence's meaning (such as a nested frame structure, or a semantic tree), i.e., semantic parsing. The topic of our discussion in this chapter is the first kind, the syntactic parsing of sentences. Research in this field used to be influenced by formal linguistics and relied on rule-based methods. The parse tree was constructed by repeatedly applying rules to merge the terminal nodes step by step, up to the root node, which spans the whole sentence. This is a bottom-up method; of course, one could also proceed from the top down. Regardless of direction, there is an inevitable problem: it is impossible to choose the correct rules all at once, and one misstep forces retracing many steps. Both methods are therefore computationally extremely expensive, and neither can analyze complicated sentences.
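The merge-until-root idea can be made concrete with a chart-based bottom-up recognizer in the CKY style, which records every partial analysis in a table instead of backtracking after a misstep. This is a sketch over an invented toy grammar, not the chapter's own method; the grammar symbols and words are assumptions.

```python
from itertools import product

# Toy grammar in Chomsky normal form; symbols and lexicon invented for illustration.
LEXICON = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
RULES = {("NP", "VP"): "S",   # sentence = subject + verb phrase
         ("V", "NP"): "VP"}   # verb phrase = verb + object

def cky_recognize(words):
    """Return True if the toy grammar derives the word sequence."""
    n = len(words)
    # table[i][j]: set of nonterminals that span words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):                 # leaves: look up each word
        table[i][i] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):                  # merge constituents bottom-up
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                 # every split point of the span
                for left, right in product(table[i][k], table[k + 1][j]):
                    if (left, right) in RULES:
                        table[i][j].add(RULES[(left, right)])
    return "S" in table[0][n - 1]                 # did we merge up to the root?

print(cky_recognize("she eats fish".split()))  # True
print(cky_recognize("eats she".split()))       # False
```

Because every partial constituent is stored in the chart, no rule choice ever has to be undone, which is precisely what the naive rule-application procedure described above lacks.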
Automatic Speech Recognition for Large Vocabularies
Published in John Holmes, Wendy Holmes, Speech Synthesis and Recognition, 2002
The process of determining the linguistic structure of a sentence is known as parsing. Syntactic structure is usually expressed in terms of a formal grammar, and there are a variety of grammar formalisms and associated methods for syntactic parsing. However, traditional parsers aim to recover complete, exact parses. This goal will often not be achievable for spoken language, which tends to contain grammatical errors as well as hesitations, false starts and so on. The problem is made even worse when dealing with the output of a speech recognizer, which may misrecognize short function words even when overall recognition performance is very good. It can therefore be useful to adopt partial parsing techniques, whereby only segments of a complete word sequence are parsed (for example, noun phrases might be identified). Other useful tools include part-of-speech taggers, which aim to disambiguate parts of speech (e.g. the word “green” acts as an adjective in the phrase “green coat” but as a noun in “village green”), but without performing a parsing operation. Some taggers are rule-based, but there are also some very successful taggers that are based on HMMs, with the HMM states representing tags (or sequences of tags). Transition probabilities are probabilities of tag(s) given previous tag(s) and emission probabilities are probabilities of words given tags. Partial parsing and part-of-speech tagging enable useful linguistic information to be extracted from spoken input without requiring a comprehensive linguistic analysis.
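An HMM tagger of the kind described can be sketched with a two-tag toy model and the Viterbi algorithm. The tags, words, and all probabilities below are invented for illustration, not estimated from any corpus; they are chosen only so that the "green coat" / "village green" disambiguation comes out as in the text.

```python
# Toy HMM part-of-speech tagger: states are tags, transition probabilities are
# P(tag | previous tag), emission probabilities are P(word | tag).
# All numbers are invented for illustration, not estimated from data.

TAGS = ["ADJ", "NOUN"]
START = {"ADJ": 0.4, "NOUN": 0.6}
TRANS = {"ADJ":  {"ADJ": 0.1, "NOUN": 0.9},
         "NOUN": {"ADJ": 0.3, "NOUN": 0.7}}
EMIT = {"ADJ":  {"green": 0.5},
        "NOUN": {"green": 0.3, "coat": 0.4, "village": 0.3}}

def viterbi(words):
    """Return the most probable tag sequence for the word sequence."""
    # best[t]: probability of the best tag sequence so far that ends in tag t
    best = {t: START[t] * EMIT[t].get(words[0], 0.0) for t in TAGS}
    back = []  # back-pointers for recovering the best path
    for w in words[1:]:
        nxt, ptr = {}, {}
        for t in TAGS:
            p, prev = max((best[s] * TRANS[s][t], s) for s in TAGS)
            nxt[t] = p * EMIT[t].get(w, 0.0)
            ptr[t] = prev
        best = nxt
        back.append(ptr)
    tag = max(best, key=best.get)      # best final tag...
    path = [tag]
    for ptr in reversed(back):         # ...then follow back-pointers
        tag = ptr[tag]
        path.append(tag)
    return path[::-1]

print(viterbi(["green", "coat"]))     # ['ADJ', 'NOUN']: "green" as adjective
print(viterbi(["village", "green"]))  # ['NOUN', 'NOUN']: "green" as noun
```

The same word receives different tags in the two phrases because the transition probabilities reward the context, which is exactly how an HMM tagger disambiguates without any parsing.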
Survey on frontiers of language and robotics
Published in Advanced Robotics, 2019
T. Taniguchi, D. Mochihashi, T. Nagai, S. Uchida, N. Inoue, I. Kobayashi, T. Nakamura, Y. Hagiwara, N. Iwahashi, T. Inamura
To conduct the logical inferences described earlier, syntactic parsing should be perfected in advance to be suitable for real-world communication. For example, the robot in Figure 1 is inferring the latent syntactic structure of the sentence given, and understands it needs to bring ‘the bottle’, not ‘the kitchen’. Syntactic parsing is indispensable for semantic parsing, semantic role identification, and other semantics-driven tasks in NLP. Syntactic parsing can essentially be categorized as follows, in the current practice of NLP [58]: (a) dependency parsing, (b) constituent parsing, such as context-free grammars (CFG), tree adjoining grammars (TAG), and (c) combinatory categorial grammars (CCG).
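The bottle/kitchen distinction can be made concrete with a dependency analysis. The sentence, arc list, and Universal Dependencies-style relation labels below are hand-built assumptions for illustration; no parser is run here.

```python
# Hand-built dependency arcs for a hypothetical command; relation labels follow
# Universal Dependencies conventions (obj, obl, det, case). Illustration only.

sentence = ["Bring", "the", "bottle", "from", "the", "kitchen"]

# (dependent index, relation, head index); "Bring" (index 0) is the root verb.
arcs = [
    (1, "det",  2),  # the -> bottle
    (2, "obj",  0),  # bottle is the direct object of Bring
    (3, "case", 5),  # from -> kitchen
    (4, "det",  5),  # the -> kitchen
    (5, "obl",  0),  # kitchen attaches to Bring as an oblique (source location)
]

def direct_object(arcs, words, root=0):
    """Return the word attached to the root by an 'obj' relation, if any."""
    for dep, rel, head in arcs:
        if head == root and rel == "obj":
            return words[dep]
    return None

print(direct_object(arcs, sentence))  # bottle
```

Reading the arcs, the robot's target is the dependent in the `obj` relation ("bottle"), while "kitchen" is merely an oblique modifier, which is the structural distinction the text describes.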
Story Analysis Using Natural Language Processing and Interactive Dashboards
Published in Journal of Computer Information Systems, 2022
NLP involves a blend of artificial intelligence, computer science, machine learning, and computational linguistics. NLP systems perform many tasks necessary for making sense of text or recognized speech. Some of these are grammatically focused, such as parts-of-speech (POS) tagging and syntactic parsing. Others involve recognizing when different mentions in a document refer to the same entity (coreference resolution), recognizing named entities, and interpreting temporal expressions. At a deeper level, NLP forms a venue for attempting to infer the underlying meaning of text; this has historically been termed “natural language understanding.”12
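One of the listed tasks, interpreting temporal expressions, can be sketched with a few regular-expression patterns. The patterns below are illustrative assumptions only; production systems handle vastly more surface forms and also normalize what they find.

```python
import re

# Minimal temporal-expression spotter: two illustrative pattern families,
# full dates ("March 3, 2022") and deictic words ("yesterday"). Sketch only.
MONTHS = (r"(?:January|February|March|April|May|June|July"
          r"|August|September|October|November|December)")
PATTERN = re.compile(
    rf"{MONTHS}\s+\d{{1,2}},\s+\d{{4}}"        # e.g. "March 3, 2022"
    r"|\b(?:yesterday|today|tomorrow)\b",      # deictic temporal words
    re.IGNORECASE)

def find_temporal_expressions(text):
    """Return matched temporal expressions in order of appearance."""
    return [m.group(0) for m in PATTERN.finditer(text)]

text = "The story was published on March 3, 2022 and updated yesterday."
print(find_temporal_expressions(text))  # ['March 3, 2022', 'yesterday']
```

Spotting the expression is only the first half of the task; interpreting "yesterday" further requires an anchor date, which is why temporal interpretation is listed among the harder, context-dependent NLP tasks.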