Conclusions and Future Work
Published in Shalom Lappin, Deep Learning and Linguistic Representation, 2020
Regardless of one's views on quantum mechanics, there is an important distinction between this case and the representation of linguistic knowledge as a probabilistic system. Physics is concerned with the laws governing events in the physical world. There is at least an intuitive appeal to Einstein's insistence on a world in which universal laws specify determinate causal relations. By contrast, linguistic knowledge is a human cognitive object. The sort of indeterminacy permitted by probability models seems entirely appropriate for modelling human cognition. The only prior conceptual support available to the formal grammar view of natural language is the tradition of applying formal language theory to natural languages. But the viability of this enterprise is precisely what is under challenge in this discussion. Most of human knowledge consists in making judgements under conditions of uncertainty, with limited amounts of information. Probability models are designed to capture this process of learning and inference. The experimental work on deep learning in NLP that we have explored in previous chapters indicates that linguistic knowledge is well suited to representation within this paradigm.
Automatic Speech Recognition for Large Vocabularies
Published in John Holmes, Wendy Holmes, Speech Synthesis and Recognition, 2002
The process of determining the linguistic structure of a sentence is known as parsing. Syntactic structure is usually expressed in terms of a formal grammar, and there are a variety of grammar formalisms and associated methods for syntactic parsing. However, traditional parsers aim to recover complete, exact parses. This goal will often not be achievable for spoken language, which tends to contain grammatical errors as well as hesitations, false starts and so on. The problem is made even worse when dealing with the output of a speech recognizer, which may misrecognize short function words even when overall recognition performance is very good. It can therefore be useful to adopt partial parsing techniques, whereby only segments of a complete word sequence are parsed (for example, noun phrases might be identified). Other useful tools include part-of-speech taggers, which aim to disambiguate parts of speech (e.g. the word “green” acts as an adjective in the phrase “green coat” but as a noun in “village green”), but without performing a parsing operation. Some taggers are rule-based, but there are also some very successful taggers that are based on HMMs, with the HMM states representing tags (or sequences of tags). Transition probabilities are probabilities of tag(s) given previous tag(s) and emission probabilities are probabilities of words given tags. Partial parsing and part-of-speech tagging enable useful linguistic information to be extracted from spoken input without requiring a comprehensive linguistic analysis.
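The HMM tagger described above can be sketched with a minimal Viterbi decoder. The tag set and all probabilities below are illustrative assumptions chosen to reproduce the chapter's "green coat" / "village green" example, not values from the text:

```python
# Minimal Viterbi decoder for an HMM part-of-speech tagger.
# States are tags; transition probabilities are P(tag | previous tag)
# and emission probabilities are P(word | tag), as in the chapter.
# All numbers below are made-up illustrative values.

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `words`."""
    # best[t] = (probability, tag path) of the best path ending in tag t
    best = {t: (start_p[t] * emit_p[t].get(words[0], 0.0), [t]) for t in tags}
    for w in words[1:]:
        new_best = {}
        for t in tags:
            p, path = max(
                (best[s][0] * trans_p[s][t] * emit_p[t].get(w, 0.0), best[s][1])
                for s in tags
            )
            new_best[t] = (p, path + [t])
        best = new_best
    return max(best.values())[1]

tags = ["ADJ", "NOUN"]
start_p = {"ADJ": 0.4, "NOUN": 0.6}
trans_p = {"ADJ":  {"ADJ": 0.1, "NOUN": 0.9},
           "NOUN": {"ADJ": 0.2, "NOUN": 0.8}}
emit_p = {"ADJ":  {"green": 0.7, "village": 0.0},
          "NOUN": {"green": 0.1, "coat": 0.5, "village": 0.6}}

print(viterbi(["green", "coat"], tags, start_p, trans_p, emit_p))    # ['ADJ', 'NOUN']
print(viterbi(["village", "green"], tags, start_p, trans_p, emit_p)) # ['NOUN', 'ADJ']
```

With these probabilities the decoder disambiguates "green" by context alone, tagging it as an adjective before "coat" but as a noun after "village".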
Introduction
Published in John N. Mordeson, Davender S. Malik, Fuzzy Automata and Languages, 2002
John N. Mordeson, Davender S. Malik
The concept of fuzzy subsets has been incorporated into the syntactic approach at two levels. First, the pattern primitives are themselves considered to be labels of fuzzy sets, e.g., such subpatterns as "almost circular arcs" and "gentle", "fair", or "sharp" curves. Second, the structural relations among the subpatterns may be fuzzy, so that the formal grammar is fuzzified by weighted production rules, and the grade of membership of a string is obtained by max-min composition of the grades of the productions used in its derivation. When the pattern primitives are extracted from a low-quality image or a deformed pattern, the min operation in the max-min composition of the production grades is sensitive to distortion of the primitives. In such a case, the above median operator, which is well known to be useful for noise suppression, preserves the grade of membership of the primitives better than the min operator. Consequently, sup-med composition rather than max-min may work well if the parameter p in (10.5) is chosen appropriately.
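The contrast between max-min composition and a median-based alternative can be sketched numerically. The grades below are invented for illustration, and plain `statistics.median` stands in for the chapter's parameterised sup-med operator of (10.5), which is not reproduced here:

```python
# Membership grade of a string in a fuzzified grammar: max-min
# composition takes the min of the production grades along each
# derivation, then the max over alternative derivations.
# All grades below are illustrative assumptions.
import statistics

def max_min_grade(derivations):
    """derivations: list of lists of production grades, one list per derivation."""
    return max(min(grades) for grades in derivations)

def max_med_grade(derivations):
    """Median in place of min: less sensitive to one distorted primitive."""
    return max(statistics.median(grades) for grades in derivations)

# Two alternative derivations of the same string; the second contains
# one badly distorted primitive (grade 0.1).
derivations = [[0.6, 0.7, 0.5], [0.9, 0.8, 0.1]]
print(max_min_grade(derivations))  # 0.5: min is dragged down by the outlier
print(max_med_grade(derivations))  # 0.8: the median suppresses the distortion
```

The example shows why min is fragile under noise: a single distorted primitive caps the whole derivation's grade, whereas a median-style operator lets the otherwise strong second derivation win.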
The Use of Context-Free Probabilistic Grammar to Anonymise Statistical Data
Published in Cybernetics and Systems, 2020
In this section, we present a proprietary method of anonymising individual data using the properties of context-free grammar. Two basic definitions will be used in discussing this method.

A context-free grammar is a formal grammar of type 2 in Chomsky's hierarchy, i.e. an ordered quadruple (T, N, P, S), where:
T is a finite set of terminal symbols,
N is a finite set of nonterminal symbols,
P is a finite set of production rules L → R, with L ∈ N and R ∈ (T ∪ N)*,
S ∈ N is a distinguished initial symbol.

A probabilistic context-free grammar (PCFG) is a context-free grammar in which each production rule is assigned a probability, subject to the constraint that the probabilities of all rules with the same predecessor (left-hand side) sum to 1.
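The PCFG definition above can be sketched concretely. The toy grammar below is our own illustrative assumption (it is not the article's anonymisation grammar); it shows the sum-to-1 constraint on rules sharing a predecessor, and how a PCFG generates strings by sampling productions:

```python
import random

# A toy PCFG sketch: each rule is (right-hand side, probability), and
# the probabilities of rules with the same left-hand side sum to 1.
# The grammar itself is an illustrative assumption.
PCFG = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["the", "N"], 0.7), (["N"], 0.3)],
    "VP": [(["V", "NP"], 0.6), (["V"], 0.4)],
    "N":  [(["cat"], 0.5), (["dog"], 0.5)],
    "V":  [(["sees"], 1.0)],
}

def check_probabilities(grammar):
    """Verify the PCFG constraint: rules with the same predecessor sum to 1."""
    for lhs, rules in grammar.items():
        total = sum(p for _, p in rules)
        assert abs(total - 1.0) < 1e-9, f"{lhs}: probabilities sum to {total}"

def generate(symbol="S"):
    """Sample a terminal string by expanding nonterminals at random."""
    if symbol not in PCFG:                    # terminal symbol
        return [symbol]
    rhss, probs = zip(*PCFG[symbol])
    rhs = random.choices(rhss, weights=probs)[0]
    return [tok for sym in rhs for tok in generate(sym)]

check_probabilities(PCFG)
print(" ".join(generate()))   # e.g. "the cat sees the dog"
```

The `check_probabilities` helper makes the definition's constraint executable: removing a rule, or changing one weight, breaks the assertion for that predecessor.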
Enhancing website security against bots, spam and web attacks using lCAPTCHA
Published in International Journal of Computers and Applications, 2023
S. Vaithyasubramanian, D. Lalitha, C. K. Kirubhashankar
In 1956, Noam Chomsky described the hierarchy of formal grammars, classifying them into four categories according to their production rules. A Context-Free Grammar (CFG) is a type-2 grammar. The form of its production rules allows it to be used widely in various applications: initially applied to the study of human languages, CFGs provide an efficient mechanism for language specification. In computer science and linguistics, CFGs play an important role in programming languages and in NLP systems. They serve as the basis for the design and implementation of compilers, whose syntax analysis is carried out by parsers, and the languages those parsers accept can be expressed as CFGs.
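The link between a type-2 grammar and a parser can be sketched with a classic example of our own choosing (it is not taken from the article): the CFG S → ( S ) S | ε for balanced parentheses, recognised by a recursive-descent parser with one function per nonterminal:

```python
# Recursive-descent recognizer for the type-2 grammar
#   S -> ( S ) S | epsilon
# (balanced parentheses). One parsing function mirrors the one
# nonterminal S; each branch mirrors one production rule.

def balanced(s):
    pos = 0
    def S():
        nonlocal pos
        if pos < len(s) and s[pos] == "(":   # rule S -> ( S ) S
            pos += 1
            S()
            if pos >= len(s) or s[pos] != ")":
                raise SyntaxError("expected ')'")
            pos += 1
            S()
        # else: rule S -> epsilon, consume nothing
    try:
        S()
        return pos == len(s)                 # accept only if all input is consumed
    except SyntaxError:
        return False

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```

The parser's structure is read directly off the grammar, which is why CFGs are the standard notation for specifying the syntax that compiler front ends must check.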
Spatial specification and reasoning using grammars: from theory to application
Published in Spatial Cognition & Computation, 2018
Yufeng Liu, Kang Zhang, Jun Kong, Yang Zou, Xiaoqin Zeng
With their strict mathematical definitions and operations, formal grammars provide a solid theoretical foundation for defining and specifying various languages, including programming languages, visual languages, graph modeling languages, unified modeling languages, business process execution languages, etc. On the other hand, spatial relationships and semantics are crucial in many applications, such as graphical user interfaces and geospatial systems. It is therefore necessary to investigate both the structural and spatial specification mechanisms in a formal grammatical setting.