Finite state machines and dynamic programming: Navigation in Google Maps
Published in The Beauty of Mathematics in Computer Science, 2018
Jun Wu, Rachel Wu, Yuxi Candice Wang
In Figure 12.5, state q4 is defined as "inputs of is/are, outputs of better/worse." Regardless of the rest of the sequence, an input can enter this state as long as the symbols before and after it match. Other states can have different inputs and outputs. Furthermore, if transition probabilities differ among the states, then we have a weighted finite state transducer (WFST). With respect to natural language processing, you may recall that the bigram model from Chapter 2 also considers pairs of words. Thus, the WFST maps naturally onto a sentence. Any sentence recognizable by a speech recognition system can be represented by a WFST. Each path through the WFST is a potential sentence, and the sentence with the highest probability is the speech recognition result. To find that sentence, we use dynamic programming, as described earlier.
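To make the dynamic-programming search concrete, here is a minimal Python sketch of Viterbi decoding over a toy WFST, loosely modeled on the is/are, better/worse example above. The states (q0, q4, qf), arcs, and probabilities are invented for illustration; a real recognizer would compose far larger machines.

```python
import math

# Arcs of a toy WFST:
# state -> list of (input symbol, output symbol, next state, probability)
ARCS = {
    "q0": [("is",  "is",  "q4", 0.6), ("are", "are", "q4", 0.4)],
    "q4": [("better", "better", "qf", 0.7), ("worse", "worse", "qf", 0.3)],
}
START, FINAL = "q0", "qf"

def viterbi(inputs):
    """Return (probability, outputs) of the most probable path for `inputs`.

    Dynamic programming: best[state] holds the highest log-probability of
    reaching `state` after consuming the inputs so far, together with the
    outputs emitted along that best path.
    """
    best = {START: (0.0, [])}  # state -> (log-probability, output sequence)
    for symbol in inputs:
        nxt = {}
        for state, (logp, outs) in best.items():
            for inp, out, succ, p in ARCS.get(state, []):
                if inp != symbol:
                    continue
                cand = (logp + math.log(p), outs + [out])
                # Keep only the best-scoring path into each successor state.
                if succ not in nxt or cand[0] > nxt[succ][0]:
                    nxt[succ] = cand
        best = nxt
    if FINAL not in best:
        return None
    logp, outs = best[FINAL]
    return math.exp(logp), outs

print(viterbi(["is", "better"]))  # approx. (0.42, ['is', 'better'])
```

Because probabilities along a path multiply, the sketch works in log space and the highest-probability sentence becomes a best-path problem that dynamic programming solves in one left-to-right pass.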
Coding-theorem like behaviour and emergence of the universal distribution from resource-bounded algorithmic probability
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Hector Zenil, Liliana Badillo, Santiago Hernández-Orozco, Francisco Hernández-Quiroz
The Chomsky hierarchy is a strict containment hierarchy of classes of formal grammars, equivalent to different computational models of increasing computing power. At each of the four levels, grammars and automata compute a larger set of possible languages and strings. The four levels, from weaker to stronger computational power, are:

- Type-3: the most restricted grammars, generating the regular languages. They consist of rules with a single non-terminal symbol on the left-hand side and a string of terminal symbols followed by at most one non-terminal symbol on the right-hand side. Rules of this type are referred to as right linear (a symmetrical left linear definition works as well). This level is studied by way of finite-state transducers (FST), a generalisation of finite-state automata (FSA) that produce an output at every step, generating a relation between input strings and output strings. Though apparently more general, FST-recognised languages are the same as FSA-accepted sets, so both can represent this level (see the sketch after this list). We use an enumeration of transducers introduced in [24], where an invariance theorem [1,2,28] is also proved, demonstrating that the choice of enumeration is invariant (up to a constant).
- Type-2: grammars that generate the context-free languages. Grammars of this kind are extensively used in linguistics. The languages they generate are exactly those that can be recognized by a non-deterministic pushdown automaton. We denote this level by CFG. We generated production rules for 40,000 grammars according to a sound scheme introduced in [25].
- Type-1: grammars that generate the context-sensitive languages. The languages described by these grammars are exactly those that can be recognized by a linear-bounded automaton (LBA), a Turing machine whose tape length is bounded by a constant times the length of the input. An AP-based variation is introduced here, which we denote by LBA/AP.
- Type-0: the least restricted grammars, generating the languages that can be recognized by Turing machines, also called the recursively enumerable languages. This is the level at which Turing-universality is achieved or required. We used previously generated distributions produced and reported in [26,27].

We also explore the consequences of relaxing the halting configuration (e.g. the halting state) in models of universal computation (Type-0) when comparing their output distributions.
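As a concrete illustration of the Type-3 level, the sketch below implements a tiny deterministic finite-state transducer in Python: it reads a binary string, emits the running parity of 1s at each step, and accepts exactly the strings containing an even number of 1s, a classic regular language. The machine is an invented example, not the enumeration of transducers used in [24].

```python
# Transitions of a toy transducer:
# (state, input symbol) -> (next state, output symbol)
# The output is the parity of 1s seen so far: 'e' for even, 'o' for odd.
DELTA = {
    ("even", "0"): ("even", "e"),
    ("even", "1"): ("odd",  "o"),
    ("odd",  "0"): ("odd",  "o"),
    ("odd",  "1"): ("even", "e"),
}
ACCEPTING = {"even"}

def transduce(word, state="even"):
    """Run the transducer on `word`; return (accepted?, output string)."""
    out = []
    for ch in word:
        state, emitted = DELTA[(state, ch)]
        out.append(emitted)
    return state in ACCEPTING, "".join(out)

print(transduce("1011"))  # (False, 'ooeo') -- three 1s, so rejected
```

Dropping the output component of each transition leaves an ordinary FSA over the same state set, which accepts exactly the same language; this is the sense in which FST-recognised languages coincide with FSA-accepted sets.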