Text-to-Speech Synthesis
Published in Michael Filimowicz, Foundations in Sound Design for Embedded Media, 2019
The HMM method trains a context-dependent, phone-based HMM in which the spectral features are represented not as raw feature values but as points drawn from a Gaussian distribution of that spectral feature with a certain mean and variance. When dealing with vector data, a multivariate Gaussian with a covariance matrix must be used instead of the variance (Taylor 2009). Because speech is so complex, a single Gaussian distribution is not sufficient to describe a feature, so features are usually represented as a mixture of Gaussians, with a mean and covariance matrix for each Gaussian in the mixture. This representation is called a Gaussian Mixture Model (GMM). The HMM here differs from the models used for ASR in that an explicit duration model is trained to define the duration of each state in the HMM; it is therefore often called a Hidden Semi-Markov Model (HSMM).
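As a minimal sketch of the representation described above (illustrative only, not the chapter's training code), the log-likelihood of a feature vector under a diagonal-covariance Gaussian mixture can be computed as follows; the weights, means, and variances are invented example values:

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of a feature vector x under a diagonal-covariance
    Gaussian mixture: log sum_k w_k * N(x; mu_k, diag(var_k))."""
    x = np.asarray(x, dtype=float)
    log_comps = []
    for w, mu, var in zip(weights, means, variances):
        mu, var = np.asarray(mu, float), np.asarray(var, float)
        # Log density of a diagonal multivariate Gaussian.
        log_n = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        log_comps.append(np.log(w) + log_n)
    # Log-sum-exp over the mixture components, for numerical stability.
    m = max(log_comps)
    return m + np.log(sum(np.exp(c - m) for c in log_comps))

# Two-component mixture over a 2-D spectral feature vector (toy numbers).
ll = gmm_log_likelihood([0.1, -0.2],
                        weights=[0.6, 0.4],
                        means=[[0.0, 0.0], [1.0, 1.0]],
                        variances=[[1.0, 1.0], [0.5, 0.5]])
```

A full covariance matrix would replace the diagonal terms with a quadratic form; the diagonal case shown here is the common simplification for speech features.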
Reliability Indicators for Hidden Markov Renewal Models
Published in Vonta Ilia, Ram Mangey, Reliability Engineering, 2019
Since the underlying process is a Markov chain, the sojourn times (i.e., the times between successive visited states) follow the geometric distribution. In practice, however, there is no clear evidence that sojourn times should follow this particular distribution, and hence none that favors a Markov chain as the underlying chain over the more general semi-Markov chain. If the underlying process is a semi-Markov chain, the corresponding model is a hidden semi-Markov model (HSMM); HSMMs therefore constitute an important extension of HMMs [4–7].
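The distinction can be illustrated with a small sampling sketch (the probabilities here are invented for illustration): a Markov chain's self-transitions force geometric sojourn times, while a semi-Markov chain draws each duration directly from an arbitrary probability mass function:

```python
import random

def markov_sojourn(p_stay, rng):
    """Sojourn time implied by a Markov chain: each step the chain stays
    with probability p_stay, so durations are geometric (mean 1/(1-p_stay))."""
    d = 1
    while rng.random() < p_stay:
        d += 1
    return d

def semi_markov_sojourn(duration_pmf, rng):
    """Semi-Markov chain: the duration is drawn directly from an arbitrary
    pmf {duration: probability}, e.g. one peaked around a typical value."""
    r, acc = rng.random(), 0.0
    for d, p in sorted(duration_pmf.items()):
        acc += p
        if r < acc:
            return d
    return max(duration_pmf)

rng = random.Random(0)
geo = [markov_sojourn(0.8, rng) for _ in range(10000)]
# A pmf concentrated on durations 4-6, which no geometric law can mimic:
pmf = {4: 0.25, 5: 0.5, 6: 0.25}
semi = [semi_markov_sojourn(pmf, rng) for _ in range(10000)]
```

Both samplers above have mean sojourn time 5, but the geometric sample is heavily skewed toward duration 1 while the semi-Markov sample stays within 4 to 6.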
Machine Learning Basics
Published in Fei Hu, Qi Hao, Intelligent Sensor Networks, 2012
Krasimira Kapitanova, Sang H. Son
A process is Markov if it exhibits the Markov property, i.e., memorylessness: the conditional probability distribution of future states of the process depends only on the present state, not on the events that preceded it. We discuss two types of Markov models: the hidden Markov model (HMM) and the hidden semi-Markov model (HSMM).
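The Markov property can be made concrete with a toy three-state chain (the state names and transition probabilities are invented for illustration); note that the sampler consults only the current state, never the earlier path:

```python
import random

# A three-state Markov chain given by its transition matrix.
P = {
    "ok":   {"ok": 0.7, "warn": 0.2, "fail": 0.1},
    "warn": {"ok": 0.3, "warn": 0.4, "fail": 0.3},
    "fail": {"fail": 1.0},  # absorbing state
}

def step(state, rng):
    """Sample the next state using only the CURRENT state -- this is the
    Markov property: the past trajectory is never consulted."""
    r, acc = rng.random(), 0.0
    for nxt, p in P[state].items():
        acc += p
        if r < acc:
            return nxt
    return nxt  # guard against floating-point rounding

rng = random.Random(1)
path = ["ok"]
for _ in range(20):
    path.append(step(path[-1], rng))
```

In an HMM these states would additionally be hidden behind noisy observations; the memoryless transition structure shown here stays the same.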
A dynamic optimisation approach for a single machine scheduling problem with machine conditions and maintenance decisions
Published in International Journal of Production Research, 2022
Wenhui Yang, Lu Chen, Stéphane Dauzère-Pérès
In the literature, it has already been established that the efficiency of schedules is heavily influenced by the machine condition, yet most studies assume a deterministic machine deterioration process. To capture uncertain variation in machine condition, Markov chains are usually used to model the deterioration from the state ‘as good as new’ to the state ‘breakdown’ (Neves, Santiago, and Maia 2011). In this case, the machine condition is classified into a finite number of discrete states whose evolution is governed by state transitions (Boukas and Liu 2001). The efficiency and effectiveness of the method were demonstrated by successfully applying it to real data (Kim et al. 2011). The numerical results in Khaleghei and Makis (2015) validated the robustness of the Markov model: even when historical repair and failure data are missing, the model remains effective. Kurt and Kharoufeh (2009) applied a Markovian model to a scheduling problem and proposed an optimal maintenance policy. Liu et al. (2018) used a hidden semi-Markov model to predict the machine condition and to integrate maintenance actions into a single-machine scheduling problem.
Survey on frontiers of language and robotics
Published in Advanced Robotics, 2019
T. Taniguchi, D. Mochihashi, T. Nagai, S. Uchida, N. Inoue, I. Kobayashi, T. Nakamura, Y. Hagiwara, N. Iwahashi, T. Inamura
It should be noted that mimicking the trajectories of an expert is not essential for action learning. An important aspect of an action is its function, and it is this function that ultimately has to be reproduced. However, in imitation learning by children, it is initially difficult to notice and imitate the functional aspects of actions. Instead, by imitating the trajectory, children eventually uncover its functional underpinnings through their own actions. In other words, imitating trajectories appears to be a key first step in imitation learning as well. The central problem then becomes the concept of a unit of action: continuous actions must be segmented into meaningful units. From the perspective of mapping to language, segmented actions correspond to discrete symbols, which facilitate the connection between an action and a word. To categorize the segmented unit actions, a Gaussian process hidden semi-Markov model (GP-HSMM) was proposed in [165], which can segment time series using the idea of state transitions in a hidden semi-Markov model (HSMM).
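A toy generative sketch can show why an explicit-duration HSMM naturally carves a time series into discrete unit segments (the state names, durations, and emission means below are invented for illustration; this is not the GP-HSMM of [165], which places Gaussian processes over the emissions):

```python
import random

def sample_hsmm(states, trans, dur_pmf, emit_mean, steps, rng):
    """Toy explicit-duration HSMM generator: each visited state holds for a
    duration drawn from its own pmf and emits noisy observations around a
    state-specific mean. Returns the observations together with the true
    segments -- the discrete 'unit actions' a segmenter would try to recover."""
    s = states[0]
    obs, segments, t = [], [], 0
    while t < steps:
        # Duration sampled from this state's own pmf (not geometric).
        d = rng.choices(list(dur_pmf[s]), weights=list(dur_pmf[s].values()))[0]
        d = min(d, steps - t)
        obs += [emit_mean[s] + rng.gauss(0, 0.1) for _ in range(d)]
        segments.append((t, t + d, s))
        t += d
        # HSMM transitions forbid self-loops; durations handle persistence.
        s = rng.choices(states, weights=[trans[s][x] for x in states])[0]
    return obs, segments

rng = random.Random(0)
states = ["reach", "grasp"]
trans = {"reach": {"reach": 0.0, "grasp": 1.0},
         "grasp": {"reach": 1.0, "grasp": 0.0}}
dur = {"reach": {3: 0.5, 4: 0.5}, "grasp": {2: 0.5, 5: 0.5}}
mean = {"reach": 0.0, "grasp": 1.0}
obs, segs = sample_hsmm(states, trans, dur, mean, 20, rng)
```

Each entry of `segs` is a `(start, end, label)` triple; inverting this generative process (inferring the segments from `obs` alone) is exactly the segmentation task the excerpt describes.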