Fundamental Materials and Tools
Published in Wing C. Kwong, Guu-Chang Yang, Optical Coding Theory with Prime, 2018
Containing a finite number of states, a Markov chain is a discrete random process with the (memoryless) Markov property. The Markov property states that the conditional probability of the next state depends only on the present state, not on the sequence of states that preceded it. Let the random variables $\{X_0, X_1, X_2, \ldots, X_i\}$ represent the probability distributions of $i + 1$ states, labeled with distinct integers ranging from $0$ to $i$. The Markov property implies that $\Pr(X_i = x_i \mid X_{i-1} = x_{i-1}, X_{i-2} = x_{i-2}, \ldots, X_0 = x_0) = \Pr(X_i = x_i \mid X_{i-1} = x_{i-1})$.
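To make the memoryless sampling rule concrete, the following minimal sketch (not part of the excerpt) draws a trajectory from a hypothetical three-state chain; the transition matrix `P` and the seed are illustrative assumptions.

```python
import numpy as np

# Sampling a finite-state Markov chain whose next state depends only on
# the present state. Row j of the (assumed, illustrative) matrix P gives
# Pr(X_i = k | X_{i-1} = j).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

rng = np.random.default_rng(seed=0)

def sample_chain(P, x0, n_steps):
    """Draw X_0, ..., X_n; each step conditions only on the current state."""
    states = [x0]
    for _ in range(n_steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

print(sample_chain(P, x0=0, n_steps=10))
```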
Markov Chain Models and Hidden Markov Models
Published in Nong Ye, Data Mining, 2013
A Markov chain model describes a first-order discrete-time stochastic process of a system with the Markov property: the probability of the system state at time $n$ does not depend on the earlier system states leading up to time $n - 1$, but only on the system state at time $n - 1$:

$$P(s_n \mid s_{n-1}, \ldots, s_1) = P(s_n \mid s_{n-1}) \quad \text{for all } n.$$
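In a data-mining setting, the transition probabilities of such a first-order model are typically estimated by counting consecutive state pairs in an observed sequence. The sketch below illustrates this with made-up example data; it is not taken from the book.

```python
from collections import Counter

# Estimate P(s_n | s_{n-1}) from an observed state sequence by counting
# consecutive pairs. The sequence is illustrative example data.
sequence = ["A", "A", "B", "A", "C", "C", "B", "A", "A", "B"]

pair_counts = Counter(zip(sequence, sequence[1:]))
state_counts = Counter(sequence[:-1])   # counts of each "from" state

transition = {
    (prev, nxt): count / state_counts[prev]
    for (prev, nxt), count in pair_counts.items()
}
for (prev, nxt), p in sorted(transition.items()):
    print(f"P({nxt} | {prev}) = {p:.2f}")
```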
Efficiency analysis of prototype evolution methodology on collaborative design
Published in Paulo Jorge Bártolo, Artur Jorge Mateus, Fernando da Conceição Batista, Henrique Amorim Almeida, João Manuel Matias, Joel Correia Vasco, Jorge Brites Gaspar, Mário António Correia, Nuno Carpinteiro André, Nuno Fernandes Alves, Paulo Parente Novo, Pedro Gonçalves Martinho, Rui Adriano Carvalho, Virtual and Rapid Manufacturing, 2007
The product design process is a cycle of produced and satisfied requirements. It begins with the generation of the original requirements and ends with the satisfaction of most requirements. This is a birth-death process, a special case of a Markov chain: it has the Markov property and only permits the system state to change between neighboring states or remain unchanged (Figure 2). In Figure 2, the ellipses represent different system states, the forward arrows represent birth probabilities, and the backward arrows represent death probabilities. The Markov property means that no memory of past system states remains: the future state of a system is determined only by the current state and has no relationship with its past. If all states of every design stage were defined, the process would satisfy the Markov property exactly. In practice, although the local properties of state transitions in a controlled design process resemble the Markov property, it is impossible to define all the system states; the birth-death process is therefore only an acceptable approximation of the design process in most cases.
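The neighbor-only transition rule of a birth-death chain is easy to state in code. The following sketch is illustrative only; the state count and the birth/death probabilities are assumptions, not values from the paper.

```python
import random

# A birth-death chain on states 0..N: each step moves to a neighboring
# state (birth: +1, death: -1) or stays unchanged. All numbers are
# illustrative assumptions.
N = 5               # index of the last state
p_birth = 0.4       # probability of moving to state k+1
p_death = 0.3       # probability of moving to state k-1 (else: stay)

def step(k):
    u = random.random()
    if u < p_birth:
        return min(k + 1, N)    # birth; saturates at the last state
    if u < p_birth + p_death:
        return max(k - 1, 0)    # death; saturates at the first state
    return k                    # otherwise the state is unchanged

state = 0
for _ in range(20):
    state = step(state)
print("final state:", state)
```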
Unmanned aerial vehicles using machine learning for autonomous flight; state-of-the-art
Published in Advanced Robotics, 2019
Reinforcement learning (RL) is concerned with finding the optimal policy, which is learned from feedback; this kind of feedback is called reward or reinforcement [41]. The agent repeats trial and error until it learns which actions maximize the expected total reward. The task of reinforcement learning is thus to learn an optimal policy for the environment using the observed rewards [41]. Classical reinforcement learning approaches assume a Markov Decision Process (MDP) consisting of a set of states S, a set of actions A, rewards R, and transition probabilities T that capture the dynamics of the system; T describes the effects of actions on the state. The Markov property requires that the next state and the reward depend only on the previous state s and action a, and not on additional information about past states or actions. A state is therefore a sufficient statistic for predicting the future, rendering previous observations irrelevant [45–47].
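As a concrete illustration of the (S, A, R, T) formulation, the sketch below runs value iteration on a tiny tabular MDP with full knowledge of the model. The state/action counts, discount factor, and all matrix entries are toy assumptions, not taken from the survey.

```python
import numpy as np

# Value iteration on a toy MDP: the next-state distribution depends only
# on the current state s and action a (the Markov property).
n_states, n_actions, gamma = 3, 2, 0.9

# T[s, a, s'] = probability of landing in s' after taking a in s (assumed).
T = np.array([[[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
              [[0.0, 0.6, 0.4], [0.5, 0.5, 0.0]],
              [[0.0, 0.0, 1.0], [0.3, 0.0, 0.7]]])
# R[s, a] = expected immediate reward (assumed).
R = np.array([[0.0, 1.0], [0.5, 0.0], [2.0, 0.0]])

V = np.zeros(n_states)
for _ in range(200):                 # iterate the Bellman optimality update
    Q = R + gamma * (T @ V)          # Q[s, a]; expectation over next states
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)            # greedy policy w.r.t. converged values
print("optimal values:", V.round(3), "policy:", policy)
```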
A new belief Markov chain model and its application in inventory prediction
Published in International Journal of Production Research, 2018
A Markov process can be used to model a random system that changes states according to a transition rule that depends only on the current state. The Markov property is the most important property of a Markov process: the conditional probability distribution of the future states of the process depends only upon the present state (Komorowski and Szarek 2010; Werner 2016). A Markov chain is a stochastic process with the Markov property (Sanz-Serna 2014). It is widely used in many applications, including mathematical physics (Farahat 2010; Navaei 2010), decision making (Alagoz, Hsu, Schaefer, and Roberts 2010), economic analysis (Annett, Sabine, and Helge 2010), quality testing (Liao 2016), and military applications (Zais and Zhang 2016). In particular, Markov chains are broadly used for prediction (Samet and Mojallal 2014), such as rainfall prediction (Chaudhuri et al. 2014; Liu, Tian, and Wang 2011) and economic prediction (Jin et al. 2015; Mar et al. 2010).
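The prediction use of a Markov chain follows directly from the Markov property: the distribution k steps ahead is the current distribution multiplied by the k-th power of the transition matrix. The sketch below uses made-up two-state numbers purely for illustration.

```python
import numpy as np

# k-step-ahead prediction from a transition matrix (entries assumed).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])      # e.g. a two-state "high"/"low" demand chain
pi0 = np.array([1.0, 0.0])      # current state: "high" with certainty

pi3 = pi0 @ np.linalg.matrix_power(P, 3)   # distribution 3 steps ahead
print("3-step-ahead prediction:", pi3)
```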
Risk-based inspection planning of deteriorating structures
Published in Structure and Infrastructure Engineering, 2021
David Y. Yang, Dan M. Frangopol
Since regular inspection is considered, the analysis does not need to be conducted annually; instead, it is performed over inspection intervals, which greatly improves computational efficiency. To obtain the transition probabilities within an inspection interval, the Markov property can be utilized again. For instance, the transition matrices in Case I can be expressed in terms of the two 10-year transition matrices without and with the maintenance action in year 10, respectively.
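The interval idea can be sketched as follows: by the Markov property, a multi-year transition matrix is the product of the one-year matrices, and a maintenance action enters as one extra matrix in the chain. The matrices below are placeholders assumed for illustration, not the paper's.

```python
import numpy as np

# Composing interval transition matrices via the Markov property
# (Chapman-Kolmogorov). All entries are toy assumptions.
P_annual = np.array([[0.95, 0.05],
                     [0.00, 1.00]])    # toy deterioration: intact -> damaged
M = np.array([[1.0, 0.0],
              [0.8, 0.2]])             # toy maintenance: mostly repairs damage

P10_without = np.linalg.matrix_power(P_annual, 10)     # 10 years, no maintenance
P10_with = np.linalg.matrix_power(P_annual, 10) @ M    # maintenance at year 10
print(P10_without.round(3), P10_with.round(3), sep="\n")
```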