ChIP-seq analysis
Published in Altuna Akalin, Computational Genomics with R, 2020
The choice of downstream analysis is guided by the biological question of interest. Often we want to compare our samples to other available ChIP-seq experiments. It is possible to look at the pairwise differences between samples using differential peak calling (Zhang et al., 2014; Lun and Smyth, 2014; Allhoff et al., 2014, 2016). This procedure is analogous to differential expression analysis, except that it results in sets of coordinates that are differentially bound in two biological conditions. We can then search for a specific DNA binding motif in such regions, or correlate changes in binding with changes in gene expression. As the number of ChIP experiments grows, pairwise comparisons become combinatorially complex. In this case we can segment the genome into multiple classes, where each class corresponds to a combination of bound transcription factors. Genome segmentation is usually done using probabilistic models, such as hidden Markov models (Ernst and Kellis, 2012; Hoffman et al., 2012), or machine learning algorithms (Mortazavi et al., 2013).
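The cited tools implement rigorous count-based statistics (e.g. negative binomial models). Purely as an illustrative sketch of the core idea behind differential binding, not any of the cited methods, one can reduce it to a per-region log fold-change filter on read counts; all coordinates and counts below are invented:

```python
import math

def differential_binding(peaks, counts_a, counts_b, pseudocount=1.0, threshold=1.0):
    """Toy differential binding: flag regions whose ChIP-seq read counts
    differ between two conditions by more than `threshold` in log2 scale.
    A pseudocount stabilises the ratio for low-count regions."""
    hits = []
    for peak, a, b in zip(peaks, counts_a, counts_b):
        lfc = math.log2((a + pseudocount) / (b + pseudocount))
        if abs(lfc) > threshold:
            hits.append((peak, lfc))
    return hits

# Hypothetical peak coordinates and read counts in two conditions.
peaks = [("chr1", 100, 600), ("chr1", 5000, 5400), ("chr2", 80, 500)]
hits = differential_binding(peaks, counts_a=[250, 30, 95], counts_b=[40, 28, 90])
print(hits)  # only the first region changes markedly between conditions
```

Real tools replace the fold-change cutoff with a proper statistical test across replicates; the output, a set of differentially bound coordinates, is the same kind of object described above.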
Intelligent Data Analysis Techniques
Published in Arvind Kumar Bansal, Javed Iqbal Khan, S. Kaisar Alam, Introduction to Computational Health Informatics, 2019
A hidden Markov model (HMM) is a variation of a first-order Markov process in which the state of the machine is inferred probabilistically from the evidence; this indirection arises because there is a many-to-many mapping between the internal states and the evidence signals. HMMs have many applications, including medical diagnosis, text analysis, speech understanding, natural language understanding, ECG analysis to detect different waves, gene detection, recovery analysis, and disease detection.
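To make the state-versus-evidence distinction concrete, here is a minimal sketch (not from the chapter) of the forward algorithm, which computes the probability of an evidence sequence under a small hand-specified HMM; the medical-diagnosis states and all probabilities are invented for illustration:

```python
def forward_likelihood(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence
    under an HMM, summing over all possible hidden state paths."""
    # alpha[s] = P(observations so far, current hidden state == s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Hypothetical two-state model: a patient's hidden condition emits
# observable symptoms with state-dependent probabilities.
states = ("healthy", "ill")
start_p = {"healthy": 0.6, "ill": 0.4}
trans_p = {"healthy": {"healthy": 0.7, "ill": 0.3},
           "ill":     {"healthy": 0.4, "ill": 0.6}}
emit_p = {"healthy": {"normal": 0.8, "fever": 0.2},
          "ill":     {"normal": 0.3, "fever": 0.7}}

p = forward_likelihood(("normal", "fever"), states, start_p, trans_p, emit_p)
print(round(p, 4))  # → 0.228
```

The many-to-many mapping shows up in the sum over states: every hidden path that could have produced the observed symptoms contributes to the total probability.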
Brain–Computer Interface
Published in Chang S. Nam, Anton Nijholt, Fabien Lotte, Brain–Computer Interfaces Handbook, 2018
Chang S. Nam, Inchul Choi, Amy Wadeson, Mincheol Whang
A hidden Markov model (HMM) can be thought of as a bivariate stochastic process consisting of a hidden state sequence and the observation sequence it generates. The most likely hidden state sequence that produced a given sequence of observations can be found using, for example, the well-known Viterbi algorithm (Hernando et al. 2005). Under an HMM, there are two basic assumptions: (1) the hidden state sequence forms a first-order Markov chain, so each state depends only on the immediately preceding state; and (2) each observation depends only on the hidden state at the same time step.
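A compact sketch of the Viterbi algorithm mentioned above, assuming a toy two-state model; the BCI-flavoured states and probabilities are hypothetical, not from the handbook:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for `obs` under the given HMM."""
    # V[s] = (probability of the best path ending in state s, that path)
    V = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        V = {s: max(((p * trans_p[r][s] * emit_p[s][o], path + [s])
                     for r, (p, path) in V.items()), key=lambda t: t[0])
             for s in states}
    prob, path = max(V.values(), key=lambda t: t[0])
    return path, prob

# Hypothetical example: a hidden mental state ("rest"/"task") generating
# discretised EEG power levels ("low"/"high").
states = ("rest", "task")
start_p = {"rest": 0.7, "task": 0.3}
trans_p = {"rest": {"rest": 0.8, "task": 0.2},
           "task": {"rest": 0.3, "task": 0.7}}
emit_p = {"rest": {"low": 0.7, "high": 0.3},
          "task": {"low": 0.2, "high": 0.8}}

path, prob = viterbi(("low", "high", "high"), states, start_p, trans_p, emit_p)
print(path)  # → ['rest', 'task', 'task']
```

Unlike the forward algorithm, which sums over all hidden paths, Viterbi maximises over them, which is why it recovers a single best state sequence.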
Body activity grading strategy for cervical rehabilitation training
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2023
The hidden Markov model (HMM) is another promising mathematical framework. Early research on HMM-based activity analysis is reported in Vogler and Metaxas (1998), where arm movements are identified and converted to sign language. Following this work, HMM-based activity recognition has been investigated in numerous studies, as reviewed in Attal et al. (2015) and Ionut-Cristian and Dan-Marius (2021). HMMs with Gaussian and Poisson distributions have been proposed to conduct bout detection for different activities (Vitali et al. 2014). An HMM with Gaussian mixture emissions has been applied to model and recognize 24 physical activities, achieving classification accuracy of over 90% (Dutta et al. 2018). In addition, data fusion has been employed in HMM classifiers for better recognition results (Ward et al. 2006; Cheng et al. 2013). HMMs can thus be tailored to handle a variety of activity recognition tasks.
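A common recipe behind such systems, though not necessarily the exact one used in the cited studies, is to fit one HMM per activity and label a new signal with the model of highest likelihood. A toy sketch with univariate Gaussian emissions and invented parameters (real systems use Gaussian mixtures over multivariate sensor features):

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log density of a univariate Gaussian."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def logsumexp(vals):
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def hmm_loglik(obs, start_p, trans_p, emissions):
    """Log-likelihood of an observation sequence under an HMM with
    Gaussian emissions, via the forward algorithm in log space."""
    n = len(start_p)
    log_a = [[math.log(trans_p[i][j]) for j in range(n)] for i in range(n)]
    alpha = [math.log(start_p[s]) + gauss_logpdf(obs[0], *emissions[s])
             for s in range(n)]
    for x in obs[1:]:
        alpha = [gauss_logpdf(x, *emissions[s]) +
                 logsumexp([alpha[r] + log_a[r][s] for r in range(n)])
                 for s in range(n)]
    return logsumexp(alpha)

# Hypothetical per-activity models over accelerometer magnitude (in g):
# "walking" alternates between two movement intensities; "resting" stays low.
models = {
    "walking": ([0.5, 0.5], [[0.6, 0.4], [0.4, 0.6]], [(1.2, 0.3), (1.8, 0.4)]),
    "resting": ([0.9, 0.1], [[0.95, 0.05], [0.5, 0.5]], [(1.0, 0.05), (1.1, 0.1)]),
}

signal = [1.3, 1.7, 1.9, 1.4, 1.8]
label = max(models, key=lambda m: hmm_loglik(signal, *models[m]))
print(label)  # → walking
```

Working in log space is the standard trick here: per-sample emission probabilities multiply across a long signal and would otherwise underflow.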
Estimating the earthquake occurrence rates in Corinth Gulf (Greece) through Markovian arrival process modeling
Published in Journal of Applied Statistics, 2019
P. Bountzis, E. Papadimitriou, G. Tsaklidis
An extension of Markov and renewal models, the semi-Markov model (SMM), allows dependencies on both the magnitude and the elapsed time of the last event to be considered. An SMM in continuous time was studied by Votsi et al. [45] for the temporal variation of seismicity in the Northern Aegean Sea, Greece. The main feature of the hidden Markov models (HMMs) introduced by Baum and Petrie [4] is the set of hidden states of the underlying Markov process, which is essential when dealing with unobservable data. Recently, keen interest has arisen in applying HMMs in seismology. Orfanogiannaki et al. [37] applied a Poisson HMM (PHMM) to estimate the seismicity rates in the area of the Ionian Sea, Greece, for the period 1900-2006. The PHMM revealed changes of seismicity and recognized earthquake clusters. Votsi et al. [46] applied HMMs to identify the unobservable stress level controlling the evolution of strong earthquakes. Votsi et al. [47] applied for the first time a discrete-time hidden semi-Markov model (HSMM), providing a statistical estimator of the intensity function. Pertsinidou et al. [38] extended the work of Votsi et al. [47] by assuming different emission and jump times for the HSMM. Wang et al. [49] used an HMM in which each state corresponds to a distinct segment of the Tokai fault zone (Japan) and revealed a spatiotemporal transition pattern between the segments.
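The Poisson HMM approach can be sketched in a few lines. The following toy implementation (not the authors' code; the regimes, rates, counts and transition probabilities are all invented) uses the forward-backward algorithm with Poisson emissions to compute, for each monthly earthquake count, the posterior probability of a quiet versus an active seismicity regime:

```python
import math

def poisson_logpmf(k, lam):
    """Log probability of observing k events under a Poisson(lam) law."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def posterior_states(counts, start_p, trans_p, rates):
    """Posterior probability of each hidden seismicity regime at each
    time step, via the forward-backward algorithm with Poisson emissions."""
    n, T = len(rates), len(counts)
    emit = [[math.exp(poisson_logpmf(k, rates[s])) for s in range(n)]
            for k in counts]
    # Forward pass, rescaled at each step to avoid numerical underflow.
    alpha = [[0.0] * n for _ in range(T)]
    alpha[0] = [start_p[s] * emit[0][s] for s in range(n)]
    for t in range(1, T):
        alpha[t] = [emit[t][s] * sum(alpha[t-1][r] * trans_p[r][s] for r in range(n))
                    for s in range(n)]
        z = sum(alpha[t]); alpha[t] = [a / z for a in alpha[t]]
    # Backward pass, rescaled the same way.
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(trans_p[s][r] * emit[t+1][r] * beta[t+1][r] for r in range(n))
                   for s in range(n)]
        z = sum(beta[t]); beta[t] = [b / z for b in beta[t]]
    # Per-step normalisation cancels the scaling constants.
    post = []
    for t in range(T):
        g = [alpha[t][s] * beta[t][s] for s in range(n)]
        z = sum(g); post.append([x / z for x in g])
    return post

# Hypothetical monthly earthquake counts: a quiet regime (2 events/month)
# interrupted by an active regime (10 events/month).
counts = [1, 3, 2, 9, 12, 11, 2, 1]
post = posterior_states(counts,
                        start_p=[0.5, 0.5],
                        trans_p=[[0.9, 0.1], [0.1, 0.9]],
                        rates=[2.0, 10.0])
active = [p[1] > 0.5 for p in post]
print(active)  # → the middle three months are flagged as the active regime
```

In a real analysis the rates and transition probabilities would be estimated from the catalogue (typically by the Baum-Welch EM algorithm) rather than fixed by hand.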
Automated speech analysis tools for children’s speech production: A systematic literature review
Published in International Journal of Speech-Language Pathology, 2018
J. McKechnie, B. Ahmed, R. Gutierrez-Osuna, P. Monroe, P. McCabe, K. J. Ballard
In the 1960s and 70s, the earliest ASA systems were able to process isolated words from small to medium pre-defined vocabularies, using acoustic phonetics to perform time alignment, template-based pattern recognition, or matching of the incoming speech signal against a stored reference production (Kurian, 2014). The inherent variability of the speech signal, introduced by vocal tract variations across speakers and temporal variability across repeated productions of the same word, limited recognition accuracy. In the 1970s, linear predictive coding (LPC) was introduced, which could account for some of the individual variation caused by vocal tract differences (Kurian, 2014). In the 1980s, ASA tools became better able to process larger vocabularies and continuous speech, driven by the development of technology based on statistical modelling of the probability that a particular set of language symbols (i.e. either phoneme sequences or word sequences) matched the incoming speech signal (Kurian, 2014). These systems are more robust to variations across speakers (e.g. pronunciation or accent) and environmental noise, as well as to temporal variations in the speech signal (Kurian, 2014). Hidden Markov models (HMMs), which perform temporal pattern recognition, are now the predominant technology behind speech recognition systems. In the 1990s, new innovations in pattern recognition led to discriminative training and kernel-based techniques such as Support Vector Machines (SVMs), which functioned as classifiers. Figure 1 presents a model of the component processes involved in modern ASA systems (also see Keshet, in press, in this issue; and Shaikh and Deshmukh, 2016).