Quantum Computing Application for Satellites and Satellite Image Processing
Published in Thiruselvan Subramanian, Archana Dhyani, Adarsh Kumar, Sukhpal Singh Gill, Artificial Intelligence, Machine Learning and Blockchain in Quantum Satellite, Drone and Network, 2023
Ajay Kumar, B.S. Tewari, Kamal Pandey
Users may be interested in extracting certain specific features like water bodies, forest cover, built-up area, clouds, roads, rivers, canals, etc. from the satellite image. In all such cases, the feature extraction methods given below are utilised (Sedaghat & Mohammadi, 2018; Pare et al., 2018; Rathore et al., 2016):
- Uniform Competency Feature Extraction
- RepTree, Machine Learning, and Euclidean distance
- Multi-image Saliency Analysis
- Digital Surface Models
- Reversible Jump Markov Chain Monte Carlo Sampler
Extreme Load Analysis
Published in Yu Ding, Data Science for Wind Energy, 2019
The Bayesian MARS model treats the number and locations of the knots as random quantities. When the number of knots changes, the dimension of the parameter space changes with it. To handle varying dimensionality in the probability distributions during random sampling, analysts use the reversible jump Markov chain Monte Carlo (RJMCMC) algorithm developed by Green [78]. The acceptance probability of an RJMCMC algorithm includes a Jacobian term, which accounts for the change in dimension. However, under the assumption that the model space for parameters of varying dimension is discrete, no Jacobian is needed. In the turbine extreme load analysis, this assumption is satisfied because only the probable models over possible knot locations and numbers are considered. Instead of using the RJMCMC algorithm, Lee et al. [131] use the reversible jump sampler (RJS) algorithm proposed in [46]. Because the RJS algorithm requires neither new parameters to match dimensions between models nor the corresponding Jacobian term in the acceptance probability, it is simpler and more efficient to execute.
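The no-Jacobian point above can be illustrated with a minimal sketch: when candidate knots live on a fixed discrete grid, a birth/death move between models of different size is just a Metropolis-Hastings step on a countable model space. This is a toy illustration, not the RJS of [46] or the Bayesian MARS model itself; the data, grid, noise level, and knot-count penalty are all made-up, and the proposal-probability ratio is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy piecewise-linear signal with one true breakpoint
x = np.linspace(0, 1, 200)
y = np.where(x < 0.5, 2 * x, 2 - 2 * x) + rng.normal(0, 0.05, x.size)

grid = np.linspace(0.05, 0.95, 19)  # discrete candidate knot locations

def log_post(knots):
    """Log posterior up to a constant: Gaussian likelihood of a linear
    spline fit plus a penalty on the number of knots (a crude log-prior)."""
    basis = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    B = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    rss = np.sum((y - B @ coef) ** 2)
    return -0.5 * rss / 0.05**2 - 2.0 * len(knots)

knots = []          # start with the empty model
cur = log_post(knots)
for _ in range(2000):
    if not knots or (rng.random() < 0.5 and len(knots) < len(grid)):
        # birth move: add an unused grid knot
        cand = [k for k in grid if k not in knots]
        prop = sorted(knots + [rng.choice(cand)])
    else:
        # death move: remove one existing knot
        prop = list(knots)
        prop.remove(knots[rng.integers(len(knots))])
    new = log_post(prop)
    # Discrete model space: the acceptance ratio needs no Jacobian term
    # (the birth/death proposal ratio is dropped here to keep the sketch short)
    if np.log(rng.random()) < new - cur:
        knots, cur = prop, new

print(len(knots), sorted(np.round(knots, 2)))
```

Because every model is indexed by a finite subset of the grid, moving between dimensions never involves a continuous change of variables, which is exactly why the Jacobian disappears.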
Model-Based Clustering
Published in Sylvia Frühwirth-Schnatter, Gilles Celeux, Christian P. Robert, Handbook of Mixture Analysis, 2019
Alternatively, within a Bayesian framework, Tadesse et al. (2005) propose using reversible jump Markov chain Monte Carlo (MCMC) methods in Gaussian mixture modelling to move between mixture models with different numbers of components while variable selection is accomplished by stochastic search through the model space. In the context of infinite mixtures, Kim et al. (2006) combine stochastic search for cluster-relevant variables with a Dirichlet process prior on the mixture weights to estimate the number of components. White et al. (2016) suggest using collapsed Gibbs sampling in the context of latent class analysis to perform inference on the number of clusters as well as the usefulness of the variables.
Flexible methods for reliability estimation using aggregate failure-time data
Published in IISE Transactions, 2021
Samira Karimi, Haitao Liao, Neng Fan
A technical challenge of using PH distributions is model parameter estimation. Asmussen et al. (1996) developed an EM algorithm to obtain the MLEs of model parameters. They also used the EM algorithm to minimize information divergence in a density approximation. As the EM algorithm is computationally intensive, Okamura et al. (2011) proposed a refined EM algorithm to reduce the computational time using uniformization and an improved forward-backward algorithm. As an alternative, under the framework of Bayesian statistics, Bladt et al. (2003) used a Markov chain Monte Carlo (MCMC) method combined with Gibbs sampling for general PH distributions. Watanabe et al. (2012) also presented an MCMC approach to fit PH distributions while using uniformization and backward likelihood computation to reduce the computational time. Ausín et al. (2008) and McGrory et al. (2009) explored two special cases of PH distributions (i.e., Erlang and Coxian) through a reversible jump Markov chain Monte Carlo (RJMCMC) method. Yamaguchi et al. (2010) and Okamura et al. (2014) presented variational Bayesian methods to improve the computational efficiency of PH estimation in comparison with MCMC. It is worth pointing out that none of these estimation methods was developed for aggregate data. In this article, efforts will be focused on developing a collection of new MLE and Bayesian methods for the analysis of aggregate failure-time data.
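To make the EM idea concrete, the simplest PH special case is the hyperexponential (a mixture of exponentials), for which the E- and M-steps have closed forms. The sketch below is not the general EM of Asmussen et al. (1996); it fits a two-component exponential mixture to simulated (non-aggregate) failure times, with all rates and sample sizes chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated failure times from a 2-phase hyperexponential distribution:
# a 60/40 mixture of exponentials with means 1.0 and 5.0
t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 200)])

# EM for the mixture  p * Exp(l1) + (1 - p) * Exp(l2)
p, l1, l2 = 0.5, 0.5, 2.0  # arbitrary starting values
for _ in range(200):
    # E-step: posterior responsibility of phase 1 for each observation
    d1 = p * l1 * np.exp(-l1 * t)
    d2 = (1 - p) * l2 * np.exp(-l2 * t)
    r = d1 / (d1 + d2)
    # M-step: closed-form updates of the mixing weight and the two rates
    p = r.mean()
    l1 = r.sum() / (r * t).sum()
    l2 = (1 - r).sum() / ((1 - r) * t).sum()

print(p, 1 / l1, 1 / l2)  # estimated weight and component means
```

General PH distributions require the full forward-backward machinery over the latent Markov chain, which is what makes the refinements of Okamura et al. (2011) worthwhile; this special case only shows the shape of the iteration.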
Spatial Statistical Downscaling for Constructing High-Resolution Nature Runs in Global Observing System Simulation Experiments
Published in Technometrics, 2019
Pulong Ma, Emily L. Kang, Amy J. Braverman, Hai M. Nguyen
Our basis function selection method differs from that of Tzeng and Huang (2017) in that our method is designed to learn nonstationary and localized features from the data. Tzeng and Huang (2017) use information about data locations, but not data values, to specify basis functions. Other methods for nonstationary spatial modeling such as Katzfuss (2013) and Konomi, Sang, and Mallick (2014) require computationally intensive reversible jump Markov chain Monte Carlo methods. In contrast, the forward stepwise algorithm we propose is simpler, more intuitive, and well-suited for parallel computing environments. The spatial downscaling procedure is also computationally efficient, and can produce not just one but many high-resolution statistical replicates from a coarse-resolution spatial field because it is based on conditional simulation.
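A generic forward stepwise basis selection loop, of the kind alluded to above, can be sketched in a few lines: at each step, score every unused candidate basis function against the current residual and keep the best one. This is a minimal greedy sketch under made-up data and a made-up Gaussian-bump candidate dictionary, not the authors' algorithm; in particular, it uses data values (the residual) to pick basis functions, which is the contrast drawn with Tzeng and Huang (2017).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "spatial" field with a localized feature near s = 0.3
s = np.linspace(0, 1, 300)
z = np.exp(-((s - 0.3) ** 2) / 0.005) + rng.normal(0, 0.05, s.size)

# Candidate dictionary: Gaussian bumps at many centres, two bandwidths
centres = np.linspace(0, 1, 50)
cand = np.column_stack([np.exp(-((s - c) ** 2) / w)
                        for c in centres for w in (0.002, 0.02)])

selected, residual = [], z.copy()
for _ in range(5):  # greedy forward stepwise selection
    # score each candidate by correlation with the current residual
    scores = np.abs(cand.T @ residual)
    scores[selected] = -np.inf          # never re-select a basis function
    selected.append(int(np.argmax(scores)))
    # refit with all selected basis functions and update the residual
    B = cand[:, selected]
    coef, *_ = np.linalg.lstsq(B, z, rcond=None)
    residual = z - B @ coef

print(selected, np.sum(residual ** 2))
```

Each pass is a least-squares fit plus a matrix-vector product, so candidate scoring parallelizes trivially across the dictionary, which is the property the excerpt highlights.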
Reconstruction of refractive index maps using photogrammetry
Published in Inverse Problems in Science and Engineering, 2021
A. Miller, A.J. Mulholland, K.M.M. Tant, S.G. Pierce, B. Hughes, A.B. Forbes
To sample the model space and construct an approximation of the posterior distribution, we employ the reversible jump Markov chain Monte Carlo (rj-MCMC) [37] method for the optimization step. The rj-MCMC is a stochastic iterative process used to create samples from the posterior distribution, which is the unknown probability distribution describing the likelihood of each Voronoi tessellation being the reconstructed refractive index map. The approach relies on Bayes' Theorem [38], which leads to $\pi(m \mid d) \propto p(d \mid m)\,\pi(m)$, where $\pi(m)$ is a probability density function representing the prior knowledge of the model $m$. The likelihood of observing the data $d$ given a particular model $m$ is $p(d \mid m)$, and this is proportional to $\exp(-\phi(m)/(2\sigma^2))$, where $\sigma$ is the noise parameter and $\phi(m)$ (measured in degrees) is the misfit function defined in Equation (4). The posterior distribution, which describes the probability of $m$ being the refractive index map given the data $d$, is denoted by $\pi(m \mid d)$. The likelihood function is used to calculate $\pi(m \mid d)$.
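The fixed-dimension part of such a sampler is an ordinary Metropolis random walk on the model parameters, accepting proposals with probability governed by the misfit-based likelihood. The sketch below shows only that within-model step, with a made-up quadratic stand-in for the misfit function, an assumed noise parameter, and a two-parameter model; the trans-dimensional birth/death moves that add or remove Voronoi cells are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 1.0                     # noise parameter, assumed known here
target = np.array([0.4, 0.7])   # "true" model for the toy misfit

def misfit(m):
    """Toy quadratic stand-in for the misfit function phi(m)."""
    return 50.0 * np.sum((m - target) ** 2)

def log_post(m):
    """Log posterior up to a constant, with a uniform prior on [0, 1]^2."""
    if np.any(m < 0) or np.any(m > 1):
        return -np.inf
    return -misfit(m) / (2 * sigma ** 2)

m = np.array([0.5, 0.5])
cur = log_post(m)
samples = []
for _ in range(5000):
    prop = m + rng.normal(0, 0.05, 2)     # symmetric random-walk proposal
    new = log_post(prop)
    if np.log(rng.random()) < new - cur:  # Metropolis acceptance rule
        m, cur = prop, new
    samples.append(m.copy())

post_mean = np.mean(samples[1000:], axis=0)  # discard burn-in
print(post_mean)
```

The collected samples approximate the posterior, so summaries such as the posterior mean or pointwise uncertainty of the reconstructed map fall out of the same run.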