Basic Approaches of Artificial Intelligence and Machine Learning in Thermal Image Processing
Published in U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer, Artificial Intelligence-Based Infrared Thermal Image Processing and Its Applications, 2023
The mean shift clustering algorithm assigns each pixel in the image to a cluster centroid and is built on the concept of kernel density estimation. Each pixel is shifted in the direction in which the local density of data points is highest: in every iteration the pixels move toward the region containing the most data points, eventually converging to a cluster center (a mode of the estimated density). When the algorithm terminates, every pixel in the image has been assigned to a cluster. The advantage of mean shift clustering is that the number of clusters does not need to be specified beforehand; it is determined by the algorithm from the data, as the sketch below illustrates.
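As a rough illustration of this procedure (not the chapter's implementation), the sketch below runs mean shift with a Gaussian kernel on one-dimensional pixel intensities; the bandwidth, tolerance, and toy data are illustrative assumptions.

```python
import numpy as np

def mean_shift_1d(points, bandwidth=2.0, tol=1e-3, max_iter=100):
    """Shift each point toward the kernel-weighted mean of the data
    until it settles on a mode of the estimated density."""
    shifted = points.astype(float).copy()
    for _ in range(max_iter):
        dist2 = (shifted[:, None] - points[None, :]) ** 2
        weights = np.exp(-dist2 / (2.0 * bandwidth**2))  # Gaussian kernel
        new = (weights @ points) / weights.sum(axis=1)   # weighted mean
        if np.abs(new - shifted).max() < tol:            # converged
            return _label(new, bandwidth)
        shifted = new
    return _label(shifted, bandwidth)

def _label(shifted, bandwidth):
    """Points whose converged positions lie within one bandwidth of
    each other are assigned to the same cluster."""
    order = np.argsort(shifted)
    labels = np.empty(len(shifted), dtype=int)
    labels[order[0]], current = 0, 0
    for a, b in zip(order[:-1], order[1:]):
        if shifted[b] - shifted[a] > bandwidth:
            current += 1
        labels[b] = current
    return labels, shifted

# Toy example: 1-D pixel intensities with two density modes
pixels = np.array([10.0, 11, 12, 11, 50, 51, 52, 49])
labels, modes = mean_shift_1d(pixels)
print(labels)  # two clusters emerge; none were specified beforehand
```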
Machine Learning
Published in Michael Ljungberg, Handbook of Nuclear Medicine and Molecular Imaging for Physicists, 2022
The training data, as shown in Figure 11.1, can be used to estimate the probability density function (the likelihood) $f_X(x \mid Y = 1)$, that is, how probable it is that the petal length is $x$ given that the flower is of species $y = 1$ (Iris Setosa). This is illustrated in the top part of Figure 11.1, where such measurements are shown. There are many ways to estimate probability density functions, for example, (i) using histograms, (ii) parametric probability density estimation, and (iii) non-parametric probability density estimation. In this example we use a non-parametric method called kernel density estimation, see [9, 10]. The method is based on choosing a kernel, often a Gaussian kernel, $K_h(u) = \frac{1}{h\sqrt{2\pi}}\exp\!\left(-\frac{u^2}{2h^2}\right)$, where $h$ is the kernel width (bandwidth).
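As a minimal sketch of this idea (not the book's code), the snippet below estimates the class-conditional likelihood of a petal length with a Gaussian kernel; the petal-length values and the bandwidth are illustrative assumptions.

```python
import numpy as np

def kde_gauss(x, samples, h):
    """Kernel density estimate at x using a Gaussian kernel of width h."""
    u = (x - samples) / h
    return np.mean(np.exp(-0.5 * u**2) / (h * np.sqrt(2 * np.pi)))

# Illustrative petal lengths (cm) for flowers of species y = 1 (Iris Setosa)
setosa_lengths = np.array([1.4, 1.3, 1.5, 1.4, 1.6, 1.5, 1.7, 1.4])

# Estimated likelihood f_X(x | Y = 1) at a new measurement x = 1.45 cm
print(kde_gauss(1.45, setosa_lengths, h=0.1))
```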
Image Segmentation with B-Spline Level Set
Published in Ayman El-Baz, Jasjit S. Suri, Level Set Method in Medical Imaging Segmentation, 2019
Shenhai Zheng, Bin Fang, Laquan Li
Let $\{\phi_i\}_{i=1}^{N}$ and $\phi$ be the set of aligned training samples and an evolving level-set function, respectively. The kernel density estimate of $\phi$ is given as [36]:
$$p(\phi) \propto \frac{1}{N}\sum_{i=1}^{N} \exp\!\left(-\frac{d^2(\phi, \phi_i)}{2\sigma^2}\right),$$
where $d(\cdot,\cdot)$ denotes a distance between level-set functions and $\sigma$ is the kernel width.
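A rough numerical sketch of this prior follows, assuming a Gaussian kernel over a squared L2 distance between discretized level-set functions; the distance, the kernel width, and the toy grids are assumptions, not the chapter's exact definitions.

```python
import numpy as np

def shape_prior(phi, training, sigma):
    """Unnormalized KDE of an evolving level-set function phi, using a
    Gaussian kernel over the L2 distance to each aligned training sample."""
    d2 = np.array([np.sum((phi - phi_i) ** 2) for phi_i in training])
    return np.mean(np.exp(-d2 / (2 * sigma**2)))

# Toy 2-D level-set grids: aligned training shapes and an evolving function
rng = np.random.default_rng(0)
training = [rng.standard_normal((32, 32)) for _ in range(5)]
phi = training[0] + 0.1 * rng.standard_normal((32, 32))
print(shape_prior(phi, training, sigma=10.0))
```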
Analysis of batched service time data using Gaussian and semi-parametric kernel models
Published in Journal of Applied Statistics, 2020
Xueying Wang, Chunxiao Zhou, Kepher Makambi, Ao Yuan, Jaeil Ahn
In this section, we introduce kernel density estimation. Given observations $x_1,\dots,x_n$, the kernel density estimate of the underlying density is [5,22,26]
$$\hat{f}(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$
where $K$ is a kernel function and $h > 0$ is the bandwidth parameter (see, e.g., [25]).
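To make the role of the bandwidth parameter concrete, here is a brief sketch (not the paper's semi-parametric method) using Silverman's standard rule-of-thumb bandwidth with a Gaussian kernel; the service-time values are made up.

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    n = len(x)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(np.std(x, ddof=1), iqr / 1.34) * n ** (-0.2)

def kde(x_grid, samples, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (x_grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

# Made-up positive service times (minutes)
times = np.array([2.1, 3.4, 2.8, 5.6, 4.2, 3.1, 6.0, 2.5, 3.9, 4.8])
h = silverman_bandwidth(times)
grid = np.linspace(0, 8, 5)
print(h, kde(grid, times, h))
```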
Parallel Markov chain Monte Carlo for Bayesian hierarchical models with big data, in two stages
Published in Journal of Applied Statistics, 2019
The two-stage method has several advantages over the existing communication-free parallel computing methods of Neiswanger et al. [28] and Scott et al. [38], which divide the full data set by observations rather than by groups. As shown in the examples, our method is appropriate for both Gaussian and non-Gaussian posterior distributions, unlike these existing methods, which are best suited to Gaussian posteriors. The two-stage method also does not split the prior distributions of the full model into parts for the subset analyses, avoiding any impropriety of subset priors and of the resulting subset posteriors. In addition, unlike the kernel density estimation technique of Neiswanger et al. [28], our procedure is not limited by the parameter dimension, which grows with the number of groups. This is because the groups are processed independently in stage 1 and then estimated one by one in stage 2 through the Metropolis-Hastings step. The two-stage method is also flexible in that different numbers of MCMC samples can be drawn in each group in stage 1; this may be necessary if, for example, some groups require more iterations to converge than others. Finally, the parallel computing in stage 1 is particularly well suited to distributed computing frameworks such as MapReduce [10] and can be run on multi-core processors and networks of machines.
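The stage-1 idea can be sketched as follows; this is a toy illustration under an assumed Gaussian group model with a flat prior, not the authors' implementation. Each group's data is handled by an independent random-walk Metropolis-Hastings sampler, so the group-level chains can run in parallel on separate cores and can differ in length.

```python
import numpy as np
from multiprocessing import Pool

def mh_group_mean(args):
    """Random-walk Metropolis-Hastings for one group's mean, assuming
    y ~ N(theta, 1) with a flat prior (a toy stand-in for stage 1)."""
    y, n_iter, seed = args
    rng = np.random.default_rng(seed)
    theta, draws = y.mean(), []
    def log_post(t):
        return -0.5 * np.sum((y - t) ** 2)
    for _ in range(n_iter):
        prop = theta + 0.3 * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        draws.append(theta)
    return np.array(draws)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    groups = [rng.normal(mu, 1.0, size=50) for mu in (0.0, 2.0, 5.0)]
    # Stage 1: groups are independent, so chain lengths may differ
    jobs = [(g, 2000 + 500 * i, i) for i, g in enumerate(groups)]
    with Pool() as pool:
        chains = pool.map(mh_group_mean, jobs)
    print([c.mean() for c in chains])  # stage 2 would combine these draws
```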
Bayesian selector of adaptive bandwidth for multivariate gamma kernel estimator on $[0,\infty)^d$
Published in Journal of Applied Statistics, 2022
Sobom M. Somé, Célestin C. Kokonendji
Multivariate continuous (nonnegative orthant) datasets are abundant in nature and deserve to be dealt with quickly by a nonparametric approach. Kernel density estimation is a popular and widely used nonparametric method for data-driven density estimation. In order to estimate a probability density function (pdf) of d-variate nonnegative datasets with partially bounded support such as $[0,\infty)^d$, asymmetric associated kernels (e.g. [18]) are more suitable than the well-known symmetric ones (e.g. [27]; see the Appendix for a brief overview). Several asymmetric kernels have been introduced in the literature for distributions on the support $[0,\infty)$: gamma [7], inverse gamma [20], generalized gamma [13], inverse and reciprocal inverse Gaussian [26], lognormal [15,20], Birnbaum–Saunders [15], generalized Birnbaum–Saunders [22], and Weibull [25]. One also refers, for instance, to [12,14,20,21] for reduction of the additional bias term of the corresponding associated kernel estimators. Among all these asymmetric associated kernels on $[0,\infty)$, the gamma kernel [7] appears to be the oldest, best known, and most widely used. It is also popular for the flexibility it offers in practice in several domains such as reliability and finance (see, e.g., [13,20] for some derivatives and graphical comparisons with respect to the mode and the dispersion around the target). Our current investigation of the basic gamma kernel may be extended to other associated kernels on $[0,\infty)^d$ [18]. We shall consider the Bayesian selector of adaptive bandwidth, which is generally preferred to local and global selectors, as explained below.
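For concreteness, a minimal univariate sketch of a basic gamma kernel estimator in the style of [7] is given below; the data, the fixed bandwidth, and the use of scipy are assumptions, and the paper's Bayesian adaptive-bandwidth selector is not reproduced here.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kernel_estimate(x, samples, b):
    """Gamma kernel density estimate at x >= 0 with bandwidth b: each
    observation is smoothed by a Gamma(x/b + 1, scale=b) kernel, so no
    probability mass leaks below the support boundary at zero."""
    return np.mean(gamma.pdf(samples, a=x / b + 1.0, scale=b))

# Made-up nonnegative data (e.g. durations); true density lives on [0, inf)
rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=200)

grid = np.linspace(0.0, 10.0, 6)
b = 0.5  # illustrative fixed bandwidth (the paper selects it adaptively)
print([round(gamma_kernel_estimate(x, data, b), 4) for x in grid])
```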