Basic Approaches of Artificial Intelligence and Machine Learning in Thermal Image Processing
Published in U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer, Artificial Intelligence-Based Infrared Thermal Image Processing and Its Applications, 2023
The mean shift clustering algorithm assigns each pixel in the image to a cluster centroid, building on the concept of kernel density estimation. At each iteration, every pixel is shifted in the direction with the greatest local density of data points, so the pixels move step by step toward the regions where most of the data lie and eventually converge on a cluster centre (a mode of the estimated density). At the end of the algorithm, every pixel in the image has been assigned to a cluster. The advantage of mean shift clustering is that the number of clusters does not need to be specified beforehand; the algorithm determines it from the data itself.
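The iterative shift described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the chapter's implementation: the Gaussian kernel, the bandwidth, and the synthetic two-blob data are all assumptions chosen to make the behaviour visible.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50):
    """Shift every point toward the local weighted mean under a Gaussian kernel.

    Each point climbs toward a mode of the kernel density estimate;
    points that converge to the same mode form one cluster.
    """
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(shifted)):
            # Gaussian kernel weights of all original points relative to shifted[i]
            d2 = np.sum((points - shifted[i]) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))
            # move the point to the weighted mean of its neighbourhood
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    return shifted

# two well-separated 2-D blobs; note that no cluster count is supplied
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (20, 2)),
                  rng.normal(5, 0.3, (20, 2))])
modes = mean_shift(data, bandwidth=1.0)
# all points of a blob collapse onto (nearly) the same mode
labels = np.round(modes, 0)
```

Grouping points whose converged positions coincide recovers two clusters here, even though "2" was never passed to the algorithm.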
Machine Learning
Published in Michael Ljungberg, Handbook of Nuclear Medicine and Molecular Imaging for Physicists, 2022
where the scale/width h is a so-called hyper-parameter. A larger h corresponds to more smoothing and regularization, which makes the estimate less sensitive to noise but makes it difficult to capture details of the probability density. On the other hand, a small h makes it possible to capture more detail in the probability density estimate, but the method then becomes sensitive to noise and typically requires more training data.
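The trade-off between smoothing and detail can be demonstrated directly by evaluating a Gaussian kernel density estimate at two widths. This is an illustrative sketch; the sample, grid, and the two h values are assumptions, not taken from the text.

```python
import numpy as np

def gaussian_kde(x, sample, h):
    """Kernel density estimate with a Gaussian kernel of width h."""
    # average one Gaussian 'bump' of width h per observation
    z = (x[:, None] - sample[None, :]) / h
    return np.mean(np.exp(-0.5 * z ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, 200)
grid = np.linspace(-4, 4, 400)

rough = gaussian_kde(grid, sample, h=0.05)   # small h: noisy, many wiggles
smooth = gaussian_kde(grid, sample, h=0.8)   # large h: heavily regularized

def n_local_maxima(y):
    """Count interior points that are strictly above both neighbours."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))
```

The small-h estimate shows many spurious local maxima driven by individual observations, while the large-h estimate is essentially unimodal, matching the regularization behaviour described above.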
Survival Analysis
Published in Jianrong Wu, Statistical Methods for Survival Trial Design, 2018
where $\hat{f}$ is an estimate of the density function f, and the variance term is given by Greenwood's formula (2.3) at the time point of interest. To use this asymptotic variance formula, we have to estimate the density function f. A common type of density estimation is a window estimate obtained by using a kernel function. For example, Kosorok (1999) proposed an optimal window estimate based on the kernel function
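Greenwood's formula referenced here, Var(Ŝ(t)) = Ŝ(t)² Σ_{tᵢ ≤ t} dᵢ / (nᵢ(nᵢ − dᵢ)), can be computed alongside the Kaplan–Meier survival estimate. The sketch below is a generic illustration of that standard formula, not Kosorok's optimal window estimate; the toy data are invented.

```python
import numpy as np

def kaplan_meier_greenwood(times, events):
    """Kaplan-Meier estimate S(t) with Greenwood's variance at each event time.

    events[i] = 1 for an observed event, 0 for a censored observation.
    Returns a list of (t, S(t), Var(S(t))) tuples.
    """
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    at_risk = len(times)
    S, cum, out = 1.0, 0.0, []
    for t in np.unique(times):
        mask = times == t
        d = int(events[mask].sum())            # events at time t
        if d > 0:
            S *= (at_risk - d) / at_risk       # KM product-limit step
            if at_risk > d:                    # Greenwood sum term
                cum += d / (at_risk * (at_risk - d))
        at_risk -= int(mask.sum())             # drop events and censorings
        out.append((float(t), S, S ** 2 * cum))
    return out

# toy sample: five subjects, one censored at t = 3 (event = 0)
res = kaplan_meier_greenwood([2, 3, 3, 5, 8], [1, 1, 0, 1, 1])
```

At t = 2, S = 4/5 = 0.8 and the Greenwood variance is 0.8² · 1/(5·4) = 0.032; the censored observation at t = 3 reduces the risk set without a product-limit step of its own.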
Analysis of batched service time data using Gaussian and semi-parametric kernel models
Published in Journal of Applied Statistics, 2020
Xueying Wang, Chunxiao Zhou, Kepher Makambi, Ao Yuan, Jaeil Ahn
The choice of kernel is not of particular importance [11,29]. Studies suggest that most density estimation methods give similar results with different kernels, and the selection of a kernel is typically driven by computational convenience [28,29]. Owing to its convenient mathematical properties, we use the normal kernel, which is common in the literature [17]. In contrast to the choice of kernel function, the choice of bandwidth is crucial in density estimation [29]: it controls the smoothness and detail of the estimated curve. As the bandwidth decreases toward zero, the number of modes increases toward the number of data points and the density becomes very noisy; when the bandwidth is sufficiently large, the number of modes drops to one and any detailed features are smoothed away. Choosing a bandwidth therefore means balancing smoothness against loss of detail. There are various methods for this problem, and interesting proposals can be found in Fan and Gijbels [12]. For example, [10] proposed an optimal bandwidth chosen to minimize the mean integrated squared error, which is the approach we use in our case; see also [7], Ahmad and Amezziane [1], and others.
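One widely used practical compromise in this bandwidth trade-off is Silverman's normal-reference rule, which minimizes the asymptotic mean integrated squared error when the underlying density is roughly Gaussian. Note this is a standard rule of thumb offered for illustration, not the specific MISE-optimal bandwidth of [10]; the sample is synthetic.

```python
import numpy as np

def silverman_bandwidth(sample):
    """Normal-reference bandwidth: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5).

    Minimizes the asymptotic MISE of a Gaussian-kernel density estimate
    under a normal reference density; robust to outliers via the IQR term.
    """
    n = len(sample)
    sd = np.std(sample, ddof=1)
    iqr = np.subtract(*np.percentile(sample, [75, 25]))
    return 0.9 * min(sd, iqr / 1.34) * n ** (-1 / 5)

rng = np.random.default_rng(2)
sample = rng.normal(0.0, 1.0, 500)
h = silverman_bandwidth(sample)
```

For 500 standard-normal draws this yields h of roughly 0.25: small enough to track the shape of the density, large enough to suppress sample noise.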
A new look at the inverse Gaussian distribution with applications to insurance and economic data
Published in Journal of Applied Statistics, 2019
Due to their conceptual simplicity and practical and theoretical properties, kernel smoothers are among the most popular statistical methods for nonparametric density estimation. Given a random sample of size n, the estimate is built from n (usually symmetric) 'bumps' (the so-called kernels), each with equal weight 1/n, placed over the observations. A classical example, and the default in various software packages implementing this approach, is the Gaussian kernel. However, as stressed by Chen [13], Gaussian kernels (defined on the whole real line) produce an estimated density function which allocates probability mass outside the theoretical support of the data. Chen [13] proposes a nice and natural remedy: using kernels defined on the same support as the unknown density f, so that no probability mass is assigned outside it.
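The boundary problem Chen [13] points out is easy to quantify: fit a Gaussian-kernel estimate to strictly positive data and integrate it over the negative half-line, where the true density is exactly zero. This is an illustrative sketch; the exponential sample and the bandwidth are assumptions.

```python
import numpy as np

def gaussian_kde(x, sample, h):
    """Gaussian-kernel density estimate of width h evaluated at points x."""
    z = (x[:, None] - sample[None, :]) / h
    return np.mean(np.exp(-0.5 * z ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
# strictly positive data, e.g. exponential waiting times with support [0, inf)
sample = rng.exponential(1.0, 300)
h = 0.3  # illustrative bandwidth

# integrate the estimate over the negative half-line by a Riemann sum
neg_grid = np.linspace(-3.0, 0.0, 600)
vals = gaussian_kde(neg_grid, sample, h)
leaked = float(np.sum(vals) * (neg_grid[1] - neg_grid[0]))
```

For this sample, a non-trivial fraction of the total probability mass (on the order of ten percent) is assigned to negative values, which is exactly the leakage that support-matched kernels eliminate.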
Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning
Published in Acta Oncologica, 2018
Yu Sun, Hayley M. Reynolds, Darren Wraith, Scott Williams, Mary E. Finnegan, Catherine Mitchell, Declan Murphy, Annette Haworth
Accurate estimation of cell density in solid tumours can facilitate tissue classification [1], selection of the optimal treatment option [2] and prediction of treatment response [3]. Cell density estimation in several cancer types has been previously reported. Lyng et al. investigated the use of ADC maps from DWI to measure cell density in melanoma [4] and thereby predict chemotherapy efficacy; the results suggested that the method could be applicable in some but not all lesions [4]. Similarly, Gauvain et al. investigated the relationship between diffusion tensor imaging (DTI) and cell density in brain tumours for tissue classification and hence tumour grading [5]. Pichardo et al. developed a method to measure bone marrow cell density using MRI (multi-echo spoiled gradient-recalled imaging, MSGRI) and MR spectroscopy [6] for predicting bone marrow toxicity following whole-body irradiation. Atuegwu et al. combined DWI data with a mathematical model to determine breast cancer cell density and predict treatment outcomes; the prediction achieved a Pearson correlation of 0.70 (p < .05) [7]. Despite these studies, there are currently no methods for quantitative estimation of cell density in prostate cancer.