Basic Approaches of Artificial Intelligence and Machine Learning in Thermal Image Processing
Published in U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer, Artificial Intelligence-Based Infrared Thermal Image Processing and Its Applications, 2023
U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer
Cluster-based image segmentation involves grouping the image pixels into groups, also known as clusters (Mittal et al., 2021). The pixels within each cluster are as similar to each other as possible, while pixels belonging to different clusters are as dissimilar to each other as possible; in addition, the cluster centers are kept as far apart as possible. The common principle behind the different clustering algorithms is that the number of clusters is first specified by the user. Next, the centroids are calculated and the pixels are assigned to the clusters accordingly. The centroid calculation is then iterated until the centroids no longer change, i.e., the algorithm converges. Some of the important and frequently used algorithms in medical image processing are explained as follows.
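As a concrete illustration of this iterative scheme, the sketch below applies scikit-learn's KMeans to the intensity values of a synthetic grayscale image standing in for a thermal image; the function name and the choice of k = 3 are illustrative assumptions, not part of the chapter.

```python
# Minimal sketch: cluster-based segmentation of a grayscale image with k-means.
# The number of clusters k must be specified by the user, as described above.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image: np.ndarray, k: int = 3, seed: int = 0) -> np.ndarray:
    """Group pixel intensities into k clusters and return a label map."""
    pixels = image.reshape(-1, 1).astype(np.float64)   # one feature per pixel: intensity
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    labels = km.fit_predict(pixels)                    # centroids are re-estimated until they converge
    return labels.reshape(image.shape)

# Synthetic 64x64 image used only to make the sketch runnable.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
label_map = kmeans_segment(image, k=3)
print(np.unique(label_map))   # cluster indices 0 .. k-1
```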
Sjögren's Disease
Published in Jason Liebowitz, Philip Seo, David Hellmann, Michael Zeide, Clinical Innovation in Rheumatology, 2023
HCQ is commonly used for management of select SjD manifestations, including inflammatory musculoskeletal pain, fatigue, and autoimmune rashes (167, 183, 184). It interferes with toll-like receptor signaling, thus inhibiting type I IFN pathways (185). The evidence supporting its use in SjD, however, is limited. The JOQUER trial was the first large placebo-controlled trial to assess its efficacy in SjD (152). This twenty-four-week trial did not meet its primary end point, defined as 30% or greater improvement in two out of three visual analogue scales for dryness, pain, and fatigue. Post hoc reanalysis of the data, using a novel symptom-based patient clustering algorithm, suggested potential benefit of HCQ in one subgroup (186). In a retrospective study, a decrease in damage accrual was evident in the SjD patients who used HCQ (187).
Advances in Big Data and Machine Learning in Cancer Detection in Women-Associated Cancers
Published in Shazia Rashid, Ankur Saxena, Sabia Rashid, Latest Advances in Diagnosis and Treatment of Women-Associated Cancers, 2022
Dhaval Kumar Srivastava, Aditya Vikram Singh, Ankur Saxena
An investigation of the breast cancer data from the Wisconsin dataset in the UCI machine learning repository, aimed at developing accurate prediction models for breast cancer using data mining techniques, compared three classification techniques in the Weka software; the results demonstrated that sequential minimal optimization (SMO) attained a prediction accuracy of 96.2%, higher than that of the IBK and BF Tree methods [3]. Another study was conducted on medical data from the WDBC directory, containing 569 instances and 32 features, in which a feature-selection mechanism was used to reduce the number of features and the K-means clustering algorithm was applied to divide tumours into clusters. The subsequent application of a hybrid K-SVM model reduced the computational time significantly and achieved a higher accuracy of 97.38% [4]. The impact of gene expression (GE) and DNA methylation (DM) has also been reported in a study focused on predicting breast cancer from patients' genetic data, where the SVM classifier proved potent for breast cancer prediction, attaining the best results in terms of accuracy (96.33%) and precision (97.2%) [5].
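The hybrid K-SVM pipeline of [4] is not reproduced here; the sketch below only illustrates, under stated assumptions, how the steps described above could be combined in scikit-learn: univariate feature selection on the WDBC data (30 numeric features in scikit-learn's copy), K-means cluster labels appended as an extra feature, and an SVM evaluated on a held-out split. The number of selected features and the SVM settings are illustrative choices.

```python
# Illustrative sketch only, not the cited K-SVM implementation [4]:
# feature selection, K-means clustering of tumours, then an SVM classifier.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)                # 569 instances, 30 numeric features
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)  # shrink the feature set

# Cluster membership appended as one extra feature (one plausible hybridisation).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_sel)
X_hybrid = np.column_stack([X_sel, clusters])

X_tr, X_te, y_tr, y_te = train_test_split(
    X_hybrid, y, test_size=0.3, random_state=0, stratify=y)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```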
Harnessing machine learning for development of microbiome therapeutics
Published in Gut Microbes, 2021
Laura E. McCoubrey, Moe Elbadawi, Mine Orlu, Simon Gaisford, Abdul W. Basit
Compared to supervised methods, unsupervised learning does not address any pre-defined questions.22 At all stages of data mining and ML, the chance of bias should be reduced as much as possible. One could say that introducing a question to an algorithm leads to bias, as the algorithm will look to solve that particular problem. In unsupervised learning, an ML algorithm works to identify patterns in data without any prior operator input. This can subsequently lead to elements being identified that could not be conceived by the operator. Unsupervised ML methods can produce clustering or association outputs. Clustering algorithms identify distinct groups within data; association algorithms output rules found within data. Common unsupervised ML techniques include k-means clustering, principal component analysis, and k nearest neighbors.
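To make the clustering-type output described above concrete, the sketch below runs principal component analysis followed by k-means on synthetic data standing in for unlabelled abundance profiles (an assumption, not data from the review); no outcome variable or pre-defined question is supplied to the algorithms.

```python
# Sketch of unsupervised learning: no labels and no pre-defined question are given.
# The data are synthetic and merely stand in for unlabelled abundance profiles.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
samples = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(50, 20)),   # one latent group
    rng.normal(loc=3.0, scale=1.0, size=(50, 20)),   # a second latent group
])

embedding = PCA(n_components=2).fit_transform(samples)              # pattern discovery
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(groups))   # sizes of the clusters the algorithm identified
```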
Pharmacological management of adult patients with acute respiratory distress syndrome
Published in Expert Opinion on Pharmacotherapy, 2020
Maria Gabriella Matera, Paola Rogliani, Andrea Bianco, Mario Cazzola
To overcome these issues, a precision approach for ARDS, whereby therapies are specifically targeted to the patients most likely to benefit, has been proposed [145]. Sinha and Calfee suggested the possibility of identifying homogeneous subgroups and phenotypes in ARDS, each characterized by specific clinical features, outcomes, and response patterns [146]. However, the rationale for hyper-inflammatory or hypo-inflammatory ARDS phenotypes is speculative, and there is still not a single trial supporting a drug response in relation to stratification of ARDS into those two phenotypes. Moreover, identification of homogeneous subgroups or phenotypes in ARDS is not easy. Nevertheless, there is growing interest in identifying ARDS phenotypes, understanding the mechanisms that regulate them, and looking for any responsive treatable trait [147]. In particular, the use of clustering algorithms in critical care research could help address the heterogeneity of these patients [148].
A comparative analysis of clustering algorithms to identify the homogeneous rainfall gauge stations of Bangladesh
Published in Journal of Applied Statistics, 2020
Mohammad Samsul Alam, Sangita Paul
The monthly rainfall observed by BMD at different stations is utilized in this study to obtain annual, pre-monsoon, monsoon and post-monsoon rainfall. For each of the stations, annual precipitation is calculated by summing the monthly rainfall over the months January to December. In a similar fashion, for each station, the pre-monsoon, monsoon and post-monsoon precipitations are computed by summing the monthly rainfall over the months February to May, June to September and October to January, respectively. Data for the different precipitation series were constructed over the time span 1977–2012, which yields 36 observations for each of the stations. Therefore, in every situation, a clustering algorithm is applied to data of dimension
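A hypothetical sketch of the seasonal aggregation described above is given below; the table layout and column names ('station', 'year', 'month', 'rainfall') are assumptions, and for brevity January is attributed to its own calendar year rather than to the post-monsoon season that began the previous October.

```python
# Hypothetical sketch of the seasonal aggregation; the BMD data layout is assumed.
import pandas as pd

def seasonal_totals(monthly: pd.DataFrame) -> pd.DataFrame:
    """Sum monthly rainfall into annual, pre-monsoon, monsoon and post-monsoon series
    per station and year (columns 'station', 'year', 'month', 'rainfall' assumed)."""
    seasons = {
        "annual": list(range(1, 13)),     # January-December
        "pre_monsoon": [2, 3, 4, 5],      # February-May
        "monsoon": [6, 7, 8, 9],          # June-September
        "post_monsoon": [10, 11, 12, 1],  # October-January (spans the year boundary)
    }
    totals = {
        name: monthly[monthly["month"].isin(months)]
              .groupby(["station", "year"])["rainfall"].sum()
        for name, months in seasons.items()
    }
    return pd.DataFrame(totals).reset_index()
```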