Supervised and Semi-Supervised Identification of Users and Activities from Wearable Device Brain Signal Recordings
Published in B.K. Tripathy, J. Anuradha, Internet of Things (IoT), 2017
Glavin Wiechert, Matt Triff, Zhixing Liu, Zhicheng Yin, Shuai Zhao, Ziyun Zhong, Runxing Zhou, Pawan Lingras
This chapter also explores the use of unsupervised and semi-supervised learning techniques with the proposed data representation. Clustering using the K-means algorithm is one of the most popular unsupervised learning techniques. K-medoids is an alternative to K-means that finds the object most similar to all the other objects in the cluster, as opposed to computing the centroid of the cluster. These methods focus on optimizing within-cluster scatter and separation between the clusters. Neither K-means nor K-medoids provides the capability of incorporating additional optimization criteria. The fact that K-medoids offers a discrete search space, limited by the total number of objects in the dataset, can provide advantages for evolutionary searching. By combining K-medoids with an evolutionary algorithm, it is possible to perform multi-objective clustering. Peters [6] first proposed the use of evolutionary computing in the context of rough set theory; Lingras [7,8] subsequently explored both K-means and K-medoids based on rough set theory.
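The medoid described above, the cluster member most similar to all other members, can be found in a few lines. This is an illustrative sketch, not code from the chapter; the function name and toy data are assumptions:

```python
def medoid(cluster, dist):
    """Return the member of `cluster` with the smallest summed
    dissimilarity to every other member (the K-medoids prototype)."""
    return min(cluster, key=lambda x: sum(dist(x, y) for y in cluster))

# 1-D example with absolute difference as the dissimilarity measure;
# 2.0 and 3.0 tie on total distance, and min() keeps the first seen.
m = medoid([1.0, 2.0, 3.0, 10.0], lambda a, b: abs(a - b))  # m == 2.0
```

Because the medoid is always an actual dataset object, the search space is discrete and bounded by the number of objects, which is what makes it attractive for evolutionary searching as noted above.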
Rough-fuzzy Clustering
Published in Sankar K. Pal, Pabitra Mitra, Pattern Recognition Algorithms for Data Mining, 2004
CLARANS [184] is a k-medoids type algorithm which combines sampling techniques with the PAM (Partitioning Around Medoids) method. PAM selects an initial random sample of k medoids, from which it repeatedly tries to make a better choice of medoids. All possible pairs of objects are analyzed, where one object in each pair is considered a medoid and the other is not. The quality of the resulting clustering, measured by squared error, is calculated for each such combination. An object O_j is replaced with the object causing the greatest reduction in squared error. The set of best objects for each cluster in one iteration forms the medoids for the next iteration. In CLARA, which is a modified version of PAM, instead of taking the whole dataset into consideration, a small portion of the actual data is chosen as its representative. Medoids are then chosen from this sample using the PAM algorithm.
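The PAM swap step described above (evaluate every medoid/non-medoid pair, keep the replacement that most reduces the cost) can be sketched as follows; the function names and toy data are illustrative, not from the book:

```python
import itertools

def clustering_cost(medoids, data, dist):
    # Total dissimilarity of each object to its nearest medoid.
    return sum(min(dist(x, m) for m in medoids) for x in data)

def pam_swap(medoids, data, dist):
    """One PAM iteration: analyze all (medoid, non-medoid) pairs and
    apply the single swap giving the greatest cost reduction."""
    best = list(medoids)
    best_cost = clustering_cost(medoids, data, dist)
    for m, o in itertools.product(medoids, data):
        if o in medoids:
            continue  # only pairs where exactly one object is a medoid
        candidate = [o if x == m else x for x in medoids]
        cost = clustering_cost(candidate, data, dist)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

# Two obvious groups; swapping medoid 0 for 11 is the best move.
meds, cost = pam_swap([0, 1], [0, 1, 2, 10, 11, 12], lambda a, b: abs(a - b))
```

Iterating `pam_swap` until no swap improves the cost reproduces the "set of best objects in one iteration forms the medoids for the next" behavior described above.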
A Survey of Partitional and Hierarchical Clustering Algorithms
Published in Charu C. Aggarwal, Chandan K. Reddy, Data Clustering, 2018
Chandan K. Reddy, Bhanukiran Vinzamuri
K-medoids is a clustering algorithm which is more resilient to outliers compared to K-means [38]. Similar to K-means, the goal of K-medoids is to find a clustering solution that minimizes a predefined objective function. The K-medoids algorithm chooses the actual data points as the prototypes and is more robust to noise and outliers in the data. The K-medoids algorithm aims to minimize the absolute error criterion rather than the SSE. Similar to the K-means clustering algorithm, the K-medoids algorithm proceeds iteratively until each representative object is actually the medoid of the cluster. The basic K-medoids clustering algorithm is given in Algorithm 14.
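A minimal sketch of such an alternating K-medoids loop, assigning each object to its nearest medoid and then recomputing each medoid as the member minimizing the absolute-error criterion; this is an illustration under those assumptions, not the chapter's Algorithm 14 verbatim:

```python
def k_medoids(data, medoids, dist, max_iter=100):
    """Iterate until each representative object is the medoid of its
    cluster (i.e. the medoids no longer change)."""
    for _ in range(max_iter):
        # Assignment step: each object joins its nearest medoid's cluster.
        clusters = {m: [] for m in medoids}
        for x in data:
            nearest = min(medoids, key=lambda m: dist(x, m))
            clusters[nearest].append(x)
        # Update step: the new medoid of each cluster is the actual data
        # point minimizing the summed dissimilarity (absolute error).
        new_medoids = [
            min(pts, key=lambda c: sum(dist(c, p) for p in pts))
            for pts in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids

result = k_medoids([1, 2, 3, 8, 9, 10], [1, 8], lambda a, b: abs(a - b))
```

Because the prototypes are actual data points, a single distant outlier cannot drag a prototype away the way it drags a K-means centroid, which is the robustness property noted above.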
An XNOR-ResNet and spatial pyramid pooling-based YOLO v3-tiny algorithm for Monkeypox and similar skin disease detection
Published in The Imaging Science Journal, 2023
The usage of k-medoids clustering gives a time complexity that grows with n, the number of image samples in the training set, i, the number of iterations, and k, the number of clusters. Following the k-medoids clustering mechanism specified above, we determined the anchor boxes for the created dataset for Monkeypox and other similar skin diseases. Since the number of YOLO detection layers in the proposed XRS-YOLO v3-tiny algorithm is 3, we set the cluster value k to 9 and performed 7,000 iterations to obtain the anchor box values. For the specified dataset, we obtained the following values for the anchor boxes: [39 × 65, 84 × 116, 152 × 140, 103 × 270, 281 × 176, 179 × 297, 262 × 368, 352 × 299, 392 × 394].
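Anchor-box selection of this kind can be sketched as k-medoids over (width, height) pairs. A common dissimilarity for anchor clustering is 1 − IoU of corner-aligned boxes, which is assumed here; the tiny box list, k = 2, and function names are illustrative, not the paper's dataset or settings (the paper uses k = 9 and 7,000 iterations):

```python
def iou(a, b):
    """IoU of two (w, h) boxes aligned at a common corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def anchor_medoids(boxes, k, iters=100):
    """k-medoids on box sizes with 1 - IoU as the dissimilarity; the
    resulting medoids serve as the anchor boxes."""
    medoids = boxes[:k]  # naive deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:  # assign each box to the most-overlapping medoid
            j = max(range(k), key=lambda i: iou(b, medoids[i]))
            clusters[j].append(b)
        # new medoid = member box minimizing summed (1 - IoU) in-cluster
        new = [min(c, key=lambda m: sum(1 - iou(m, b) for b in c)) if c
               else medoids[i] for i, c in enumerate(clusters)]
        if new == medoids:
            break
        medoids = new
    return medoids

anchors = anchor_medoids([(10, 10), (12, 12), (100, 100), (90, 110)], 2)
```

Unlike k-means anchor selection, every anchor returned here is an actual ground-truth box size from the dataset.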
Cluster analysis and multi-level modeling for evaluating the impact of rain on aggressive lane-changing characteristics utilizing naturalistic driving data
Published in Journal of Transportation Safety & Security, 2022
Anik Das, Md Nasim Khan, Mohamed M. Ahmed, Shaun S. Wulff
In contrast to K-means clustering, the K-medoids method selects representative observations as centers, which are called medoids, and observations cluster around those medoids. K-medoids clustering can be used with arbitrary dissimilarity measures. The error function of this method is calculated as the sum of the dissimilarities between each observation and its respective medoid (Mohammadnazar, Arvin, & Khattak, 2021; Park & Jun, 2009), as shown in Equation 4: E = Σ_{j=1}^{k} Σ_{x_i ∈ C_j} d(x_i, m_j), where C_j denotes cluster j and m_j represents the medoid of cluster C_j. Note that several algorithms have been proposed for K-medoids clustering; however, this study considered one of the most widely used K-medoids algorithms, called Partitioning Around Medoids (PAM) (Kaufman & Rousseeuw, 2009).
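The error function of Equation 4 (summed dissimilarities between observations and their medoids) can be sketched directly; the function name and toy data are illustrative:

```python
def clustering_error(medoids, clusters, dist):
    """Equation-4-style error: E = sum over clusters C_j of
    d(x, m_j) for every observation x in C_j."""
    return sum(dist(x, m)
               for m, cluster in zip(medoids, clusters)
               for x in cluster)

# Two 1-D clusters with medoids 2 and 9; absolute difference as d(., .).
e = clustering_error([2, 9], [[1, 2, 3], [8, 9, 10]],
                     lambda a, b: abs(a - b))  # e == 4
```

Because `dist` is a parameter, any dissimilarity measure can be plugged in, which is the flexibility of K-medoids noted above.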
Clustered tabu search optimization for reservation-based shared autonomous vehicles
Published in Transportation Letters, 2022
Shun Su, Emmanouil Chaniotakis, Santhanakrishnan Narayanan, Hai Jiang, Constantinos Antoniou
Concerning clustering approaches, K–Means is an unsupervised clustering approach, which finds clusters based on the similarity of observations. The approach is an iterative process consisting of the following steps: (a) selecting cluster centroids, (b) (re)assigning points to the closest centroid, and (c) estimating cluster variation using a distance measure between the cluster points and the centroid. The iteration continues until there is no significant change in the cluster variations. For detailed information on K–Means clustering, the reader is referred to the work of Forgy (Forgy 1965). Similar to K–Means, K–Medoids is also an unsupervised clustering approach, which is based on the Medoid shift algorithm (Muelas, LaTorre, and Peña 2015). Compared to K–Means, K–Medoids is more robust to noise and outliers, as it minimizes pairwise dissimilarities rather than squared distances. We perform K–Medoids clustering using the Partitioning Around Medoids (PAM) algorithm. PAM is an iterative building and swapping procedure based on incremental stepping heuristics. For details about this algorithm, the reader is referred to Muelas et al. (Muelas, LaTorre, and Peña 2015). In this research work, we cluster the original demand (customer requests) based on the earliest pick-up and drop-off times of the requests (each request is represented as a node based on these two attributes). The clustered TS optimization solution procedure is shown in Algorithm 2.
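Steps (a)–(c) above can be sketched as a minimal K–Means loop; the 1-D points and fixed initial centroids are illustrative only, not the paper's demand data:

```python
def k_means(points, centroids, tol=1e-9, max_iter=100):
    """(a) centroids are given; then alternate (b) assignment and
    (c) centroid re-estimation until the change is insignificant."""
    for _ in range(max_iter):
        # (b) assign each point to the closest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[j].append(p)
        # (c) re-estimate each centroid as its cluster mean
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if max(abs(a - b) for a, b in zip(new, centroids)) < tol:
            break  # no significant change in cluster variation
        centroids = new
    return centroids

centers = k_means([1, 2, 3, 10, 11, 12], [1.0, 12.0])  # -> [2.0, 11.0]
```

Replacing the mean update in (c) with a medoid search over cluster members, as in PAM, yields the K–Medoids variant the paper actually uses.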