Nearest Neighbors
Published in Jan Žižka, František Dařena, Arnošt Svoboda, Text Mining with Machine Learning, 2019
Manhattan distance: The distance between two points is the sum of the absolute differences of their Cartesian coordinates. This distance $d_M$, named after the grid layout of most streets on the island of Manhattan (a borough of New York City), is an alternative to the Euclidean distance when it is possible to move through the space only parallel to mutually perpendicular axes: $$ d_M(A, B) = |a_1 - b_1| + |a_2 - b_2| + \dots + |a_n - b_n| = \sum_{i=1}^{n} |a_i - b_i| $$
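As a minimal illustration of this formula, here is a Python sketch; the function name and sample points are ours, not the authors':

```python
def manhattan_distance(a, b):
    """Sum of absolute coordinate differences between points a and b."""
    if len(a) != len(b):
        raise ValueError("points must have the same dimension")
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

# Example: d_M((1, 2, 3), (4, 0, 3)) = |1-4| + |2-0| + |3-3| = 5
print(manhattan_distance([1, 2, 3], [4, 0, 3]))  # -> 5
```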
Mathematical Tools for Image Processing
Published in Vipin Tyagi, Understanding Digital Image Processing, 2018
Apart from Euclidean, another popular distance metric is Manhattan distance, also known as ‘taxi-cab’ distance or ‘city block’ distance, defined as: $$ D_{4}((x_{1}, y_{1}), (x_{2}, y_{2})) = |x_{1} - x_{2}| + |y_{1} - y_{2}| $$
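In an image-processing setting this metric is typically evaluated on pixel coordinates. The following hedged sketch (NumPy usage, grid size, and reference pixel are our own assumptions, not code from the chapter) computes the $D_4$ distance from one reference pixel to every pixel of a small grid:

```python
import numpy as np

height, width = 5, 5
ref_y, ref_x = 2, 2  # reference pixel, assumed for the example

ys, xs = np.indices((height, width))          # row and column index grids
d4_map = np.abs(xs - ref_x) + np.abs(ys - ref_y)

print(d4_map)
# Pixels at D4 distance 1 form a diamond (the 4-neighbourhood) around the
# reference pixel, which is why this metric underlies 4-connectivity.
```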
Analysis of RNNs and Different ML and DL Classifiers on Speech-Based Emotion Recognition System Using Linear and Nonlinear Features
Published in Amit Kumar Tyagi, Ajith Abraham, Recurrent Neural Networks, 2023
Shivesh Jha, Sanay Shah, Raj Ghamsani, Preet Sanghavi, Narendra M. Shekokar
In K-NN, the class of an object is decided by a majority vote of its neighbors. In the K-NN classifier, the distance from a data point to every other known data point is calculated. Various distance metrics, such as the Euclidean distance and the Manhattan distance, are used to find the nearest data points. The K nearest neighbors are selected, and the data point is assigned the label of the majority of the K selected data points. K-NN is a nonparametric model, which makes it suitable for classification tasks where the input data requires a lot of preprocessing work (Figure 7.6).
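A minimal sketch of this procedure in Python, using the Manhattan distance for the neighbor search; the data, labels, and function name are illustrative assumptions, not taken from the chapter:

```python
from collections import Counter
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Label x by majority vote among its k Manhattan-nearest neighbours."""
    dists = np.abs(X_train - x).sum(axis=1)        # Manhattan distance to all points
    nearest = np.argsort(dists)[:k]                # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)   # count labels among neighbours
    return votes.most_common(1)[0][0]

X_train = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]])
y_train = np.array(["A", "A", "A", "B", "B", "B"])
print(knn_predict(X_train, y_train, np.array([4.5, 5.0]), k=3))  # -> "B"
```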
Electric vehicle charging demand forecasting using deep learning model
Published in Journal of Intelligent Transportation Systems, 2022
Zhiyan Yi, Xiaoyue Cathy Liu, Ran Wei, Xi Chen, Jiangpeng Dai
Clustering is a technique that divides objects into groups such that objects in the same group are more related to each other than to those in other groups. Clustering algorithms can be broadly categorized into partition-based, hierarchical-based, density-based, and grid-based algorithms (Mann & Kaur, 2013). Among them, hierarchical clustering is computationally efficient and is characterized by a greedy strategy. It has been widely used for spatial clustering (Liu & Lin, 2019; Zhang et al., 2019; Zhuang et al., 2020). In this project, we apply agglomerative clustering (Beeferman & Berger, 2000), one of the hierarchical clustering algorithms, to achieve this goal. Agglomerative clustering is a bottom-up clustering method in which each observation starts as its own cluster and pairs of clusters are merged as one moves up the hierarchy. Numerous distance metrics can be used to measure similarity; among them, the Euclidean distance and the Manhattan distance are commonly used for spatial clustering. The Euclidean distance computes the square root of the sum of squared differences between the coordinates of a pair of objects, while the Manhattan distance computes the sum of the absolute differences between their coordinates. In general, results obtained with the two metrics are very similar (Grabusts, 2011). For simplicity, the Euclidean distance is adopted in this article for similarity measurement.
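A hedged sketch of agglomerative clustering on 2-D coordinates with scikit-learn; the points and the cluster count are assumed for illustration and are not the article's data:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

points = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0],
                   [5.1, 4.9], [9.0, 0.5], [9.2, 0.4]])

# Default settings use Ward linkage with Euclidean distance, matching
# the metric the article adopts for similarity measurement.
model = AgglomerativeClustering(n_clusters=3)
labels = model.fit_predict(points)
print(labels)  # e.g. [0 0 1 1 2 2] -- three spatial groups
```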
Spectrum Sensing and Resource Allocation for Proficient Transmission in Cognitive Radio with 5G
Published in IETE Journal of Research, 2022
Step 2: Randomly initialize the centroids.
Step 3: Assign each SU to the closest centroid based on distance. Here the Manhattan distance, which sums absolute coordinate differences, is computed between the SUs and the centroids. We use the Manhattan distance instead of the Euclidean distance because the Euclidean distance is influenced by unusual values and may not provide an accurate distance; accurate distance computation leads to a robust result in group formation. The Manhattan distance between an SU and a centroid is computed as $d(s, c) = \sum_{i=1}^{n} |s_i - c_i|$, where $s$ represents the position of the SU and $c$ represents the position of the centroid. The use of the Manhattan distance also minimizes measurement errors, which leads to accurate distance calculation. The SUs are then assigned accordingly, and this process is iterated until all SUs are assigned to at least one centroid.
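As a loose illustration of this assignment step only (not the authors' implementation), the following Python sketch assigns each SU to its Manhattan-closest centroid; the SU coordinates, centroid positions, and all names are assumptions for the example:

```python
import numpy as np

sus = np.array([[1.0, 2.0], [8.0, 8.5], [2.0, 1.5], [7.5, 9.0]])  # SU positions
centroids = np.array([[2.0, 2.0], [8.0, 9.0]])                    # initial centroids

# Manhattan distance from every SU to every centroid: shape (n_sus, n_centroids)
dists = np.abs(sus[:, None, :] - centroids[None, :, :]).sum(axis=2)
assignment = dists.argmin(axis=1)  # index of the closest centroid per SU
print(assignment)                  # -> [0 1 0 1]
```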
Parameter identification for a three-dimensional aerofoil system considering uncertainty by an enhanced Jaya algorithm
Published in Engineering Optimization, 2022
Z. H. Ding, Z. R. Lu, F. X. Chen
Clustering categorizes data into several different types according to their characteristics, and is a useful tool for discovering the inherent structure in any given data set. A clustering centre can represent the whole cluster, since its generation incorporates the information of the other individuals in the colony (Cai et al. 2011; Ding, Li et al. 2019). Among these clustering approaches, the k-means clustering strategy is gaining attention owing to its ease of operation and effectiveness. The details of how to conduct the k-means clustering strategy can be found in Cai et al. (2011), Chen et al. (2018), Ding, Li, and Hao (2019) and Ding, Li et al. (2019). As mentioned already, in this article the k-means clustering strategy is introduced before solutions are updated. The Manhattan distance is used to determine the distance between different solutions during the k-means clustering process. The Manhattan distance is computationally simpler than the Euclidean distance, and adopting it for calculations is more convenient and time saving.
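As an illustrative sketch only (not the authors' implementation), the snippet below runs a few k-means-style iterations over candidate solution vectors using the Manhattan distance; the per-coordinate median update is the centre update consistent with the L1 metric (with the mean it becomes standard k-means), and all data, names, and parameters are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
solutions = rng.uniform(0.0, 10.0, size=(12, 3))  # 12 candidate solutions, 3 parameters
k = 3
centres = solutions[rng.choice(len(solutions), size=k, replace=False)]

for _ in range(10):  # a few fixed iterations for the sketch
    # Assign each solution to its Manhattan-closest centre
    d = np.abs(solutions[:, None, :] - centres[None, :, :]).sum(axis=2)
    labels = d.argmin(axis=1)
    # Update each centre as the per-coordinate median of its members,
    # keeping the old centre if a cluster happens to be empty
    centres = np.array([
        np.median(solutions[labels == j], axis=0) if np.any(labels == j) else centres[j]
        for j in range(k)
    ])

print(labels)  # cluster index of each candidate solution
```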