Machine Learning Algorithms Used in Medical Field with a Case Study
Published in K. Gayathri Devi, Kishore Balasubramanian, Le Anh Ngoc, Machine Learning and Deep Learning Techniques for Medical Science, 2022
In instance-based learning, the model is built around the training instances that are deemed important or required for decision making. These methods typically build up a database of example data and compare new data to it using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based approaches are termed winner-take-all methods; they are also called memory-based learning. The focus is on the representation of the stored instances and on the similarity measures used between them (a minimal sketch follows the list below). The most commonly used instance-based algorithms are:
- k-Nearest Neighbor (kNN)
- Self-Organizing Map (SOM)
- Learning Vector Quantization (LVQ)
- Support Vector Machines (SVM)
- Locally Weighted Learning (LWL)
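To illustrate the winner-take-all idea, here is a minimal sketch in Python with NumPy, assuming Euclidean distance as the similarity measure; the function name and toy data are hypothetical, not from the chapter.

```python
import numpy as np

def predict_1nn(X_train, y_train, x_query):
    """Winner-take-all prediction: the single stored instance most similar
    to the query (smallest Euclidean distance here) supplies the answer."""
    distances = np.linalg.norm(X_train - x_query, axis=1)  # distance to each stored instance
    best_match = int(np.argmin(distances))                 # index of the closest one
    return y_train[best_match]

# Toy usage: two stored instances, one query.
X_train = np.array([[1.0, 2.0], [5.0, 6.0]])
y_train = np.array(["class_a", "class_b"])
print(predict_1nn(X_train, y_train, np.array([1.2, 2.1])))  # -> "class_a"
```

kNN generalizes this by letting the k closest stored instances vote instead of a single winner.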
Machine Learning Paradigms
Published in Neeraj Kumar, Aaisha Makkar, Machine Learning in Cognitive IoT, 2020
There are situations in which we want to learn directly by analyzing the instances. Creating rules is a tedious job, and it becomes more complex as the volume of data grows. This kind of learning is known as instance-based learning. The system learns from the instances and represents the results not as a decision tree or rule set but in terms of the instances themselves. Modifying the instances in the data is easy in this kind of learning: when learning is rule-based, every new instance must be categorized under one of the rules, but when it is instance-based, the learning is inferred from the instances themselves. Whenever an instance is modified or a new one is added, it is compared with the existing instances using a distance metric to find the closest one. When a suitable match is found, the new instance is placed next to it; this is the nearest-neighbor classification method. What happens when we want to find multiple suitable matches for the new instance? Then the method used is the k-nearest-neighbors method. The basic idea is that the difference between instances plays the major role in finding the relationship among them, although it also depends on the number of attributes. The distance measure most commonly used is the standard Euclidean distance, which assumes that the attributes are normalized and of equal importance. Which attributes are important and which are not is still an open research topic. For nominal attributes, the distance is computed by comparing values: identical values are assigned a distance of zero and differing values a distance of one. Attribute weighting and the choice of distance metric are the basic methods used, and the right choice depends on the data and its range (a sketch of such a mixed distance appears after the list below). The result of this type of learning is that the instance set refines itself by discarding trivial and unnecessary instances. The complete framework of instance-based learning is as follows:
- Learning the different parameters of the instances.
- Finding the relationships among the different instances.
- Selecting low-priority instances.
- Discarding the selected instances.
- Forming the new data with the new structure.
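The paragraph above combines normalized Euclidean distance for numeric attributes with the zero/one rule for nominal ones, plus optional attribute weights. Below is a minimal sketch of such a mixed distance in Python with NumPy; the function name, index arguments, and toy attribute values are all hypothetical.

```python
import numpy as np

def mixed_distance(a, b, numeric_idx, nominal_idx, weights=None):
    """Distance between two instances with numeric and nominal attributes.
    Numeric attributes are assumed pre-normalized to [0, 1]; nominal
    attributes contribute 0 if identical and 1 otherwise (overlap rule).
    `weights` (one per attribute) defaults to equal importance."""
    n = len(a)
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    sq = 0.0
    for i in numeric_idx:
        sq += w[i] * (float(a[i]) - float(b[i])) ** 2   # squared numeric gap
    for i in nominal_idx:
        sq += w[i] * (0.0 if a[i] == b[i] else 1.0)     # zero/one nominal gap
    return np.sqrt(sq)

# Toy usage: attributes = (normalized_age, blood_type)
x = (0.30, "A")
y = (0.55, "O")
print(mixed_distance(x, y, numeric_idx=[0], nominal_idx=[1]))
```

Raising or lowering an entry of `weights` is one simple way to encode the attribute-importance judgment the text describes as an open problem.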
Averaged one-dependence inverted specific-class distance measure for nominal attributes
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2020
Fang Gong, Liangxiao Jiang, Dianhong Wang, Xingfeng Guo
Distance functions are used for measuring the similarity between two instances and are also key to achieving high performance in many machine learning algorithms, including instance-based learning (Aha, Kibler, & Albert, 1991), self-organising maps (Kohonen, Schroeder, & Huang, 2001), radial basis functions (Buhmann, 2003) and clustering (Li, Bräysy, Jiang, Wu, & Wang, 2013; Lloyd, 1982). For example, instance-based learners such as the k-nearest neighbour algorithm (KNN) classify a new instance based on its similarity to each training instance. The KNN algorithm searches the training dataset for the k most similar instances and assigns the class with the most votes as the predicted class for the new instance. Owing to their importance, distance functions have been widely researched by the machine learning community and successfully applied to many real-world application domains such as information retrieval, image processing, face recognition and bioinformatics (Bian & Tao, 2012; Hu, Zhan, Wu, Jiang, & Zhou, 2015; Jiang, Li, & Cai, 2009a; Jiang, Li, Zhang, & Cai, 2014; Sangineto, 2013).
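To make the description concrete, here is a minimal sketch of KNN with a pluggable distance function, in Python with NumPy; plain Euclidean distance is used as the default, and the names and toy data are hypothetical rather than taken from the article.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3, distance=None):
    """k-nearest-neighbour classification: rank training instances by a
    distance function and return the majority class among the k closest."""
    if distance is None:
        distance = lambda a, b: np.linalg.norm(a - b)  # default: Euclidean
    dists = [distance(x, x_query) for x in X_train]
    nearest = np.argsort(dists)[:k]                    # indices of k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]                  # class with most votes

# Toy usage: two well-separated clusters.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = ["a", "a", "b", "b"]
print(knn_predict(X_train, y_train, np.array([0.05, 0.1]), k=3))  # -> "a"
```

Because `distance` is a parameter, swapping in a different measure (such as one tailored to nominal attributes) changes the learner's behaviour without touching the voting logic.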
Instances selection algorithm by ensemble margin
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2018
Meryem Saidi, Mohammed El Amine Bechar, Nesma Settouti, Mohamed Amine Chikh
Hybrid methods try to find the smallest subset S that maintains or even increases generalisation accuracy on the testing data. They allow the removal of both internal and border points. Aha, Kibler, and Albert (1991) present the Instance-Based Learning Algorithms (IB1, IB2, IB3, and IB4); this family of approaches retains the instances misclassified by 1-NN. IB2 starts with S initially empty, and each instance in the training set T is added to S if it is not classified correctly by the instances already in S. IB2 is similar to CNN, except that IB2 does not seed S with one instance of each class and does not repeat the process after the first pass through the training set. This means that IB2 will not necessarily classify all instances in T correctly. Like the CNN algorithm, IB2 is very sensitive to presentation order and to noise, because noisy instances will usually be misclassified and therefore almost always retained, while more reliable instances are removed.
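A minimal sketch of the IB2 pass described above, in Python with NumPy, assuming 1-NN under Euclidean distance; the function name and return structure are hypothetical, but the logic follows the description: S starts empty and an instance is added only when the current S misclassifies it.

```python
import numpy as np

def ib2(X, y):
    """Single-pass IB2 instance selection: keep a training instance only if
    1-NN over the instances already kept misclassifies it. Because there is
    no second pass, the final S need not classify all of T correctly."""
    S_X, S_y = [], []
    for x, label in zip(X, y):
        if not S_X:                             # nothing stored yet: keep it
            S_X.append(x); S_y.append(label)
            continue
        d = np.linalg.norm(np.array(S_X) - x, axis=1)
        if S_y[int(np.argmin(d))] != label:     # misclassified by 1-NN on S
            S_X.append(x); S_y.append(label)
    return np.array(S_X), np.array(S_y)
```

Since this is one pass over T in the given order, reordering the training data (or injecting noise) changes which instances end up in S, which is exactly the sensitivity noted above.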
Stochastic reservoir operation with data-driven modeling and inflow forecasting
Published in Journal of Applied Water Engineering and Research, 2022
Raul Fontes Santana, Alcigeimes B. Celeste
Typical ISO models employ regression analysis to correlate the data. Unlike most other learning algorithms that globally represent the target function, such as linear or nonlinear regression, instance-based learning algorithms store the training examples and postpone generalization until a new instance is classified. Thus, they fit training examples only in a region around the query point (Fuentes and Solorio 2004).
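As a contrast with a global regression fit, here is a minimal sketch of such a lazy, local predictor in Python with NumPy: training examples are merely stored, and generalization is deferred to query time, using only the k stored examples nearest the query point. The function name and toy values are hypothetical, not from the paper.

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=3):
    """Local regression around the query point: predict with the
    distance-weighted average of the k nearest stored examples,
    rather than a single function fitted to all of the data."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]                 # the local region around the query
    w = 1.0 / (d[idx] + 1e-9)               # closer examples weigh more
    return float(np.sum(w * y_train[idx]) / np.sum(w))

# Toy usage with purely illustrative values.
X_train = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y_train = np.array([10.0, 12.0, 15.0, 19.0, 24.0])
print(knn_regress(X_train, y_train, np.array([2.5]), k=3))
```

Nothing is "trained" up front; each new query triggers its own local fit, which is the deferred-generalization behaviour the excerpt attributes to instance-based learners.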