Machine learning classifier for fault classification in photovoltaic system
Published in Rajesh Singh, Anita Gehlot, Intelligent Circuits and Systems, 2021
V.S. Bharath Kurukuru, Mohammed Ali Khan, Ahteshamul Haque, Arun Kumar Tripathi
Machine learning (Hale 1981) is a data analysis procedure that automates analytical model building using techniques that iteratively learn from data. These techniques include decision trees, k-nearest neighbours, and non-deterministic classification techniques such as the support vector machine (SVM) and the naïve Bayes (NB) classifier, which relies on probability theory. In general, these learning methods fall into two categories: the eager learning method and the lazy learning method. In the eager learning method, the classification model is constructed from labelled training data before any query arrives; the support vector machine is commonly described as an eager learner. In the lazy learning method, by contrast, model development is delayed, and classification is performed by examining the training instances most similar to each test instance; k-nearest neighbour is therefore commonly described as a lazy learner. In this paper, SVM classifiers (Chapelle 2007), being eager learners, are used to automate the fault classification process, because the model is pre-calculated during training and classification itself is therefore fast.
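As a rough illustration of the eager/lazy distinction, the following sketch fits an eager SVM and a lazy k-NN on the same data; the synthetic dataset and hyperparameters are illustrative assumptions, not those of the chapter:

```python
# Minimal sketch contrasting an eager learner (SVM) with a lazy learner (k-NN).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Eager: the SVM builds its model (support vectors, weights) at fit time,
# so classification afterwards is fast.
svm = SVC(kernel="rbf").fit(X_train, y_train)

# Lazy: k-NN merely stores the training data at fit time and defers the
# real work (distance computation, voting) to prediction time.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

print("SVM accuracy :", svm.score(X_test, y_test))
print("k-NN accuracy:", knn.score(X_test, y_test))
```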
Supplier Selection with Machine Learning Algorithms
Published in Turan Paksoy, Çiğdem Koçhan, Sadia Samar Ali, Logistics 4.0, 2020
Mustafa Servet Kιran, Engin Eșme, Belkιz Torğul, Turan Paksoy
The k-Nearest Neighbor (k-NN) algorithm, proposed by Fix and Hodges in 1951, is a sample-based classification algorithm that estimates the class of a new observation from the known observations in the training set (Fix and Hodges 1989). It is also called a lazy learning method because, unlike some supervised learning algorithms, it has no training stage in which the method's parameters are calculated from the training data. The k-NN algorithm examines the similarity between the new observation and the other observations in the training set. The similarity is found by calculating the metric distance between the new observation, whose class is sought, and the attribute variables of the previous observations. Figure 16 demonstrates, in two-dimensional space, how the similarity is measured between a new sample whose class is sought and the neighbors whose classes are known.
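A minimal sketch of this procedure, using a hypothetical two-dimensional dataset, computes the Euclidean distance from the new observation to every training observation and takes a majority vote among the k nearest:

```python
# Plain-NumPy k-NN: similarity as Euclidean distance, majority vote over k.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Metric distance between x_new and every row of the training set
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # Indices of the k closest (most similar) training observations
    nearest = np.argsort(distances)[:k]
    # Majority class among those neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y_train = np.array(["A", "A", "B", "B"])
print(knn_predict(X_train, y_train, np.array([1.2, 1.9])))  # -> "A"
```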
Signature Generation Algorithms for Polymorphic Worms
Published in Mohssen Mohammed, Al-Sakib Khan Pathan, Automatic Defense Against Zero-day Polymorphic Worms in Communication Networks, 2016
Mohssen Mohammed, Al-Sakib Khan Pathan
Instance-based learning is considered another category under the heading of statistical methods. Instance-based learning algorithms are lazy-learning algorithms in that they delay the induction or generalization process until classification is performed. Lazy-learning algorithms require less computation time during the training phase than eager-learning algorithms (such as decision trees, neural networks and Bayesian networks) but more computation time during the classification process. One of the most straightforward instance-based learning algorithms is the nearest neighbor algorithm.
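This trade-off can be made concrete with a rough timing sketch; the dataset size and the choice of a decision tree as the eager learner are illustrative assumptions:

```python
# Eager learners pay at training time, lazy learners at classification time.
import time
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)

for name, model in [("eager (tree)", DecisionTreeClassifier()),
                    ("lazy  (1-NN)", KNeighborsClassifier(n_neighbors=1))]:
    t0 = time.perf_counter(); model.fit(X, y)       # training phase
    t1 = time.perf_counter(); model.predict(X)      # classification phase
    t2 = time.perf_counter()
    print(f"{name}: train {t1 - t0:.3f}s, classify {t2 - t1:.3f}s")
```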
An integrated machine learning framework for effluent quality prediction in Sewage Treatment Units
Published in Urban Water Journal, 2023
Chandra Sainadh Srungavarapu, Abdul Gaffar Sheik, E. S. S. Tejaswini, Sheik Mohammed Yousuf, Seshagiri Rao Ambati
Also known as lazy learning, this method predicts outputs by constructing a model from the most similar samples in the neighborhood. The similar samples are selected by various similarity criteria, such as the Euclidean distance or exponential functions; the Euclidean distance is used as the criterion in the present work, and the samples are selected based on this similarity. Fig. S3 in the Supplementary data depicts a schematic of the JIT model. The model is constructed specifically for the new query data based on the similarity criterion and the output is predicted; hence, a new model is constructed each time query data are given. This allows the JIT model to adapt to gradual drifts, since it retrieves the most similar samples from the database for every query (Xiong et al. 2016; Urhan and Alakent 2020). In this model, a distance of 5 is used to select the relevant data, the criterion being the Euclidean distance $d(\mathbf{x}_q, \mathbf{x}_i) = \sqrt{\sum_{j=1}^{m} (x_{q,j} - x_{i,j})^2}$ between the query $\mathbf{x}_q$ and each stored sample $\mathbf{x}_i$.
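A minimal sketch of such a JIT scheme, assuming a hypothetical process database, the Euclidean-distance threshold of 5 from the text, and a linear model as the local learner, might look as follows:

```python
# Just-in-time (JIT) modelling: fit a fresh local model per query from the
# most similar stored samples. Database, threshold, and the linear local
# model are assumptions of this sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

def jit_predict(X_db, y_db, x_query, threshold=5.0):
    # Similarity criterion: Euclidean distance to every stored sample
    d = np.linalg.norm(X_db - x_query, axis=1)
    mask = d <= threshold            # keep only the most similar samples
    # Local model constructed specifically for this query
    local = LinearRegression().fit(X_db[mask], y_db[mask])
    return local.predict(x_query.reshape(1, -1))[0]

rng = np.random.default_rng(0)
X_db = rng.uniform(0, 20, size=(300, 2))   # hypothetical historical data
y_db = X_db @ np.array([1.5, -0.7]) + rng.normal(0, 0.1, 300)
print(jit_predict(X_db, y_db, np.array([10.0, 10.0])))
```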
Big Data Classification Using Enhanced Dynamic KPCA and Convolutional Multi-Layer Bi-LSTM Network
Published in IETE Journal of Research, 2023
Bayesian networks are frequently used to infer a class variable from feature variables in classification tasks. Results have shown that when the class variable has many parents and few children, the classification accuracy of models obtained by exact structure learning is substantially worse than that of other approaches. To address this issue, Sugahara et al. [13] proposed an exact-learning augmented naive Bayes, which constrains the class variable to have no parents and employs Markov blanket feature selection. Multi-class smooth SVM is the name given to the extension of the smooth support vector machine from binary to k-class classification using the ternary smooth support vector machine approach proposed in [14]; it handles the k-class classification problem well in terms of accuracy and efficiency. Numerical experiments on nine UCI datasets compare the classification accuracy of the multi-class smooth SVM with the one-vs.-one and one-vs.-rest schemes. The k-nearest neighbors (kNN) algorithm, an effective lazy learning method, has been successfully developed in practical applications. The kNN approach scales easily to huge datasets; experiments with big data and medical-imaging data demonstrate the accuracy and effectiveness of the suggested kNN classification [15].
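The one-vs.-one and one-vs.-rest schemes mentioned above can be sketched with scikit-learn's multi-class wrappers around an ordinary linear SVM; the iris dataset and the base classifier are illustrative, and this is not the smooth SVM of [14]:

```python
# Comparing one-vs.-one and one-vs.-rest decompositions of a multi-class
# problem into binary SVM subproblems.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
for name, clf in [("one-vs.-one ", OneVsOneClassifier(LinearSVC())),
                  ("one-vs.-rest", OneVsRestClassifier(LinearSVC()))]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold accuracy estimate
    print(name, scores.mean())
```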
An intelligent approach for predicting the strength of geosynthetic-reinforced subgrade soil
Published in International Journal of Pavement Engineering, 2022
Muhammad Nouman Amjad Raja, Sanjay Kumar Shukla, Muhammad Umer Arif Khan
Lazy learning is a ML method in which the majority of the computation is deferred to consultation time, that is, until a query (call) is made to the system (Webb et al. 2011). LKS is a lazy learning algorithm which generalises the training data using instance-based learning classification. This means that, unlike the ANN or other ML methods, the predictions are not inferred from a model fitted to the training data; instead, the complete data are stored in memory and, upon a call, the response is produced by the nearest-neighbour approach. K-star sums all the possible transformations between two instances and amalgamates them into a single class (Cleary and Trigg 1995). This is achieved by summing the probabilities over all possible transformations between the instances. This entropy-based learning technique has several advantages over other rule-based schemes, such as better handling of missing values and real-valued attributes. Mathematically, the K-star function is defined as follows (Cleary and Trigg 1995, Gao et al. 2019): $K^*(q \mid p) = -\log_2 P^*(q \mid p)$, where $P^*(q \mid p)$ represents the probability function, that is, the probability of all the paths from instance p to instance q.
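Under the simplifying assumption that, for real-valued attributes, the transformation probability decays exponentially with the separation of the values, a per-attribute sketch of the K-star measure might look as follows; the scale parameter x0 and the summation over attributes are assumptions of this sketch, not the full Cleary-Trigg algorithm:

```python
# Simplified K* entropic distance: P*(b|a) is approximated by an exponential
# transformation density, and K*(b|a) = -log2 P*(b|a), summed per attribute.
import math

def kstar_distance(p, q, x0=1.0):
    total = 0.0
    for a, b in zip(p, q):
        # Probability of transforming value a into value b, decaying
        # exponentially with |a - b| at scale x0 (a sketch assumption)
        p_star = (1.0 / (2.0 * x0)) * math.exp(-abs(a - b) / x0)
        total += -math.log2(p_star)   # entropy of the transformation
    return total

# The stored instance with the smallest K* distance is the nearest neighbour
train = {(1.0, 2.0): "A", (6.0, 9.0): "B"}
query = (1.2, 1.9)
nearest = min(train, key=lambda inst: kstar_distance(inst, query))
print(nearest, "->", train[nearest])   # -> (1.0, 2.0) -> A
```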