Performance of Diverse Machine Learning Algorithms for Heart Disease Prognosis
Published in Ayodeji Olalekan Salau, Shruti Jain, Meenakshi Sood, Computational Intelligence and Data Sciences, 2022
Dhruv Kaliraman, Gauri Kamath, Suchitra Khoje, Prajakta Pardeshi
AdaBoost is an algorithm that can be used in combination with different types of ML algorithms to enhance the overall performance of the model. A "weak" learner is an algorithm whose performance is only slightly better than chance. A strong ensemble classifier is generated by combining instances of such algorithms using techniques such as bagging and boosting. The outputs of the weak learners are compiled into a weighted sum that represents the final output of the boosted classifier. Exceptional results are obtained when AdaBoost is combined with decision trees as the weak learner. AdaBoost aims to enhance the predictive ability of the model, reduce computation time and dimensionality, and eliminate irrelevant features, so it selects only the features that contribute significantly to the accuracy and efficiency of the model. The algorithm follows a methodology in which a model is first created from the training data and a second model is then generated to rectify the errors of the previous one. Models are added until the training set is predicted perfectly or the maximum number of models is reached. At each step, the weights are reassigned to the instances, with higher weights allocated to instances that were incorrectly labeled; this is what makes the boosting adaptive. After applying standardization and PCA, tenfold cross-validation was applied to improve the prediction accuracy, and the maximum average accuracy achieved was 84.03%, as shown in Figure 1.8 (Table 1.6).
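As a minimal sketch of this setup in scikit-learn (the dataset, PCA component count, and estimator settings below are illustrative assumptions, not the chapter's actual configuration), standardization, PCA, AdaBoost with decision-tree weak learners, and tenfold cross-validation can be chained roughly as follows:

```python
from sklearn.datasets import load_breast_cancer  # stand-in dataset, not the chapter's heart data
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Standardize, reduce dimensionality with PCA, then boost shallow decision trees.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    AdaBoostClassifier(
        # Decision stump as the weak learner; the keyword is
        # `base_estimator` in scikit-learn releases before 1.2.
        estimator=DecisionTreeClassifier(max_depth=1),
        n_estimators=100,
    ),
)

X, y = load_breast_cancer(return_X_y=True)
scores = cross_val_score(model, X, y, cv=10)  # tenfold cross-validation
print(f"mean accuracy: {scores.mean():.4f}")
```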
Precast segmental bridge construction in seismic zones
Published in Fabio Biondini, Dan M. Frangopol, Bridge Maintenance, Safety, Management, Resilience and Sustainability, 2012
Fabio Biondini, Dan M. Frangopol
Boosting is an ensemble learning approach that builds a high-precision (strong) learning algorithm by combining learning algorithms that do not individually have high precision (weak learning algorithms), and AdaBoost is one of the boosting methods. AdaBoost is frequently used in pattern recognition problems. In each round, AdaBoost forms a learning hypothesis by applying a given learning algorithm; this is the weak learning algorithm. Within a round, the learning data are re-sampled according to a given probability distribution and learning is carried out. In the next round, the distribution is updated so that data misclassified in the previous round are more likely to be chosen. By repeating these rounds, multiple hypotheses with different characteristics are obtained. The strong algorithm unifies the weak learners through a weighted majority vote. Figure 10 shows the concept of AdaBoost: recognition is performed by combining multiple weak learning algorithms, as illustrated there. No restriction is placed on the individual weak learners; as long as the numbers of inputs and outputs match, different algorithms may all be used. By an appropriate choice of weak algorithm, AdaBoost can offer the same merits as soft computing methods such as neural networks.
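As a from-scratch sketch of these rounds (the variable names and the choice of decision stumps as weak learners are illustrative assumptions, not taken from the paper), the per-round weight update and the final weighted majority vote can be written as:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, rounds=50):
    """Discrete AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                 # initial distribution over the data
    hypotheses, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)    # weighted learning in this round
        pred = stump.predict(X)
        err = np.sum(w[pred != y])          # weighted error of the hypothesis
        if err >= 0.5:                      # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / (err + 1e-10))
        w *= np.exp(-alpha * y * pred)      # raise weights of misclassified data
        w /= w.sum()                        # renormalize the distribution
        hypotheses.append(stump)
        alphas.append(alpha)
    return hypotheses, alphas

def adaboost_predict(X, hypotheses, alphas):
    """Weighted majority vote over the weak hypotheses."""
    votes = sum(a * h.predict(X) for h, a in zip(hypotheses, alphas))
    return np.sign(votes)
```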
Learning Techniques
Published in Peter Wlodarczak, Machine Learning and its Applications, 2019
In boosting, a model is created incrementally: each model is trained on the instances the previous model misclassified. The most popular boosting algorithm is AdaBoost, short for adaptive boosting. AdaBoost is less prone to overfitting, but it is sensitive to noise and outliers in the data. AdaBoost is a heuristic approach that can improve the performance of the learner. A common problem in machine learning is that there may be a very large number of features, the problem of high dimensionality. AdaBoost selects only the features that improve the predictive power of the model, thereby reducing the dimensionality and, thus, the execution time.
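One way to observe this implicit feature selection in practice (a sketch assuming scikit-learn's AdaBoostClassifier with its default stump learner; the dataset and the zero-importance threshold are illustrative assumptions) is to inspect the fitted model's feature importances, which remain zero for features the boosted stumps never split on:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer  # illustrative dataset
from sklearn.ensemble import AdaBoostClassifier

X, y = load_breast_cancer(return_X_y=True)
model = AdaBoostClassifier(n_estimators=200).fit(X, y)

# Features the boosted stumps never used get zero importance;
# keeping only the used ones reduces the dimensionality.
selected = np.flatnonzero(model.feature_importances_ > 0)
print(f"kept {selected.size} of {X.shape[1]} features")
X_reduced = X[:, selected]
```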
Optimised ensemble learning-based IoT-enabled heart disease monitoring system: an optimal fuzzy ranking concept
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
N.V.L.M. Krishna Munagala, Lakshmi Rajeswara Rao Langoju, A. Daisy Rani, D.V. Rama Koti Reddy
Reasons for improving the classifiers: Autoencoders suffer from some common challenges, such as misinterpretation of important variables, imperfect decoding, overly lossy compression, and insufficient training data. Even though the AdaBoost technique is useful for processing imbalanced data, it concentrates on the misclassified samples rather than on the samples of the minority classes, which leads to the generation of more useless or redundant weak classifiers. As a result, AdaBoost suffers from performance degradation and overhead issues. LSTMs are easy to overfit and require longer training times and more memory. Fuzzy logic, in turn, is based entirely on human expertise and intelligence; the rules of the fuzzy system need to be updated constantly, otherwise it may produce inaccurate results.
Research on real-time tracking of table tennis ball based on machine learning with low-speed camera
Published in Systems Science & Control Engineering, 2018
Yun-Feng Ji, Jian-Wei Zhang, Zhi-hao Shi, Mei-Han Liu, Jie Ren
The matching of images is generally divided into two steps: image preprocessing and feature matching. At present, the commonly used preprocessing methods are LBP (Local Binary Patterns), LGP (Local Gradient Patterns), and HoG (Histogram of Oriented Gradients). Generally, the LBP and LGP algorithms are sensitive to local texture features, while the HoG algorithm maintains good invariance to optical geometry and image deformation. These preprocessing methods extract the characteristics of the image. As for the training method, the SVM (Support Vector Machine) and AdaBoost algorithms are the most widely used for classifying the image features. The SVM algorithm can use a kernel function to fit a maximum-margin hyperplane in a high-dimensional feature space, which makes it suitable for classifying nonlinear data sets. The AdaBoost algorithm is an accumulation of several weak classifiers and adapts well to the classification of unknown data.
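As a minimal sketch of this preprocessing-plus-classification pipeline (the image size, HoG parameters, and the synthetic data below are assumptions for illustration, not the paper's settings), HoG descriptors can be fed into a kernel SVM as follows:

```python
import numpy as np
from skimage.feature import hog          # HoG preprocessing
from sklearn.svm import SVC              # kernel SVM for nonlinear classification

rng = np.random.default_rng(0)
# Stand-in data: 200 grayscale 64x64 patches with binary labels (ball / no ball).
images = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

# Step 1: preprocessing, extract a HoG descriptor per image.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

# Step 2: classification, an RBF kernel fits a maximum-margin
# hyperplane in the induced high-dimensional feature space.
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:5]))
```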
Assessment of supervised machine learning algorithms using dynamic API calls for malware detection
Published in International Journal of Computers and Applications, 2022
Gradient boosting (GB) uses different loss functions and error-measuring criteria than AdaBoost. Like AdaBoost, the GB models produced better results as we increased the value of n_estimators, but here we want to analyze the impact of the loss and error-calculating criteria. The exponential loss function gave a better result than the default deviance loss function. Three error-measuring functions, MSE, Friedman_MSE, and MAE, are used in the malware classifiers to update the weights for the next iteration accordingly. As shown in Table 7, Friedman_MSE produced the highest accuracy with the exponential loss function.
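In scikit-learn terms (a sketch assuming that library was used; the dataset, n_estimators value, and CV folds are placeholders, and recent releases rename the 'deviance'/'mse'/'mae' options), the best-performing combination from Table 7 corresponds roughly to:

```python
from sklearn.datasets import make_classification  # stand-in for the API-call dataset
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

# Exponential loss (instead of the default deviance/log-loss) combined with
# the Friedman MSE split criterion, the pairing reported as most accurate.
clf = GradientBoostingClassifier(
    loss="exponential",
    criterion="friedman_mse",
    n_estimators=300,
)
print(cross_val_score(clf, X, y, cv=5).mean())
```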