Evaluation of factors affecting long-term creep of concrete using machine learning regression models
Published in Joan-Ramon Casas, Dan M. Frangopol, Jose Turmo, Bridge Safety, Maintenance, Management, Life-Cycle, Resilience and Sustainability, 2022
Machine learning (ML) algorithms have become ubiquitous owing to their ease of implementation. ML models are often used to find trends and patterns in given observations; for quantitative data, regression models are used to find such patterns. In recent years, many researchers in the field of construction materials have applied ML models to predict compressive strength (Marani et al. 2020), elastic modulus (Han et al. 2020), shear capacity (Solhmirzaei et al. 2020, Vu & Hoang 2015), flexural strength (Kang et al. 2021), crack propagation (Bayar & Bilir 2019), and drying shrinkage (Bal & Buyle-bodin 2013). ML models have also been used to predict the crack condition of highway pavements (Inkoom et al. 2019), detect bridge defects (Zhu et al. 2019), assess bridge condition (Liu & Zhang 2020), classify cracks in roads (Jo & Jadidi 2019), and predict fatigue damage of highway suspension bridge hangers (Deng et al. 2020). Ensemble learning is a machine learning paradigm that improves stability by training base learner models, such as decision trees, support vector machines and neural networks, and aggregating their outputs to produce a single decision. Ensemble tree machine learning models have proved useful for solving poorly understood and complex problems (Dietterich 2000b).
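The aggregation step at the heart of this paradigm can be illustrated with a short, self-contained sketch. The "base learners" below are simulated noisy classifiers rather than real trained models (all names and parameters are illustrative), but the majority-vote mechanism is the one described above:

```python
import numpy as np

# Minimal sketch: aggregate several base learners' outputs into one
# decision by majority vote. Base learners are simulated classifiers
# that each predict the true label with probability 0.8, independently.
rng = np.random.default_rng(42)
n, n_learners, flip_prob = 5000, 5, 0.2

y = rng.integers(0, 2, size=n)                    # ground-truth labels
flips = rng.random((n_learners, n)) < flip_prob   # independent errors
base_preds = np.where(flips, 1 - y, y)

# Aggregate: majority vote across the five learners (odd count, no ties).
ensemble = (base_preds.mean(axis=0) >= 0.5).astype(int)

single_acc = (base_preds[0] == y).mean()
ensemble_acc = (ensemble == y).mean()
print(f"single base learner accuracy: {single_acc:.3f}")
print(f"majority-vote accuracy:       {ensemble_acc:.3f}")
```

Because the individual errors are independent, the vote is wrong only when a majority of learners err at once, which is why the aggregate is markedly more accurate (and more stable) than any single base learner.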
Smart War on COVID-19 and Global Pandemics
Published in Chhabi Rani Panigrahi, Bibudhendu Pati, Mamata Rath, Rajkumar Buyya, Computational Modeling and Data Analysis in COVID-19 Research, 2021
Anil D. Pathak, Debasis Saran, Sibani Mishra, Madapathi Hitesh, Sivaiah Bathula, Kisor K. Sahu
In general, pandemics spread from region to region like wildfire, and SARS-CoV-2 is no exception (Costantino et al. 2020); being a contagious disease, it spreads via contact, coughing, sneezing, etc. (J. Wang and Du 2020). Time series models can be utilized to forecast the rate of increase of cases over the next few weeks (Weigend 1994), and ensemble learning algorithms can be utilized to enhance the accuracy of the prediction (Giovanni and Elder 2010). Furthermore, blockchain has proved effective in crisis management situations such as disaster management (Demir et al. 2018), so extrapolating its use to containing a pandemic is a logical extension of the idea and should be seriously evaluated. It can provide valuable data to governments which, if properly mined, can yield vital clues and recommendations as to how to contain the virus. It can also act as a platform where all the concerned authorities, such as medical professionals, governments, media, researchers, and others, can update each other about the situation and avert an impending danger.
Predictive Analysis of Type 2 Diabetes Using Hybrid ML Model and IoT
Published in Sudhir Kumar Sharma, Bharat Bhushan, Narayan C. Debnath, IoT Security Paradigms and Applications, 2020
Abhishek Sharma, Nikhil Sharma, Ila Kaushik, Santosh Kumar, Naghma Khatoon
The proposed model uses the stacking method. Stacking is an ensemble learning technique in which predictions are taken from multiple classifiers and used to train a meta-classifier; this approach typically gives better predictive performance than any single model. The base-level models, also known as base learners, are trained on the training set, and the final-level model, known as the meta-model (here, a meta-classifier), is then trained on the outputs of the base-level models. The base models' outputs used as inputs at this stage are known as meta-features. The meta-classifier can be trained either on the predicted class labels or on the class probabilities output by the ensemble. Figure 14.6 shows the schematic of the stacking classifier framework.
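The two-level scheme can be sketched as follows. This is a hedged, minimal illustration, not the chapter's actual model: the base learners are simple threshold classifiers, and a least-squares linear model stands in for the meta-classifier; the data and thresholds are invented for the example.

```python
import numpy as np

# Level-0 base learners produce meta-features; a level-1 meta-model is
# trained on those outputs (stacking). Toy rule: class 1 when x0 + x1 > 1.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 2))
y = (X.sum(axis=1) > 1).astype(int)
X_train, y_train = X[:300], y[:300]
X_test, y_test = X[300:], y[300:]

def stump_predict(X, feature, thr):
    """Base learner: predict 1 when a single feature exceeds a threshold."""
    return (X[:, feature] > thr).astype(int)

# Level 0: each base learner sees only one feature; its predictions
# become the meta-features.
base_models = [(0, 0.5), (1, 0.5)]
meta_train = np.column_stack([stump_predict(X_train, f, t) for f, t in base_models])

# Level 1: fit the meta-model on the base outputs (plus a bias column).
A = np.column_stack([meta_train, np.ones(len(meta_train))])
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Predict held-out data by passing base outputs through the meta-model.
meta_test = np.column_stack([stump_predict(X_test, f, t) for f, t in base_models])
scores = np.column_stack([meta_test, np.ones(len(meta_test))]) @ w
stacked_acc = ((scores >= 0.5).astype(int) == y_test).mean()
print(f"stacked accuracy: {stacked_acc:.3f}")
```

In practice the meta-features are usually generated from out-of-fold predictions so the meta-classifier does not simply memorize the base models' training behaviour; library implementations such as scikit-learn's `StackingClassifier` automate this.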
The Deep Learning ResNet101 and Ensemble XGBoost Algorithm with Hyperparameters Optimization Accurately Predict the Lung Cancer
Published in Applied Artificial Intelligence, 2023
Saghir Ahmed, Basit Raza, Lal Hussain, Amjad Aldweesh, Abdulfattah Omar, Mohammad Shahbaz Khan, Elsayed Tag Eldin, Muhammad Amin Nadim
XGBoost, proposed by Chen and Guestrin (2016), is a supervised ML algorithm that implements a boosting process to yield accurate models: a predictive model is trained on labeled examples and then applied to new, unseen examples. Boosting is an ensemble learning method that builds many models sequentially, with each new model attempting to correct the deficiencies of the preceding one (Friedman 2001). XGBoost extends generalized gradient boosting by adding a regularization term to combat overfitting and by supporting arbitrary differentiable loss functions. These properties make XGBoost more robust in improving lung cancer detection performance.
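The sequential residual-correction idea can be shown in a few lines. This is a hedged sketch of plain gradient boosting with regression stumps on an invented 1-D target, not XGBoost itself; XGBoost adds the regularized objective and second-order gradient information on top of this loop.

```python
import numpy as np

# Gradient boosting sketch: models are added sequentially, each fitted to
# the residuals (deficiencies) of the ensemble built so far.
rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * X)                      # smooth illustrative target

def fit_stump(x, residual):
    """Best single-split regression stump (minimizes squared error)."""
    best = None
    for thr in np.linspace(0.05, 0.95, 19):
        left, right = residual[x <= thr], residual[x > thr]
        if left.size == 0 or right.size == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, thr, left.mean(), right.mean())
    return best[1:]

pred = np.zeros_like(y)
lr = 0.5                                       # shrinkage (learning rate)
for _ in range(50):
    residual = y - pred                        # negative gradient of squared loss
    thr, left_mean, right_mean = fit_stump(X, residual)
    pred += lr * np.where(X <= thr, left_mean, right_mean)

boosted_mse = ((y - pred) ** 2).mean()
print(f"MSE after 50 boosting rounds: {boosted_mse:.4f}")
```

Each round fits only what the previous rounds left unexplained, so the error shrinks steadily from the initial constant-zero model; the shrinkage factor `lr` plays the same overfitting-control role that the learning rate does in XGBoost.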
Integrated mixture model and ensemble learning geographic object-based image analysis for road network extraction
Published in Journal of Spatial Science, 2023
Elaveni Palanivel, Shirley Selvan
Ensemble learning involves combining multiple weak classifiers according to a defined set of rules, thus improving the overall performance (Zhou 2012). Rather than choosing a single algorithm and trying to fit the model to our data, ensemble learning provides the flexibility to utilise the strengths of multiple classifiers. The commonly used ensemble learning techniques are bagging, boosting, and stacking. Bagging is preferred for our application because multiple models can be trained independently and in parallel, which reduces the processing time. However, instead of the most commonly used Random Forest bagging classifier (RFBC) (Zhang et al. 2022), the subspace discriminant classifier (SDC) is used: it is designed to maximise the difference between the class means in the selected feature subspace, which can lead to better discrimination between classes than RFBC, which does not explicitly optimise for class separation. SDC also has low variance, which helps avoid overfitting.
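The bagging mechanism itself is simple enough to sketch. In this hedged illustration a 1-D threshold classifier stands in for the subspace discriminant classifier used in the paper (the random-subspace variant would additionally sample feature subsets, which a 1-D toy problem cannot show); the data and noise level are invented:

```python
import numpy as np

# Bagging sketch: each base model is trained on an independent bootstrap
# resample (so training parallelizes naturally) and the final decision is
# a majority vote across the resampled models.
rng = np.random.default_rng(7)
n = 300
X = rng.uniform(0, 1, n)
labels = (X > 0.5).astype(int)                 # true decision rule
noisy = rng.random(n) < 0.15                   # 15% label noise
y = np.where(noisy, 1 - labels, labels)

def fit_stump(x, y):
    """Pick the threshold with the fewest training errors."""
    thrs = np.linspace(0.05, 0.95, 19)
    errors = [np.mean((x > t).astype(int) != y) for t in thrs]
    return thrs[int(np.argmin(errors))]

n_models = 25
votes = np.zeros(n)
for _ in range(n_models):
    idx = rng.integers(0, n, n)                # bootstrap sample, with replacement
    t = fit_stump(X[idx], y[idx])
    votes += (X > t).astype(int)
bagged = (votes / n_models >= 0.5).astype(int)

bagged_acc = (bagged == labels).mean()
print(f"bagged accuracy vs. true rule: {bagged_acc:.3f}")
```

Averaging over bootstrap resamples reduces the variance contributed by any single noisy training set, which is the low-variance behaviour the excerpt relies on to avoid overfitting.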
A generic evolutionary ensemble learning framework for surface roughness prediction in manufacturing
Published in International Journal of Computer Integrated Manufacturing, 2023
Shutong Xie, Zongbao He, Chunjin Wang, Chao Liu, Xiaolong Ke
Ensemble learning methods can integrate multiple simple models to obtain more accurate predictions and, compared with neural network methods, require less data. Intelligent optimization algorithms such as the genetic algorithm (GA) can optimize the weight assignment of an ensemble learning framework to obtain better solutions, and have been successfully applied in industry (Yin et al. 2019; Jing et al. 2020; Fountas and Vaxevanidis 2021a, 2021b). Therefore, in this paper, a novel Generic Evolutionary Ensemble Learning (GEEL) framework is designed. The method combines Random Forest (RF), Extreme Gradient Boosting (XGBoost), Least Absolute Shrinkage and Selection Operator (LASSO), Elastic Net Regression (ENR), Ridge Regression (RR), Support Vector Regression (SVR), Linear Regression (LR), Gradient Boosting Regression (GBR), Stochastic Gradient Descent Regression (SGDR) and Extra-Trees Regressor (ETR), optimizing the weights of the different models by GA-based weighted averaging. The proposed method is better suited to small-sample data than neural networks, and it can obtain better ensemble performance than traditional machine learning methods.
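The weight-optimization step can be sketched with a mutation-only evolutionary loop, a deliberately minimal stand-in for the GA in the framework (a real GA would also use crossover, and the base models would be the trained regressors listed above; here three synthetic predictors with different error characteristics are invented for illustration):

```python
import numpy as np

# Evolutionary search over the weights of a weighted-average ensemble.
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 200)
y_true = np.sin(2 * np.pi * X)

preds = np.stack([
    y_true + rng.normal(0, 0.2, X.size),       # noisy but unbiased model
    0.3 * y_true,                              # model shrunk toward zero
    y_true + 0.6,                              # model with constant offset
])

def mse_of(w):
    """MSE of the ensemble using normalized non-negative weights."""
    w = np.abs(w) / np.abs(w).sum()
    return np.mean((w @ preds - y_true) ** 2)

# Minimal evolutionary loop: keep the fittest weight vectors, mutate them.
pop = rng.random((40, 3))
for _ in range(60):
    scores = np.array([mse_of(w) for w in pop])
    parents = pop[np.argsort(scores)[:10]]     # 10 lowest-MSE survivors
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 3))
    pop = np.vstack([parents, np.abs(children)])

ga_mse = min(mse_of(w) for w in pop)
equal_mse = mse_of(np.ones(3))
print(f"evolved-weight ensemble MSE: {ga_mse:.4f}")
print(f"equal-weight ensemble MSE:   {equal_mse:.4f}")
```

The evolved weights down-weight the biased predictors, beating the naive equal-weight average; this is the same effect the GEEL framework seeks when assigning weights across its ten regressors.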