Applications in Iron and Steel Making
Published in Nirupam Chakraborti, Data-Driven Evolutionary Modeling in Materials Technology, 2023
In his detailed study and the journal articles that resulted from it, Vuolio (2021) emphasized the nonlinear nature of the desulphurization kinetics, which renders the interactions between the dependent and independent variables nonlinear as well. For this reason, a log-linear form of the prediction equation was formulated, which ultimately led to a ‘conditioned least-squares objective function.’ The data-driven model here was essentially a feedforward neural network, with a single hidden layer used to avoid overfitting. Such single hidden layer feedforward networks often require many epochs to train, which can be sped up significantly by introducing the Extreme Learning Machine (ELM) concept (Huang et al., 2006). ELM requires the activation functions in the hidden layer to be infinitely differentiable. Once that is assured, the weights and biases in the lower part of the network are assigned randomly, while the output weights are determined analytically. Vuolio (2021) included an ELM strategy in the inner loop of their data-driven modeling, thereby substantially reducing the computing load. However, their final model was trained using a Bayesian approach. For variable selection, Vuolio (2021) employed binary- and integer-coded genetic algorithms, the strategy being to carry out variable selection and optimization of the number of hidden neurons in tandem. Their models worked quite well for the noisy desulfurization data from a Nordic steel plant.
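The ELM idea described above can be sketched in a few lines: hidden-layer weights and biases are drawn at random, and only the output weights are solved analytically by least squares. This is a minimal illustrative example on toy data, not Vuolio's actual desulphurization model; the layer sizes and the tanh activation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=30):
    """Train a single-hidden-layer ELM: random hidden layer, analytic output weights."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights, one shot
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: learn y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=30)
max_err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

Because no gradient iterations are involved, the entire "training" step is a single pseudoinverse computation, which is why embedding ELM in the inner loop of a larger search cuts the computing load so sharply.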
A Study on the Extreme Learning Machine and Its Applications
Published in Sk Md Obaidullah, KC Santosh, Teresa Gonçalves, Nibaran Das, Kaushik Roy, Document Processing Using Machine Learning, 2019
Himadri Mukherjee, Sahana Das, Subhashmita Ghosh, Sk Md Obaidullah, KC Santosh, Nibaran Das, Kaushik Roy
The primary motivation for extreme learning machines (ELMs) is the slow learning rate of conventional neural networks. The ELM algorithm is close to the single hidden layer feed forward neural network (SHLFFNN) for regression as well as classification, with three prime differences:

- The number of hidden neurons in an ELM is much larger than in a neural network trained with the back-propagation algorithm.
- The weights from the input to the hidden layer are created randomly, by drawing values from a continuous uniform distribution; this is called ‘random projection’.
- The output weights are solved by least-squares regression, i.e. using the Moore–Penrose generalized inverse.

ELMs are the contemporary flavor of random projection, accompanied by theoretical properties suggesting they can fit essentially any structure through a least-squares procedure. The concept rests on the theory of empirical risk minimization, and only one iteration is needed to finish the learning process.
Spatial domain steganographic method detection using kernel extreme learning machine
Published in Brij B. Gupta, Nadia Nedjah, Safety, Security, and Reliability of Robotic Systems, 2020
We can see that the previous works discussed above fall into two broad directions: first, those in which researchers experimented with different image feature sets and their combinations using SVM as the learning algorithm, and second, those in which researchers deviated from the norm of using SVM for classification and instead used other machine learning tools and frameworks. Motivated by this, in the present paper we use KELM for this multi-class classification objective. The extreme learning machine (ELM) is a single-layer feedforward network proposed by Huang et al. (2006). It offers great scalability and generalizability, and performs equally well on binary classification, multi-class classification, and regression problems.
A time efficient offline handwritten character recognition using convolutional extreme learning machine
Published in The Imaging Science Journal, 2023
Raghunath Dey, Jayashree Piri, Dayal Kumar Behera, Asif Uddin Khan
Around a decade ago, deep learning (DL) and classical machine learning approaches paved the way for extreme learning machines (ELMs). ELMs employ random weights and biases, a fundamental difference from regular DL. ELM approaches, based on a single-layer feed-forward neural network (SLFFNN), are simple and quick [4]. The DL architecture performs admirably compared to neural networks with only a few hidden layers. An ELM is a feed-forward neural network whose hidden layer has random biases; its output weights form a matrix computed from the random input weights and the hidden-neuron responses. A back-propagation feed-forward neural network (BPFFNN) may be up to 1000 times slower than an ELM when dealing with N data samples [5]. In ELMs, the activation function parameters are determined randomly before the training input is seen; in classic neural networks, these parameters are dataset-dependent.
Estimating Spinning Reserve Capacity With Extreme Learning Machine Method in Advanced Power Systems Under Ancillary Services Instructions
Published in Electric Power Components and Systems, 2022
Extreme learning machine (ELM) is a single hidden layer feed forward artificial neural network (ANN) model whose input weights are assigned randomly and whose output weights are calculated analytically. In ELM, in addition to activation functions such as sigmoidal, sine, Gaussian and hard limit in the hidden layer, non-differentiable or discrete activation functions can also be used, unlike in a conventional ANN [35, 36]. The performance of traditional feed forward neural networks depends on parameters such as momentum and learning rate. In such networks, the weights and threshold values need to be updated with gradient-based learning algorithms. To ensure good performance, however, the learning process takes time, and the error can become stuck at a local minimum. Changing the momentum value may prevent the error from getting stuck at a local minimum, but it does not shorten the long learning process [37]. The ELM network is a customized version of a single hidden layer feed forward ANN model. Figure 5 shows the structure of a single hidden layer feed forward ANN.
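The point about non-derivative activations follows directly from how ELM is trained: since the output weights come from a closed-form least-squares solve rather than back-propagated gradients, even a discrete hard-limit (step) activation works in the hidden layer. The following sketch, with made-up linear data, illustrates this; the layer size and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def hardlim(z):
    """Hard-limit activation: discrete and non-differentiable."""
    return (z >= 0).astype(float)

# Toy data: a linear target over 3 features
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3

W = rng.normal(size=(3, 100))       # random input weights
b = rng.normal(size=100)            # random biases
H = hardlim(X @ W + b)              # binary hidden-layer outputs

beta = np.linalg.pinv(H) @ y        # closed form: no gradients of hardlim needed

rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
baseline = np.std(y)                # RMSE of predicting the mean
```

A gradient-based learner could not use `hardlim` at all (its derivative is zero almost everywhere), yet the ELM fit still reduces the error well below the mean-predictor baseline.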
Predicting the concentration of indoor culturable fungi using a kernel-based extreme learning machine (K-ELM)
Published in International Journal of Environmental Health Research, 2020
Zhijian Liu, Shengyuan Ma, Lifeng Wu, Hang Yin, Guoqing Cao
Although the introduction of multiple influencing factors can provide abundant information for modeling, increasing the number of input variables also increases model complexity, which can reduce the generalizability of the prediction models. Compared with traditional learning algorithms, the extreme learning machine (ELM) is a better choice, with good generalization capabilities, a faster learning speed, and support for multiple activation functions (Javed et al. 2014). This study adopted a kernel-based extreme learning machine (K-ELM), in which kernel functions are introduced into the ELM to further improve generalization. The objective of this study is to identify the optimal combination of input parameters for the prediction of ICF. Therefore, the indoor environmental parameters (temperature, RH and CO2 concentration) and the indoor and outdoor concentrations of PM10 and PM2.5 in 85 residential buildings located in Baoding, China, were measured in order to investigate the optimal parameter combination. It is worth noting, however, that the optimal selection is not necessarily the most relevant parameter combination: adding further parameters may bring only minor improvements in accuracy at the cost of greater complexity. An optimal input combination should balance high accuracy with simplicity.
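In the kernel variant referenced above, the random hidden layer is replaced by a kernel matrix over the training samples, and the output weights come from a regularized linear solve. This is a minimal K-ELM sketch on synthetic data; the RBF kernel, the regularization constant `C`, and the four stand-in input features are assumptions, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Synthetic stand-ins for inputs such as temperature, RH, CO2, PM2.5
X = rng.normal(size=(80, 4))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]

C = 100.0                                        # regularization constant
K = rbf_kernel(X, X)
# K-ELM output coefficients: solve (K + I/C) alpha = y
alpha = np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_new):
    return rbf_kernel(X_new, X) @ alpha

rmse = np.sqrt(np.mean((kelm_predict(X) - y) ** 2))
```

Compared with the basic ELM, no random hidden weights appear at all: the kernel fixes the feature map implicitly, which is what tends to improve generalization stability across runs.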