Population dynamics
Published in A. W. Jayawardena, Environmental and Hydrological Systems Modelling, 2013
where $x_0$ is the value of $x$ at $t = 0$. Equation 3.7 is also referred to as the sigmoid function, which is widely used as a non-linear mapping function in artificial neural networks. When $x_0 = \tfrac{1}{2}$, Equation 3.7 simplifies to
$$x(t) = \frac{1}{1 + e^{-rt}}.$$
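As a quick check, here is a minimal sketch (assuming the normalized logistic form $dx/dt = r x (1 - x)$, which is one common reading of Equation 3.7) showing that the solution with $x_0 = \tfrac{1}{2}$ coincides with the sigmoid:

```python
import numpy as np

def logistic_solution(t, x0, r):
    """Solution of the normalized logistic equation dx/dt = r*x*(1 - x)
    with initial value x(0) = x0 (an assumed form of Equation 3.7)."""
    return x0 * np.exp(r * t) / (1.0 - x0 + x0 * np.exp(r * t))

def sigmoid(t, r=1.0):
    """Standard sigmoid 1 / (1 + exp(-r*t))."""
    return 1.0 / (1.0 + np.exp(-r * t))

t = np.linspace(-6, 6, 9)
# With x0 = 1/2 the logistic solution collapses to the sigmoid.
print(np.allclose(logistic_solution(t, 0.5, 1.0), sigmoid(t)))  # True
```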
Learning deterministic models
Published in Richard E. Neapolitan, Xia Jiang, Artificial Intelligence, 2018
Richard E. Neapolitan, Xia Jiang
The range of the sigmoid function is the interval (0,1). It is used in logistic regression to provide the probability of a binary outcome as follows:
$$
P(Y = 1 \mid x) = \frac{\exp(b_0 + b_1 x)}{1 + \exp(b_0 + b_1 x)}, \qquad
P(Y = -1 \mid x) = \frac{1}{1 + \exp(b_0 + b_1 x)}.
$$
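As an illustration, this short sketch (with arbitrary, hypothetical coefficients $b_0$ and $b_1$, not values from the book) evaluates the two probabilities and confirms they sum to one:

```python
import math

def logistic_probabilities(x, b0, b1):
    """Return P(Y=1|x) and P(Y=-1|x) for a simple logistic regression
    with intercept b0 and slope b1 (illustrative coefficients only)."""
    z = b0 + b1 * x
    p_pos = math.exp(z) / (1.0 + math.exp(z))   # sigmoid of z
    p_neg = 1.0 / (1.0 + math.exp(z))           # complement, 1 - p_pos
    return p_pos, p_neg

# Example with made-up coefficients b0 = -1.0, b1 = 2.0:
p1, p_minus1 = logistic_probabilities(x=0.8, b0=-1.0, b1=2.0)
print(p1, p_minus1, p1 + p_minus1)  # the two probabilities sum to 1
```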
Deep learning-based wildfire detection from satellite imagery
Published in Sangeeta Jadhav, Rahul Desai, Ashwini Sapkal, Application of Communication Computational Intelligence and Learning, 2022
The sigmoid function's curve has an S shape. The range of the sigmoid function lies between 0 and 1, which is the main reason we used this function. It is therefore mostly used for training models whose output is a predicted probability. When the prediction must lie in the range 0 to 1, the sigmoid function is an appropriate choice. The sigmoid function is also differentiable, which means we can find the slope of the curve at any point.
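For instance, a minimal sketch (in Python, not taken from the chapter) computes that slope via the well-known identity $s'(x) = s(x)\,(1 - s(x))$ and checks it against a finite-difference estimate:

```python
import numpy as np

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_slope(x):
    """Analytic derivative of the sigmoid: s(x) * (1 - s(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

x = 0.7      # arbitrary example point
h = 1e-6     # step for the finite-difference check
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(sigmoid_slope(x), numeric)  # the two values agree closely
```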
A novel combined model based on echo state network – a case study of PM10 and PM2.5 prediction in China
Published in Environmental Technology, 2020
Hairui Zhang, Zhihao Shang, Yanru Song, Zhaoshuang He, Lian Li
The general BP neural network has a fixed topology with an input layer, a hidden layer and an output layer [27]. The sigmoid function is then used as the activation function to iteratively train the weights and thresholds so as to approximate arbitrary nonlinear functions. However, the cascaded neural network used in this paper has multiple hidden layers, and each layer is connected to the output layer. Compared with the standard BP neural network, the cascade BP also updates the weights and thresholds by means of back propagation, but it has better learning performance and stronger non-linear fitting characteristics. Therefore, this paper uses the cascaded BP to build a model for predicting PM2.5 and PM10 data. Figure 4 shows the structure of the cascaded BP.
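For readers unfamiliar with the topology, the following minimal sketch (an assumption about how the cascade connections are wired, not the authors' code) shows a forward pass in which the input and every hidden layer's activations all feed the output layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cascade_forward(x, hidden_weights, output_weights):
    """Forward pass of a cascade-style network: the input and every
    hidden layer's activations are concatenated and fed to the output
    layer (a simplified reading of the cascade BP structure)."""
    activations = [x]
    current = x
    for W, b in hidden_weights:                 # each hidden layer
        current = sigmoid(current @ W + b)
        activations.append(current)
    combined = np.concatenate(activations)      # cascade connections
    W_out, b_out = output_weights
    return combined @ W_out + b_out             # linear output layer

# Tiny example: 3 inputs, two hidden layers (4 and 3 units), 1 output.
rng = np.random.default_rng(0)
hidden = [(rng.normal(size=(3, 4)), np.zeros(4)),
          (rng.normal(size=(4, 3)), np.zeros(3))]
output = (rng.normal(size=(3 + 4 + 3, 1)), np.zeros(1))
print(cascade_forward(rng.normal(size=3), hidden, output))
```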
Condition assessment of high-speed railway track structure based on sparse Bayesian extreme learning machine and Bayesian hypothesis testing
Published in International Journal of Rail Transportation, 2023
Senrong Wang, Jingze Gao, Chao Lin, Hui Li, Yong Huang
where the elements of the random weight matrix and the elements of the bias vector are randomly sampled from uniform distributions [22]. The sigmoid function is an activation function commonly used in neural networks; when it is applied to a matrix, we define it to act on each element.
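As a concrete illustration of that element-wise convention (the matrix and vector names below are placeholders, since the original symbols did not survive extraction), a random hidden layer in the style of an extreme learning machine can be written as:

```python
import numpy as np

def sigmoid(M):
    """Element-wise sigmoid: each entry of the matrix is mapped
    independently through 1 / (1 + exp(-m))."""
    return 1.0 / (1.0 + np.exp(-M))

rng = np.random.default_rng(42)
W = rng.uniform(-1.0, 1.0, size=(5, 3))   # hypothetical random weights
b = rng.uniform(-1.0, 1.0, size=3)        # hypothetical random biases
X = rng.normal(size=(4, 5))               # 4 samples, 5 features
H = sigmoid(X @ W + b)                    # hidden-layer output matrix
print(H.shape)  # (4, 3): the sigmoid acted on every element
```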
Deep learning for real-time social media text classification for situation awareness – using Hurricanes Sandy, Harvey, and Irma as case studies
Published in International Journal of Digital Earth, 2019
Manzhu Yu, Qunying Huang, Han Qin, Chris Scheele, Chaowei Yang
The softmax function is another type of sigmoid function. The sigmoid function can only handle two classes, whereas softmax is effective in handling multi-class classification problems. The output of the softmax function can be represented as a categorical distribution. The goal of softmax is to highlight the largest values and suppress values that are significantly below the maximum value. The softmax function can be defined as
$$
\operatorname{softmax}(z)_i = \frac{\exp(z_i)}{\sum_{j=1}^{K} \exp(z_j)}, \quad i = 1, \ldots, K,
$$
where $K$ is the number of classes. The softmax function is ideally used in the output layer of the classifier, where we intend to obtain the probabilities that define the class of each input.
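A minimal sketch of that definition (with a standard max-subtraction trick for numerical stability, which is an implementation detail rather than part of the paper) is:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtracting the maximum logit does
    not change the result but avoids overflow in exp()."""
    shifted = z - np.max(z)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

# Example logits for a 4-class problem (made-up values):
logits = np.array([2.0, 1.0, 0.1, -1.5])
probs = softmax(logits)
print(probs, probs.sum())  # class probabilities, summing to 1
```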