Concepts of Mathematical Neurobiology
Published in Perambur S. Neelakanta, Dolores F. De Groff, Neural Network Modeling, 2018
Perambur S. Neelakanta, Dolores F. De Groff
The learning rules indicated before pertain to two strategies: the unsupervised learning rule and the supervised learning rule. The unsupervised version (also known as Hebbian learning) is such that, when units i and j are simultaneously excited, the strength of the connection between them increases in proportion to the product of their activations. The network is trained without the aid of a teacher, via a training set consisting of input training patterns only. The network learns to adapt on the basis of the experience collected through the previous training patterns.
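As an illustration, here is a minimal sketch of the Hebbian update in Python/NumPy, assuming a learning rate eta and a small set of input-only training patterns (all names and values are illustrative, not taken from the chapter):

import numpy as np

eta = 0.1                      # learning rate (assumed value)
patterns = np.array([          # input-only training set, no teacher
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
])

n = patterns.shape[1]
W = np.zeros((n, n))           # connection strengths between units

for x in patterns:
    # Hebbian rule: the weight between units i and j grows in
    # proportion to the product of their simultaneous activations.
    W += eta * np.outer(x, x)

np.fill_diagonal(W, 0.0)       # no self-connections
print(W)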
Application of Artificial Intelligence Techniques in the Early-Stage Detection of Chronic Kidney Disease
Published in Pallavi Vijay Chavan, Parikshit N Mahalle, Ramchandra Mangrulkar, Idongesit Williams, Data Science, 2022
Anindita A. Khade, Amarsinh V. Vidhate
An ANN is made up of nodes that interconnect many artificial neurons, also called processing units. Each processing unit has inputs and outputs. Under any of the aforementioned schemes, the input units collect various forms of information, and the neural network attempts to learn from the supplied information in order to produce a meaningful output. Backpropagation, or backward propagation of error, is a set of learning rules used by ANNs to improve their results, much as people need rules and instructions to produce a result or output (Figure 8.3).
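As a hedged sketch (not the notation of Figure 8.3), the following Python/NumPy snippet shows backward propagation of error for a single sigmoid processing unit: the output error is pushed back through the activation to adjust the incoming weights; all values are illustrative:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -0.2, 0.8])     # information collected by the input units
w = np.array([0.1,  0.4, -0.3])    # incoming connection weights
b = 0.0                            # bias (threshold)
t = 1.0                            # desired (target) output
eta = 0.5                          # learning rate

z = w @ x + b
y = sigmoid(z)                     # forward pass: output of the processing unit

# Backward propagation of error: squared-error gradient through the
# sigmoid, then through each incoming weight (chain rule).
delta = (y - t) * y * (1.0 - y)
w -= eta * delta * x
b -= eta * delta

print("output before update:", y)
print("updated weights:", w)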
Neural Networks and Fuzzy Systems
Published in Jerry C. Whitaker, Microelectronics, 2018
The training process usually starts with all weights set to zero. This learning rule can be used for both soft- and hard-threshold neurons. Since the desired responses of the neurons are not used in the learning procedure, this is an unsupervised learning rule. The absolute values of the weights are usually proportional to the learning time, which is undesirable.
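The growth of the weight magnitudes with learning time can be seen in a short sketch (assumed values; plain unsupervised Hebbian updates without any normalization):

import numpy as np

rng = np.random.default_rng(0)
eta = 0.05
W = np.zeros((4, 4))               # training starts with all weights at zero

for step in range(1, 1001):
    x = rng.random(4)              # unsupervised: inputs only, no desired response
    W += eta * np.outer(x, x)      # Hebbian update
    if step % 250 == 0:
        # The mean absolute weight keeps growing roughly in
        # proportion to the number of learning steps.
        print(step, np.abs(W).mean())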
A new approach based on current controlled hybrid power compensator for power quality improvement using time series neural network
Published in Automatika, 2023
Raheni T. D, P. Thirumoorthi, Premalatha K
During the learning process, the network is highly refined for nonlinear mapping, and harmonic compensation with load balancing is achieved. The training data set is obtained from the PI-tuned HOSMC. The network maps a set of numeric inputs to outputs over the data set. Through regression, the network is trained and the error between the actual output and the required target data is minimized [21] as per the equation below. The learning rule changes the internal weights of the neurons, which represent the data set, so as to reduce the error function. Samples of the DC-link current and the reference DC-link current obtained from the PI controller are chosen for training the network. If the network passes the validation process, a time-series Simulink model is generated. The analysis of the neural network continues until the data give good results. The results are compared with those of the classical PI-tuned HOSMC.
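A rough sketch of the regression step, using a synthetic signal in place of the sampled DC-link current and scikit-learn's MLPRegressor rather than the paper's time-series toolchain (the variable names and the signal are hypothetical):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the sampled DC-link current; the real training
# data in the paper come from the PI-tuned HOSMC, not from this formula.
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 50) + 0.05 * np.random.default_rng(1).standard_normal(t.size)

lags = 5
X = np.column_stack([signal[i:i - lags] for i in range(lags)])  # past samples as inputs
y = signal[lags:]                                               # next sample as target

# Small feed-forward network trained by minimizing the error between
# its output and the target data.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X[:-200], y[:-200])
print("validation MSE:", np.mean((net.predict(X[-200:]) - y[-200:]) ** 2))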
Using soft computing and machine learning algorithms to predict the discharge coefficient of curved labyrinth overflows
Published in Engineering Applications of Computational Fluid Mechanics, 2021
Zhenlong Hu, Hojat Karami, Alireza Rezaei, Yashar DadrasAjirlou, Md. Jalil Piran, Shahab S. Band, Kwok-Wing Chau, Amir Mosavi
The ANFIS method was introduced by Jang (1993). The ANFIS structure has five layers, including the input, base, middle, defuzzification and summation layers, which are directly connected to each other. Each node has a function with adjustable or fixed parameters. The appropriate structure is selected based on the input data, the membership grades, and the input and output membership rules and functions. In the training phase, by modifying the membership-degree parameters within an acceptable error rate, the output values are brought closer to the actual values. The ANFIS technique uses neural network learning algorithms and fuzzy logic to design a nonlinear mapping between the input and output spaces, and it has good training, construction and classification capabilities. It also has the advantage of allowing fuzzy rules to be extracted from numerical information or expert knowledge and of forming a rule base accordingly. In addition, it can handle the complex conversion of human intelligence into fuzzy systems. Its learning rule is based on the error-propagation algorithm, which minimizes the mean squared error between the network output and the actual output. The operation of the ANFIS model was briefly presented by Jang (1993).
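For orientation only, here is a toy first-order Sugeno sketch in the spirit of ANFIS: one input, two Gaussian membership functions, and a finite-difference gradient step on the mean squared error (this simplification is ours and is not Jang's hybrid learning rule; all parameter values are assumed):

import numpy as np

def anfis_forward(x, params):
    c1, s1, c2, s2, p1, r1, p2, r2 = params
    # Layer 1: Gaussian membership grades for the two rules.
    mu1 = np.exp(-(x - c1) ** 2 / (2 * s1 ** 2))
    mu2 = np.exp(-(x - c2) ** 2 / (2 * s2 ** 2))
    # Layers 2-3: normalized firing strengths.
    w1 = mu1 / (mu1 + mu2)
    w2 = mu2 / (mu1 + mu2)
    # Layers 4-5: first-order Sugeno consequents and their weighted sum.
    return w1 * (p1 * x + r1) + w2 * (p2 * x + r2)

def mse(params, x, target):
    return np.mean((anfis_forward(x, params) - target) ** 2)

x = np.linspace(-2, 2, 100)
target = np.tanh(x)                          # example nonlinear input-output mapping

params = np.array([-1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
eta, eps = 0.05, 1e-5
eye = np.eye(len(params))

for epoch in range(500):
    # Finite-difference gradient of the mean squared error with respect
    # to every premise and consequent parameter.
    grad = np.array([(mse(params + eps * eye[i], x, target) -
                      mse(params - eps * eye[i], x, target)) / (2 * eps)
                     for i in range(len(params))])
    params -= eta * grad

print("final MSE:", mse(params, x, target))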
Quantitative nondestructive testing of wire rope based on pseudo-color image enhancement technology
Published in Nondestructive Testing and Evaluation, 2019
A BP network is a multi-layer feed-forward neural network trained by the error back-propagation (BP) algorithm. It has the advantages of being easy to implement, requiring limited computation, and being strongly parallel and widely applicable. BP networks learn and store a large number of input-output mapping relationships without requiring that the mathematical relationship between output and input be described in advance. The learning rule adopts the steepest-descent method and continuously adjusts the weights and thresholds of the network, through back propagation, to minimize the sum of squared errors. The topological structure of a BP network includes an input layer, one or more hidden layers and an output layer. A three-layer BP network can fit any nonlinear curve that might arise in a regression analysis.
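A compact sketch of such a three-layer BP network in Python/NumPy, trained by steepest descent on the sum of squared errors to fit a nonlinear curve (layer sizes, learning rate and data are illustrative):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
t = np.sin(np.pi * x)                        # nonlinear curve to be fitted

# Three-layer topology: 1 input, 8 sigmoid hidden units, 1 linear output.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
eta = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    h = sigmoid(x @ W1 + b1)                 # forward pass, hidden layer
    y = h @ W2 + b2                          # forward pass, output layer
    err = y - t                              # drives E = 0.5 * sum(err**2)

    # Back propagation: steepest descent on the sum of squared errors,
    # adjusting both the weights and the thresholds (biases).
    dW2 = h.T @ err
    db2 = err.sum(axis=0)
    dh = err @ W2.T * h * (1 - h)
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)

    W1 -= eta * dW1 / len(x); b1 -= eta * db1 / len(x)
    W2 -= eta * dW2 / len(x); b2 -= eta * db2 / len(x)

print("final sum of squared errors:", 0.5 * float((err ** 2).sum()))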