Methods of visual perception and memory modelling
Published in Limiao Deng, Cognitive and Neural Modelling for Visual Information Representation and Memorization, 2022
There are serious defects in the original form of the CLS episodic memory model. In particular, the recognition ability of its neocortical network falls far short of human recognition memory. The problem arises because Hebbian learning rules are too crude to regulate synaptic strength appropriately: Hebbian learning strengthens synapses between active units even when the memory is already strong enough to support recall, and weakens the connections between active receiving units and other inactive units even when those units no longer interfere with the memory. The problem can be solved by error-driven learning, which compares the top-down expectation with the perceptual input and modifies synapses only when the model's expectation is inaccurate.
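The contrast can be illustrated with a minimal sketch (not the CLS implementation; the rate-coded units, weight shapes, function name, and learning rate are illustrative assumptions). Where a Hebbian rule would keep strengthening weights whenever units are co-active, the error-driven rule below stops changing weights as soon as the top-down expectation matches the perceptual input:

```python
import numpy as np

def error_driven_update(w, x, y, lr=0.1):
    """Error-driven sketch: form a top-down expectation of the input,
    compare it with the actual perceptual input, and change weights
    only in proportion to the mismatch."""
    x_expected = w.T @ y                  # top-down expectation of the input
    error = x - x_expected                # zero when the expectation is accurate
    return w + lr * np.outer(y, error)    # no error, no weight change
```

Once the expectation reproduces the input exactly, the error term is zero and learning halts, which is precisely the self-limiting behaviour a pure Hebbian rule lacks.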
Introduction to Neural Networks, Fuzzy Systems, Genetic Algorithms, and their Fusion
Published in Lakhmi C. Jain, N.M. Martin, Fusion of Neural Networks, Fuzzy Systems, and Genetic Algorithms, 2020
The first significant paper on artificial neural networks is generally considered to be that of McCulloch and Pitts [2] in 1943. This paper outlined concepts concerning how biological neurons could be expected to operate. The proposed neurons were modeled by simple arrangements of hardware that attempted to mimic the performance of a single neural cell. In 1949 Hebb [3] laid the foundation of 'Hebbian learning', which is now regarded as an important part of ANN theory. The basic concept underlying 'Hebbian learning' is the principle that every time a neural connection is used, the pathway is strengthened. Around this time, the digital computer became more widely available, and its availability proved to be of great practical value in the further investigation of ANN performance. In 1958 von Neumann proposed modeling brain performance using items of computer hardware available at that time. Rosenblatt [4] constructed neuron models in hardware during 1957; these models ultimately resulted in the concept of the Perceptron. This was an important development, and the underlying concept is still in wide use today. Widrow and Hoff [5] were responsible for the development of simplified artificial neurons: first the ADALINE and then the MADALINE networks. The name 'ADALINE' comes from ADAptive LInear NEuron, and the name 'MADALINE' comes from Multiple ADALINE.
H
Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
Heaviside characteristic  an activation function according to which the neuron output takes on the value unity when its weighted input exceeds (or equals) the neural threshold value, and zero otherwise.

Heaviside, Oliver (1850–1925)  Born: London, England. Best known for his theoretical work in electrical engineering. Much of his work is contained in his three-volume Electromagnetic Theory, the final volume of which was published in 1912. Heaviside extended and improved the works of Hamilton and Maxwell and deduced the concepts of capacitance, impedance, and inductance. He was self-taught and irascible, and at first much of his work was dismissed as unorthodox or too theoretical to be of practical value. In physics he is best known for his correct prediction of the existence of the ionosphere.

heavy water  water in which a heavy isotope of hydrogen substitutes for the hydrogen atoms.

Hebbian algorithm  in general, a method of updating the synaptic weight w_i of a neuron using the product of the value of the i-th input neuron, x_i, with the output value of the neuron, y. A simple example is w_i(n + 1) = w_i(n) + η y(n) x_i(n), where n represents the n-th iteration and η is a learning-rate parameter.

Hebbian learning  a method of modifying synaptic weights such that the strength of a synaptic weight is increased if both the presynaptic neuron and the postsynaptic neuron are active simultaneously (synchronously) and decreased if the activity is asynchronous.
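The Hebbian algorithm entry above translates directly into code. A minimal sketch (the function name and learning-rate value are illustrative):

```python
def hebbian_step(w, x, y, eta=0.01):
    """One iteration of the Hebbian algorithm:
    w_i(n + 1) = w_i(n) + eta * y(n) * x_i(n)."""
    return [w_i + eta * y * x_i for w_i, x_i in zip(w, x)]
```

Each weight grows whenever its input x_i and the neuron's output y are simultaneously non-zero, mirroring the co-activity principle in the Hebbian learning entry.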
A spiking half-cognitive model for classification
Published in Connection Science, 2018
Christian R. Huyck, Ritwik Kulkarni
Classification of input stimuli is the first building block of higher-order cognitive functions. It has been studied intensively for several decades in computational learning, since the advent of Perceptrons in 1957, and for centuries before that as a philosophical question, going back to Plato's idea of clustering "similar" objects. This paper proposes a cognitive model of classification based on simulated spiking neurons that learn via a Hebbian learning mechanism. Our aim is not to build a perfect classifier, but one that reproduces error in a manner similar to humans when they classify.
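The authors' full model is not reproduced here, but its two basic ingredients, a simulated spiking neuron and a co-activity-based Hebbian weight change, can be sketched as follows (the leaky integrate-and-fire simplification and all names and parameter values are assumptions for illustration, not the paper's own code):

```python
def lif_step(v, i_syn, leak=0.9, threshold=1.0):
    """One time step of a leaky integrate-and-fire neuron: decay the
    membrane potential, add the synaptic input, and spike on reaching
    threshold (resetting the potential afterwards)."""
    v = leak * v + i_syn
    if v >= threshold:
        return 0.0, True      # spiked; reset membrane potential
    return v, False

def hebbian_spike_update(w, pre_spiked, post_spiked, lr=0.05):
    """Hebbian rule on spikes: strengthen a synapse whose presynaptic
    and postsynaptic neurons fire in the same time step."""
    return w + lr if (pre_spiked and post_spiked) else w
```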