Artificial Intelligence
Published in Ravi Das, Practical AI for Cybersecurity, 2021
There were also two major developments at this time with regard to Deep Learning. In 1980, Kunihiko Fukushima developed an Artificial Intelligence system called the “Neocognitron.” This was the precursor to what are known as “Convolutional Neural Networks,” or “CNNs” for short, and was based upon processes found in the visual cortex of various kinds of animals. In 1982, John Hopfield developed another Artificial Intelligence system called the “Hopfield Network.” This laid the groundwork for what are known as “Recurrent Neural Networks,” or “RNNs” for short.
Contending Philosophical Frameworks Within Artificial Intelligence
Published in Alessio Plebe, Pietro Perconti, The Future of the Artificial Mind, 2021
Alessio Plebe, Pietro Perconti
Let us give a short account of two other proposals that tried to maintain the simplicity of artificial neural networks while reflecting, at the same time, some aspects of brain mechanisms. One is the Neocognitron, introduced by Kunihiko Fukushima (1980, 1988). It alternates layers of S-cell-type units with C-cell-type units, names that evoke the classification into simple and complex cells in the primary visual cortex by Hubel and Wiesel (1962, 1968). The S-units act as convolution kernels, while the C-units downsample the images resulting from the convolution by spatial averaging. The crucial difference from conventional convolution in image processing (Rosenfeld and Kak, 1982; Bracewell, 2003) is that the kernels are learned by self-organization. The algorithm is similar to that of the SOM, with a winner-take-all strategy: only the weights of the maximally responding S-units within a certain area are modified, together with those of neighboring cells.
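As an illustration of this alternating structure, the sketch below implements one S-stage (convolution), one C-stage (spatial-averaging downsampling), and a winner-take-all update in plain NumPy. It is a minimal sketch under assumed toy shapes; the names (s_layer, c_layer, wta_update) and the rectification step are illustrative choices, not details taken from Fukushima's papers.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Plain 'valid' cross-correlation: the operation an S-unit performs."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def s_layer(image, kernels):
    """S-stage: one feature map per kernel, rectified to keep responses positive."""
    return [np.maximum(convolve2d_valid(image, k), 0.0) for k in kernels]

def c_layer(feature_maps, pool=2):
    """C-stage: downsample each map by spatial averaging over pool x pool blocks."""
    pooled = []
    for fm in feature_maps:
        h = (fm.shape[0] // pool) * pool
        w = (fm.shape[1] // pool) * pool
        fm = fm[:h, :w]
        pooled.append(fm.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3)))
    return pooled

def wta_update(kernels, patch, lr=0.1):
    """Winner-take-all self-organization: only the kernel that responds most
    strongly to the input patch is pulled toward that patch."""
    responses = [np.sum(k * patch) for k in kernels]
    winner = int(np.argmax(responses))
    kernels[winner] += lr * (patch - kernels[winner])
    return winner

# One S -> C stage on a random image with four random 3x3 kernels:
rng = np.random.default_rng(0)
image = rng.random((12, 12))
kernels = [rng.random((3, 3)) for _ in range(4)]
maps = c_layer(s_layer(image, kernels))   # four 5x5 averaged feature maps
wta_update(kernels, image[:3, :3])        # adapt only the best-matching kernel
```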
Recognition of Rotated Patterns Using a Neocognitron
Published in Lakhmi C. Jain, Beatrice Lazzerini, KNOWLEDGE-BASED INTELLIGENT TECHNIQUES in CHARACTER RECOGNITION, 2020
S. Satoh, J. Kuroiwa, H. Aso, S. Miyake
A neocognitron is a neural network model that is considerably robust to distortion, scaling, and/or translation of patterns. However, it cannot recognize largely rotated patterns. A rotation-invariant neocognitron is constructed by extending the original neocognitron, which recognizes patterns that are translated, scaled, and/or distorted relative to the training ones, in order to address this weak point of the original model. Numerical simulations show that the rotation-invariant neocognitron correctly recognizes input patterns unaffected by global and/or local rotation, as well as by translation, scaling, and distortion, relative to the training patterns.
Predicting tree failure likelihood for utility risk mitigation via a convolutional neural network
Published in Sustainable and Resilient Infrastructure, 2023
Atanas Apostolov, Jimi Oke, Ryan Suttle, Sanjay Arwade, Brian Kane
At the heart of the AI-based visual analytics framework lies the convolutional neural network (CNN), which can be trained to perform tasks such as image classification, object and text detection, and document tracking, among others (Gu, Wang, Kuen, et al., 2018). The groundbreaking study of Hubel and Wiesel (1959) showed that visual perception in cats was a result of the activation or inhibition of groups of cells in the visual cortex, each responding to a region of the visual field known as its ‘receptive field.’ Further, they attempted to map the cortical architecture in cats and monkeys (Hubel & Wiesel, 1962, 1965, 1968). Subsequent attempts were then made to model neural networks that could be trained to automatically recognize visual patterns, with modest performance (Fukushima, 1975; Giebel, 1971; Kabrisky, 1966; Rosenblatt, 1962). However, the breakthrough in artificial intelligence (AI) for image recognition came with the ‘neocognitron’ (Fukushima, 1980), a self-organizing neural network for pattern recognition that was robust to changes in position and to shape distortion, a problem that had plagued earlier efforts, including the ‘cognitron’ also proposed by Fukushima (1975).
Trend and seasonality features extraction with pre-trained CNN and recurrence plot
Published in International Journal of Production Research, 2023
Fernanda Strozzi, Rossella Pozzi
CNNs are well-known deep, feed-forward NNs based on the functioning of natural visual perception in humans. First proposed in 1980 by Kunihiko Fukushima as the ‘neocognitron’, this early version of the NN was already able to identify patterns effectively without being affected by modifications in position or even by distortion in structure. The modern CNN structure, known as ‘LeNet-5’, took shape around 1990 and was presented by LeCun et al. (1998). The improvements in this version enabled the recognition of visual structures directly from pixels with almost no pre-processing. Later, several networks adopted deeper architectures that, thanks to nonlinearity, better approximate the target function and represent features, overcoming earlier limitations. These approaches then evolved to simplify the deeper NNs so that fewer parameters need to be trained (Gu et al. 2018; Zhang, Yu, and Wang 2021).
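To make the LeNet-5 pipeline concrete, here is a minimal sketch of a LeNet-5-style network in PyTorch. The layer sizes follow the broad outline of LeCun et al. (1998), but this is an assumption-laden simplification: the original's RBF output layer is replaced by a plain linear classifier, and several training details are omitted.

```python
import torch
import torch.nn as nn

# A minimal LeNet-5-style CNN sketch (assumed 32x32 grayscale input, 10 classes).
class LeNet5Sketch(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # C1: 6 maps of 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # S2: subsample to 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # C3: 16 maps of 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # S4: subsample to 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),       # C5
            nn.Tanh(),
            nn.Linear(120, 84),               # F6
            nn.Tanh(),
            nn.Linear(84, num_classes),       # stand-in for the original RBF layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Raw pixels in, class scores out -- no hand-crafted feature extraction.
logits = LeNet5Sketch()(torch.randn(1, 1, 32, 32))
```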
Deep reinforcement learning-based path planning of underactuated surface vessels
Published in Cyber-Physical Systems, 2019
Hongwei Xu, Ning Wang, Hong Zhao, Zhongjiu Zheng
The convolutional neural network (CNN) is an efficient identification method that has been developed in recent years. In 1962, Hubel and Wiesel [22] studied the cat’s visual cortex systematically, found that each neuron processes only a small area of the visual image, and thereby proposed the concept of the receptive field. Fukushima [23] proposed the Neocognitron, based on the concept of the receptive field, in 1980. Since then, researchers have tried to use multilayer perceptrons instead of manual feature extraction. It was not until 1990 that LeCun [24] first proposed a true convolutional neural network.
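For a concrete sense of the receptive field concept, the short helper below (an illustrative sketch, not taken from the cited papers) computes how large an input region a single output unit “sees” after a stack of convolution and subsampling layers:

```python
# Receptive-field arithmetic for a stack of layers, each with kernel size k
# and stride s: the field grows as r <- r + (k - 1) * j, where j is the
# product of the strides of all preceding layers (the "jump" between units).

def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, ordered input to output."""
    r, j = 1, 1  # one input pixel sees itself; adjacent units are 1 pixel apart
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# Example: two 5x5 convolutions, each followed by 2x2 subsampling
# (a LeNet-like stack). Each output unit sees a 16x16 input region.
print(receptive_field([(5, 1), (2, 2), (5, 1), (2, 2)]))  # -> 16
```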