Image Classification and Retrieval
Published in R. Suganya, S. Rajaram, A. Sheik Abdullah, Big Data in Medical Image Processing, 2018
R. Suganya, S. Rajaram, A. Sheik Abdullah
The Self-Organizing Map (SOM) is an unsupervised learning algorithm in which the input patterns are freely distributed over the output node matrix (Kohonen 1990). SOM uses competitive learning for medical image classification. Competitive learning is an adaptive process in which the neurons gradually become sensitive to different input categories. The SOM is a type of artificial neural network that produces a low-dimensional, discretized representation of the input space of the training samples, called a map. SOM uses a neighborhood function to preserve the topological properties of the input space. Klein Gebbinck et al. (1993) applied discriminant analysis with a neural network to differentiate between various types of diffuse liver diseases.
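For reference, the standard SOM weight update that this excerpt alludes to can be written as follows; the learning rate \eta(t) and the neighborhood function h_{c,j}(t) are symbols introduced here for illustration, not taken from the chapter:

c = \arg\min_j \lVert \mathbf{x} - \mathbf{w}_j \rVert, \qquad \mathbf{w}_j(t+1) = \mathbf{w}_j(t) + \eta(t)\, h_{c,j}(t)\, \bigl(\mathbf{x} - \mathbf{w}_j(t)\bigr)

Here c is the winning (best-matching) node for input \mathbf{x}, and h_{c,j}(t) is typically a Gaussian centered on c that shrinks over time; it is this neighborhood term that preserves the topological ordering of the map.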
Unsupervised Learning
Published in Stephen Marsland, Machine Learning, 2014
So, we can implement the k-means algorithm using a set of neurons. We will use just one layer of neurons, together with some input nodes, and no bias node. The first layer will be the inputs, which don’t do any computation, as usual, and the second layer will be a layer of competitive neurons, that is, neurons that ‘compete’ to fire, with only one of them actually succeeding. Only one cluster centre can represent a particular input vector, and so we will choose the neuron with the highest activation h to be the one that fires. This is known as winner-takes-all activation, and it is an example of competitive learning, since the set of neurons compete with each other to fire, with the winner being the one that best matches (i.e., is closest to) the input. Competitive learning is sometimes said to lead to grandmother cells, because each neuron in the network will learn to recognise one particular feature, and will fire only when that input is seen. You would then have a specific neuron that was trained to recognise your grandmother (and others for anybody else/anything else that you see often).
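A minimal sketch of this winner-takes-all scheme, under the assumption that the neurons' weight vectors play the role of cluster centres; the names competitive_update, weights, and lr are illustrative and not taken from the book:

import numpy as np

def competitive_update(weights, x, lr=0.1):
    # Winner = the neuron whose weight vector best matches (is closest to) the input.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Only the winning neuron fires and is updated, moving its centre toward the input.
    weights[winner] += lr * (x - weights[winner])
    return winner

# Toy usage: three competitive neurons acting like k-means cluster centres.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2)) + rng.choice([-4.0, 0.0, 4.0], size=(200, 1))
weights = data[rng.choice(len(data), size=3, replace=False)].copy()
for epoch in range(20):
    for x in rng.permutation(data):
        competitive_update(weights, x, lr=0.05)
print(weights)  # learned cluster centres

Repeatedly presenting the data and updating only the winner drives each weight vector toward the mean of the inputs it wins, which is exactly the k-means behaviour described above.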
Power Amplifier Behavioral Model and Nonlinear Analysis Basis
Published in Jingchang Nan, Mingming Gao, Nonlinear Modeling Analysis and Predistortion Algorithm Research of Radio Frequency Power Amplifiers, 2021
Competitive learning refers to the output neurons of a network competing with one another for the right to respond to an external stimulus. In competitive learning, only the weight of the winning neuron is modified: if the input connected to a weight is in state 1, that weight increases; if the input is in state 0, the weight decreases. In the learning process, the weights of the winning neuron move closer and closer to the corresponding input pattern, while the remaining neurons that lose the competition are suppressed. Kohonen's self-organizing map (SOM) and adaptive resonance theory (ART) both adopt this algorithm.
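One common way to write this winner-only rule (a standard formulation, not quoted from the book; \eta denotes the learning rate and c the winning neuron):

\Delta w_{cj} = \eta \,(x_j - w_{cj}), \qquad \Delta w_{ij} = 0 \ \ \text{for } i \neq c

With binary inputs x_j \in \{0, 1\} and weights between 0 and 1, a weight attached to an active input (x_j = 1) grows while a weight attached to an inactive input (x_j = 0) shrinks, so the winner's weight vector drifts toward the input patterns it wins.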
A Study on the Countermeasures to Improve the Physical and Mental Health of High-Altitude Migrant College Students by Integrating Artificial Intelligence and Martial Arts Morning Practice
Published in Applied Artificial Intelligence, 2023
Since the weight adjustment is correlated with (proportional to the product of) the input and the output of the unit, it is also called the correlation learning rule. In the competitive learning of the network, each output unit competes, and finally only the strongest one is activated. When the learning system is in a stationary environment (its statistical features do not change over time), it is theoretically possible to learn the statistical features of the environment through unsupervised learning, and these statistical features can be remembered by the neural network as experience; if the environment is non-stationary, the usual unsupervised learning does not have the ability to track such changes, and some adaptive capability of the network is required to solve this problem.
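A minimal statement of the correlation (Hebbian) rule being referred to, with the learning rate \eta, input x_j, and output y_i introduced here for illustration:

\Delta w_{ij} = \eta \, y_i \, x_j

The weight change is proportional to the product, i.e. the correlation, of the presynaptic input x_j and the postsynaptic output y_i, which is why it is called the correlation learning rule.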
Impartial competitive learning in multi-layered neural networks
Published in Connection Science, 2023
However, as mentioned above, from the early stages of research it has been stated that impartiality should be realised to improve the performance of competitive learning. For example, it has been observed that some neurons can be dead or inactive, preventing all neurons from being used equally. To solve this type of problem, many methods have been developed to eliminate the dead neurons (Banerjee & Ghosh, 2004; Choy & Siu, 1998; DeSieno, 1988; Fritzke, 1993, 1996; Li et al., 2022; Van Hulle, 1999, 2004). In those methods, frequently winning neurons are penalised or the entropy of neurons is forced to be maximised, which is quite similar to the method in this paper. Competitive learning principally aims for neurons to respond to some inputs very specifically and, at the same time, for all neurons to respond to those inputs equally, at least on average. Because of the difficulty of dealing with the equal chance, much attention has been paid to the specific responses of neurons; to the best of our knowledge, in conventional competitive learning the equal chance to win the competition has been treated as secondary. It should be repeated that the present method is certainly an extension of the conventional one. However, attention is paid primarily to the equal chance to compete, and computational procedures such as the winner-takes-all (WTA) step are considered secondary. More strongly, the equal chance to win the competition is enforced even at a higher cost in terms of the strength of connection weights. Because the equal chance is a primary objective of competitive computation, it must be realised at any cost and at any time.
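As an illustration of the kind of mechanism this excerpt refers to (penalising frequently winning neurons), here is a generic conscience-style sketch in the spirit of DeSieno (1988); it is not the method of this paper, and the names conscience_step, win_freq, and bias_gain are illustrative:

import numpy as np

def conscience_step(weights, win_freq, x, lr=0.05, bias_gain=10.0):
    # Distance of each neuron's weight vector to the input.
    dists = np.linalg.norm(weights - x, axis=1)
    # The 'conscience' bias handicaps neurons that win more often than 1/n of the time.
    n = len(weights)
    bias = bias_gain * (win_freq - 1.0 / n)
    winner = np.argmin(dists + bias)
    # Usual winner-only update toward the input.
    weights[winner] += lr * (x - weights[winner])
    # Running estimate of how often each neuron wins.
    win_freq += 0.01 * ((np.arange(n) == winner) - win_freq)
    return winner

Starting from win_freq = np.full(len(weights), 1.0 / len(weights)) and calling conscience_step over the data pushes all neurons toward an equal share of wins, which is the impartiality emphasised above, while each neuron still specialises on the inputs it wins.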
An atlas-free newborn brain image segmentation and classification scheme based on SOM-DCNN with sparse auto encoder
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2020
Tushar H. Jaware, K. B. Khanchandani, Durgeshwari Kalal
The self-organising map is a neural network model, as shown in Figure 2. It belongs to the category of competitive learning networks and is based on unsupervised learning, which means that no human intervention is needed during learning, although the characteristics of the input data need to be known. The SOM can be used for clustering data without knowing the class memberships of the input data, and it can be adapted to detect features characteristic of the problem. Hence, it is also called the SOFM (self-organising feature map).
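As a concrete illustration of such unsupervised training on unlabelled data, here is a minimal NumPy sketch of a SOM; it is not the SOM-DCNN pipeline of the paper, and the grid size, learning-rate schedule, and neighbourhood width are illustrative assumptions:

import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Grid coordinates of each map node, used by the neighbourhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5   # shrinking neighbourhood radius
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian neighbourhood on the grid
            weights += lr * h[:, None] * (x - weights)
        # No class labels are used anywhere: the map organises itself.
    return weights.reshape(rows, cols, -1)

Clustering then amounts to assigning each input to its best-matching unit; class labels, if available, are only needed afterwards to interpret the map nodes.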