Artificial Neural Networks and Learning Systems
Published in Yi Chen, Yun Li, Computational Intelligence Assisted Design, 2018
Winner-takes-all: A mechanism that permits the neurons to compete for the right to respond to a given subset of inputs, such that only one output neuron, or only one neuron per group, is active (i.e., ‘on’) at a time.
Soft Computing Technique in the Water Sector: Artificial Neural Network Approach
Published in Surendra Kumar Chandniha, Anil Kumar Lohani, Gopal Krishan, Ajay Krishna Prabhakar, Advances in Hydrology and Climate Change, 2023
Himanshu Panjiar, Ankit Chakravarti
In the case of unsupervised learning, some of the commonly employed learning rules are as follows: (1) Hebbian learning, (2) competitive learning, and (3) learning vector quantization.

Hebbian learning: This rule is inspired by the weight-adjustment mechanism of biological neurons. It takes a neuron that initially cannot learn and enables it to develop a response to external stimuli whenever an input is presented to the network. The rule is not used in supervised learning, since, as can be seen from its formulation (eq 4.4), no target output value is involved (Tayfur, 2012):

w_ij(new) = w_ij(old) − α [f(w_ij x_i)] x_i

Competitive learning: This rule is a form of unsupervised learning in which the hidden nodes compete for the right to respond to a subset of the input data. It is usually applied to an ANN containing a hidden layer known as the competitive layer. Each competitive neuron is described by a weight vector w_ij and computes a similarity measure between that weight vector and the input data x_n. For every input vector, each competitive neuron computes its value separately, and the neurons "compete" over which best matches that input. The winning neuron sets its output to "1" and all other competitive neurons set their output to "0"; the neuron that wins the competition is called a "winner-take-all" neuron. This learning rule is highly suitable for finding clusters within data.

Learning vector quantization (LVQ): In this rule, pattern classification takes place. LVQ is a special class of competitive learning that uses a supervised methodology: a set of training patterns with known classifications (i.e., input vectors and their respective outputs) is given to the network, together with an initial distribution of the reference weight vectors. LVQ classifies an input vector by assigning it to the same class as the output unit whose weight vector is closest to the input vector, and it adjusts the boundaries between categories to minimize misclassification (Sivanandam and Deepa, 2011).
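The competitive rule described above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the toy data, cluster centres, neuron count and learning rate are assumptions for demonstration, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data drawn around three cluster centres (illustrative values).
data = np.vstack([
    rng.normal(loc=c, scale=0.1, size=(30, 2))
    for c in ([0.0, 0.0], [1.0, 0.0], [0.5, 1.0])
])
rng.shuffle(data)

n_neurons, alpha = 3, 0.1
# Initialise each competitive neuron's weight vector to a random data sample.
weights = data[rng.choice(len(data), n_neurons, replace=False)].copy()

for epoch in range(20):
    for x in data:
        # Each neuron measures its similarity to the input
        # (here, Euclidean distance between weight vector and input).
        dists = np.linalg.norm(weights - x, axis=1)
        # Winner-take-all: the closest neuron outputs "1", the rest "0",
        # and only the winner moves its weight vector toward the input.
        winner = np.argmin(dists)
        weights[winner] += alpha * (x - weights[winner])
```

After a few epochs each winning neuron's weight vector drifts toward the mean of the inputs it wins, which is why this rule is well suited to finding clusters in data.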
W
Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
winding factor: a design parameter for electric machines that is the product of the pitch factor and the distribution factor.

window: any appropriate function that multiplies the data with the intent to minimize distortion of the Fourier spectra.

window operation: an image processing operation in which the new value assigned to a given pixel depends on all the pixels within a window centered at that pixel location.

windowing: (1) a term used to describe various techniques for preconditioning a discrete-time waveform before processing by algorithms such as the discrete Fourier transform; typical applications include extracting a finite-duration approximation of an infinite-duration waveform. (2) the process of opening a window. In signal processing it is common to open only a certain restricted portion of the available data for processing at any one time; such a portion is called a window, or sometimes a mask or neighborhood. For instance, in FIR filter design, a technique known as windowing is used for truncation, and the design of the window becomes crucial. In image processing it is common practice to open a square window of (for example) 3 × 3 pixels centered at the pixel under consideration; the gray level of that pixel is then replaced by a function of its original gray level and the gray levels of the other pixels in the window. Different functions represent different operations; in particular, they will be suitable for different filtering or shape-analysis tasks. See median filter, thinning.

Windscale incident: a nuclear power plant accident at the Windscale plant in Great Britain.

winner-take-all network: a network in which learning is competitive in some sense; for each input a particular neuron is declared the "winner" and allowed to adjust its weights. After learning, for any given input, only one neuron turns on.

wiped joint: a fused joint used in splicing lead-sheathed cables.

wipe system: in television, a system that allows the fading in of one channel of video as a second channel of video is faded out without loss of sync.

wired OR: a circuit that performs an OR operation by the interconnection of gate outputs without using an explicit gate device. An open-collector bus performs a wired-OR function on active-low signals.

wireframe: a model that approximately represents a solid object using several hundred triangles. It is used in applications such as facial coding, facial recognition, and industrial component mensuration.

wireless local area network (WLAN): a computer network that allows the transfer of data without wired connections.

wireless local loop: a wireless connection (using a radio link) between a subscriber terminal (for example, a telephone) and the local exchange of the public switched network.

withstand rating: the maximum voltage that electrical equipment can safely withstand, without failure, under specified test conditions.

withstand test: a test of an insulator's ability to withstand a high voltage of some specified waveform.

WLAN: See wireless local area network.

WMSE: weighted mean-squared error.
Using artificial neural network-self-organising map for data clustering of marine engine condition monitoring applications
Published in Ships and Offshore Structures, 2018
Yiannis Raptodimos, Iraklis Lazakis
SOMs are a class of ANN with neurons arranged in a one- or two-dimensional structure and trained by an iterative unsupervised or self-organising procedure (Yan 2014). They find clusters in the data by evaluating neighbourhood measures and employing competitive strategies. These networks are based on competitive learning: the output neurons of the network compete among themselves to be activated or fired, with the result that only one output neuron, or one neuron per group, is on at any one time. An output neuron that wins the competition is called a winner-takes-all neuron, or winning neuron. Every data item is mapped to one point (node) in the map, and the distances between items in the map reflect similarities between the items (Kohonen 1998). Figure 1 shows the basic structure of the SOM.
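The SOM procedure, competition for the winning node followed by a neighbourhood update, can be sketched as below. The grid size, synthetic data, learning-rate and radius schedules are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, dim = 5, 5, 3              # 5x5 two-dimensional map, 3-D inputs
weights = rng.random((grid_h, grid_w, dim))
# Grid coordinates of every node, used for the neighbourhood measure.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

data = rng.random((200, dim))
n_steps = 1000
for t in range(n_steps):
    x = data[rng.integers(len(data))]
    # Competition: the best-matching unit (winner) minimises the
    # distance between its weight vector and the input.
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Cooperation: the winner's grid neighbours are pulled toward x too,
    # with learning rate and neighbourhood radius decaying over time.
    lr = 0.5 * (1 - t / n_steps)
    sigma = 2.0 * (1 - t / n_steps) + 0.5
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)
```

Because neighbouring nodes are updated together, nearby nodes end up with similar weight vectors, which is what makes distances on the map reflect similarities between data items.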
A comprehensive survey on machine learning approaches for dynamic spectrum access in cognitive radio networks
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
Unsupervised competitive learning is a form of unsupervised training in which output neurons compete for input patterns. It employs a winner-take-all strategy, since only the winning neuron is updated. Two major techniques of unsupervised competitive learning are a. Self-Organising Maps/Kohonen Maps
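A minimal illustration of the winner-take-all strategy mentioned above, in which only the winning neuron's weights are updated. The dimensions, random inputs, and learning rate are assumed for demonstration and do not come from the survey.

```python
import numpy as np

rng = np.random.default_rng(2)
inputs = rng.random((100, 4))              # hypothetical input patterns
weights = rng.random((3, 4))               # three competing output neurons
alpha = 0.05                               # assumed learning rate
init = weights.copy()

for x in inputs:
    # All output neurons compete; the one whose weight vector best
    # matches the input (smallest distance) is declared the winner.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Winner-take-all: only the winning neuron's weights are updated;
    # the losing neurons are left unchanged this step.
    weights[winner] += alpha * (x - weights[winner])
```

Per input pattern, exactly one row of `weights` changes, which is the defining property of the winner-take-all update.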