Photonic Reservoir Computing
Published in Neuromorphic Photonics, 2017
Paul R. Prucnal, Bhavin J. Shastri, Malvin Carl Teich
Maass et al. [13] and Jaeger [6] independently proposed similar approaches as alternatives to trained RNNs for neuronal computation, performing temporal and spatiotemporal processing of sensory inputs in real time. The liquid state machine (LSM) of Maass et al. and the echo state network (ESN) of Jaeger both employ a large, distributed, random recurrent hidden network with fixed (untrained) weights, called a reservoir. Adaptation is restricted to the readout, where any type of classifier or regressor, ranging from a perceptron [14] to a support vector machine (SVM) [15], can be used to generate the output. This readout scheme offers some convincing advantages: it greatly reduces the complexity of training RNNs in practical applications, and it avoids the biological implausibility of the multilayer gradient-descent optimization used in earlier RNNs [16], while maintaining the capability to perform context-dependent computations [1, 2, 17].
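The division of labor described here, a fixed random recurrent network plus a trained linear readout, is easy to see in code. The following is a minimal sketch in NumPy; the reservoir size, spectral-radius scaling, delay task, and ridge parameter are illustrative assumptions, not values from the chapter.

```python
# Minimal reservoir-computing sketch: a fixed random recurrent network
# whose state is read out by a trained linear layer. All sizes and
# parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed (untrained) random weights: input and recurrent connections.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: recall the input from 5 steps earlier.
u = rng.uniform(-1, 1, 1000)
X = run_reservoir(u)[100:]            # discard the initial transient
y = np.roll(u, 5)[100:]               # 5-step-delayed targets

# Only the linear readout is trained (ridge regression).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Note that the recurrent weights W are never touched by training; the only learned object is the readout vector W_out, which is exactly the restriction to the readout described in the excerpt.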
Uploading Consciousness—The Evolution of Conscious Machines of the Future
Published in Anirban Bandyopadhyay, Nanobrain, 2020
A liquid state machine (LSM) is a computational construct similar to a neural network. An LSM consists of a large collection of units (called nodes, or neurons), each of which receives time-varying input both from external sources (the inputs) and from other nodes. The nodes are randomly connected to each other, and the recurrent nature of the connections turns the time-varying input into a spatio-temporal pattern of activations across the network. These spatio-temporal activation patterns are read out by linear discriminant units (Maass et al., 2002).
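A toy illustration of this structure might look as follows. It is a rate-based caricature, not the spiking model of Maass et al. (2002): randomly connected nodes turn two classes of time-varying input into activation patterns, and a least-squares linear discriminant then separates them. All sizes and signals are assumptions chosen for illustration.

```python
# Rate-based caricature of an LSM: random sparse recurrent connectivity,
# a "liquid" state driven by time-varying input, and a linear
# discriminant readout. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_res, T = 100, 200

# Random, fixed recurrent connectivity (10% density).
W = rng.normal(0, 1, (n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1, 1, n_res)

def liquid_state(u):
    """Final activation pattern of the 'liquid' after input sequence u.
    (A real LSM readout may sample states at several time points.)"""
    x = np.zeros(n_res)
    for u_t in u:
        x = np.tanh(w_in * u_t + W @ x)
    return x

# Two input classes: slow vs. fast noisy sine waves.
t = np.arange(T)
def sample(cls):
    f = 0.02 if cls == 0 else 0.05
    return np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=T)

X = np.array([liquid_state(sample(c)) for c in [0, 1] * 50])
y = np.array([0, 1] * 50)

# Linear discriminant readout: least-squares weights on the liquid states.
Xb = np.hstack([X, np.ones((100, 1))])          # bias column
w = np.linalg.lstsq(Xb, 2 * y - 1, rcond=None)[0]
pred = (Xb @ w > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```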
Time series forecasting for port throughput using recurrent neural network algorithm
Published in Journal of International Maritime Safety, Environmental Affairs, and Shipping, 2021
Nguyen Duy Tan, Hwang Chan Yu, Le Ngoc Bao Long, Sam-Sang You
Recurrent neural networks (RNNs) represent a large and varied class of computational models that are designed by more or less detailed analogy with biological brain (neural computing) modules (Lukoševičius and Jaeger 2009). An RNN is a class of artificial neural network that retains information about past inputs through a special looped architecture. Such machine learning schemes are employed in many areas involving sequential data, such as predicting the next word of a sentence. The echo state network (ESN) is a type of reservoir computing (RC); it shares its basic structure with the liquid state machine (LSM), in that the internal layers are fixed with random weights and only the output-layer weights are updated. Both belong to the RNN family and follow the supervised machine learning principle. Since they are dynamical by nature, they are typically used for learning dynamical processes, modelling biological systems, signal forecasting, and signal generation.
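A hedged sketch of the ESN workflow described here, applied to one-step-ahead forecasting of a toy seasonal series (standing in for data such as port throughput), could look like the following; the series, reservoir parameters, and regularization are assumptions chosen only for illustration, not the configuration used in the article.

```python
# ESN sketch for one-step-ahead forecasting of a toy seasonal series.
# Internal weights stay fixed; only the output layer is trained.
import numpy as np

rng = np.random.default_rng(2)
n_res, leak = 300, 0.3

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))   # echo-state scaling

# Toy series: trend + seasonality + noise, rescaled to [-1, 1].
t = np.arange(600)
series = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / 12) \
         + 0.05 * rng.normal(size=600)
series = 2 * (series - series.min()) / (series.max() - series.min()) - 1

# Collect leaky-integrator reservoir states.
x = np.zeros(n_res)
states = []
for u_t in series[:-1]:
    x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u_t + W @ x)
    states.append(x.copy())
X = np.array(states)[50:]             # washout of 50 steps
y = series[1:][50:]                   # next-step targets

# Train only the readout (ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(n_res), X.T @ y)
print("one-step-ahead MSE:", np.mean((X @ W_out - y) ** 2))
```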
KNOWM memristors in a bridge synapse delay-based reservoir computing system for detection of epileptic seizures
Published in International Journal of Parallel, Emergent and Distributed Systems, 2022
Dawid Przyczyna, Grzegorz Hess, Konrad Szaciłowski
Many scientists conduct intensive research that draws inspiration from biological neural structures in order to achieve more efficient computational structures, potentially of a universal nature. This popularity is evident in the increasing number of publications appearing each year, dedicated scientific journals, and numerous conference meetings related to this subject; interested readers are referred to recent reviews [27–30]. As can be seen, neuromorphic computing appears to be moving towards ‘conventional’ computing, or at least towards special-purpose computing modules [31]. Recurrent neural networks, which are especially good at representing the dynamics of a given input thanks to the feedback loops present in the system, suffer from a costly learning process. To solve this problem, Jaeger and Maass independently proposed the Echo State Network (ESN) [32] and the Liquid State Machine (LSM) [33], respectively. In their constructs, in contrast to conventional artificial neural networks, the information processing layer is not trained; only the readout layer is subjected to training procedures. They thus highlighted the importance of a multidimensional, rich, and dynamic state space in the information processing layer [26,34]. Over time, both of these approaches to efficient training of recurrent neural networks were incorporated into a common conceptual framework named Reservoir Computing (RC), and the information processing layer was named the ‘reservoir’ [35,36]. The reservoir in the RC paradigm is a computational substrate capable of representing various inputs in a multidimensional configuration space of states, where computation is represented as a trajectory between successive states of the system in this space. Hence, as a proof of concept, RC has been implemented in systems as simple as a bucket of water, where a data set of human speech was encoded as a series of water splashes [37]; pictures of the perturbed water surface were used as the basis for classification tasks. In our previous work we have shown that RC systems can be implemented even in such primitive setups as doped cement and successfully used to classify simple signals according to their shape [38].
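For readers unfamiliar with the delay-based variant named in this article's title, the sketch below shows the generic time-multiplexing idea: a single nonlinear node plus a delay line emulates many "virtual" reservoir nodes. It is a simplified toy, not the memristive bridge-synapse system of the paper; the input mask, node count, gain parameters, and memory task are all assumptions.

```python
# Generic delay-based reservoir: one nonlinear node with delayed
# feedback, time-multiplexed into N virtual nodes by an input mask.
# Simplified illustration; not the memristive system of the article.
import numpy as np

rng = np.random.default_rng(3)
N = 50                                  # virtual nodes per delay period
mask = rng.choice([-0.1, 0.1], N)       # random binary input mask

def delay_reservoir(u, eta=0.5, gamma=0.05):
    """Return the virtual-node states for each input sample in u."""
    buf = np.zeros(N)                   # delay line, one period long
    states = []
    for u_t in u:
        new = np.empty(N)
        for i in range(N):
            # each virtual node sees the masked input plus the delayed
            # state of the same node one delay period earlier
            new[i] = np.tanh(eta * buf[i] + gamma * mask[i] * u_t)
        buf = new
        states.append(new.copy())
    return np.array(states)

u = rng.uniform(-1, 1, 500)
X = delay_reservoir(u)[50:]             # discard transient
y = np.roll(u, 1)[50:]                  # toy task: recall previous input

# Linear readout trained by least squares, as in other RC schemes.
W_out = np.linalg.lstsq(X, y, rcond=None)[0]
print("memory-recall MSE:", np.mean((X @ W_out - y) ** 2))
```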