Integrated Photonics for Artificial Intelligence Applications
Published in Sandeep Saini, Kusum Lata, G.R. Sinha, VLSI and Hardware Implementations Using Modern Machine Learning Methods, 2021
Ankur Saharia, Kamal Kishor Choure, Nitesh Mudgal, Rahul Pandey, Dinesh Bhatia, Manish Tiwari, Ghanshyam Singh
Reservoir computing is one of the most versatile approaches among the various types of neuromorphic computing. It is primarily a recurrent-neural-network-based scheme, so improving its computational efficiency requires a properly designed recurrent neural network (RNN) reservoir. As shown in Figure 15.5, reservoir computing consists of an input layer through which the input data are fed into the reservoir. The main purpose of the reservoir tank is to nonlinearly map the provided input into a high-dimensional space, which aids the subsequent learning algorithm. The internal states of the reservoir tank are referred to as "reservoir states". The weights connecting the input layer to the reservoir are denoted Win, while the weights between the reservoir and the output layer are denoted Wout. In reservoir computing, only the output weights are updated during training; the input weights and the reservoir itself remain fixed. Reservoir computing can be realized with a variety of components, devices, and substrates. In particular, time-delay reservoir computing is conceptually simple, has low power consumption, and uses little hardware [23,24].
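The fixed Win/reservoir and trained Wout described above can be sketched as a minimal echo state network in NumPy. All dimensions, scaling factors, and the toy task below are illustrative assumptions, not values from the chapter; the readout is trained with ridge regression, a common choice for RC readouts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the chapter)
n_in, n_res = 1, 100

# Fixed random input and reservoir weights -- these are never trained
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        # nonlinear, high-dimensional mapping of the input
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next sample of a sine wave
t = np.arange(2000)
u = np.sin(0.1 * t)[:, None]
X = run_reservoir(u[:-1])   # reservoir states
y = u[1:]                   # next-step targets

# Only the output weights W_out are trained (ridge regression)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
mse = np.mean((pred[500:] - y[500:]) ** 2)  # skip an initial washout period
```

Because only the linear readout is fitted, training reduces to a single least-squares solve rather than backpropagation through time.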
Photonic Reservoir Computing
Published in Paul R. Prucnal, Bhavin J. Shastri, Malvin Carl Teich, Neuromorphic Photonics, 2017
Paul R. Prucnal, Bhavin J. Shastri, Malvin Carl Teich
Reservoir computing represents a subset of neural network models. In this framework, a fixed, recurrent network of nonlinear nodes performs a diversity of computations, from which linear classifiers extract the most useful information to perform a given algorithm. These systems maintain many of the advantages of neural networks, including adaptability and robustness to noise. In a hardware context, reservoirs require far fewer tunable elements than traditional neural network models to run effectively. Even simple physical systems can represent more complex virtual networks, and thereby perform a variety of complex tasks. Over the past several years, reservoir computers have been constructed that exploit the incredible bandwidths and speeds available to photonic signals. These 'photonic reservoirs' utilize optical multiplexing strategies to form highly complex virtual networks. Experimentally demonstrated systems have displayed state-of-the-art performance in a variety of areas, including speech recognition, time-series prediction, Boolean logic operations, and nonlinear channel equalization. In this chapter, we review the recent progress and achievements of this field.
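The time-multiplexing idea behind many photonic reservoirs — a single physical nonlinear node with delayed feedback emulating a network of "virtual nodes" spread along the delay line — can be sketched in software. The node count, input mask, and scaling parameters below are illustrative assumptions, not values from any demonstrated photonic system.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50                              # virtual nodes per delay period (assumed)
mask = rng.choice([-1.0, 1.0], N)   # random input mask: the multiplexing step
eta, gamma = 0.5, 0.8               # input and delayed-feedback scaling (assumed)

def delay_reservoir(u):
    """Map a scalar input sequence to virtual-node states of shape (T, N)."""
    x = np.zeros(N)                 # node states from the previous delay period
    states = []
    for u_t in u:
        new_x = np.empty(N)
        for i in range(N):
            # each virtual node sees the masked input plus its own delayed state
            new_x[i] = np.tanh(eta * mask[i] * u_t + gamma * x[i])
        x = new_x
        states.append(x)
    return np.array(states)
```

A single fast nonlinearity thus stands in for N reservoir nodes, which is why delay-based photonic implementations need so little hardware; a linear readout over the virtual-node states would then be trained as in any other reservoir computer.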
Uploading Consciousness—The Evolution of Conscious Machines of the Future
Published in Anirban Bandyopadhyay, Nanobrain, 2020
Reservoir computing: Reservoir computing is a framework for computation resembling a neural network. Typically, an input signal is fed into a fixed (random) dynamical system called a reservoir, and the dynamics of the reservoir map the input to a higher-dimensional space. A simple readout mechanism is then trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage while the reservoir remains fixed. Liquid state machines and echo state networks are the two major types of reservoir computing.
On developing theory of reservoir computing for sensing applications: the state weaving environment echo tracker (SWEET) algorithm
Published in International Journal of Parallel, Emergent and Distributed Systems, 2018
A reservoir computing setup features two key elements: a dynamical system that can respond to inputs (the reservoir), and a readout layer that is used to analyze the state of the system. To be used this way, the reservoir should have a set of rather generic properties, the most important ones being the echo state property and the input separation property. In somewhat simplified terms, the reservoir transforms the input into the internal states of the dynamical system, and to compute unambiguously, different inputs need to be mapped to different states. The echo state property ensures that the initial state of the reservoir does not influence the result of the computation, which allows for on-line computation. The key insight regarding this type of information processing is that the transformation from the input into the internal state of the system is in fact the computation performed by the system. The readout layer is used only to access the result of the desired computation. It should possess the approximation property, i.e. be able to approximate any multi-variable function (on the set of states) to any desired degree of accuracy. However, if the reservoir per se is complex enough, then the readout layer can be very simple, e.g. even a linear readout might suffice.
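The echo state property can be illustrated numerically: drive two copies of the same reservoir, started from very different initial states, with one shared input sequence, and their states converge so the initial condition "washes out". The matrix sizes and spectral-radius heuristic below are illustrative assumptions; keeping the spectral radius below 1 is a common sufficient heuristic for the echo state property, not a necessary condition.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80  # reservoir size (assumed)

W = rng.uniform(-1, 1, (n, n))
W *= 0.5 / max(abs(np.linalg.eigvals(W)))  # spectral radius 0.5 (< 1)
w_in = rng.uniform(-0.5, 0.5, n)

def run(x0, u):
    """Iterate the reservoir from initial state x0 under input sequence u."""
    x = x0
    for u_t in u:
        x = np.tanh(w_in * u_t + W @ x)
    return x

u = rng.standard_normal(300)           # one shared input sequence
x_a = run(rng.uniform(-1, 1, n), u)    # two very different initial states,
x_b = run(rng.uniform(-1, 1, n), u)    # same input
gap = np.linalg.norm(x_a - x_b)        # shrinks toward zero: the "echo"
                                       # of the initial state fades
```

Because the final state depends only on the (recent) input history and not on where the reservoir started, the computation can run on-line without a controlled reset.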
KNOWM memristors in a bridge synapse delay-based reservoir computing system for detection of epileptic seizures
Published in International Journal of Parallel, Emergent and Distributed Systems, 2022
Dawid Przyczyna, Grzegorz Hess, Konrad Szaciłowski
Many scientists conduct intensive research that draws inspiration from biological neural structures in order to achieve more efficient computational structures, potentially of a universal nature. This popularity is evident from the increasing number of publications appearing each year, scientific journals, and numerous conference meetings related to this subject. Interested readers are referred to recent reviews on the subject [27–30]. As can be seen, neuromorphic computing appears to be moving towards 'conventional' computing, or at least towards special-purpose computing modules [31]. Recurrent neural networks, which are especially good at representing the dynamics of a given input thanks to the feedback loops present in the system, suffer from a costly learning process. To solve this problem, Jaeger and Maass independently proposed the Echo State Network (ESN) [32] and the Liquid State Machine (LSM) [33]. In their constructs, in contrast to conventional artificial neural networks, the information processing layer is not trained; only the readout layer is subjected to training procedures. They thereby highlighted the importance of a multidimensional, rich, and dynamic state space in the information processing layer [26,34]. Over time, both of these approaches to efficient training of recurrent neural networks were incorporated into a common conceptual framework named Reservoir Computing (RC), and the information processing layer was named the 'reservoir' [35,36]. The reservoir in the RC paradigm denotes a computational substrate capable of representing various inputs in a multidimensional configuration space of states, where computation is represented as a trajectory between successive states of the system in this space. Hence, as a proof of concept, RC has been implemented in systems as simple as a bucket of water, where a data set of human speech was encoded as a series of water splashes [37]. Pictures of the perturbed water surface were used as the basis for classification tasks. In our previous work, we showed that RC systems can be implemented even in setups as primitive as doped cement and successfully used to classify simple signals according to their shape [38].