Dynamic Neural Networks and Optimal Signal Processing
Published in Yu Hen Hu, Jenq-Neng Hwang, Handbook of Neural Network Signal Processing, 2018
The choice of bases is one of the least addressed problems in the signal processing literature. Although we agree that, under a linear signal model, the bases are largely predetermined, they become a key player in nonlinear signal processing. For this reason we emphasize the choice of bases in functional spaces and show how a bank of linear filters followed by static nonlinearities creates networks that are universal approximators for the class of nonlinear functions with approximately finite memory. This type of network is called a dynamic neural network and is illustrated by the time delay neural network (TDNN). Unfortunately, the proofs are existence proofs, which means that the designer still needs to decide on parsimonious architectures to achieve good results that are insensitive to noise and generalize well. In this context, this chapter also treats a set of delay operators that generalize the ideal delay operator almost exclusively utilized in digital signal processing. Due to the breadth of topics covered and the limited space, we omit most problems related to training networks and the selection of model order.
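The following is a minimal sketch, not code from the chapter, of the structure the excerpt describes: a bank of linear FIR filters (which supply the memory) followed by a static, memoryless nonlinearity. All names and sizes here are illustrative assumptions.

```python
import numpy as np

def filter_bank_layer(x, weights):
    """One dynamic-network stage: linear filter bank + static nonlinearity.

    x: input signal, shape (T,).
    weights: FIR coefficients, shape (n_filters, n_taps).
    """
    n_filters, n_taps = weights.shape
    T = len(x)
    # Tapped delay line: column k holds x delayed by k samples.
    delayed = np.zeros((T, n_taps))
    for k in range(n_taps):
        delayed[k:, k] = x[:T - k]
    linear_out = delayed @ weights.T   # bank of linear FIR filters (the memory)
    return np.tanh(linear_out)         # static nonlinearity (memoryless)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)           # toy input signal
W = 0.5 * rng.standard_normal((4, 5))  # 4 filters, 5 taps each (assumed sizes)
y = filter_bank_layer(x, W)            # shape (100, 4)
```

Stacking such stages yields the dynamic neural networks the excerpt refers to; the approximation guarantee concerns what networks of this form can represent, not how to choose the filter lengths or the number of filters.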
Multimedia Telephone for Hearing-Impaired People
Published in Lakhmi C. Jain, N.M. Martin, Fusion of Neural Networks, Fuzzy Systems, and Genetic Algorithms, 2020
The Time-Delay Neural Network (TDNN), first proposed by Waibel and subsequently used successfully in the field of phonetic recognition, is naturally suited to this problem. In contrast to conventional neurons, which compute their response from the weighted sum of the current inputs only, a TDNN neuron extends the sum to a finite number of past inputs (the neuron's delay, or memory). In this way the output of a given layer depends on the output of the previous layers computed over an extended domain (in our case, the time domain) of input values. The particular structure of a TDNN also allows the classical back-propagation algorithm to be extended to it and its complexity to be optimized.
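A hedged sketch of the TDNN neuron just described: the activation at time t weighs the current input frame together with D past frames. The function name, sizes, and tanh activation are our assumptions for illustration, not the chapter's implementation.

```python
import numpy as np

def tdnn_neuron(X, W, b):
    """TDNN neuron: weighted sum over the current and D past input frames.

    X: input frames, shape (T, n_inputs).
    W: one weight row per delay d = 0..D, shape (D + 1, n_inputs).
    """
    D = W.shape[0] - 1
    T = X.shape[0]
    y = np.zeros(T)
    for t in range(D, T):
        window = X[t - D:t + 1]                      # frames t-D .. t
        # W[::-1] aligns weight row d with the frame delayed by d samples.
        y[t] = np.tanh(np.sum(window * W[::-1]) + b)
    return y

X = np.random.default_rng(0).standard_normal((50, 3))  # 50 frames, 3 inputs
W = np.full((3, 3), 0.1)                               # memory D = 2 (assumed)
y = tdnn_neuron(X, W, b=0.0)
```

Setting D = 0 recovers the conventional neuron; the extra rows of W are exactly the "neuron delay or memory" the excerpt mentions.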
Classifying unevenly spaced clinical time series data using forecast error approximation based bottom-up (FeAB) segmented time delay neural network
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2021
Y. Nancy Jane, H. Khanna Nehemiah, Arputharaj Kannan
The Time Delay Neural Network (TDNN) is a type of feed-forward neural network with tapped delay lines on its inputs. TDNN was introduced by Waibel et al. (1989) for identifying temporal relationships among acoustic-phonetic features. A TDNN unfolds the inputs at different time points and thereby interprets the temporal sequence. In this work, a TDNN is used to build a classification model for clinical time series data. Figure 3 shows the network structure of the TDNN used in the proposed framework.
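The "unfolding" step can be made concrete with a short sketch (ours, not the paper's): a tapped delay line turns a single time series into overlapping windows, so a feed-forward classifier sees several time points at once. The helper name and window length are hypothetical.

```python
import numpy as np

def unfold(series, n_delays):
    """Return one row [x[t], x[t-1], ..., x[t-n_delays]] per valid t."""
    T = len(series)
    return np.stack([series[t - n_delays:t + 1][::-1]
                     for t in range(n_delays, T)])

x = np.sin(np.linspace(0, 6 * np.pi, 50))  # toy stand-in for a clinical signal
X = unfold(x, n_delays=4)                  # shape (46, 5): TDNN input vectors
```

Each row of X is then a fixed-length input vector, which is what lets an otherwise standard feed-forward classifier capture temporal relationships.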
Time series relations between parking garage occupancy and traffic speed in macroscopic downtown areas – a data driven study
Published in Journal of Intelligent Transportation Systems, 2021
Rui Ma, Shenyang Chen, H. Michael Zhang
The TDNN is a special extension of the classic feed-forward neural network, which typically consists of one input layer, one hidden layer, and one output layer. The key difference between a TDNN and a general feed-forward neural network is that the TDNN handles sequential data by introducing temporal processing units, namely delay taps. These units allow the network to take inputs from multiple time steps rather than from a single time step.
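A minimal sketch, under our own assumptions about layer sizes and activations, of the three-layer structure the excerpt describes: the delay taps simply widen the input layer to several time steps, after which the forward pass is a standard feed-forward computation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_delays, n_hidden = 4, 8                                 # assumed sizes
W1 = 0.3 * rng.standard_normal((n_hidden, n_delays + 1))  # input -> hidden
b1 = np.zeros(n_hidden)
W2 = 0.3 * rng.standard_normal(n_hidden)                  # hidden -> output
b2 = 0.0

def tdnn_forward(window):
    """window: current input plus n_delays past values (the delay taps)."""
    h = np.tanh(W1 @ window + b1)  # one hidden layer
    return W2 @ h + b2             # one linear output unit

speeds = rng.standard_normal(20)              # toy traffic-speed series
y = tdnn_forward(speeds[-(n_delays + 1):])    # predict from the last 5 steps
```

With n_delays = 0 this collapses to the classic one-hidden-layer feed-forward network, which is the sense in which the TDNN is a "special extension" of it.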