GMDH Neural Networks
Published in Bogdan M. Wilamowski, J. David Irwin, Intelligent Systems, 2018
Marcin Mrugalski, Józef Korbicz
One of the most popular nonlinear system identification approaches is based on the application of artificial neural networks (ANNs) [3,11,21]. These can be most adequately characterized as computational models with particular properties such as generalization abilities, the ability to learn, parallel data processing, and good approximation of nonlinear systems. However, ANNs, despite requiring fewer assumptions than analytical methods [4,9,15,22], still require a significant amount of a priori information about the model’s structure. Moreover, there are no efficient algorithms for selecting the structures of classical ANNs, and hence many experiments must be carried out to obtain an appropriate configuration. Experts have to decide on the quality and quantity of inputs, the number of layers and neurons, as well as the form of their activation functions. This heuristic approach to determining the network architecture amounts to a subjective choice of the final model, which, in the majority of cases, will not approximate the system with the required quality.
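Below is a minimal, illustrative sketch (not from the chapter) of the structural choices the authors criticize as heuristic: a NARX-style feedforward network whose input lags, hidden-layer size, and activation function all have to be fixed a priori by the designer. The simulated system, lag orders, and network settings are assumptions chosen only for demonstration.

```python
# Sketch of a hand-designed feedforward ANN for nonlinear system identification.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Simulated nonlinear SISO system (illustrative only).
N = 2000
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + np.tanh(u[k-1]) + 0.2 * u[k-2]

# Regressor matrix: the past outputs and inputs are chosen by the designer.
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
t = y[2:]

# Hidden-layer size and activation are exactly the "expert" choices the
# excerpt describes as heuristic and subjective.
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(X, t)
print("one-step-ahead R^2:", model.score(X, t))
```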
Linear Least-Squared Error Modeling
Published in David C. Swanson, Signal Processing for Intelligent Sensor Systems with MATLAB®, 2011
The term “block least squares” is used to convey the idea that a least-squared error solution is calculated on a block of input and output data defined by the N samples of the observation window. Obviously, the larger the observation window, the better the model results will be, assuming that some level of noise interference is unavoidable. However, if the unknown system being modeled is not a linear function of the filter coefficients H, the error surface depicted in Figure 9.2 would have more than one minimum. Nonlinear system identification is beyond the scope of this book. In addition, it is not clear from the above development what effect the chosen model order M has on the results of the least-squared error fit. If the unknown system is linear and FIR in structure, the squared error will decline as the model order is increased toward the correct model order. Choosing a model order higher than that of the unknown FIR system will not reduce the squared error further (the higher-order coefficients are computed at or near zero). If the unknown system is linear and IIR in structure, the squared error will continue to improve as the model order is increased.
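The following numpy sketch (an assumed setup, not Swanson's code) illustrates this behaviour: FIR models of increasing order M are fitted by block least squares to a single observation window of N samples, and the squared error stops improving once M reaches the number of taps of the true FIR system.

```python
# Block least squares over one observation window of N samples.
import numpy as np

rng = np.random.default_rng(1)
h_true = np.array([0.5, -0.3, 0.8, 0.1])      # unknown FIR system with 4 taps
N = 512                                        # observation window (block) size
u = rng.standard_normal(N)
y = np.convolve(u, h_true)[:N] + 0.01 * rng.standard_normal(N)

def block_ls_fir(u, y, M):
    """Least-squares FIR fit of order M over the whole block."""
    # Data matrix whose rows are [u[n], u[n-1], ..., u[n-M+1]].
    U = np.column_stack([np.concatenate([np.zeros(k), u[:N - k]]) for k in range(M)])
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h, np.sum((y - U @ h) ** 2)

for M in range(1, 8):
    h, sq_err = block_ls_fir(u, y, M)
    print(f"M={M}: squared error = {sq_err:.4f}")
# Beyond M=4 the error stops decreasing and the extra coefficients come out near zero.
```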
Targeted Use of Deep Learning for Physics and Engineering
Published in Anuj Karpatne, Ramakrishnan Kannan, Vipin Kumar, Knowledge-Guided Machine Learning, 2023
Steven L. Brunton, J. Nathan Kutz
It is important to note that nonlinear system identification using techniques from machine learning has a rich history. For example, Bongard and Lipson [12] and Schmidt and Lipson [111] used genetic programming to obtain symbolic representations of nonlinear systems. They leveraged the principle of parsimony to pick the simplest model that describes the data out of a large family of candidate models. In subsequent work, the sparse identification of nonlinear dynamics (SINDy) algorithm [19] formulated the nonlinear system identification problem as a generalized linear regression problem, where the time derivative of a state is approximated as a linear combination of functions from a library of candidate terms that may describe the dynamics. In SINDy, sparsity-promoting regression is used to find the key terms in this candidate library that are required to describe the data, highlighting the few terms that are present in the dynamics. SINDy has since been generalized to identify partial differential equations [107], stochastic systems [13, 26], rational functions [78], and hybrid dynamics [77]; to identify conservative models of fluid flows [53, 72, 73] and plasmas [58]; and for use in model predictive control [57]. Importantly, these nonlinear system identification techniques result in models that are highly interpretable, because they have only a few terms and may be analyzed using classical techniques, and that are highly generalizable, because they avoid overfitting to the training data. In contrast, most neural network approaches, such as the feedforward architectures in Figure 2.1 and the recurrent reservoir network in Figure 2.2, are opaque, defying straightforward analysis and interpretation. Moreover, these networks are typically useful for interpolation tasks, where they are able to memorize the training data, and often fail in extrapolation tasks, where they are required to generalize beyond the training data. In many applications of computer vision, the training data is so vast that interpolation is sufficient. However, when characterizing physical and engineering systems, the goal is often to build a model from limited training data, often from a limited sampling of parameter space, and have it generalize to other parameters. For example, if the model is to be used for design optimization and control, it must be valid outside of the narrow region of parameter space used to train the model. Thus, in recent years, there has been a concerted effort to combine parsimonious nonlinear system identification techniques with more modern deep learning approaches, improving the interpretability and generalizability of neural networks.
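As a concrete illustration of the SINDy formulation described above, the sketch below (a compact reimplementation of the idea, not the reference code from [19]) builds a library of constant, linear, and quadratic candidate terms from a simulated Lorenz trajectory and applies sequentially thresholded least squares as the sparsity-promoting regression; the system, library, and threshold are assumptions made for the example.

```python
# Minimal SINDy-style identification: sparse regression of dx/dt onto a term library.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

# Simulate trajectory data; derivatives are computed exactly here for clarity
# (in practice they would be estimated numerically from measurements).
t = np.arange(0.0, 10.0, 0.002)
sol = solve_ivp(lorenz, (t[0], t[-1]), [-8.0, 7.0, 27.0], t_eval=t, rtol=1e-9, atol=1e-9)
X = sol.y.T
dX = np.array([lorenz(0.0, x) for x in X])

def library(X):
    """Candidate terms: constant, linear, and quadratic monomials."""
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x, y, z,
                            x * x, x * y, x * z, y * y, y * z, z * z])

Theta = library(X)

def stlsq(Theta, dX, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares (the sparsity-promoting step)."""
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(n_iter):
        Xi[np.abs(Xi) < threshold] = 0.0
        for k in range(dX.shape[1]):
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)
    return Xi

Xi = stlsq(Theta, dX)
print(np.round(Xi, 2))   # the few nonzero rows recover the active Lorenz terms
```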
An improved version of conditioned time and frequency domain reverse path methods for nonlinear parameter estimation of MDOF systems
Published in Mechanics Based Design of Structures and Machines, 2023
Machine learning techniques are also employed for nonlinear system identification. Ondra, Sever, and Schwingshackl (2017) developed a method for the identification of structural nonlinearities using the Hilbert transform (HT) and neural networks. The nonlinear indicators they developed, based on the deviation between the HT of the frequency response function (FRF) and the FRF itself, are used as input features for nonlinear system identification in their work. Similarly, Prawin, Rao, and Sethi (2020) proposed a meta-SVM model for both localization and parameter estimation in systems with multiple, disproportionate, and varied types of nonlinearity, where the FRF is used as the input feature of the SVM-based identification technique. Although these ANN- and SVM-based nonlinear system identification techniques work fast irrespective of the complexity of the nonlinearities present in the system, they do not provide any mathematical model. Further, these techniques require the generation of complete training data encompassing all possible nonlinearities, with all their associated complexities and spatial locations. Incomplete training data may result in erroneous identification results. Generalization is another problem with these ANN and SVM methods, as the learned mapping is input-dependent, i.e., tied to the specific type of excitation force, the amplitude range of excitation used in training, a specific form of nonlinearity for each nonlinearity type, etc.
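To make the workflow concrete, here is an illustrative-only sketch (not the cited meta-SVM of Prawin, Rao, and Sethi): FRF-derived features, computed from simulated responses of systems with known nonlinear parameters, are used to train a support vector regressor that estimates the nonlinear parameter of a new system. The toy FRF model and feature choice are assumptions.

```python
# Toy SVM-based nonlinear parameter estimation using FRF magnitudes as features.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

def frf_features(knl, n_freq=64):
    """Crude FRF magnitude of an SDOF system with nonlinear stiffness knl (illustrative)."""
    w = np.linspace(0.1, 3.0, n_freq)
    return 1.0 / np.sqrt((1.0 - w**2 + 0.75 * knl)**2 + (0.05 * w)**2)

# Training data: each sample is one simulated system with a known nonlinear stiffness,
# i.e. the "complete training data" the excerpt says these methods require.
knl_train = rng.uniform(0.0, 1.0, 300)
X_train = np.array([frf_features(k) for k in knl_train])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, knl_train)

knl_test = np.array([0.2, 0.5, 0.8])
print(model.predict(np.array([frf_features(k) for k in knl_test])))
```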
Input-to-state stability for system identification with continuous-time Runge–Kutta neural networks
Published in International Journal of Control, 2023
Jonas Weigand, Michael Deflorian, Martin Ruskowski
Nonlinear system identification is a core discipline in control engineering (e.g. Nelles, 2001; Sjoberg et al., 1995). For more than 30 years, neural networks have been developed and widely applied for the identification of nonlinear dynamic systems (e.g. Ahmed et al., 2010; Chen et al., 1990; Deflorian et al., 2010; Haykin, 1990; Narendra & Parthasarathy, 1990; Weigand et al., 2020; Williams & Zipser, 1989). Many contributions focus on discrete-time neural networks. As pointed out by Y. J. Wang and Lin (1998), there are several problems in using discrete-time neural networks: larger approximation errors are induced by the assumed first-order discretisation, the long-term prediction accuracy usually leaves room for improvement, and the neural network can only predict the system behaviour at fixed time intervals. A variable sample time in training and application leads to complete model failure. This is mainly because feedforward networks and recurrent neural networks tend to learn the system states instead of the rates of change of the system states. Thereby, the discrete sample time is inseparably tied to the neural network weights.
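A minimal sketch of the continuous-time alternative discussed in the excerpt is given below (assumptions: PyTorch, a toy first-order system, and a plain RK4 integrator rather than the authors' exact architecture): the network learns the state change rate dx/dt = f(x, u), and a Runge–Kutta step integrates the learned vector field over the sample time h, so h enters the prediction explicitly instead of being baked into the weights.

```python
# Continuous-time neural network trained through an RK4 integration step.
import torch
import torch.nn as nn

class DerivativeNet(nn.Module):
    def __init__(self, n_state=1, n_input=1, n_hidden=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n_state + n_input, n_hidden), nn.Tanh(),
                               nn.Linear(n_hidden, n_state))

    def forward(self, x, u):
        return self.f(torch.cat([x, u], dim=-1))   # estimated dx/dt

def rk4_step(net, x, u, h):
    """Classical RK4 integration of the learned vector field over one sample time h."""
    k1 = net(x, u)
    k2 = net(x + 0.5 * h * k1, u)
    k3 = net(x + 0.5 * h * k2, u)
    k4 = net(x + h * k3, u)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy data from dx/dt = -x + tanh(u), sampled at step h; because h is an explicit
# argument of rk4_step, predictions at other sample times need no retraining.
torch.manual_seed(0)
N, h = 2000, 0.05
x = torch.zeros(N, 1)
u = torch.rand(N, 1) * 2 - 1
for k in range(N - 1):
    x[k + 1] = x[k] + h * (-x[k] + torch.tanh(u[k]))

net = DerivativeNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(300):
    opt.zero_grad()
    x_pred = rk4_step(net, x[:-1], u[:-1], h)      # one-step-ahead prediction
    loss = ((x_pred - x[1:]) ** 2).mean()
    loss.backward()
    opt.step()
print("training MSE:", loss.item())
```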
Enhanced algorithm for randomised model structure selection
Published in International Journal of Systems Science, 2022
L. P. Fagundes, A. S. Morais, L. C. Oliveira-Lopes, J. S. Morais
Nonlinear system identification has been studied over the past decades because most real-life systems of interest are nonlinear. In this nonlinear context, the Nonlinear AutoRegressive with eXogenous Inputs (NARX) models, introduced by Leontaritis and Billings (Leontaritis & Billings, 1985a, 1985b), play an important role and have many applications in the literature, such as the seismic behaviour of buttress dams (Palumbo & Piroddi, 2001), forecasting air pollution (Pisoni et al., 2009), chemical processes (Dong, 2019), and predicting gas emission in diesel engines (Alcan et al., 2019), among others.
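As a minimal illustration of the NARX model class (not taken from any of the cited works), the sketch below fits a small polynomial NARX model y(k) = F(y(k-1), u(k-1)) by ordinary least squares over a monomial regressor set; the simulated system and regressor choice are assumptions made for the example.

```python
# Polynomial NARX identification by ordinary least squares.
import numpy as np

rng = np.random.default_rng(3)
N = 1000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.5 * y[k-1] + 0.3 * u[k-1] + 0.2 * y[k-1] * u[k-1] + 0.01 * rng.standard_normal()

# Candidate NARX regressors (their selection is a design choice in practice).
Phi = np.column_stack([y[:-1], u[:-1], y[:-1]**2, u[:-1]**2, y[:-1] * u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(np.round(theta, 3))   # approximately [0.5, 0.3, 0, 0, 0.2], recovering the true terms
```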