Combining System Modeling and Machine Learning into Hybrid Ecosystem Modeling
Published in Anuj Karpatne, Ramakrishnan Kannan, Vipin Kumar, Knowledge-Guided Machine Learning, 2023
Markus Reichstein, Bernhard Ahrens, Basil Kraft, Gustau Camps-Valls, Nuno Carvalhais, Fabian Gans, Pierre Gentine, Alexander J. Winkler
For hybrid models, reconciling observations of fluxes and pools into a consistent framework that can extract scientific insights is currently a challenge. For universal differential equations, mostly differential equations without exogenous forcing have been explored (Rackauckas et al. 2020). Here, we have shown an example of a universal differential equation with exogenous forcing (temperature and moisture). It became evident that efficient automatic differentiation can become a computational bottleneck. Decomposing or "denoising" the exogenous forcing into a combination of automatically differentiable functions (sines and cosines) via a Fourier transform could be a path worth exploring for universal differential equations. This approach might also help counteract spectral bias in neural network training, since networks generally learn low frequencies faster than high ones (Rahaman et al. 2019).
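As an illustration of that idea, the following minimal sketch (not from the chapter) fits a synthetic daily temperature forcing with a truncated Fourier series, yielding a smooth surrogate built only from sines and cosines that is cheap to differentiate inside a universal differential equation. The data, the number of retained modes K, and the function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical daily temperature forcing: a seasonal cycle plus noise.
t = np.arange(365.0)                                   # day of year
temp = 10.0 + 8.0 * np.sin(2 * np.pi * t / 365) + np.random.normal(0.0, 2.0, t.size)

# Keep only the K dominant Fourier modes of the forcing (K is an arbitrary choice here).
K = 3
coeffs = np.fft.rfft(temp)
freqs = np.fft.rfftfreq(t.size, d=1.0)
keep = np.argsort(np.abs(coeffs))[::-1][:K]            # indices of the largest modes

def smooth_forcing(time):
    """Smooth, differentiable surrogate: a truncated Fourier series of the forcing."""
    time = np.asarray(time, dtype=float)
    out = np.zeros_like(time)
    for k in keep:
        scale = (1.0 if k == 0 else 2.0) / t.size      # the DC term is not doubled
        out += scale * (coeffs[k].real * np.cos(2 * np.pi * freqs[k] * time)
                        - coeffs[k].imag * np.sin(2 * np.pi * freqs[k] * time))
    return out

# The smooth surrogate could then replace the raw forcing on the ODE right-hand side.
print(smooth_forcing(t[:5]))
```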
Sensitivity Analysis
Published in Charles Yoe, Principles of Risk Analysis, 2019
The automatic differentiation (AD) technique is a mathematical method that relies on partial derivatives of outputs with respect to small changes in sensitive inputs to calculate local sensitivities. It is usually reserved for larger models. Because these models are not always well-behaved systems of equations, differentiation often relies on numerical techniques that can be time-consuming and difficult to apply, and that may yield inaccurate results. What makes mathematical AD techniques different is that they rely on precompilers that analyze the code of the model and then compile algorithms for computing first- or higher-order derivatives efficiently and accurately.
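A hedged sketch of what such AD-based local sensitivities look like in practice: the three-input risk model, its functional form, and the baseline values below are invented for illustration, and JAX stands in for a generic AD tool rather than any software discussed in the book.

```python
import jax
import jax.numpy as jnp

def model(params):
    """Illustrative risk model: a scalar output depending nonlinearly on three inputs."""
    rainfall, drainage, land_use = params
    return jnp.exp(0.1 * rainfall) * land_use / (1.0 + drainage)

# Baseline point at which local sensitivities are evaluated (made-up values).
baseline = jnp.array([50.0, 2.0, 0.7])

# Local sensitivities: partial derivatives of the output at the baseline point,
# computed by automatic differentiation rather than by finite-difference perturbations.
sensitivities = jax.grad(model)(baseline)
print(dict(zip(["rainfall", "drainage", "land_use"], sensitivities.tolist())))
```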
Machine Learning - A gentle introduction
Published in Nailong Zhang, A Tour of Data Science, 2020
It is worth distinguishing numerical differentiation, symbolic differentiation and automatic differentiation. Symbolic differentiation aims at finding the derivative of a given formula with respect to a specified variable. It takes a symbolic formula as input and also returns a symbolic formula. For example, by using symbolic differentiation we could simplify ∂(x² + y²)/∂x to 2x. Numerical differentiation is often used when the expression/formula of the function is unknown (or in a black box, for example, a computer program). Compared with symbolic differentiation, numerical differentiation and automatic differentiation try to evaluate the derivative of a function at a fixed point. For example, we may want to know the value of ∂f(x, y)/∂x when x = x₀, y = y₀. The most basic formula for numerical differentiation is lim h→0 (f(x + h) − f(x))/h. In implementation, this limit is approximated by setting h to a very small value, for example 1e-6. Automatic differentiation also aims at finding the numerical values of derivatives, but in a more efficient way based on the chain rule. However, unlike numerical differentiation, automatic differentiation requires the exact function formula/structure. Also, automatic differentiation does not need the approximation of the limit that is used in numerical differentiation. At the lowest level, it evaluates derivatives based on symbolic rules. Thus automatic differentiation is partly symbolic and partly numerical [2].
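The contrast can be made concrete with a small sketch for f(x, y) = x² + y²; SymPy and JAX are used here purely as examples of symbolic and automatic differentiation tools, not as tools discussed in the chapter.

```python
import sympy
import jax

# Symbolic differentiation: formula in, formula out.
x, y = sympy.symbols("x y")
print(sympy.diff(x**2 + y**2, x))          # 2*x

# Numerical differentiation: approximate the limit with a small h.
f = lambda x, y: x**2 + y**2
h = 1e-6
x0, y0 = 3.0, 4.0
print((f(x0 + h, y0) - f(x0, y0)) / h)     # roughly 6.000001

# Automatic differentiation: the exact derivative value at (x0, y0),
# obtained by composing derivative rules for elementary operations via the chain rule.
print(jax.grad(f, argnums=0)(x0, y0))      # 6.0
```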
Physics-Informed Neural Network with Fourier Features for Radiation Transport in Heterogeneous Media
Published in Nuclear Science and Engineering, 2023
Quincy A. Huhn, Mauricio E. Tano, Jean C. Ragusa
PiNNs require a method for calculating the residual of the PDEs under consideration. To do so, PiNNs leverage automatic differentiation to calculate the partial derivatives found in the expression of the PDE. Automatic differentiation is also used in the training of traditional data-driven feedforward networks. This section gives a brief overview of automatic differentiation; see Ref. 18 for additional details. The specific type of automatic differentiation used is reverse accumulation, which is composed of two steps: a forward pass and backpropagation. The forward pass is simply an evaluation of the neural network where the values of each neuron are saved to be used in the second step. When evaluating the forward pass for PiNNs, the values of the input must be saved as well. Once all of the relevant values are saved, it is then possible to perform the backpropagation stage.
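A minimal sketch of that idea, assuming a toy one-hidden-layer network with placeholder (untrained) weights: reverse-mode automatic differentiation returns the derivative of the network output with respect to the input coordinate, which is the ingredient PiNNs need to assemble PDE residuals. JAX is used here only as a stand-in AD framework.

```python
import jax
import jax.numpy as jnp

# Toy one-hidden-layer network u(x; params) with placeholder (untrained) weights.
params = {
    "w1": jnp.linspace(-1.0, 1.0, 16),
    "b1": jnp.zeros(16),
    "w2": jnp.linspace(1.0, -1.0, 16),
    "b2": 0.0,
}

def u(x, params):
    h = jnp.tanh(x * params["w1"] + params["b1"])     # forward pass, hidden layer
    return jnp.dot(h, params["w2"]) + params["b2"]    # scalar network output

# Reverse accumulation: the forward pass stores intermediate neuron values, and the
# backward pass (backpropagation) reuses them to return du/dx at a given point.
du_dx = jax.grad(u)            # derivative w.r.t. the input coordinate x
d2u_dx2 = jax.grad(du_dx)      # second derivative, as needed in many PDE residuals

x0 = 0.5                       # a collocation point
print(u(x0, params), du_dx(x0, params), d2u_dx2(x0, params))
```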
Generalized derivatives of computer programs
Published in Optimization Methods and Software, 2022
Matthew R. Billingsley, Paul I. Barton
Automatic differentiation refers to a collection of methods by which derivatives, or directional derivatives, of a function can be calculated without performing symbolic differentiation or using inexact approximations of derivatives [10]. In Khan and Barton [17], the vector forward mode of automatic differentiation is extended to a new method for calculating LD-derivatives. Two common methods for implementing automatic differentiation are operator overloading, where expressions and derivatives are evaluated simultaneously by overloading operators during program execution, and source transformation, where the source code is altered to include expression and derivative evaluation prior to program execution. The discussion in the remainder of this paper considers the latter.
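A minimal sketch of the operator-overloading approach, illustrated with scalar forward-mode dual numbers rather than the LD-derivative method developed in the paper: each overloaded operation propagates a value together with its derivative.

```python
class Dual:
    """Minimal forward-mode AD via operator overloading: every value carries its
    derivative, and each overloaded operation propagates both simultaneously."""

    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule applied as the expression is evaluated.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

# Directional derivative of f(x) = x*x + 3*x at x = 2, seeding the input direction with 1.
x = Dual(2.0, 1.0)
y = x * x + 3 * x
print(y.val, y.dot)   # 10.0 7.0
```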
On parameter estimation approaches for predicting disease transmission through optimization, deep learning and statistical inference methods
Published in Letters in Biomathematics, 2019
Maziar Raissi, Niloofar Ramezani, Padmanabhan Seshaiyer
It is worth emphasizing that automatic differentiation is different from, and in several respects superior to, numerical or symbolic differentiation – two commonly encountered techniques of computing derivatives. In its most basic description (Baydin et al., 2018), automatic differentiation relies on the fact that all numerical computations are ultimately compositions of a finite set of elementary operations for which derivatives are known. Combining the derivatives of the constituent operations through the chain rule gives the derivative of the overall composition. This allows accurate evaluation of derivatives at machine precision with ideal asymptotic efficiency and only a small constant factor of overhead. In particular, to compute the required derivatives we rely on TensorFlow (Abadi et al., 2016), a popular and relatively well-documented open-source software library for automatic differentiation and deep learning computations. In TensorFlow, before a model is run, its computational graph is defined statically rather than dynamically as, for instance, in PyTorch (Paszke et al., 2017). This is an important feature as it allows us to create and compile the computational graph for the physics-informed neural networks (9) only once and keep it fixed throughout the training procedure. This leads to a significant reduction in the computational cost of the proposed framework.
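A hedged sketch of the compile-once idea: the original work used TensorFlow 1.x static graphs, but a similar effect can be approximated in modern TensorFlow by tracing a training step once with tf.function and reusing it. The tiny data-fitting loss below merely stands in for the physics-informed loss and is not the authors' code.

```python
import tensorflow as tf

# Tiny surrogate network; a stand-in for the physics-informed network in the paper.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.build(input_shape=(None, 1))            # create the weights before tracing
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function                                  # traced/compiled once, then reused every step
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = tf.reduce_mean(tf.square(pred - y))   # placeholder for the PINN loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.uniform((64, 1))
y = tf.sin(3.0 * x)
for _ in range(200):
    loss = train_step(x, y)                   # the same compiled graph is reused
print(float(loss))
```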