DEM modeling of agricultural processes: an overview of recent projects
Published in Heinz Konietzky, Numerical Modeling in Micromechanics via Particle Methods, 2017
E. Tijskens, M. Van Zeebroeck, H. Ramon, J. De Baerdemaeker, T. Van Canneyt
A typical implementation of Equations 11 and 13 will generally require the user to provide not only the functions fc and gd, but also their gradients and Hessians. This means that the user has to code and debug 20 functions (2 functions, 2 × 3 first-order partial derivatives, and 2 × 6 second-order partial derivatives) rather than just 2. Although conceptually trivial, this procedure is error-prone and time-consuming, and in fact not necessary at all. Several techniques exist for obtaining partial derivatives from the function specification alone. Finite differencing, for example, is widely known, but it is subject to numerical error and becomes expensive when second-order partial derivatives are needed (Press & Teukolsky 1991). A less well-known technique is automatic differentiation (AD) (Griewank 2000, Corliss et al. 2002). Because differentiation is a mechanistic process, AD can augment code representing a function specification with code for evaluating its partial derivatives. The partial derivatives obtained are accurate to within machine precision. An implementation based on operator overloading (Tijskens et al. 2002a, Tijskens et al., in press, Tijskens et al., submitted), applied to the nonlinear equations above (Eqs. 11 and 13), is described in Tijskens et al. (2002b).
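The operator-overloading idea can be sketched with dual numbers: each arithmetic operation is overloaded to propagate a derivative alongside the value, so the user writes the function once and the first-order derivative comes for free. This is a minimal illustrative sketch in Python, not the implementation described in the cited work.

```python
# Minimal forward-mode AD via operator overloading (dual numbers).
# Illustrative sketch only -- not the library from the cited references.
class Dual:
    """Pairs a value with its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

# The user codes f once; the overloaded operators deliver df/dx as well.
def f(x):
    return x * x + 3 * x   # f(x) = x^2 + 3x, so f'(x) = 2x + 3

x = Dual(2.0, 1.0)         # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)        # 10.0 7.0
```

Second-order derivatives can be obtained the same way by nesting such types, which is exactly the kind of mechanical bookkeeping that is error-prone when done by hand.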
Path Planning and Collision Avoidance
Published in Marina Indri, Roberto Oboe, Mechatronics and Robotics, 2020
Hubert Gattringer, Andreas Müller, Klemens Springer
It is important to mention that the first two toolkits generate the required derivatives via algorithmic differentiation (AD). AD tools decompose the function of interest into elementary operations and can consequently also deliver derivative information [15]. Two implementation approaches exist for AD: operator overloading (see, e.g., the tools ADOL-C (utilized by MUSCOD-II) or CppAD) and source code transformation (see, e.g., the tools ADIC or CasADi).
Using automatic differentiation for compressive sensing in uncertainty quantification
Published in Optimization Methods and Software, 2018
Mu Wang, Guang Lin, Alex Pothen
AD is a technique that augments a computer program to evaluate the objective function as well as its gradient (and higher-order derivatives) [9]. AD decomposes the original program into a sequence of elemental function evaluations that reflects how the function is computed from the independent and intermediate variables. The chain rule of calculus is then applied to this sequence of elemental functions to compute derivatives. The major benefit of AD is that it evaluates derivatives to machine-precision accuracy with only a small constant factor of overhead relative to evaluating the objective function itself. AD has two major modes: forward mode and reverse mode. The forward mode evaluates a directional derivative of the objective function; the reverse mode evaluates the gradient of a scalar objective function. More details about AD can be found in the textbooks [9,13]; here we use it as an external tool. For this problem, the independent variables are the random variables, and the only dependent variable is the quantity of interest (QoI). The reverse mode of AD can therefore evaluate the gradient vector within a constant factor of the cost of one QoI evaluation, with the constant bounded by three [9,13].
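The reverse mode can be sketched as a tape of elemental operations followed by one backward sweep of the chain rule, which is why the gradient costs only a small constant multiple of one function evaluation regardless of how many independent variables there are. The following toy implementation is a hypothetical illustration, not the tool used in the paper.

```python
# Toy reverse-mode AD: record elemental operations, then run one
# backward chain-rule sweep to accumulate the gradient.
# Hypothetical sketch for illustration only.
class Var:
    def __init__(self, val):
        self.val, self.grad = val, 0.0
        self.parents = ()
        self._backward = lambda: None

    def __add__(self, other):
        out = Var(self.val + other.val)
        out.parents = (self, other)
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Var(self.val * other.val)
        out.parents = (self, other)
        def _backward():
            self.grad += other.val * out.grad
            other.grad += self.val * out.grad
        out._backward = _backward
        return out

def backward(out):
    # Topological order, then one reverse sweep: the cost is a small
    # constant multiple of the forward evaluation, independent of the
    # number of independent variables.
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for p in v.parents:
                visit(p)
            order.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(order):
        v._backward()

# Scalar "QoI" q(x1, x2) = x1*x2 + x1; gradient is (x2 + 1, x1).
x1, x2 = Var(3.0), Var(4.0)
q = x1 * x2 + x1
backward(q)
print(x1.grad, x2.grad)   # 5.0 3.0
```

One forward pass plus one backward pass yields the whole gradient, which is the property the excerpt appeals to when bounding the cost by a constant factor.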
A benchmark of selected algorithmic differentiation tools on some problems in computer vision and machine learning
Published in Optimization Methods and Software, 2018
Filip Srajer, Zuzana Kukelova, Andrew Fitzgibbon
Algorithmic differentiation (AD) is a set of methods for the automatic and exact computation of derivatives, given a definition, in source code, of the function to be differentiated. It includes automatic differentiation, in which derivatives are forward- and/or back-propagated through the chain rule. This is possible because even the most complicated functions are composed of elementary operations and functions such as addition, multiplication, logarithm and exponential.
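The decomposition into elementary operations can be made explicit by unrolling a function into its elemental steps and propagating a derivative through each one by the chain rule. The function below is an arbitrary example chosen for illustration; a finite-difference value is used only as a cross-check.

```python
# Hand-unrolled forward chain-rule sweep for f(x) = log(x) * exp(x),
# making the decomposition into elementary operations explicit.
import math

def f_and_df(x):
    # Elemental operations and their derivative propagation:
    v1 = math.log(x);  d1 = 1.0 / x             # d(log x)/dx
    v2 = math.exp(x);  d2 = math.exp(x)         # d(exp x)/dx
    v3 = v1 * v2;      d3 = d1 * v2 + v1 * d2   # product rule
    return v3, d3

val, der = f_and_df(2.0)

# Cross-check the exact derivative against a central finite difference:
h = 1e-6
fd = (f_and_df(2.0 + h)[0] - f_and_df(2.0 - h)[0]) / (2 * h)
print(abs(der - fd) < 1e-5)  # True: the chain-rule value matches
```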
Vertical trajectory planning: an optimal control approach for active suspension systems in autonomous vehicles
Published in Vehicle System Dynamics, 2022
The optimization problem is solved with the gradient-based solver IPOPT [38]. IPOPT is an interior-point solver commonly used for large-scale problems like the one in this paper. It was chosen for its good performance on nonconvex problems, its successful use by Gundlach for a time-optimal trajectory planner [34], and its easy availability through an open-source license. The whole problem is formulated for an efficient implementation in IPOPT, but it should also be applicable to other algorithms such as BARON [39] or Matlab's fmincon(). For seamless integration into existing development environments for chassis control, which commonly use Matlab Simulink, the implementation is done in Matlab and the optimiser is called through the Matlab interface from Bertolazzi [40]. The optimiser requires both the Jacobian and the Hessian of the constraints and cost function, which can be determined through different methods. A quasi-Newton method that calculates numerical derivatives is integrated in IPOPT; however, it does not take constant elements of the matrices into account, and its calculation time increases drastically with problem size. Another common method is algorithmic differentiation (AD), e.g. as offered by CasADi [41], which is much more efficient than the numerical approach. AD algorithms apply the common rules of differentiation at the source-code level to compute exact derivatives. The most efficient way to implement the gradient and Hessian is analytical calculation ‘by hand’. This approach also ensures maximum transparency, as the derivatives can be read directly from the code. Because real-time capability is needed for vehicle integration, this additional effort is taken and the derivatives are calculated manually. They are then compared to the results of IPOPT's quasi-Newton method to confirm validity.
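A common way to validate hand-coded derivatives, in the spirit of the check described above, is to compare the analytic Jacobian against central finite differences. The constraint function below is a hypothetical two-variable example, not the paper's suspension model.

```python
# Sanity check of hand-derived derivatives against finite differences.
# The constraint function g is a hypothetical illustration.
import math

def g(x):
    # Example constraint residuals for x = (x0, x1).
    return [x[0] * x[1] - 1.0,
            math.sin(x[0]) + x[1] ** 2]

def jac_analytic(x):
    # Hand-derived Jacobian dg_i/dx_j.
    return [[x[1], x[0]],
            [math.cos(x[0]), 2.0 * x[1]]]

def jac_fd(x, h=1e-6):
    # Central finite differences, one column per variable.
    n, m = len(x), len(g(x))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        gp, gm = g(xp), g(xm)
        for i in range(m):
            J[i][j] = (gp[i] - gm[i]) / (2 * h)
    return J

x = [0.8, 1.3]
Ja, Jn = jac_analytic(x), jac_fd(x)
err = max(abs(Ja[i][j] - Jn[i][j]) for i in range(2) for j in range(2))
print(err < 1e-6)  # True: hand-coded Jacobian matches the numerical one
```

Agreement to roughly the finite-difference truncation error is the usual acceptance criterion; a mismatch in one entry typically points directly at the mistyped derivative.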
In Matlab, on a notebook with an Intel i7-6820 @ 2.7 GHz, the calculation time did not exceed 100 ms for a problem size of N = 96 with a discretization of 1.5–2 m.