The fractional Laplacian in two dimensions
Published in C. Pozrikidis, The Fractional Laplacian, 2018
subject to the following definitions: the double sum is over a range of interest corresponding to Ω; the prime after the sums indicates omission of the singular terms arising when m = i and n = j; ∇² is the ordinary Laplacian operator, equal to the sum of the second partial derivatives with respect to x and y; and the length ϱ is the effective radius of a circular disc whose area is the same as that of a cell, πϱ² = Ac.
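Two of these definitions lend themselves to a short sketch: the primed double sum that skips the singular term m = i, n = j, and the effective radius ϱ = √(Ac/π) of a disc with the same area as a cell. The function names below are illustrative, not taken from the source.

```python
import numpy as np

def effective_radius(cell_area):
    """Effective radius rho of a circular disc whose area equals the
    cell area A_c: pi * rho**2 = A_c, so rho = sqrt(A_c / pi)."""
    return np.sqrt(cell_area / np.pi)

def primed_double_sum(f, i, j):
    """Double sum of f[m, n] over the whole grid, where the prime
    indicates omission of the singular term arising when m == i and n == j."""
    M, N = f.shape
    total = 0.0
    for m in range(M):
        for n in range(N):
            if m == i and n == j:
                continue  # the "prime": skip the singular term
            total += f[m, n]
    return total
```

For example, on a 3×3 grid of ones the primed sum about the centre counts the 8 remaining cells.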
Groundwater
Published in Stephen A. Thompson, Hydrology for Water Management, 2017
This is precisely how the hydraulic gradient was approximated in Equation (8.11). The curly derivative symbol (∂) represents a partial derivative. A partial derivative is used when a variable (in this case h) changes in more than one direction at the same time. Partial derivatives are interpreted just like ordinary derivatives. The partial derivative in Equation (8.38) means the rate of change in head h, with respect to direction x, when the rates of change in head in all other directions are held constant. An approximation of the second-order partial derivative of h with respect to x is found by ‘differencing’ Equation (8.38): ∂²h/∂x² ≈ {[(h_{i+1,j} − h_{i,j})/Δx] − [(h_{i,j} − h_{i−1,j})/Δx]}/Δx
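The differenced expression above simplifies to the familiar central-difference stencil (h_{i+1,j} − 2h_{i,j} + h_{i−1,j})/Δx². A minimal sketch of that approximation, assuming head values stored in a 2-D array indexed as h[i][j]:

```python
def d2h_dx2(h, i, j, dx):
    """Central-difference approximation of the second partial derivative
    of head h with respect to x at node (i, j):
    {[(h[i+1,j] - h[i,j]) / dx] - [(h[i,j] - h[i-1,j]) / dx]} / dx,
    which simplifies to (h[i+1,j] - 2*h[i,j] + h[i-1,j]) / dx**2."""
    return (h[i + 1][j] - 2.0 * h[i][j] + h[i - 1][j]) / dx**2
```

As a check, applying it to a head field that varies quadratically in x, h = x², recovers the exact second derivative of 2.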
Fundamental Concepts of Optimization
Published in Pedro Ponce Cruz, Arturo Molina Gutiérrez, Ricardo A. Ramírez-Mendoza, Efraín Méndez Flores, Alexandro Antonio Ortiz Espinoza, David Christopher Balderas Silva, ®, 2020
A critical point of a multivariable function f(x, y) is a point (x0, y0) in its domain where f is differentiable with fx(x0, y0) = 0 and fy(x0, y0) = 0. When the function is twice differentiable at a point, the Hessian matrix is defined by the second-order partial derivatives.
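For a function of two variables the Hessian is the 2×2 symmetric matrix [[f_xx, f_xy], [f_xy, f_yy]]. A minimal sketch that approximates it by central finite differences (the function name and step size are illustrative, not from the source):

```python
import numpy as np

def hessian(f, x, y, h=1e-4):
    """Approximate the Hessian [[f_xx, f_xy], [f_xy, f_yy]] of a twice-
    differentiable f(x, y) at (x, y) using central finite differences."""
    fxx = (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h**2
    # Mixed partial f_xy via the four-point central-difference formula.
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h**2)
    return np.array([[fxx, fxy], [fxy, fyy]])
```

For f(x, y) = x² + 3xy + y², the exact Hessian is [[2, 3], [3, 2]] everywhere, which the sketch reproduces to within discretisation error.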
A quantitative and covariational reasoning investigation of students’ interpretations of partial derivatives in different contexts
Published in International Journal of Mathematical Education in Science and Technology, 2023
Several studies have reported on student reasoning about several calculus topics at the undergraduate level. However, much of this research has focused on topics in univariate calculus and has not examined student reasoning about topics in multivariable calculus (e.g. Henderson & Britton, 2013; Rasmussen et al., 2014). Student reasoning about ordinary derivatives is one area that has received much attention in the research literature (cf., Asiala et al., 1997; Beichner, 1994; Berry & Nyman, 2003; Christensen & Thompson, 2012; Flynn et al., 2018; Habre & Abboud, 2006; Jones, 2017; Siyepu, 2013; Zandieh, 2000). An ordinary derivative (denoted by df/dx) describes how a real-valued function of a single independent variable changes with respect to the independent variable. A few studies have examined student reasoning about partial derivatives (cf., Bajracharya et al., 2019; Becker & Towns, 2012; Martínez-Planell et al., 2015; Thompson et al., 2012). A partial derivative (denoted by ∂f/∂x) describes how a real-valued function of two or more independent variables changes with respect to one of the independent variables by treating the other independent variable(s) as constants.
Quantitative and covariational reasoning opportunities provided by calculus textbooks: the case of the derivative
Published in International Journal of Mathematical Education in Science and Technology, 2022
The role of mathematics textbooks as an opportunity to learn mathematics is well documented in the research literature on the learning of K-12 mathematics. Some of this research has focused on students’ opportunities to learn topics such as linear functions and trigonometry (e.g. Wijaya et al., 2015), addition and subtraction of fractions (e.g. Alajmi, 2012; Charalambous et al., 2010), probability (e.g. Jones & Tarr, 2007), statistics (e.g. Pickle, 2012), reasoning and proof (e.g. Stylianides, 2009; Thompson et al., 2012), proportional reasoning (e.g. Dole & Shield, 2008), and deductive reasoning (e.g. Stacey & Vincent, 2009). Research on the opportunity to learn about the aforementioned topics or other topics provided by undergraduate mathematics textbooks is scarce, a gap in knowledge the present study seeks to address. To be specific, the present study examined the opportunity to learn about derivatives (ordinary derivatives and partial derivatives) provided by commonly used textbooks in the teaching of calculus at the undergraduate level in the United States. An ordinary derivative (denoted by df/dx) describes how a real-valued function of a single independent variable changes with respect to the independent variable. A partial derivative (denoted by ∂f/∂x) describes how a real-valued function of two or more independent variables changes with respect to one of the independent variables by treating the other independent variable(s) as a constant.
On prediction of slope failure time with the inverse velocity method
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2023
Jie Zhang, Hong-zeng Yao, Zi-peng Wang, Ya-dong Xue, Lu-lu Zhang
The Hessian matrix is a square matrix composed of the second-order partial derivatives of the objective function, describing the local curvature of the function (Brown 2014). In linear algebra, for a matrix A, the condition number κ(A) can be calculated as κ(A) = λmax(A)/λmin(A), where λmax(A) and λmin(A) represent the maximum and minimum eigenvalues of the matrix A, respectively. When κ(A) is relatively small, the matrix A is called a well-conditioned matrix; otherwise it is called an ill-conditioned matrix (e.g. Ciarlet 1989). According to Equation (8), the condition number of the Hessian matrix measures how much the curvature differs in each direction. A relatively large condition number implies that λmax is far larger than λmin, indicating that the curvature of the objective function in some directions is far greater than that in other directions. In such a case, there exists a relatively flat region, since the function changes much more slowly along some directions than it does along others (Pascanu et al. 2014). Once the search step of the optimisation falls into the flat region, the iteration may cease, because the large condition number can cause a loss of information through rounding errors, so that the value of the objective function is treated as a constant in the flat region. If the flat region is large, the search step from the initial point may easily fall into the flat region, making the termination point of the optimisation algorithm sensitive to the initial value.
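The eigenvalue-ratio definition above can be sketched in a few lines. This follows the source's formula κ(A) = λmax(A)/λmin(A), which presumes a symmetric matrix (such as a Hessian) with positive eigenvalues; the function name is illustrative.

```python
import numpy as np

def condition_number(A):
    """Condition number kappa(A) = lambda_max(A) / lambda_min(A),
    computed from the eigenvalues of a symmetric matrix A, e.g. a
    Hessian of second-order partial derivatives. Assumes the
    eigenvalues are positive, as at a strict local minimum."""
    eigvals = np.linalg.eigvalsh(A)  # eigenvalues in ascending order
    return eigvals[-1] / eigvals[0]
```

A well-conditioned example is diag(1, 2), with κ = 2; replacing the smallest eigenvalue by 10⁻⁶ drives κ to 10⁶, the ill-conditioned regime in which the objective is nearly flat along one direction.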