Data Conversion Process
Published in Michael Olorunfunmi Kolawole, Electronics, 2020
The accuracy of the SAR-ADC is determined by how accurately the binary-scaled capacitances are implemented. The sum of the capacitances C_{N−1} + C_{N−2} + ⋯ + C_0 defines the conversion range. The principal limitation of the charge-redistribution successive-approximation ADC is the matching of its capacitors. Mismatch causes the actual step widths to deviate from their ideal values, producing two errors: integral nonlinearity (INL) and differential nonlinearity (DNL). The INL error is analogous to the linearity error of an amplifier, and is defined as the maximum deviation of the converter's actual transfer characteristic from a straight line. The DNL error is the deviation of each step width between adjacent code transitions from the ideal width of one LSB, and is a measure of the converter's monotonicity. A converter is said to be monotonic if an increase in input value results in an increase in output value. Glitches can occur during changes in the output at major transitions. Both DACs and ADCs can be non-monotonic, but a more common result of excess DNL in ADCs is missing codes. Missing codes (or non-monotonicity) in an ADC are as objectionable as non-monotonicity in a DAC [11]. A DAC is non-monotonic if its transfer characteristic contains one or more localized maxima or minima [12]. In closed-loop applications, that is, in systems that utilize feedback, non-monotonicity can turn negative feedback into positive feedback. It is therefore critically important that DACs, especially, be monotonic.
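Both error measures are straightforward to compute once the converter's code-transition levels have been measured, for example with a ramp or histogram test. The sketch below is our illustration, not from the chapter; the helper name dnl_inl and the transition-level convention are assumptions.

```python
import numpy as np

def dnl_inl(transitions, v_min, v_max, n_bits):
    """Estimate DNL and INL (in LSB) from measured code-transition voltages.

    transitions: input voltages at which the output code increments,
    one per transition (length 2**n_bits - 1).
    """
    lsb = (v_max - v_min) / 2**n_bits             # ideal step width (1 LSB)
    steps = np.diff(transitions)                   # actual step widths
    dnl = steps / lsb - 1.0                        # deviation of each step from 1 LSB
    ideal = v_min + lsb * np.arange(1, 2**n_bits)  # ideal (straight-line) transitions
    inl = (transitions - ideal) / lsb              # deviation from the straight line
    return dnl, inl

# A DNL value of -1 LSB means a step width of zero, i.e. a missing code;
# dnl < -1 would mean a negative step width, i.e. non-monotonic behavior.
```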
Fourier Series
Published in Steven G. Krantz, Differential Equations, 2015
In fact, with a few more hypotheses, we may make the result even sharper. Recall that a function f is monotone increasing if x₁ ≤ x₂ implies f(x₁) ≤ f(x₂). The function is monotone decreasing if x₁ ≤ x₂ implies f(x₁) ≥ f(x₂). If the function is either monotone increasing or monotone decreasing then we just call it monotone. Now we have this result of Dirichlet:
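A concrete instance (our example, not Krantz's):

```latex
% f(x) = x^3 is monotone increasing and g(x) = e^{-x} is monotone
% decreasing on all of \mathbb{R}; h(x) = x^2 is monotone on neither,
% though it is monotone on each half-line.
x_1 \le x_2 \;\Longrightarrow\; x_1^3 \le x_2^3,
\qquad
x_1 \le x_2 \;\Longrightarrow\; e^{-x_1} \ge e^{-x_2}.
```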
Machine intelligence aware electricity theft detection for smart metering applications
Published in Waves in Random and Complex Media, 2023
Shoaib Munawar, Zeeshan Aslam Khan, Naveed Ishtiaq Chaudhary, Nadeem Javaid, Muhammad Asif Zahoor Raja
The sigmoid activation function is an S-shaped curve bounded between 0 and 1, and is used to model the probability of the output [33]. It is a differentiable and monotonic function, although its derivative is not monotonic. It is the logistic function σ(z) = 1/(1 + e^(−z)), and it tends to saturate and stall during training. To tackle this issue, the softmax function is used [34]. Softmax is a relative probability-based activation function [33]. It uses the values produced by all concatenated layers, that is, the input and hidden layers, to deduce a cumulative output probability. Mathematically, it is represented as softmax(z)ᵢ = e^(zᵢ) / Σⱼ e^(zⱼ) [35], where z denotes the values of the output-layer neurons. The normalized outputs are then interpreted as probabilities.
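A minimal NumPy sketch of the two activations (our illustration, not the authors' code; the max-subtraction in softmax is a standard numerical-stability trick, not something the paper specifies):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid: S-shaped, monotonic, outputs in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    """Relative, probability-based activation: exponentiates the
    output-layer values and normalizes them so they sum to 1."""
    shifted = z - np.max(z)        # subtract the max for numerical stability
    exp_z = np.exp(shifted)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])  # example output-layer values
print(sigmoid(logits))               # elementwise values in (0, 1)
print(softmax(logits))               # non-negative, sums to 1.0
```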
On the uniqueness of non-reducible multi-player control problems
Published in Optimization Methods and Software, 2021
Problems of this type arise during the process of solving (Pν) by applying a penalty or augmented Lagrange method. For the sake of simplicity we assume that […], where […] and K denotes the cone of non-negative continuous functions. Defining […], it is again the convexity of the objective functional that allows us to characterize the solution of the NEP via controls that solve the variational inequality […]. From Theorem 3.5 we know that the mapping is strongly monotone if Assumption 3.4 is satisfied. Furthermore, we know that the function is convex and its derivative is monotone. Hence, […] is strongly monotone and Theorem 3.5 can easily be adapted to that case.
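Strong monotonicity is exactly what makes such variational inequalities tractable numerically. As a hedged, finite-dimensional sketch (ours, not the paper's method): for VI(F, C), find u ∈ C with ⟨F(u), v − u⟩ ≥ 0 for all v ∈ C, the classical projection iteration u ← P_C(u − τF(u)) is a contraction when F is strongly monotone and Lipschitz and the step τ is small.

```python
import numpy as np

def solve_vi(F, project, u0, tau=0.1, tol=1e-10, max_iter=10_000):
    """Projection method for VI(F, C): find u in C with <F(u), v - u> >= 0
    for all v in C. Converges when F is strongly monotone and Lipschitz
    and tau is small enough that u -> P_C(u - tau*F(u)) is a contraction."""
    u = u0
    for _ in range(max_iter):
        u_next = project(u - tau * F(u))
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Example: F(u) = A u + b with A positive definite (hence strongly monotone),
# C the non-negative orthant (a finite-dimensional stand-in for the cone K).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -2.0])
u_star = solve_vi(lambda u: A @ u + b,
                  lambda u: np.maximum(u, 0.0),
                  u0=np.zeros(2))
print(u_star)  # converges to (0, 1) for this data
```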
A rewriting system for convex optimization problems
Published in Journal of Control and Decision, 2018
Akshay Agrawal, Robin Verschueren, Steven Diamond, Stephen Boyd
Monotone transformations of objective and constraints. Composing any monotone increasing function with the objective function of a problem yields an equivalent problem; so does transforming any number of constraints by applying any monotone increasing function to both sides. The retrieval method for this reduction is essentially a no-op, as the feasible and optimal sets for the two problems are identical. This reduction has been employed for centuries – squaring the Euclidean norm when it appears as an objective function to render it differentiable is, at least historically, standard mathematical practice.
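As a concrete illustration, using CVXPY, the modeling library associated with the authors (the code itself is our sketch, not from the paper): squaring the Euclidean-norm objective applies a monotone increasing transform to its non-negative values, so both problems below share the same optimal set and retrieval is the identity.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

x = cp.Variable(5)
residual = A @ x - b

# Original problem: minimize the Euclidean norm of the residual.
p1 = cp.Problem(cp.Minimize(cp.norm(residual, 2)))
p1.solve()
x_norm = x.value.copy()

# Transformed problem: square the objective. t -> t**2 is monotone
# increasing on t >= 0, so the minimizer is unchanged (and now smooth).
p2 = cp.Problem(cp.Minimize(cp.sum_squares(residual)))
p2.solve()

print(np.allclose(x_norm, x.value, atol=1e-5))  # same minimizer; retrieval is a no-op
```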