Classification task
Published in Benny Raphael, Construction and Building Automation, 2023
The dual form is a quadratic optimization problem subject to linear constraints. For such problems, it can be shown that the local optimum coincides with the global optimum; that is, multiple local optima are not present. This permits the use of very efficient local search techniques, which are guaranteed to obtain the globally optimal solution. The simplest method is gradient descent, since the dual form is easily differentiable. However, each step must satisfy the equality constraint resulting from the KKT condition (Eq 9.33); that is, each step in the downhill direction should keep the sum of the Lagrange multipliers for the positive points equal to the sum for the negative points. Furthermore, since there are as many optimization variables as data points, the sparseness of the solution should be exploited effectively to solve large problems involving thousands of data points. An efficient algorithm that makes use of these properties is Sequential Minimal Optimization (SMO). See Cristianini and Shawe-Taylor (2000) for details.
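The core analytic step of SMO can be sketched as follows: optimize one pair of Lagrange multipliers at a time, so that the equality constraint Σ αᵢyᵢ = 0 is preserved exactly at every step. The Python sketch below shows only this pair update under an assumed kernel matrix `K` and box constraint `C`; the pair-selection heuristics and bias update of full SMO are omitted, and all names are illustrative rather than taken from the source.

```python
import numpy as np

def smo_pair_update(alpha, i, j, y, C, K):
    """One SMO-style analytic update of the pair (alpha[i], alpha[j]).

    Maximizes the SVM dual objective over this pair only, keeping
    sum(alpha * y) == 0 and 0 <= alpha <= C satisfied throughout.
    """
    # Decision values f(x_k) = sum_l alpha_l y_l K(l, k), and the errors
    f = (alpha * y) @ K
    E_i, E_j = f[i] - y[i], f[j] - y[j]

    # Curvature of the objective along the constraint line
    eta = K[i, i] + K[j, j] - 2 * K[i, j]
    if eta <= 0:
        return alpha  # degenerate direction; a real solver skips this pair

    # Unclipped optimum for alpha[j]
    a_j_new = alpha[j] + y[j] * (E_i - E_j) / eta

    # Clip to the segment [L, H] implied by the box and equality constraints
    if y[i] != y[j]:
        L = max(0.0, alpha[j] - alpha[i])
        H = min(C, C + alpha[j] - alpha[i])
    else:
        L = max(0.0, alpha[i] + alpha[j] - C)
        H = min(C, alpha[i] + alpha[j])
    a_j_new = float(np.clip(a_j_new, L, H))

    # Move alpha[i] in the opposite direction so sum(alpha * y) stays zero
    a_i_new = alpha[i] + y[i] * y[j] * (alpha[j] - a_j_new)

    alpha = alpha.copy()
    alpha[i], alpha[j] = a_i_new, a_j_new
    return alpha
```

Because each update touches only two multipliers, most multipliers stay at zero, which is exactly the sparseness the excerpt mentions.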
Prototype optimization
Published in Fuewen Frank Liou, Rapid Prototyping and Engineering Applications, 2019
As shown in Figure 10.2, a function has only one global optimal (maximum or minimum) solution, but it may have several local optima. A global maximum (minimum) is therefore also a local maximum (minimum), but a local maximum (minimum) is not necessarily a global maximum (minimum). Although a global optimum is preferred in most cases, it is sometimes not feasible to prove that a locally optimal solution is also globally optimal. A point is Pareto optimal if the only way to improve any of its components is by worsening other components; this is essentially the definition of a local optimum, and such a point is a valid solution to the modeled design problem. If a Pareto optimum is found but it cannot be validated as the global optimum, a decision must be made as to whether more solutions need to be found. This decision is based on the assessment criteria discussed previously, as well as on the resources available for finding additional solutions. For engineering applications, a feasible and reasonable Pareto optimum is sometimes a sufficient solution; at other times, several Pareto optimal solutions are needed in order to find one that is closer to the global optimum.
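The dependence on the starting point can be demonstrated with plain gradient descent on a one-dimensional function that has two minima: started in different basins, the same algorithm converges to different local optima, only one of which is global. The function below is a made-up illustration, not an example from the source.

```python
def f(x):
    """Quartic with two minima: a local one near x = +0.97
    and the (slightly deeper) global one near x = -1.02."""
    return x**4 - 2 * x**2 + 0.2 * x

def grad(x):
    return 4 * x**3 - 4 * x + 0.2

def gradient_descent(x, lr=0.01, steps=2000):
    """Plain fixed-step gradient descent: converges to the
    stationary point of whichever basin x starts in."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_right = gradient_descent(1.0)    # local minimum on the right
x_left = gradient_descent(-1.0)    # global minimum on the left
```

Both results satisfy the first-order optimality condition grad(x) ≈ 0, yet f(x_left) < f(x_right): a local search alone cannot tell which of the two it has found.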
Genetic Algorithms
Published in Anand Nayyar, Dac-Nhuong Le, Nhu Gia Nguyen, Advances in Swarm Intelligence for Optimizing Problems in Computer Science, 2018
Sandeep Kumar, Sanjay Jain, Harish Sharma
Global optimization methods concentrate on finding the best solution among all the local optima. Designing a global optimization technique is a very tedious task, because no specific process or algorithm is available for this design task, and the criteria for optimum results are also not fixed. The literature contains a large number of heuristics and meta-heuristics for solving non-linear optimization problems. The approaches that presently exist for solving non-linear global optimization problems can be roughly categorized into two classes: deterministic and probabilistic methods. Deterministic strategies guarantee the global optimum. Probabilistic methods do not guarantee the optimum, but they provide a solution close to it. This is achieved by assuming that better solutions lie in the proximity of good solutions already found in the search space, which holds for most real-world problems. Because these probabilistic methods use random components to produce fluctuations, they are also referred to as stochastic algorithms. Within this class, a balanced approach between exploration of the search space and exploitation of the best feasible solutions found so far is considered to be the most successful.
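The exploration/exploitation balance of such stochastic methods can be sketched with a minimal genetic algorithm on a multimodal test function: tournament selection and crossover exploit good solutions already found, while random mutation keeps exploring the search space. All parameter values below (population size, mutation rate, bounds) are illustrative choices, not taken from the source.

```python
import math
import random

def fitness(x):
    """1-D Rastrigin function: many local minima, global minimum f(0) = 0."""
    return x * x - 10 * math.cos(2 * math.pi * x) + 10

def genetic_algorithm(pop_size=40, generations=200, bounds=(-5.12, 5.12),
                      mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: exploit the better of two random individuals
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) < fitness(b) else b
        elite = min(pop, key=fitness)      # elitism: never lose the best found
        children = [elite]
        while len(children) < pop_size:
            # Blend crossover of two selected parents
            w = rng.random()
            child = w * select() + (1 - w) * select()
            # Gaussian mutation: random exploration of the search space
            if rng.random() < mutation_rate:
                child += rng.gauss(0.0, 0.5)
            children.append(min(hi, max(lo, child)))
        pop = children
    return min(pop, key=fitness)
```

There is no guarantee the result is the global optimum, but with enough generations the returned solution is typically close to it, which is exactly the trade-off the excerpt describes.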
Quasi-Oppositional African Vultures Optimization-Based PIλDn Plus PIλ Controller for Frequency Control of an Interlinked Hybrid Power System
Published in Electric Power Components and Systems, 2023
Tejavath Veerendar, Deepak Kumar, Akhilesh Kumar Gupta
Therefore, while designing an effective LFC strategy, the critical challenges in handling the aforementioned issues are:
- The considered LFC should provide improved dynamic performance for the RER- and DG-incorporated test systems in terms of transient and steady-state characteristics.
- It should exhibit robust performance under non-linearities and parametric uncertainties of the system.
- If an optimization-based controller is employed, selecting a suitable optimization technique for tuning the controller's parameters is critical. Further, the considered algorithm must have a fast convergence speed, and the chosen optimization must avoid local optima and premature convergence.
A Comprehensive Review on Stochastic Optimal Power Flow Problems and Solution Methodologies
Published in IETE Technical Review, 2023
Ankur Maheshwari, Yog Raj Sood, Supriya Jaiswal
Feasible solution sets can generally be classified into three categories, as illustrated in Figure 2.
- Local optimum: a solution that is the best in a particular region of the search space but may not be the global best solution. In other words, the algorithm has found a solution that is optimal for a local area but not necessarily for the entire search space.
- Sub-optimum: a solution that is not optimal but still provides some improvement over the initial conditions. It is not necessarily the best possible solution, but it may be a good approximation of the optimal solution.
- Global optimum: the best possible solution for the entire search space, i.e. the solution that optimizes the objective function over all possible values of the decision variables. Finding the global optimum is often the primary goal of optimization problems.
Resnet-Unet considering Patches (RUP) network to solve the problem of patches due to shadows in extracting building top information
Published in Journal of Spatial Science, 2023
Dongliang Yang, Zichen Liu, Dejun Feng, Yakun Xie, Xudong Song, Ziqin Feng
The training process of a convolutional neural network model is also the process of updating the model parameters. The Adam algorithm and the stochastic gradient descent (SGD) algorithm are commonly used parameter optimisation algorithms during training.
(1) The Adam algorithm (Kingma and Ba 2014), also known as the adaptive moment estimation optimiser, combines first-order and second-order moment estimates of the gradient to calculate the update step size. The algorithm has high computational efficiency and low memory requirements, and after bias correction each iteration's learning rate lies within a certain interval, so the parameters remain relatively stable.
(2) The SGD algorithm (Fjellström et al. 2022) uses only one randomly selected sample to estimate the gradient and updates the weight parameters with this estimate. Its calculation speed is fast, but the high-frequency parameter updates may cause the objective function value to oscillate. It converges easily to a local optimum, and in some cases it may be trapped at a saddle point. Moreover, the update direction depends entirely on the current batch, making the algorithm very unstable.
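The moment combination described in point (1) can be made concrete with a minimal sketch of a single Adam update for one scalar parameter, following the update rule of Kingma and Ba (2014); the variable names and default hyper-parameter values are the commonly used ones, shown here for illustration.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (Kingma and Ba 2014).

    m and v are the running first- and second-moment estimates of the
    gradient; t is the 1-based step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentred variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction: m and v start at 0
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter adaptive step: roughly bounded in magnitude by lr,
    # which is the "certain interval" the excerpt refers to
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

Dividing m_hat by the square root of v_hat normalises the step, so each parameter moves by at most roughly the learning rate per iteration regardless of the raw gradient magnitude.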