Online optimization algorithms
Published in Xiaobiao Huang, Beam-based Correction and Optimization for Accelerators, 2019
Online optimization is similar to ordinary mathematical optimization in that it looks for the maximum or minimum of the objective function(s) within a certain parameter space. We consider only the minimization problem, since any maximization problem can be turned into minimization by flipping the sign of the objective function. Unlike the ordinary cases, the objective function is not given in an analytic form or calculated through a computer program. Instead, the function is evaluated through measurements on a machine. The relevant operating conditions of the machine are controlled through the input variables; all other conditions either have no impact on the objective function or remain unchanged during the course of the optimization. In other words, a set of input variables uniquely determines the values of the objective functions, apart from the inevitable random measurement errors. Knowing the working principle of the machine is not essential to online optimization. The system to be optimized can be considered as a black box, as illustrated in Figure 7.1.
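The black-box setting described above can be sketched in a few lines: the optimizer only queries noisy measurements and never sees the machine's internal formula. This is a minimal illustration, not the book's method; the "machine" function, the noise level, and the accept-if-improved random-walk search are all hypothetical stand-ins.

```python
import random

def machine_measure(x, noise=0.01):
    """Hypothetical black-box 'machine': returns a noisy measurement of an
    unknown objective. The optimizer never sees this formula."""
    true_value = (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
    return true_value + random.gauss(0.0, noise)

def random_walk_minimize(measure, x0, step=0.2, iters=500, seed=0):
    """Minimal online optimizer: perturb the input variables at random and
    keep a candidate only if its measured objective improves."""
    random.seed(seed)
    x, best = list(x0), measure(x0)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        y = measure(cand)
        if y < best:  # accept only improving measurements
            x, best = cand, y
    return x, best

x_opt, f_opt = random_walk_minimize(machine_measure, [0.0, 0.0])
```

Because each evaluation is a physical measurement, such methods are judged by how few machine queries they need and how robust they are to the measurement noise.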
Dynamic Profiling and Optimization Methodologies for Sensor Networks
Published in Ioanis Nikolaidis, Krzysztof Iniewski, Building Sensor Networks, 2017
Ann Gordon-Ross, Arslan Munir, Susan Lysecky, Roman Lysecky, Ashish Shenoy, Jeff Hiner
For comparison purposes, we implemented a simulated annealing (SA)–based algorithm, our greedy online optimization algorithm (GD) (which leverages intelligent initial parameter-value selection, exploration ordering, and parameter arrangement), and several other greedy online algorithm variations (Table 1.2) in C/C++. First, we evaluated our methodology and algorithms with a desktop implementation to reduce the analysis time and study the feasibility of our algorithms for a dynamic environment. We compared our results with SA to provide relative comparisons of greedy algorithms with another heuristic algorithm. We compared GD results with different greedy-algorithm variations (Table 1.2) to provide an insight into how initial parameter-value settings, exploration ordering, and parameter arrangement affect the final operating state quality. We normalized the objective-function value (corresponding to the operating state) attained by the algorithms with respect to the optimal solution obtained using an exhaustive search. We analyzed the relative complexity of the algorithms by measuring the execution time and data memory requirements. Note that for brevity, our presented results are a subset of the greedy algorithms listed in Table 1.2, the application domains, and the design spaces. However, we evaluated all the greedy algorithms and application domains, and the subset presented in this section is representative of all greedy algorithms and application-domain trends and characteristics.
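The normalization step described above — dividing the objective-function value a heuristic attains by the optimum from an exhaustive search — can be illustrated with a toy greedy exploration. Everything here is a hypothetical stand-in: the parameter space, the objective, and the one-parameter-at-a-time greedy order are assumptions for illustration, not the authors' C/C++ implementation.

```python
import itertools

# Hypothetical sensor-node design space: each tunable parameter has a few settings.
SPACE = {
    "voltage":   [1.8, 2.7, 3.3],   # V
    "frequency": [4, 8, 16],        # MHz
    "duty":      [0.1, 0.5, 1.0],   # duty cycle
}

def objective(state):
    """Hypothetical objective (higher is better): trades performance against
    energy, standing in for the weighted metric an operating state is scored by."""
    v, f, d = state["voltage"], state["frequency"], state["duty"]
    perf = f * d
    energy = v * v * f * d
    return perf / (1.0 + 0.1 * energy)

def exhaustive_best():
    """Optimal objective value via exhaustive search over the design space."""
    keys = list(SPACE)
    return max(objective(dict(zip(keys, combo)))
               for combo in itertools.product(*SPACE.values()))

def greedy(order):
    """Greedy exploration: tune one parameter at a time in the given order,
    keeping the best setting found for each before moving on."""
    state = {k: vals[0] for k, vals in SPACE.items()}  # initial parameter values
    for k in order:
        state[k] = max(SPACE[k], key=lambda v: objective({**state, k: v}))
    return objective(state)

best = exhaustive_best()
gd = greedy(["voltage", "frequency", "duty"])
print("normalized GD quality:", gd / best)
```

Running `greedy` with different `order` arguments shows how exploration ordering can change the final operating-state quality, which is the effect the variations in Table 1.2 are designed to probe.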
Active defect discovery: A human-in-the-loop learning method
Published in IISE Transactions, 2023
The procedure described above falls under the framework of online optimization. Online optimization is an iterative game against a potential adversary, in which decision vectors x_t are selected from some constrained set X. At each round t, the game proceeds as follows: (i) the adversary selects a function f_t; (ii) the learner suffers a loss f_t(x_t); (iii) the learner selects the next vector x_{t+1} ∈ X. In our problem, the adversary is the greedy policy that selects the top-ranked instance. Typically, the performance of online optimization is measured by the accumulated regret, Regret_T = Σ_{t=1}^{T} f_t(x_t) − Σ_{t=1}^{T} f_t(x*), where x* is the best fixed decision, that is, the minimizer of Σ_t f_t(x) over X obtained by assuming the f_t's are known in advance.
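The round structure and the regret measure can be made concrete with a small numeric sketch. This uses online gradient descent on squared losses as an illustrative algorithm; the loss family, the adversary (random targets), and all parameter values are assumptions for illustration, not the paper's setup.

```python
import random

def regret_of_ogd(T=200, eta=0.1, seed=0):
    """Play T rounds of online gradient descent on losses f_t(x) = (x - z_t)^2,
    where the 'adversary' picks z_t at random in [0, 1]. Returns
    Regret_T = sum_t f_t(x_t) - min_x sum_t f_t(x)."""
    random.seed(seed)
    x, losses, zs = 0.0, [], []
    for _ in range(T):
        z = random.random()           # (i) adversary selects f_t via its target z_t
        losses.append((x - z) ** 2)   # (ii) suffer the loss f_t(x_t)
        x -= eta * 2 * (x - z)        # (iii) select the next vector by a gradient step
        x = min(max(x, 0.0), 1.0)     #      projected back onto the feasible set [0, 1]
        zs.append(z)
    x_star = sum(zs) / len(zs)        # best fixed decision in hindsight (mean of targets)
    best = sum((x_star - z) ** 2 for z in zs)
    return sum(losses) - best

print(regret_of_ogd())
```

A good online algorithm keeps this quantity sublinear in T, so that the average per-round loss approaches that of the best fixed decision chosen with all the f_t's known in advance.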