Triple Steps for Verifying Chemical Reaction Based on Deep Whale Optimization Algorithm (VCR-WOA)
Published in Amit Kumar Tyagi, Ajith Abraham, Recurrent Neural Networks, 2023
Samaher Al-Janabi, Ayad Alkaim, G. AL-Taleby
Finding the alternative that is most cost-effective or feasible under the given constraints is achieved by maximizing desirable factors and minimizing undesirable ones [13, 14]. Maximization means seeking the best or optimum effect or consequence without reference to risk or benefit. In practice, optimization is constrained by a lack of complete knowledge and by the limited time available to determine what data are at hand. Optimization problems can be split into two groups according to whether the variables are continuous or discrete. A problem with continuous variables is a continuous optimization problem [15, 16], in which an optimal value must be derived from a continuous function; such problems may be constrained and multimodal [7, 17]. A problem with discrete variables is a discrete optimization problem, in which an object such as an integer, permutation, or graph must be found from a countable set. The next part discusses two well-known optimization techniques: multilevel coordinate search (MCS) and the whale optimization algorithm (WOA) [18].
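The continuous/discrete split can be sketched with two toy problems in Python (a minimal illustration; the objective function, cost matrix, and step sizes below are my own examples, not taken from the chapter). The continuous case searches a real-valued function; the discrete case searches a countable set of permutations.

```python
import itertools

# Continuous optimization: minimize f(x) = (x - 2)^2 over the reals
# with plain gradient descent (learning rate and step count are illustrative).
def grad_descent(f_grad, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * f_grad(x)
    return x

x_star = grad_descent(lambda x: 2 * (x - 2), x0=0.0)  # converges near 2.0

# Discrete optimization: choose the permutation (a member of a countable set)
# minimizing the total cost of a toy 3x3 assignment problem.
cost = [[4, 2, 8], [4, 3, 7], [3, 1, 6]]
best = min(itertools.permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
```

The key contrast: the continuous problem is solved by following local slope information, while the discrete one requires searching a combinatorial set, for which gradient information does not exist.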
Exploiting the Flexibility Value of Virtual Power Plants through Market Participation in Smart Energy Communities
Published in Ehsan Heydarian-Forushani, Hassan Haes Alhelou, Seifeddine Ben Elghali, Virtual Power Plant Solution for Future Smart Energy Communities, 2023
Georgios Skaltsis, Stylianos Zikos, Elpiniki Makri, Christos Timplalexis, Dimosthenis Ioannidis, Dimitrios Tzovaras
Many studies model the mathematical problem as a mixed-integer linear program, owing to the fast execution times this formulation affords. However, some authors formulate the VPP problem with nonlinear constraints and, due to the nonlinearity, apply several techniques to reduce the chance of converging to locally optimal solutions. For discrete optimization problems, a widely used algorithm is branch and bound [16]. The branch-and-bound method recursively divides the search space into smaller spaces, called branches, using estimated bounds to prune the number of candidate solutions. Nevertheless, the resulting tree may become enormous, in which case the available memory is exhausted. Dynamic programming is another optimization method, which breaks a complex problem down into a group of simpler sub-problems, solves each of them exactly once, and stores their solutions; the optimal solution is then assembled from the sub-problems' outcomes.
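The dynamic-programming idea — solve each sub-problem once, cache its solution, and combine the results — can be sketched on a toy 0/1 knapsack instance (the item values, weights, and capacity below are illustrative, not from the chapter):

```python
from functools import lru_cache

# Hypothetical instance: three items with values, weights, and a capacity limit.
values  = [60, 100, 120]
weights = [10, 20, 30]
CAP = 50

@lru_cache(maxsize=None)
def best(i, cap):
    """Optimal value using items i.. with remaining capacity `cap`.
    Each (i, cap) sub-problem is solved once and cached."""
    if i == len(values) or cap == 0:
        return 0
    skip = best(i + 1, cap)                  # sub-problem: leave item i out
    if weights[i] > cap:
        return skip
    take = values[i] + best(i + 1, cap - weights[i])  # sub-problem: take item i
    return max(skip, take)

print(best(0, CAP))  # 220: items with weights 20 and 30 fit exactly
```

Here `lru_cache` is the "store their solution" step: without it the recursion would resolve the same (item, capacity) pairs repeatedly.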
Harmony Search
Published in Nazmul Siddique, Hojjat Adeli, Nature-Inspired Computing, 2017
When the problem under consideration contains integer decision variables, a discrete optimization algorithm is preferred to handle the problem effectively. Askarzadeh (2013a,b) proposed a discrete variant of the HSA, which can effectively solve such optimization problems. Initially, Nh feasible solutions are generated for the HM. A new harmony is then produced by the following pseudocode:

  If (r1 > HMCR)
      xnew(k) = x′ ∈ {X}, ∀k = 1 to Nh
  else
      xnew(k) = hm′ ∈ {HM}
      If (r2 < PAR)
          xnew(k) = xnew(k) + rw

where xnew(k) is the improvised harmony, x′ ∈ {X} is randomly selected from the feasible subset of integer numbers {X}, hm′ ∈ {HM} is a value randomly selected from the HM, and r1 and r2 are uniformly distributed random numbers within the interval [0, 1]. The parameter rw is defined as

  rw = +1 if r3 < 0.5, and rw = −1 otherwise,

where r3 is likewise a uniformly distributed random number in [0, 1].
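The improvisation rule above can be sketched in Python (a minimal illustration; the memory size, the HMCR/PAR rates, the toy objective, and the clamping of pitch-adjusted values back into the feasible range are my assumptions, not Askarzadeh's settings):

```python
import random

def discrete_hs(f, feasible, n_hm=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    """Discrete harmony search sketch following the improvisation pseudocode.
    `feasible[k]` is the feasible integer set {X} for variable k."""
    rng = random.Random(seed)
    dim = len(feasible)
    # Initialize the harmony memory (HM) with n_hm random feasible solutions.
    hm = [[rng.choice(feasible[k]) for k in range(dim)] for _ in range(n_hm)]
    for _ in range(iters):
        new = []
        for k in range(dim):
            if rng.random() > hmcr:          # r1 > HMCR: random feasible value
                x = rng.choice(feasible[k])
            else:                            # otherwise: draw from harmony memory
                x = rng.choice(hm)[k]
                if rng.random() < par:       # r2 < PAR: pitch adjust by rw = +/-1
                    x += 1 if rng.random() < 0.5 else -1
                    # Assumption: clamp back into the feasible range.
                    x = min(max(x, min(feasible[k])), max(feasible[k]))
            new.append(x)
        # Greedy memory update: replace the worst stored harmony if improved.
        worst = max(range(n_hm), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):
            hm[worst] = new
    return min(hm, key=f)

# Toy usage: minimize squared deviation from the target integers (3, 7).
sol = discrete_hs(lambda x: (x[0] - 3) ** 2 + (x[1] - 7) ** 2,
                  feasible=[list(range(11)), list(range(11))])
```

Because every candidate value is drawn from the integer set `feasible[k]` (or nudged by ±1 within it), all improvised harmonies remain feasible by construction.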
The ASSISTANT project: AI for high level decisions in manufacturing
Published in International Journal of Production Research, 2023
G. Castañé, A. Dolgui, N. Kousi, B. Meyers, S. Thevenin, E. Vyhmeister, P-O. Östberg
Discrete optimisation relies on models that are typically executed by solvers, e.g. Mixed Integer Programming or Constraint Programming solvers. Technology-independent modelling languages such as MiniZinc (Nethercote et al. 2007) incorporate global constraints as building blocks (Beldiceanu, Carlsson, and Rampon 2012) to produce concise models. While models are usually built by experts, a recent line of research aims to acquire them from data (Bessiere et al. 2017; Kumar, Teso, and De Raedt 2019), where global constraints are used as a learning bias so that models can be acquired from a limited amount of data (Beldiceanu et al. 2016).
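The role of a global constraint can be illustrated with a minimal backtracking search in Python (a toy sketch, not MiniZinc or any real CP solver; the domains and the pure-Python `alldifferent` check are my own example). One constraint captures a whole pattern — here, "all variables take distinct values" — and lets the search prune partial assignments early:

```python
def all_different(assignment):
    """Global constraint: every assigned value must be distinct."""
    return len(assignment) == len(set(assignment))

def solve(domains, assignment=None):
    """Minimal backtracking search: assign variables left to right,
    pruning any partial assignment that violates all_different."""
    if assignment is None:
        assignment = []
    i = len(assignment)
    if i == len(domains):
        return assignment                 # all variables assigned consistently
    for v in domains[i]:
        cand = assignment + [v]
        if all_different(cand):           # prune inconsistent branches early
            result = solve(domains, cand)
            if result is not None:
                return result
    return None                           # no feasible assignment in this branch

# Toy model: three variables with the integer domains below.
solution = solve([[1, 2], [1, 2], [1, 2, 3]])
```

A real CP solver would additionally propagate the global constraint to shrink the remaining domains (not just test it), which is what makes global constraints effective building blocks in concise models.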