Design Flows
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC Implementation, Circuit Design, and Process Technology, 2018
Leon Stok, David Hathaway, Kurt Keutzer, David Chinnery
Lazy evaluation means that an incremental tool should defer processing, to the greatest extent possible, until the results are actually needed. This can save considerable processing time that would otherwise be spent computing results that are never used. For example, consider a logic synthesis application making a series of individual disconnections and reconnections of pins to accomplish some logic transformation. If an incremental timing analyzer updates the timing results after each of these individual actions, it will recompute time values many times for many or all of the same points in the design, even though only the last values computed are actually used.
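The deferral described above can be sketched with a toy incremental analyzer (the `TimingGraph` class and its methods are hypothetical, not from the chapter): netlist edits only invalidate cached results, and arrival times are recomputed once, on query, rather than after every individual reconnection.

```python
# Hypothetical sketch of lazy evaluation in an incremental timing analyzer.
# Edits mark cached results stale; no timing work happens until a query.
class TimingGraph:
    def __init__(self):
        self.delay = {}      # node -> gate delay
        self.fanin = {}      # node -> list of predecessor nodes
        self._cache = {}     # node -> cached arrival time

    def connect(self, node, fanin, delay):
        self.fanin[node] = list(fanin)
        self.delay[node] = delay
        self._cache.clear()  # lazily invalidate; defer recomputation

    def arrival(self, node):
        # Compute on demand, memoizing results for reuse.
        if node not in self._cache:
            preds = self.fanin.get(node, [])
            base = max((self.arrival(p) for p in preds), default=0.0)
            self._cache[node] = base + self.delay.get(node, 0.0)
        return self._cache[node]

g = TimingGraph()
g.connect("a", [], 1.0)
g.connect("b", ["a"], 2.0)
g.connect("b", ["a"], 3.0)   # a second edit; still no timing work done
print(g.arrival("b"))        # timing computed only now -> 4.0
```

A series of pin reconnections thus costs one cache invalidation each, and the full arrival-time computation runs once, using only the final connectivity.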
Automatic Code Parallelization and Optimization
Published in David R. Martinez, Robert A. Bond, M. Michael Vai, High Performance Embedded Computing Handbook, 2018
pMapper uses lazy evaluation to collect as much information about the program as possible prior to performing any program optimization. A lazy evaluation approach delays any computation until it is absolutely necessary, such as when the result is required by the user. Until the result is necessary, this approach simply builds up a dependency graph, or parse tree, of the program and stores information about array sizes and dependencies. Once a result is required, the relevant portion of the dependency graph is extracted and analyzed.
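A minimal sketch of this idea (hypothetical classes, not the actual pMapper API): each operation only records its inputs and result shape in a dependency graph, and analysis walks the relevant portion of that graph when a result is finally demanded.

```python
# Hypothetical pMapper-style lazy evaluation: operations build a
# dependency graph with recorded array shapes; nothing is computed
# until a result is required.
class Node:
    def __init__(self, op, shape, deps=()):
        self.op, self.shape, self.deps = op, shape, deps

    def __add__(self, other):
        # Elementwise add preserves shape; just record the dependency.
        return Node("add", self.shape, (self, other))

    def matmul(self, other):
        # Shape is known from the operands without computing anything.
        return Node("matmul", (self.shape[0], other.shape[1]), (self, other))

def analyze(root):
    # Extract and order only the portion of the graph the result needs.
    seen, order = set(), []
    def visit(n):
        if id(n) in seen:
            return
        seen.add(id(n))
        for d in n.deps:
            visit(d)
        order.append(n.op)
    visit(root)
    return order

a = Node("input", (100, 50))
b = Node("input", (50, 200))
c = a.matmul(b) + Node("input", (100, 200))
print(c.shape)      # (100, 200) -- known without any computation
print(analyze(c))   # ['input', 'input', 'matmul', 'input', 'add']
```

Deferring in this way lets an optimizer see the whole program (shapes, dependencies) before committing to a mapping, which is the point of pMapper's approach.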
More on R/Python Programming
Published in Nailong Zhang, A Tour of Data Science, 2020
R is a lazy programming language, since it uses lazy evaluation by default [13]. A lazy evaluation strategy delays the evaluation of an expression until its value is needed. When an expression in R is evaluated, it generally follows an outermost reduction order. Python, in contrast, is not a lazy language: its expressions follow an innermost reduction order. The code snippets below illustrate innermost versus outermost reduction.
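As a sketch of the contrast in Python alone (the `loud`, `first_strict`, and `first_lazy` names are illustrative, not from the book): Python reduces arguments before the call (innermost), while wrapping arguments in zero-argument lambdas simulates R's outermost, evaluate-only-if-used behavior.

```python
def loud(x):
    # Side effect makes the evaluation order visible.
    print(f"evaluating {x}")
    return x

def first_strict(a, b):
    return a         # b was already evaluated before the call began

def first_lazy(a, b):
    return a()       # b() is never called, so b is never evaluated

first_strict(loud(1), loud(2))                # prints "evaluating 1" then "evaluating 2"
first_lazy(lambda: loud(1), lambda: loud(2))  # prints only "evaluating 1"
```

R achieves the lazy behavior automatically: function arguments are promises, forced only when the function body actually uses them.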
Reactive scheduling approach for solving a realistic flexible job shop scheduling problem
Published in International Journal of Production Research, 2021
B. Mihoubi, B. Bouzouia, M. Gaham
In this context, Hildebrandt and Branke (2015) introduce the concept of phenotypic characterisation to improve the convergence speed of GP evolving dispatching rules in dynamic job shop scheduling, where fitness evaluations are expensive. Their nearest-neighbour surrogate model uses the full fitness evaluations of the last two generations to predict the fitness of a new individual, saving up to 80% of the fitness evaluation time compared to the standard GP model. Person, Grimm, and Ng (2008) propose a hybrid GA and Evolution Strategies (ES) optimisation based on a steady-state design using both DES and ANN. The proposed algorithm is tested successfully on two real-world problems in the manufacturing domain, and a comparison with a corresponding algorithm without a surrogate model indicates that surrogates can be very efficient in simulation-based optimisation of complex problems. Nasiri, Yazdanparast, and Jolai (2017) propose a simulation-based real-time scheduling and dispatching rule for a non-preemptive open shop with stochastic ready times, minimising the mean waiting time of jobs. Taguchi Design of Experiments (DOE) is employed to define experimental scenarios using the validated DES model. The results of each experimental scenario are then extracted and used as training data for the ANN to find its optimum structure, and the trained ANN is used to estimate the response variables of other possible scenarios. Two ANNs are used in the experiments: BPNN and RBF neural networks. Data Envelopment Analysis (DEA) serves as an empirical measure of the decision-making unit. Results show that the response variables of the proposed composite dispatching rule are superior to the known dispatching rules of the literature; however, the proposed approach fails to collect reliable and exact data, and therefore to obtain acceptable results.
Zhou, Yang, and Huang (2019) propose three hyper-heuristic methods to automatically design Scheduling Policies (SPs), comprising job sequencing rules and machine assignment rules, for solving the dynamic FJSSP: cooperative coevolution GP with two sub-populations, GP with two sub-trees, and GEP with two sub-chromosomes. They apply three types of fitness evaluation to the evolved scheduling policies at different stages of the algorithm: lazy evaluation, full evaluation, and surrogate-assisted evaluation. The learning process of the proposed methods demonstrates that the surrogate-assisted GP methods accelerate the evolutionary process and improve the quality of the evolved SPs without a significant increase in SP length. Moreover, the evolved SPs generated by the surrogate-assisted GP show superior performance compared with existing rules in the literature.