High-Level Synthesis
Published in Luciano Lavagno, Igor L. Markov, Grant Martin, Louis K. Scheffer, Electronic Design Automation for IC System Design, Verification, and Testing, 2017
Felice Balarin, Alex Kondratyev, Yosinori Watanabe
The second type of decision made by HLS tools is to define the hardware resources used to implement the behavior. We often refer to this type of design decision as resource allocation. Resources are of two kinds: computational resources and data storage resources. Computational resources implement the operations given in the input behavior, such as additions, multiplications, comparisons, or compositions of these. Data storage resources determine how data are retained; computational resources access them to retrieve the inputs of a computation or to write its outputs. Data storage resources could simply be wires, or ports if the data are at the inputs or outputs of the components, or registers if the data need to remain intact across multiple states of the FSMs, or some sort of memory that keeps data and provides a particular way to access them. We describe how HLS tools decide among these kinds of data storage resources in the succeeding sections, again starting from the simplest case.
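As an illustration only (not taken from the chapter), the following Python sketch mimics such an allocation step: each operation is mapped to a computational resource, and each produced value is kept in a wire or a register depending on whether it must survive across FSM states. The state/last-use model and all names are hypothetical.

```python
# Toy resource-allocation pass: operations -> functional units, values -> storage kinds.
from dataclasses import dataclass

@dataclass
class Op:
    name: str          # e.g. "add", "mul", "cmp"
    result: str        # name of the value the operation produces
    state: int         # FSM state in which the operation executes

def allocate(ops, last_use):
    """Assign each op a functional-unit kind and each value a storage kind.

    last_use maps a value name to the FSM state in which it is last read;
    a value consumed in a later state than it is produced must be kept in a
    register, otherwise a wire suffices.
    """
    unit_for = {"add": "adder", "mul": "multiplier", "cmp": "comparator"}
    comp_alloc, storage_alloc = {}, {}
    for op in ops:
        comp_alloc[op.result] = unit_for.get(op.name, "alu")
        crosses_state = last_use.get(op.result, op.state) > op.state
        storage_alloc[op.result] = "register" if crosses_state else "wire"
    return comp_alloc, storage_alloc

# t1 is produced in state 0 and read in state 1 -> register; t2 stays within one state -> wire.
ops = [Op("mul", "t1", 0), Op("add", "t2", 1)]
print(allocate(ops, last_use={"t1": 1, "t2": 1}))
```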
L
Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
load balancing: the process of trying to distribute work evenly among multiple computational resources.
load break device: any switch, such as a circuit breaker or sectionalizer, capable of disconnecting a power line under load.
load buffer: a buffer that temporarily holds memory-load (i.e., memory-read) requests.
load bypass: a read (or load) request that bypasses a previously issued write (store) request. Read requests stall a processor, whereas writes do not; therefore, high-performance architectures permit load bypass, typically implemented using write buffers.
load center: the geographic point within a load area, used in system calculations, at which the entire load could be concentrated without affecting the performance of the power system.
load flow study: See power flow study.
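As a small illustration of the load-balancing entry above (a generic sketch, not from the dictionary), the following Python snippet assigns tasks greedily to the least-loaded of several workers:

```python
import heapq

def balance(task_costs, n_workers):
    """Greedy least-loaded assignment: each task goes to the worker with the
    smallest accumulated load, a common way to spread work evenly."""
    heap = [(0.0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    assignment = {}
    for task, cost in enumerate(task_costs):
        load, worker = heapq.heappop(heap)
        assignment[task] = worker
        heapq.heappush(heap, (load + cost, worker))
    return assignment

print(balance([4, 2, 7, 1, 3], n_workers=2))  # -> {0: 0, 1: 1, 2: 1, 3: 0, 4: 0}
```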
Introduction to Distributed Real-Time Mixed-Criticality Systems
Published in Hamidreza Ahmadian, Roman Obermaisser, Jon Perez, Distributed Real-Time Architecture for Mixed-Criticality Systems, 2018
This book presents an in-depth explanation of the virtualization technologies at the three integration levels, including hypervisors, networks-on-a-chip, off-chip networks, and memories. The management and virtualization of the cores are the purpose of operating systems and hypervisors. A hypervisor establishes partitions, which serve as protected execution environments for the execution of functions. Hypervisors virtualize the computational resources and permit the coexistence of different guest operating systems. In addition, multi-core hypervisors enable the virtualization of the cores and allow application functions to be abstracted from the actual processing hardware. Thereby, hypervisors also decouple the number of software functions from the number of cores.
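To make the decoupling concrete, the toy Python sketch below time-multiplexes more partitions than there are physical cores. The static round-robin schedule and the partition names are illustrative assumptions, not the scheduling scheme of any particular hypervisor.

```python
from itertools import cycle

def build_schedule(partitions, n_cores, slot_ms=10):
    """Toy static schedule: more partitions than cores is fine because partitions
    are time-multiplexed; each entry is (core, start time in ms, partition)."""
    schedule = []
    cores = cycle(range(n_cores))
    for slot, part in enumerate(partitions):
        schedule.append((next(cores), (slot // n_cores) * slot_ms, part))
    return schedule

# Five application partitions mapped onto two physical cores (hypothetical names).
for core, t, part in build_schedule(["flight_ctrl", "logging", "hmi", "diag", "comms"], n_cores=2):
    print(f"core {core} @ {t} ms -> {part}")
```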
Region-based approximation in approximate dynamic programming
Published in International Journal of Control, 2022
Tohid Sardarmehni, Xingyong Song
The endeavours in choosing the best function approximators can be divided into two different groups. The first group includes works that use non-parametric function approximators such as kernels and generalised radial basis function kernels (Rosenfeld et al., 2019). The second group uses parametric function approximators such as linear regression models (Heydari, 2018), nonlinear regression models (Radac & Precup, 2018; Sardarmehni & Heydari, 2019), and deep learning models (Buşoniu et al., 2018; Kim et al., 2018; Zeng et al., 2019). In non-parametric function approximation, the structure of the function approximator is updated automatically by the training algorithm, which alleviates the need for trial and error. However, the size of non-parametric models grows with the number of training patterns, and training can be time-consuming for a large set of training samples. In parametric function approximation, on the other hand, the user chooses the number of parameters to be adjusted through training. As a rule of thumb in parametric function approximation, better approximation precision can be achieved with nonlinear regression or deep learning models. However, training these sophisticated models is time-consuming and usually demands a considerable amount of computational resources.
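The contrast can be illustrated with a short NumPy sketch (illustrative only, unrelated to the cited works): a parametric cubic model keeps a fixed number of weights, whereas a Gaussian-kernel regressor must retain every training sample, so its size grows with the training set.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(200)   # unknown function to approximate

# Parametric: the user fixes the number of adjustable weights (here a cubic polynomial).
Phi = np.vander(x, N=4)                      # features [x^3, x^2, x, 1]
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # 4 parameters regardless of data size

# Non-parametric: Gaussian-kernel regression; the "model" keeps all 200 samples,
# so its size grows with the training set.
def kernel_predict(xq, x_train, y_train, bandwidth=0.3):
    K = np.exp(-((xq[:, None] - x_train[None, :]) ** 2) / (2 * bandwidth ** 2))
    return (K @ y_train) / K.sum(axis=1)

xq = np.linspace(-2, 2, 5)
print(np.vander(xq, N=4) @ w)                # parametric prediction
print(kernel_predict(xq, x, y))              # non-parametric prediction
```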
Application of ANN-PSO algorithm based on FDM numerical modelling for back analysis of EPB TBM tunneling parameters
Published in European Journal of Environmental and Civil Engineering, 2022
Leila Nikakhtar, Shokrollah Zare, Hossein Mirzaei Nasirabad, Behnam Ferdosi
Mechanized tunneling is a complicated construction process that involves the interaction between the tunnel and the surrounding environment, the installation of support measures at the tunnel face and the tail void, and so on. Therefore, for safe construction and the prevention of failure, a valid forecast of the tunneling effects combined with timely control of the process is required. Numerical simulation can be applied as a reliable tool to predict the effects of the tunneling process at both the design and construction stages. The use of numerical simulation models in tunneling has increased since the early 1980s with the development of modern computing technology and significant advances in computational structural mechanics (Chakeri et al., 2011; Chakeri & Ünver, 2014; Cheng et al., 2019; Do, 2014; Ercelebi et al., 2011; Kasper & Meschke, 2004; Lai et al., 2020; Su et al., 2014; Yang & Li, 2012). However, these models require a large amount of computational resources.
Scanning the Issue
Published in IETE Journal of Research, 2022
The paper, entitled “An Optimal Time-Based Resource Allocation for Biomedical Workflow Applications in Cloud,” addresses the problem of the large volumes of data produced by biomedical workflow applications and their scheduling through cloud computing. It presents scheduling algorithms for efficient utilization of computational resources. The performance of the algorithms is evaluated on real-world biomedical workflow applications and compared with existing methods.
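The paper's own algorithm is not reproduced here; as a generic illustration of cloud workflow scheduling, the Python sketch below places each task on the virtual machine where it would finish earliest. The VM speeds, task runtimes, and the earliest-finish-time heuristic itself are assumptions for illustration only.

```python
def schedule_tasks(task_runtimes, vm_speeds):
    """Generic earliest-finish-time heuristic (not the paper's algorithm):
    each task is placed on the VM where it would finish soonest."""
    vm_free_at = [0.0] * len(vm_speeds)          # when each VM becomes idle
    placement = []
    for task, runtime in enumerate(task_runtimes):
        finish_times = [vm_free_at[v] + runtime / vm_speeds[v] for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=finish_times.__getitem__)
        vm_free_at[best] = finish_times[best]
        placement.append((task, best, vm_free_at[best]))
    return placement

# Ten tasks on three VMs with different relative speeds (hypothetical numbers).
for task, vm, finish in schedule_tasks([5, 3, 8, 2, 6, 4, 7, 1, 9, 2], vm_speeds=[1.0, 1.5, 0.5]):
    print(f"task {task} -> vm {vm}, finishes at t={finish:.1f}")
```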