Design Closure
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC Implementation, Circuit Design, and Process Technology, 2018
In addition to these logical transformations, power/performance trade-offs can be made by using frequency scaling or voltage scaling. In frequency scaling, portions of the design that can run more slowly are segregated into more power-efficient, lower-frequency clock domains, while more performance-critical logic is assigned to higher-frequency, and therefore higher-power, domains. Voltage islands allow a similar power/performance trade-off by assigning less performance-critical logic to a lower-voltage “island.” Using a lower supply voltage saves both active and static power at the cost of additional delay. Voltage islands also require level-shifting logic to translate logic levels between circuits running at different voltages. The granularity of voltage islands needs to be chosen carefully to ensure that the benefits of their implementation outweigh their performance, area, and power overhead.
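To make the voltage-island trade-off concrete, the sketch below applies the usual first-order CMOS relations: switching power scales as activity · C · V² · f, and gate delay grows roughly as V/(V − Vt)² as the supply is lowered (an alpha-power-law approximation). The supply, threshold, capacitance, activity, and frequency values are illustrative assumptions, not figures from the text.

```python
# First-order sketch of the voltage-island trade-off described above.
# All numbers are illustrative assumptions, not values from the source.

def dynamic_power(activity, c_eff, vdd, freq):
    """Switching power: P ~ activity * C_eff * Vdd^2 * f."""
    return activity * c_eff * vdd**2 * freq

def relative_delay(vdd, vth=0.35, a=2.0):
    """Alpha-power-law delay trend: delay ~ Vdd / (Vdd - Vth)^a."""
    return vdd / (vdd - vth) ** a

nominal = {"vdd": 1.0, "freq": 1.0e9}   # high-voltage, high-frequency domain
island  = {"vdd": 0.8, "freq": 0.5e9}   # lower-voltage, lower-frequency island

p_nom = dynamic_power(0.2, 1e-9, nominal["vdd"], nominal["freq"])
p_isl = dynamic_power(0.2, 1e-9, island["vdd"], island["freq"])

print(f"switching-power saving: {1 - p_isl / p_nom:.0%}")   # ~68% lower
print(f"delay penalty: {relative_delay(island['vdd']) / relative_delay(nominal['vdd']):.2f}x")
```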
The Integrated Circuit Design Process and Electronic Design Automation
Published in Louis Scheffer, Luciano Lavagno, Grant Martin, EDA for IC System Design, Verification, and Testing, 2018
Robert Damiano, Raul Camposano
A design can have part of its logic clock-gated by using logic to enable a bank of registers; the logic driven by those registers remains quiescent until the clock-gating logic enables them. Latches at the inputs can isolate parts of a design that implement operations (e.g., an arithmetic logic unit (ALU)) when their results are not needed for correct functionality, thus preventing unnecessary switching. Voltage islands help resolve timing vs. power conflicts. If part of a design is timing-critical, a higher voltage can reduce its delay. By partitioning the design into voltage islands, one can use a lower voltage in all but the most timing-critical parts of the design. An interesting further development is dynamic voltage/frequency scaling, which consists of scaling the supply voltage and the clock speed during operation to save power or temporarily increase performance.
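As a rough illustration of what clock gating buys, the sketch below estimates the dynamic power of a register bank and its local clock buffers when they toggle only during the fraction of cycles in which the enable is asserted. The per-block power figures and the enable duty cycle are hypothetical assumptions, not data from the chapter.

```python
# Hedged first-order estimate of the savings from clock gating a register bank.
# The enable duty cycle and per-block power numbers are illustrative assumptions.

def gated_dynamic_power(p_block_mw, p_clock_tree_mw, enable_duty):
    """Dynamic power when the block and its local clock buffers only toggle
    during the fraction of cycles in which the enable is asserted."""
    return enable_duty * (p_block_mw + p_clock_tree_mw)

p_always_on = 10.0 + 3.0                       # mW: register/logic switching + local clock tree
p_gated = gated_dynamic_power(10.0, 3.0, enable_duty=0.15)

print(f"ungated: {p_always_on:.1f} mW, gated: {p_gated:.1f} mW "
      f"({1 - p_gated / p_always_on:.0%} dynamic-power reduction)")
```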
The Evolving and Expanding Synergy Between Moore’s Law and the Internet-of-Things
Published in Lambrechts Wynand, Sinha Saurabh, Abdallah Jassem, Prinsloo Jaco, Extending Moore’s Law through Advanced Semiconductor Design and Processing Techniques, 2018
Lambrechts Wynand, Sinha Saurabh
The most effective method to reduce the energy consumption of a task or instruction in a microprocessor core is to lower the operating voltage, although there are several limitations to this approach. To improve efficiency, and in particular to reduce leakage energy in a processing core, clock frequency scaling should also be considered, since increasing the clock frequency reduces the time to complete a task, and vice versa, depending on the instantaneous requirements of the circuit or system (Pinckney et al. 2012). IoT processing cores can be equipped with dynamic voltage and frequency scaling (DVFS) capabilities to enhance their efficiency and dynamically scale their performance based on immediate requirements (Henkel et al. 2017). For a core to support a specific operating frequency under DVFS, its supply voltage must be kept above a predetermined minimum value. A higher frequency requirement translates to a higher minimum voltage; conversely, at a constant operating voltage, a core should be operated below a specified maximum frequency. This relationship occurs because the amount of energy stored in the tank circuit of an oscillator, E_stored, typically a combination of inductance, resistance and capacitance, is given by
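A standard expression for the instantaneous energy stored in an LC tank, offered here as a plausible completion rather than the source's exact formula, is E_stored = (1/2)·L·I² + (1/2)·C·V², where L and C are the tank inductance and capacitance and I and V are the instantaneous inductor current and capacitor voltage. The point relevant to DVFS is that the stored, and hence dissipated, energy grows with the square of the supply voltage.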
Efficient resource management techniques in cloud computing environment: a review and discussion
Published in International Journal of Computers and Applications, 2019
Frederic Nzanywayingoma, Yang Yang
According to [39], Dynamic Component Deactivation (DCD), Dynamic Performance Scaling (DPS), and Dynamic Voltage and Frequency Scaling (DVFS) are among the suggested dynamic power management techniques. The DPS technique automatically adjusts performance in proportion to power consumption. DVFS and DCD are applied to individual computer components such as the CPU, memory, disk, and network interface, as well as in power-aware virtualization platforms such as KVM, VMware solutions, and the Xen hypervisor. Applying DVFS can decrease the power consumption of a computing resource significantly. The technique was first used in portable and laptop systems to conserve battery power, and it has since been implemented in server chipsets. Lowering the CPU frequency may yield power and energy savings but may also degrade application performance. Therefore, to maximize energy efficiency while meeting SLA constraints, scheduling algorithms are used to determine a CPU operating frequency that still meets application deadlines. These scheduling algorithms have to consider both cost and energy factors in their decision-making [40].
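To illustrate the frequency-selection step, the sketch below picks the lowest available CPU frequency that still meets a task deadline, under a simple cycles/frequency execution-time model and a cubic power-versus-frequency assumption. The frequency list, cycle count, deadline, and energy model are hypothetical values, not parameters from [39] or [40].

```python
# Hedged sketch: pick the lowest CPU frequency that still meets a deadline,
# assuming execution time = cycles / frequency and dynamic power ~ f^3
# (voltage scaled together with frequency). All numbers are illustrative.

FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 3.0]           # available DVFS operating points

def pick_frequency(cycles, deadline_s, freqs=FREQS_GHZ):
    for f in sorted(freqs):                      # try slowest (cheapest) setting first
        if cycles / (f * 1e9) <= deadline_s:
            return f
    return None                                  # deadline cannot be met at any frequency

def relative_energy(f_ghz, f_max=max(FREQS_GHZ)):
    """Energy relative to running flat-out: with P ~ f^3 and t ~ 1/f, E ~ f^2."""
    return (f_ghz / f_max) ** 2

f = pick_frequency(cycles=3e9, deadline_s=2.0)
print(f"chosen frequency: {f} GHz, relative energy vs. max speed: {relative_energy(f):.2f}")
```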
Data analytics for energy-efficient clouds: design, implementation and evaluation
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Albino Altomare, Eugenio Cesario, Andrea Vinci
In [17], an approach to power-efficient resource management of Web servers is presented that satisfies a fixed SLA (response time) while balancing load. In detail, the authors propose two power-saving techniques: (i) switching computing nodes on and off and (ii) Dynamic Voltage and Frequency Scaling (DVFS). The main idea of the policy is to estimate the total CPU frequency required to provide the necessary response time, determine the optimal number of physical nodes, and set a proportional frequency on all the nodes. A similar technique has been proposed in [18], where load balancing is handled by an external system driven by a centralised algorithm.
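The sketch below illustrates the general shape of such a policy: estimate an aggregate CPU frequency from the offered load and a response-time target, decide how many nodes to power on, and split the frequency evenly across them. The queueing-style estimate, node capacity, and numbers are assumptions for illustration, not the actual algorithm from [17] or [18].

```python
import math

# Hedged sketch of a node on/off + DVFS policy in the spirit described above.
# The M/M/1-style response-time estimate and all numbers are illustrative
# assumptions, not the policy from the cited papers.

NODE_MAX_GHZ = 3.0                        # per-node maximum CPU frequency (assumed)
CYCLES_PER_REQUEST = 2e7                  # CPU cycles per request (assumed)

def required_total_ghz(arrival_rate, target_response_s):
    """Aggregate CPU capacity so that an M/M/1-style response time
    1 / (mu - lambda) stays within the target."""
    mu = arrival_rate + 1.0 / target_response_s    # required service rate (req/s)
    return mu * CYCLES_PER_REQUEST / 1e9           # convert to GHz of CPU capacity

def plan(arrival_rate, target_response_s):
    total_ghz = required_total_ghz(arrival_rate, target_response_s)
    nodes = max(1, math.ceil(total_ghz / NODE_MAX_GHZ))   # nodes to power on
    per_node_ghz = total_ghz / nodes                       # DVFS setting per node
    return nodes, per_node_ghz

nodes, f = plan(arrival_rate=400.0, target_response_s=0.05)
print(f"power on {nodes} node(s) at ~{f:.2f} GHz each")
```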