Green Cloud Computing Based Future Communication Systems
Published in Gurjit Kaur, Akanksha Srivastava, Green Communication Technologies for Future Networks, 2023
Third level: At this level, data center power consumption is addressed through the deployment of current power-efficient servers and processors. Minimizing the energy consumption of the processors can lower the energy use of the IT infrastructure to a large extent. Several energy-managed server lines are available on the market from vendors such as Intel and Qualcomm, and they deliver better performance per watt. These new server models introduce techniques such as clock gating and power gating, which reduce CPU clock activity and switch off idle blocks, respectively. Higher energy efficiency and computing speed can also be attained by using multi-core processors, which in turn require software able to exploit multiple cores if the system is to be power-efficient. This has led to the use of virtualization and consolidation technologies to minimize the energy needs of IT systems.
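On the software side, the operating system exposes per-core power-management state that such energy-aware servers rely on. The following minimal C sketch (an illustration added here, assuming a Linux host with the cpufreq sysfs interface; it is not part of the excerpt) simply reads the scaling governor and current frequency of core 0:

#include <stdio.h>

/* Illustrative only: print the cpufreq governor and current frequency of
 * CPU core 0 via the Linux sysfs interface (present on typical Linux
 * systems with cpufreq enabled; adjust the core index as needed). */
static void print_file(const char *label, const char *path) {
    char buf[128];
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fgets(buf, sizeof buf, f) != NULL)
            printf("%s: %s", label, buf);
        fclose(f);
    }
}

int main(void) {
    print_file("governor",
               "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
    print_file("current frequency (kHz)",
               "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    return 0;
}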
Automatic Code Parallelization and Optimization
Published in David R. Martinez, Robert A. Bond, M. Michael Vai, High Performance Embedded Computing Handbook, 2018
Over the past decade, parallel processing has become increasingly prevalent. Desktop processors are manufactured with multiple cores (Intel), and commodity cluster systems have become commonplace. The IBM Cell Broadband Engine architecture contains eight processors for computation and one general-purpose processor (IBM). The trend toward multicore processors, or multiple processing elements on a single chip, is growing as more hardware companies, research laboratories, and government organizations are investing in multicore processor development. As an example, in February 2007 Intel announced a prototype for an 80-core architecture (Markoff 2007). The motivation for these emerging processor architectures is that data sizes that need to be processed in industry, academia, and government are steadily increasing (Simon 2006). Consequently, with increasing data sizes, throughput requirements for real-time processing are increasing at similar rates. As radars move from analog to wideband digital arrays and image processing systems move toward gigapixel cameras, the need to process more data at a faster rate becomes particularly vital for the high performance embedded computing community.
How to Untangle Complex Systems?
Published in Pier Luigi Gentili, Untangling Complex Systems, 2018
where M is the number of switches working at the clock frequency ν of the microprocessor (Cavin et al. 2012). The computational power of a CPU, measured in the number of instructions per second, is directly proportional to β. Therefore, it is evident that for larger computational power it is important to increase not only M but also ν. Researchers have found that a silicon CPU can work at a frequency of at most about 4 gigahertz without melting from excessive heat production. To overcome this hurdle, it is necessary to introduce either an effective cooling system or multi-core CPUs. A multi-core CPU is a single computing element with two or more processors, called “cores,” which work in parallel. The speed-up (S_p) of the calculations is described by Amdahl’s law (Amdahl 1967):

S_p = \frac{1}{(1 - P) + P/N}

where P is the fraction of the program that can be parallelized and N is the number of cores.
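As a quick numerical illustration (values chosen here for the example, not taken from the excerpt), a program whose parallelizable fraction is P = 0.9 running on N = 8 cores gives

% worked Amdahl's-law example with assumed values P = 0.9, N = 8
S_p = \frac{1}{(1 - 0.9) + \frac{0.9}{8}}
    = \frac{1}{0.1 + 0.1125}
    = \frac{1}{0.2125}
    \approx 4.7

so even a highly parallel program achieves well under the ideal eight-fold speed-up, and the serial fraction bounds S_p at 1/(1 − P) = 10 no matter how many cores are added.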
Study and evaluation of automatic offloading method in mixed offloading destination environment
Published in Cogent Engineering, 2022
Like GPUs, multi-core CPUs use many computational cores and parallelize processing to speed it up. Unlike a GPU, a multi-core CPU has a common (shared) memory, so there is no need to consider the overhead of data transfer between CPU and GPU memory, which is often a problem when offloading to a GPU. In addition, the OpenMP specification is frequently used for parallelizing program processing on a multi-core CPU. OpenMP specifies parallel processing and related behaviour for a program through directives such as #pragma omp parallel for. In OpenMP, the programmer is responsible for ensuring that the annotated processing can actually be parallelized: when a directive is applied to processing that cannot be parallelized, the compiler does not output an error and the calculation result simply becomes wrong.
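A minimal C sketch of this pitfall (added here for illustration, not taken from the paper): both loops compile cleanly with an OpenMP-enabled compiler (e.g. gcc -fopenmp), but only the first directive is valid.

#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];

    /* Safe: every iteration is independent, so the directive is valid. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;

    /* Unsafe: iteration i reads b[i-1], which is written by iteration i-1
     * (a loop-carried dependency).  The compiler still accepts the pragma,
     * but with multiple threads the result is generally wrong. */
    b[0] = 1.0;
    #pragma omp parallel for
    for (int i = 1; i < N; i++)
        b[i] = b[i - 1] + a[i];

    printf("a[N-1] = %f, b[N-1] = %f\n", a[N - 1], b[N - 1]);
    return 0;
}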
GNSSer: objected-oriented and design pattern-based software for GNSS data parallel processing
Published in Journal of Spatial Science, 2021
Linyang Li, Zhiping Lu, Zhengsheng Chen, Yang Cui, Dashuang Sun, Yupu Wang, Yingcai Kuang, Fangchao Wang
Figure 9 shows the multi-core parallel realisation of GNSS data processing based on TPL, which is a higher-level abstraction over threads. First, at the task-parallel level, a task can be an undifferenced positioning, a double-difference positioning or an adjustment task. Once tasks are split, they are assigned to multiple physical cores, which is realised automatically by a nested recursive loop. Load balancing is achieved by task scheduling: the remaining tasks wait in the queue for execution, and multiple tasks are executed simultaneously on a multi-core platform. Then, at the data-parallel level, Parallel.For and Parallel.ForEach are brought in to achieve fast parallel execution of independent loop iterations – i.e. by substituting ‘Parallel.For’ for ‘for’. An example of the workflow for the multi-core undifferenced parallel resolution is presented in Li et al. (2017a).
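The same two-level split can be sketched in C with OpenMP tasks (a rough analogue added for illustration; the authors' implementation uses the .NET Task Parallel Library, not this code, and the job and function names here are hypothetical placeholders):

#include <omp.h>
#include <stdio.h>

#define N_JOBS   3     /* e.g. undifferenced, double-difference, adjustment */
#define N_EPOCHS 100   /* independent loop iterations inside one job */

static void solve_epoch(int job, int epoch) {
    /* placeholder for the per-iteration computation */
    (void)job; (void)epoch;
}

static void run_job(int job) {
    /* Data-parallel level: the equivalent of replacing 'for' with
     * 'Parallel.For'; needs nested parallelism enabled in main(). */
    #pragma omp parallel for
    for (int e = 0; e < N_EPOCHS; e++)
        solve_epoch(job, e);
}

int main(void) {
    omp_set_max_active_levels(2);   /* allow the nested parallel loop */

    /* Task-parallel level: each job becomes a task; remaining tasks wait
     * in the queue and are balanced over the physical cores. */
    #pragma omp parallel
    #pragma omp single
    for (int j = 0; j < N_JOBS; j++) {
        #pragma omp task firstprivate(j)
        run_job(j);
    }
    printf("all %d jobs finished\n", N_JOBS);
    return 0;
}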
Hotspot Analysis of Double-Layer Microchannel Heat Sinks
Published in Heat Transfer Engineering, 2019
Thermal analysis of a heat sink is generally performed by assuming that a uniform heat flux is generated by microprocessors. However, in a real situation, the heat flux generated by microprocessors is nonuniform [33]. In a microprocessor, a significant amount of heat is generated by the clock as compared to the data path, memory, controller, or input/output [33]. A very small, high-heat-flux region caused by the large amount of heat generated by the clock is known as a “hotspot.” Processor architecture is shifting toward multicore designs because the maximum clock frequency of a single-core processor has plateaued at around 4 GHz, due to the increase in design complexity and dynamic power dissipation [34]. In a multicore processor, the cores dissipate a large amount of heat compared to the other parts of the chip, creating multiple hotspots and large temperature gradients. Heat sinks are generally designed by considering the maximum temperature only, whereas a relatively disregarded parameter, temperature uniformity, is equally important because large temperature gradients cause circuit imbalances in microprocessors [35]. This problem is more severe in multicore processors, where most of the heat flux is concentrated at the cores.