Introduction to computer architecture
Published in Joseph D. Dumas, Computer Architecture, 2016
Energy efficiency has become such an important consideration in modern computing that twice per year (since November 2007), a “Green500” list of the most efficient large computing systems has been published. As with the twice-yearly Top 500 list, each system is evaluated on LINPACK performance, but in this case that value is divided (as we did above) by the system’s power consumption. It is worth noting that the most powerful systems overall are not necessarily the most efficient, and vice versa. Titan (#2 in the Top 500) is only number 63 on the Green500; China’s Tianhe-2 (#1 in total computational power) is even further down the Green500 list at number 90. The “greenest” machine on the November 2015 list, the Shoubu ExaScaler supercomputer at Japan’s Institute of Physical and Chemical Research, achieved a rating of more than 7 GFLOPS/W (although it was only number 136 on the Top 500 list).
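The metric itself is simply measured LINPACK performance divided by measured power draw. A minimal illustration in C++, using hypothetical numbers rather than actual list entries:

    #include <cstdio>

    int main()
    {
        // Hypothetical system; these values are illustrative only.
        double rmax_gflops = 350000.0;  // measured LINPACK Rmax, in GFLOPS
        double power_watts = 50000.0;   // measured power draw, in watts

        // Green500-style efficiency: performance per watt.
        std::printf("%.2f GFLOPS/W\n", rmax_gflops / power_watts);  // prints 7.00
        return 0;
    }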
A quantitative evaluation and analysis on CPU and MIC
Published in Amir Hussain, Mirjana Ivanovic, Electronics, Communications and Networks IV, 2015
Wenzhu Wang, Qingbo Wu, Yusong Tan
High Performance Computing (HPC) is a promising area for both academia and industry. In recent years, we have witnessed rapid development in this field, especially the wide use of coprocessors such as the Graphics Processing Unit (GPU), the FPGA, and coupled CPU-GPU designs. Many Integrated Core (MIC) is a new kind of coprocessor architecture that puts a large number of simple cores together on a single chip. Generally speaking, it has more cores, wider Vector Processing Units (VPUs), and higher memory bandwidth than ordinary CPUs. Moreover, the programming models of MIC are compatible with the x86 architecture. Because of this remarkable performance and these friendly programming models, the Tianhe-2 supercomputing system, the fastest supercomputer in the world at the time of writing (Top500 2014), uses thousands of MIC coprocessors to accelerate parallel computing.
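Because the MIC programming model is x86-compatible, offloading a parallel loop to the coprocessor looks much like ordinary host code. A minimal sketch using the Intel compiler's offload pragma together with OpenMP; the function, array names, and sizes here are hypothetical, not taken from the paper:

    #include <omp.h>

    // Scale a vector on the MIC coprocessor. Compiled with the Intel
    // compiler, the marked region is offloaded to the Xeon Phi, where the
    // wide VPUs vectorize the loop. All names here are illustrative.
    void scale(float *a, const float *b, int n, float alpha)
    {
        #pragma offload target(mic) in(b : length(n)) out(a : length(n))
        #pragma omp parallel for simd
        for (int i = 0; i < n; ++i)
            a[i] = alpha * b[i];
    }

The same source also builds and runs on a plain x86 host, which is the compatibility advantage the excerpt refers to.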
AI in Education
Published in Prathamesh Churi, Shubham Joshi, Mohamed Elhoseny, Amina Omrane, Artificial Intelligence in Higher Education, 2023
Fujitsu has built the K computer, which is one of the fastest supercomputers in the world. It is one of the significant attempts at achieving strong AI. Even so, it took the machine nearly 40 minutes to simulate a single second of neural activity. Hence, it is difficult to determine whether strong AI will be achieved any time soon. Additionally, Tianhe-2 is a supercomputer that was developed by China’s National University of Defense Technology. It holds the record for calculation speed at 33.86 petaflops (quadrillions of calculations per second).
Study on equivalent fatigue damage of two in-a-line wind turbines under yaw-based optimum control
Published in International Journal of Green Energy, 2023
To simulate the dynamic wake flow, the PISO solver in OpenFOAM is employed with a 0.05 s time step. The tolerances for the residuals of pressure and velocity are and respectively. The cell number is 12 million. The mesh for the simulation case is shown in Figure 4. For the parallel computation, 64 cores (2 nodes) of the Tianhe-2 supercomputer were employed. The constant wind speed at the inlet boundary is set to 10 m/s, and the turbulence intensity is 0.01, which is close to the atmospheric conditions of an offshore wind farm. 10 m/s is close to the rated wind speed of the NREL 5 MW wind turbine (rated wind speed is 11.4 m/s). Under this condition, pitch control is not involved (the pitch angle is 0 degrees) and the wake effect is obvious. As a result, the inflow velocity of 10 m/s is representative. The simulated flow field (yaw angle equals ) is shown in Figure 4.
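In OpenFOAM terms, these settings live in the case's configuration dictionaries. A minimal sketch follows; the tolerance values are illustrative placeholders, since the actual values did not survive in the text above:

    // system/controlDict (sketch): fixed time step from the text
    deltaT          0.05;    // 0.05 s time step

    // system/fvSolution (sketch): residual tolerances for the PISO loop
    solvers
    {
        p
        {
            solver          PCG;
            preconditioner  DIC;
            tolerance       1e-06;   // hypothetical pressure tolerance
            relTol          0;
        }
        U
        {
            solver          PBiCGStab;
            preconditioner  DILU;
            tolerance       1e-05;   // hypothetical velocity tolerance
            relTol          0;
        }
    }

    PISO
    {
        nCorrectors              2;  // typical PISO pressure corrections
        nNonOrthogonalCorrectors 1;
    }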
Approximation algorithms in partitioning real-time tasks with replications
Published in International Journal of Parallel, Emergent and Distributed Systems, 2018
Jian (Denny) Lin, Albert M. K. Cheng, Gokhan Gercek
Employing multiple processing units has become a standard framework, widely adopted in modern computing environments, to accommodate the increasing computational demand of compute-intensive applications. From smartphones, PCs, and laptops to servers and supercomputers, multiprocessor systems have led us into an era in which they play a major role. While today the Intel Xeon family provides up to 18 cores in its products, one of the fastest supercomputers, Tianhe-2, runs 16,000 compute nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi coprocessor chips. These systems greatly improve the performance and cost-efficiency of computation in different fields. At the same time, they draw research attention to a variety of specific problems in using them. In this paper, we generalise a system with multiple processing units, whether a multi-core system or a system running multiple processors, as a multiprocessor system.
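As a concrete illustration of task partitioning on such a multiprocessor system, here is a hedged sketch of the classic first-fit heuristic that assigns tasks to processors by utilization. This is a standard baseline, not necessarily the approximation algorithm developed in this paper, and the task set is hypothetical:

    #include <cstdio>
    #include <vector>

    int main()
    {
        // Hypothetical task utilizations (fraction of one processor each
        // task demands under EDF scheduling).
        std::vector<double> util = {0.6, 0.5, 0.4, 0.3, 0.2};
        const int m = 2;                      // number of processors
        std::vector<double> load(m, 0.0);     // utilization assigned so far

        for (double u : util) {
            int p = -1;
            for (int i = 0; i < m; ++i)       // first processor that fits
                if (load[i] + u <= 1.0) { p = i; break; }  // EDF bound: U <= 1
            if (p < 0) { std::puts("unschedulable under first-fit"); return 1; }
            load[p] += u;
            std::printf("task u=%.1f -> processor %d\n", u, p);
        }
        return 0;
    }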
Modeling of Transport Processes in Liquid-Metal Fusion Blankets: Past, Present, and Future
Published in Fusion Science and Technology, 2023
As shown in Fig. 9, four MHD flow cases were simulated, including case 1 for electrically conducting walls, case 4 for nonconducting walls, and cases 2 and 3 with partial electrical insulation of the blanket conduits using SiC FCIs placed at selected locations. In the MHD computations in part 1, the computational mesh consisted of ~320 × 10^6 cells to accurately capture all flow features. The MHD flow/heat transfer computations with volumetric heating in the PbLi breeder were conducted in part 2 for the conducting blanket case 1 and the nonconducting case 4, either as loose coupling (forced convection cases 1fc and 4fc) or tight coupling (mixed convection cases 1mc and 4mc). Compared to part 1, a finer mesh of ~470 × 10^6 cells was used in the computations in part 2, as more turbulent features were expected in the mixed-convection flows. Forced-convection flows were computed first assuming no effect of the temperature field on the velocity, and then the fully coupled mixed-convection cases were analyzed. The large mesh size and small time step in the computations require a tremendous number of computer operations. For acceleration, massive parallelization was used based on a hybrid approach that combines the message passing interface (MPI, distributed memory) and OpenMP (shared memory) programming interfaces. The computations were conducted on the TIANHE-2 supercomputer in Guangzhou, China, using 1200 computational cores for purely MHD flows and 2400 cores in the computations of MHD flows with volumetric heating. In the latter cases, the total computational time for one run was ~1.56 × 10^6 CPU hours. The corresponding wall clock time is ~8 weeks.
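A minimal sketch of the hybrid MPI + OpenMP pattern described above, with MPI ranks spanning distributed-memory nodes and OpenMP threads sharing memory within each rank; the work loop and all names are hypothetical stand-ins, not the authors' solver:

    #include <mpi.h>
    #include <omp.h>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        int provided;
        // FUNNELED: only the main thread of each rank makes MPI calls.
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = 0.0;
        // Shared-memory parallelism inside each rank (OpenMP threads).
        #pragma omp parallel for reduction(+ : local)
        for (int i = 0; i < 1000000; ++i)
            local += 1e-6;          // stand-in for per-cell work

        double global = 0.0;
        // Distributed-memory reduction across ranks (MPI).
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("ranks=%d threads=%d sum=%g\n",
                        size, omp_get_max_threads(), global);

        MPI_Finalize();
        return 0;
    }

Launched, for example, with one rank per node and one thread per core, this division of labor is what lets a run scale to the 1200 to 2400 cores cited above.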