Quest for Energy Efficiency in Digital Signal Processing
Published in VLSI: Circuits for Emerging Applications (edited by Tomasz Wojcicki and Krzysztof Iniewski), 2017
Ramakrishnan Venkatasubramanian
The end application for HPC is typically large-scale data crunching, whether for scientific research or for cloud-based processing of large data sets. HPC is commonly used in oil and gas exploration, bioscience, big data mining, weather forecasting, financial trading, electronic design automation, and defense. HPC systems need to be scalable and provide high computing power to meet ever-increasing processing needs with high energy efficiency across these varied end applications. One of the major challenges facing the supercomputing industry is its effort to reach the exascale compute level by the end of the decade. In 1996, Intel's ASCI Red, the first supercomputer built under the Accelerated Strategic Computing Initiative (ASCI), the U.S. government's supercomputing program, achieved 1 TFLOP performance [8]. In 2008, the IBM-built Roadrunner supercomputer for Los Alamos National Laboratory in New Mexico reached the computing milestone of 1 petaflop by processing more than 1.026 quadrillion calculations per second; it ranked number one on the TOP500 list in 2008 as the most powerful supercomputer [9]. Scaling capacity 1000-fold to exaFLOPs is very challenging, given that the power budget of such an exaFLOP system cannot expand 1000 times. Power delivery and distribution create significant challenges. Intel, IBM, and HP, for example, continue to pursue core performance through multicore, many-core, and graphics processing unit (GPU) accelerator models, and all face significant power-efficiency challenges.
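As a rough illustration of why exascale cannot simply scale petascale power 1000-fold, the back-of-the-envelope sketch below compares delivered FLOPS per watt. Roadrunner's performance figure comes from the excerpt, but its power draw (about 2.35 MW) and the roughly 20 MW exascale facility budget are approximate, commonly cited figures assumed here for illustration only.

```python
# Back-of-the-envelope FLOPS-per-watt arithmetic behind the exascale power challenge.
# Machine power figures below are illustrative assumptions, not taken from the excerpt.
PETA, EXA = 1e15, 1e18

roadrunner_flops = 1.026e15        # ~1.026 PFLOPS (figure cited in the excerpt)
roadrunner_power_w = 2.35e6        # ~2.35 MW, approximate published figure (assumption)
exascale_power_budget_w = 20e6     # ~20 MW facility target often cited for exascale (assumption)

current_eff = roadrunner_flops / roadrunner_power_w   # FLOPS per watt achieved
needed_eff = EXA / exascale_power_budget_w            # FLOPS per watt required

print(f"Roadrunner efficiency  : {current_eff / 1e9:6.2f} GFLOPS/W")
print(f"Exascale @ 20 MW needs : {needed_eff / 1e9:6.2f} GFLOPS/W")
print(f"Required improvement   : {needed_eff / current_eff:5.1f}x")
```

Under these assumed numbers, naively scaling Roadrunner 1000-fold would demand gigawatts of power; hitting an exaFLOP within a ~20 MW envelope instead requires roughly a hundredfold improvement in energy efficiency, which is the crux of the power-delivery challenge described above.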
The Los Alamos Computing Facility During the Manhattan Project
Published in Nuclear Technology, 2021
At Los Alamos, the wartime computing effort set the pattern for decades to come, and computing has always been a core part of the Los Alamos weapons program. Just as the Los Alamos triple multiply and divide PCAMs were among the most advanced IBM machines during WWII, ever-increasing weapons modeling requirements have driven Los Alamos to stay at the forefront of computing. The thermonuclear weapons work of the 1950s drove Los Alamos's use of first-generation computers such as ENIAC, MANIAC, and the IBM SSEC. Later examples include the IBM 7030 Stretch, the CDC 6600 and 7600, the first Cray-1, and today's massively parallel clusters, such as the IBM Roadrunner machine that first achieved one-petaflop performance. The PCAMs were used to calculate neutron diffusion, bombing tables, equations of state, and hydrodynamics. This was probably the broadest problem set of any PCAM installation during WWII. The types of problems Los Alamos needed to solve continued to expand after the war, leading to new physics methods and codes, including the Monte Carlo method, the SN neutronics method, and hydrodynamic methods such as particle-in-cell and arbitrary Lagrangian-Eulerian.
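As a minimal sketch of the Monte Carlo idea mentioned above, the toy program below follows individual neutron histories through a homogeneous one-dimensional slab and tallies how many are absorbed, reflected, or transmitted. The cross sections, slab thickness, and scattering model are invented for illustration and do not correspond to any actual Los Alamos calculation.

```python
import math
import random

# Toy 1-D Monte Carlo neutron transport through a homogeneous slab.
# All physical parameters below are made-up illustrative values.
SIGMA_T = 1.0       # total macroscopic cross section (1/cm), assumed
SIGMA_A = 0.3       # absorption cross section (1/cm), assumed
THICKNESS = 5.0     # slab thickness (cm), assumed
HISTORIES = 100_000

def run_history(rng: random.Random) -> str:
    """Follow one neutron until it is absorbed, reflected, or transmitted."""
    x, mu = 0.0, 1.0                                  # start at left face, moving right
    while True:
        s = -math.log(1.0 - rng.random()) / SIGMA_T   # sample distance to next collision
        x += mu * s
        if x < 0.0:
            return "reflected"
        if x > THICKNESS:
            return "transmitted"
        if rng.random() < SIGMA_A / SIGMA_T:
            return "absorbed"
        mu = rng.uniform(-1.0, 1.0)                   # isotropic scatter (new angle cosine)

def main() -> None:
    rng = random.Random(42)
    tallies = {"reflected": 0, "transmitted": 0, "absorbed": 0}
    for _ in range(HISTORIES):
        tallies[run_history(rng)] += 1
    for fate, count in tallies.items():
        print(f"{fate:12s} {count / HISTORIES:.4f}")

if __name__ == "__main__":
    main()
```

Even this toy version shows why the method mapped so naturally onto early machines and later onto massively parallel clusters: each neutron history is independent, so the work can be split across processors with essentially no communication.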