Quest for Energy Efficiency in Digital Signal Processing
Published in Tomasz Wojcicki, Krzysztof Iniewski, VLSI: Circuits for Emerging Applications, 2017
Ramakrishnan Venkatasubramanian
The end application for HPC is typically heavy data crunching for scientific research or cloud-based processing of large data sets. HPC is typically used in oil and gas exploration, bioscience, big data mining, weather forecasting, financial trading, electronic design automation, and defense. HPC systems need to be scalable and deliver high computing power with high energy efficiency to meet ever-increasing processing needs across varied end applications. One of the major challenges facing the supercomputing industry is reaching the exascale compute level by the end of the decade. In 1996, Intel's ASCI Red, the first supercomputer built under the Accelerated Strategic Computing Initiative (ASCI), the U.S. government's supercomputing program, achieved 1 TFLOP performance [8]. In 2008, the IBM-built Roadrunner supercomputer at Los Alamos National Laboratory in New Mexico reached the computing milestone of 1 petaflop by processing more than 1.026 quadrillion calculations per second; it ranked number 1 on the 2008 TOP500 list as the most powerful supercomputer [9]. Scaling another 1000-fold to exaFLOPs is very challenging, given that the power envelope of such a system cannot grow 1000-fold. Power delivery and distribution create significant challenges. Intel, IBM, and HP, for example, continue to pursue core performance through multicore, many-core, and graphics processing unit (GPU) accelerator models, and all face significant power-efficiency challenges.
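A back-of-the-envelope calculation makes the efficiency gap concrete. The Python sketch below assumes Roadrunner's commonly cited draw of roughly 2.35 MW and the often-quoted 20 MW exascale power budget; both are illustrative figures, not numbers from this chapter.

# Back-of-the-envelope energy-efficiency targets, petascale vs. exascale.
# The 2.35 MW Roadrunner figure and the ~20 MW exascale budget are
# commonly cited numbers, used here for illustration only.
PFLOPS = 1e15  # floating-point operations per second
EFLOPS = 1e18

roadrunner_power_w = 2.35e6   # Roadrunner drew roughly 2.35 MW at ~1 PFLOPS
exascale_budget_w = 20e6      # often-quoted target power envelope

petascale_eff = PFLOPS / roadrunner_power_w   # FLOPS per watt achieved
exascale_eff = EFLOPS / exascale_budget_w     # FLOPS per watt required

print(f"Petascale efficiency : {petascale_eff:.2e} FLOPS/W")
print(f"Exascale target      : {exascale_eff:.2e} FLOPS/W")
print(f"Required improvement : {exascale_eff / petascale_eff:.0f}x")
# Naively scaling Roadrunner 1000x would need ~2.35 GW, which is why
# power delivery and energy efficiency dominate the exascale discussion.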
High-Performance Computing for Advanced Smart Grid Applications
Published in Stuart Borlase, Smart Grids, 2018
Yousu Chen, Zhenyu (Henry) Huang
New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. High-performance computing (HPC) is considered one of the fundamental technologies for meeting the computational challenges in smart grid planning and operation. HPC involves the application of advanced algorithms, parallel programming, and computational hardware to drastically improve the capability of software applications to handle data analysis, modeling, and computational complexity.
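A minimal sketch of the parallel-programming idea follows, in Python. Grid studies often involve many independent cases (e.g., an N-1 contingency list) that can be evaluated concurrently; solve_contingency below is a hypothetical stand-in for a real power-flow solver, and the case count is arbitrary.

# Task-parallel analysis sketch: independent contingency cases evaluated
# concurrently across CPU cores. `solve_contingency` is a hypothetical
# placeholder for a real power-flow solver.
from multiprocessing import Pool

def solve_contingency(case_id):
    # Placeholder computation; a real solver would run a power-flow
    # simulation for the outage scenario identified by case_id.
    loading = sum(i * 1e-6 for i in range(100_000)) + case_id
    return case_id, loading

if __name__ == "__main__":
    cases = range(1000)  # e.g., an N-1 contingency list
    with Pool() as pool:  # one worker per CPU core by default
        results = pool.map(solve_contingency, cases)
    worst = max(results, key=lambda r: r[1])
    print(f"Worst case: {worst[0]} with loading metric {worst[1]:.3f}")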
Cutting Edge Data Analytical Tools
Published in Chong Ho Alex Yu, Data Mining and Exploration, 2022
SAS has many high-performance procedures and is good at both sequential and multithreaded processing. With multithreading, it handles high-performance computing workloads by partitioning the data and analyzing the partitions in multiple threads concurrently. Hewlett Packard Enterprise (HPE 2021), a key player in the field of high-performance computing, demonstrated that SAS running on the HPE Superdome Flex 280 Server and HPE Primera Storage can deliver up to 20 GB/s of sustained throughput.
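The partition-and-analyze pattern can be illustrated generically in Python (this is not SAS's internal implementation): split the data into chunks, summarize each chunk in its own thread, then combine the partial results. Note that in CPython the global interpreter lock limits true CPU parallelism for pure-Python work; the sketch shows the structure, while engines such as SAS run such threads natively.

# Generic partition-and-analyze sketch (not SAS internals): each thread
# summarizes one data partition; partial results are then combined.
from concurrent.futures import ThreadPoolExecutor
import random

data = [random.gauss(0, 1) for _ in range(1_000_000)]

def partial_sum(chunk):
    # Each thread returns a partial (count, sum) over its partition.
    return len(chunk), sum(chunk)

n_threads = 4
size = len(data) // n_threads
chunks = [data[i * size:(i + 1) * size] for i in range(n_threads - 1)]
chunks.append(data[(n_threads - 1) * size:])  # last chunk takes the remainder

with ThreadPoolExecutor(max_workers=n_threads) as pool:
    partials = list(pool.map(partial_sum, chunks))

count = sum(c for c, _ in partials)
mean = sum(s for _, s in partials) / count
print(f"n = {count}, mean = {mean:.4f}")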
Design strategies and approximation methods for high-performance computing variability management
Published in Journal of Quality Technology, 2023
Yueyao Wang, Li Xu, Yili Hong, Rong Pan, Tyler Chang, Thomas Lux, Jon Bernard, Layne Watson, Kirk Cameron
The computing scale and complexity of modern technologies and scientific areas make high-performance computing (HPC) increasingly important. Performance variability, however, is an important challenge in HPC systems research, and it has been observed for a long time (e.g., Giampapa et al. 2010; Akkan, Lang, and Liebrock 2012; Cameron et al. 2019). High variability in HPC systems can lead to unstable system performance and potentially high energy costs, so variability management is crucial for system performance optimization. Performance variability is affected by complicated interactions among many factors in the system. In this study, we focus on input/output (I/O) performance variability; of interest is the relationship between system configurations (e.g., CPU frequency, file size, record size, the number of I/O threads, and I/O operation modes) and I/O performance variability.
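As a hedged sketch of what such an I/O variability measurement looks like in Python: time repeated runs of an identical write workload under different record sizes and summarize the run-to-run spread. The file size, record sizes, and repeat count below are illustrative assumptions, not the study's settings.

# Sketch of run-to-run I/O variability: repeat the same write workload
# and report the coefficient of variation (CV) per record size.
# All workload parameters here are illustrative assumptions.
import os, statistics, tempfile, time

def timed_write(file_size, record_size):
    buf = os.urandom(record_size)
    start = time.perf_counter()
    with tempfile.TemporaryFile() as f:
        for _ in range(file_size // record_size):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to storage before timing stops
    return time.perf_counter() - start

FILE_SIZE = 16 * 1024 * 1024  # 16 MiB per run (illustrative)
for record_size in (4 * 1024, 64 * 1024, 1024 * 1024):
    times = [timed_write(FILE_SIZE, record_size) for _ in range(10)]
    cv = statistics.stdev(times) / statistics.mean(times)
    print(f"record={record_size // 1024:5d} KiB  "
          f"mean={statistics.mean(times):.3f}s  CV={cv:.2%}")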
Framework and modelling of inclusive manufacturing system
Published in International Journal of Computer Integrated Manufacturing, 2019
Sube Singh, Biswajit Mahanty, Manoj Kumar Tiwari
HPC is considered one of the enabling technologies for IMS because of the complexity and size of the datasets generated by consumers, enterprises, manufacturing operations, and logistics systems. In the manufacturing domain, cloud computing was adopted to access geographically distributed resources for better utilisation of computing power. Big companies such as Amazon, Rackspace, and Microsoft provide platforms for developing and deploying applications on the cloud (Jackson et al. 2010). Later, HPC was introduced, using the concept of parallel processing to run advanced application programmes efficiently, reliably, and quickly (Schmidberger 2012). HPC technology helps solve optimisation problems very quickly, and it also supports accessing design and simulation software from a cloud (Wu et al. 2017).
Prediction of high-performance computing input/output variability and its application to optimization for system configurations
Published in Quality Engineering, 2021
Li Xu, Thomas Lux, Tyler Chang, Bo Li, Yili Hong, Layne Watson, Ali Butt, Danfeng Yao, Kirk Cameron
High performance computing (HPC) commonly refers to the aggregation of computing power to obtain much higher performance than a typical desktop computer or workstation can deliver. HPC is widely used to solve large-scale problems in areas such as science, engineering, and business. While improving the performance of HPC systems attracts a great deal of research, managing the performance variability of HPC systems is also an important dimension of HPC system management that cannot be ignored. A common manifestation of performance variability is run-to-run variation in the execution time of a particular task.
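A minimal Python sketch of quantifying that run-to-run variation: execute the same task repeatedly, record wall-clock times, and report the spread. The task function is a hypothetical placeholder for whatever workload is under study.

# Quantifying run-to-run execution-time variability: repeat an identical
# task and summarize the spread of its wall-clock times.
import statistics, time

def task():
    # Placeholder workload; substitute the computation being studied.
    return sum(i * i for i in range(500_000))

runs = []
for _ in range(30):
    start = time.perf_counter()
    task()
    runs.append(time.perf_counter() - start)

mean = statistics.mean(runs)
print(f"mean={mean * 1e3:.2f} ms  stdev={statistics.stdev(runs) * 1e3:.2f} ms")
print(f"coefficient of variation = {statistics.stdev(runs) / mean:.2%}")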