Applications of Switch/Router
Published in James Aweya, Designing Switch/Routers, 2023
The main barrier to performance improvements in present-day microprocessors is on-chip power consumption. With current semiconductor technology, rising on-chip power consumption prevents major increases in clock speed. As a result, performance improvements in newer microprocessors come primarily from the increased aggregate performance of multiple cores per chip, coupled with only modest increases in clock rate.
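As a rough, first-order illustration of this shift (the model and the numbers below are assumed for illustration and are not from the text), aggregate throughput can be treated as the product of core count, per-core instructions per cycle, and clock rate: adding cores raises the total far more than a modest clock bump does.

    # First-order model of aggregate chip throughput (illustrative numbers, not from the text).
    # It assumes perfectly parallel work and ignores memory, interconnect, and Amdahl's-law effects.
    def aggregate_throughput(cores, ipc_per_core, clock_hz):
        """Total instructions retired per second across all cores."""
        return cores * ipc_per_core * clock_hz

    # A hypothetical single-core chip at 3.0 GHz ...
    single_core = aggregate_throughput(cores=1, ipc_per_core=4, clock_hz=3.0e9)
    # ... versus a hypothetical 8-core chip whose clock rose only modestly, to 3.5 GHz.
    multi_core = aggregate_throughput(cores=8, ipc_per_core=4, clock_hz=3.5e9)

    print(f"single core: {single_core:.2e} instructions/s")  # 1.20e+10
    print(f"eight cores: {multi_core:.2e} instructions/s")   # 1.12e+11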
Chip, body, earth
Published in Fiona Allon, Ruth Barcan, Karma Eddison-Cogan, The Temporalities of Waste, 2020
First there is the time of the processor itself. In the semiconductor industry, speed becomes paramount: the ability to accomplish more in less time. Speed is usually indicated by clock rate, the frequency at which the chip runs. This value informs, but does not directly determine, the number of calculations that can be executed per second. For example, the Intel 4040 chip, introduced in 1974, was one of the earliest models manufactured. It featured a nominal clock rate of 500 kHz, cycling around 500,000 times per second. However, alongside this frequency it is the microarchitecture of the chip (the ways in which instructions are grouped, parsed and processed) that ultimately establishes the amount of work accomplished across that second. For instance, after every clock cycle, the “signal lines” within the chip must return to their previous state; that is, “every signal line must finish transitioning from 0 to 1, or from 1 to 0” (Wiza 2019). Limiting factors of this kind mean that a chip such as the 4040 ends up processing around 92,000 instructions per second (Shvets 2017). Of course, this particular speed limit was quickly surpassed, swept away by newer versions released in rapid succession. Over the decades, one witnesses a shift from clock speeds given in kilohertz (kHz) to megahertz (MHz) to today’s standard of gigahertz (GHz). Intel’s seventh-generation “Kaby Lake” processors on both desktop and mobile, for example, all run in the 3 GHz range, or roughly three billion cycles per second (Intel 2017b).
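A quick back-of-the-envelope check, using only the figures quoted above, shows why the instruction rate sits well below the clock rate: at roughly 500,000 cycles per second and roughly 92,000 instructions per second, each instruction must consume several clock cycles on average. A minimal sketch:

    # Rough cycles-per-instruction estimate from the figures quoted above (illustrative only).
    clock_hz = 500_000                # nominal 4040 clock rate, about 500 kHz
    instructions_per_second = 92_000  # approximate instruction throughput cited above

    cycles_per_instruction = clock_hz / instructions_per_second
    print(f"~{cycles_per_instruction:.1f} clock cycles per instruction")  # ~5.4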
Processor Basics
Published in Vivek Kale, Parallel Computing Architectures and APIs, 2019
The performance of a computer system can be increased by raising the clock rate, that is, by shortening each clock cycle and hence the execution time. Over the previous few decades the clock rate increased at an exponential rate, but it eventually hit the power wall, because power increases cubically with the clock rate (see Chapter 2: Section 2.3.2).
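A minimal sketch of what that cubic relationship implies, assuming the usual dynamic-power model P = C·V²·f with the supply voltage scaled roughly in proportion to frequency (so P grows as f³); the capacitance and voltage figures below are placeholders, not values from the text:

    # Dynamic power model: P = C * V^2 * f. If V must scale with f, P grows as f^3.
    # The effective switched capacitance and baseline voltage here are arbitrary placeholders.
    def dynamic_power(capacitance_f, voltage_v, freq_hz):
        """Dynamic switching power in watts."""
        return capacitance_f * voltage_v**2 * freq_hz

    base_freq, base_volt, cap = 2.0e9, 1.0, 1.0e-9   # a 2 GHz part at 1.0 V (illustrative)
    p_base = dynamic_power(cap, base_volt, base_freq)

    scale = 2.0                                       # double the clock rate ...
    p_fast = dynamic_power(cap, base_volt * scale, base_freq * scale)  # ... with voltage scaled too

    print(f"power ratio for a {scale:.0f}x clock increase: {p_fast / p_base:.1f}x")  # ~8.0x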
Massively parallel mesh adaptation and linear system solution for multiphase flows
Published in International Journal of Computational Fluid Dynamics, 2016
Luisa Silva, Thierry Coupez, Hugues Digonnet
In recent years, processor performance has increased not by raising the clock rate but by multiplying the number of cores in a CPU. Today's top supercomputers contain from several hundred thousand to millions of cores, with hundreds of terabytes to petabytes of memory. It is thus necessary to develop fully parallel applications that follow, at least, this multicore CPU evolution. As far as our own applications are concerned, we are interested in performing finite element flow simulations on meshes that closely represent image-based configurations (Silva et al., 2014), in particular those obtained from three-dimensional (3D) X-ray tomography with several million voxels.
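The "fully parallel application" pattern the article points to can be sketched in a few lines; the example below uses mpi4py purely as an illustration and is not the authors' code (the mesh size and per-cell quantity are invented). Each MPI rank owns a contiguous block of mesh cells, and a global quantity is assembled with a collective reduction.

    # Minimal distributed-memory sketch (mpi4py); illustrates the pattern only, not the authors' code.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_cells = 10_000_000                       # hypothetical total number of mesh cells
    lo = rank * n_cells // size                # contiguous block of cells owned by this rank
    hi = (rank + 1) * n_cells // size

    local_volume = (hi - lo) * 1.0e-9          # placeholder: each cell contributes a fixed volume
    total_volume = comm.allreduce(local_volume, op=MPI.SUM)  # global sum across all ranks

    if rank == 0:
        print(f"{size} ranks, {n_cells} cells, total volume ~ {total_volume:.3e}")

Run with, for example, mpirun -n 4 python sketch.py (the file name is hypothetical); each additional rank simply takes a smaller block of cells, which is the scaling behaviour a fully parallel application relies on.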