Advances in Computing Infrastructure
Published in Siyong Kim, John Wong, Advanced and Emerging Technologies in Radiation Oncology Physics, 2018
Yulong Yan, Alicia Yingling, Steve Jiang
Processors, also known as central processing units (CPUs), handle all the basic system instructions in a computer. Over the decades, they have evolved into microprocessors that perform the basic operations of fetching data and instructions from memory into chip registers, operating on them using the arithmetic logic unit (ALU), and storing the results back into registers. Other processors found in physical machines offer specialized functionality, such as controlling data mappings in storage controllers for various RAID implementations, or GPUs, which provide high-speed calculations across many parallel cores. CPUs now offer multiple cores in a single chip, each functionally independent. In processors, high-speed memory is expensive and is used only during the calculation and result staging stages; the results are then returned to the main system memory, which is several orders of magnitude slower than the on-chip memory.
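To make the fetch/operate/store cycle concrete, the following toy accumulator machine is a minimal sketch in C; the opcodes, the tiny register set, and the four-instruction program are all invented for illustration and do not correspond to any real instruction set:

#include <stdio.h>
#include <stdint.h>

/* Toy accumulator machine illustrating the fetch/decode/execute cycle.
   All opcodes and the program below are invented for illustration. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

int main(void)
{
    uint8_t memory[16] = {
        OP_LOAD, 12,      /* acc = memory[12]        */
        OP_ADD, 13,       /* acc += memory[13]       */
        OP_STORE, 14,     /* memory[14] = acc        */
        OP_HALT, 0,
        0, 0, 0, 0,
        2, 3, 0, 0        /* data at addresses 12-14 */
    };
    uint8_t pc = 0;       /* program counter register */
    uint8_t acc = 0;      /* accumulator register     */

    for (;;) {
        uint8_t op  = memory[pc++];   /* fetch the instruction */
        uint8_t arg = memory[pc++];   /* fetch its operand     */
        if (op == OP_HALT) break;
        switch (op) {                 /* decode and execute    */
        case OP_LOAD:  acc = memory[arg];        break;
        case OP_ADD:   acc = acc + memory[arg];  break;  /* the "ALU" step */
        case OP_STORE: memory[arg] = acc;        break;
        }
    }
    printf("memory[14] = %u\n", (unsigned)memory[14]);   /* prints 5 */
    return 0;
}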
A Case Study: Vision Control
Published in Ivan Cibrario Bertolotti, Gabriele Manduchi, Real-Time Embedded Systems, 2017
Ivan Cibrario Bertolotti, Gabriele Manduchi
In order to speed up memory accesses, computers use memory caches. A memory cache is essentially a fast memory, much faster than the RAM used by the processor, which holds data recently accessed by the processor. The memory cache does not correspond to any fixed address in the addressing space of the processor, and therefore contains only copies of memory locations stored in RAM. The caching mechanism is based on a property common to most programs: locality in memory access. Informally stated, memory access locality expresses the fact that if a processor makes a memory access, say, at address K, the next memory access is likely to occur at an address close to K. To convince ourselves of this fact, consider the two main categories of memory access during program execution: fetching program instructions and accessing program data. Fetching program instructions (recall that a processor has to read each instruction from memory in order to execute it) is clearly sequential in most cases; the only exceptions are jump instructions, which represent a small fraction of the program's instructions. Data is mostly accessed in memory when the program accesses array elements, and arrays are normally (albeit not always) accessed in loops using some sort of sequential indexing.
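The payoff of access locality is easy to observe. The following sketch in C sums the same matrix twice, once in cache-friendly row-major order and once column by column; the matrix size of 4096 x 4096 is an arbitrary choice assumed to be much larger than the cache, and on typical hardware the second traversal is several times slower:

#include <stdio.h>
#include <time.h>

#define N 4096  /* 4096 x 4096 ints = 64 MiB, far larger than typical caches */

static int a[N][N];

int main(void)
{
    long long sum = 0;
    clock_t t0, t1;

    /* Fill the array so the timed loops read real data. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = i + j;

    /* Row-major traversal: consecutive accesses touch consecutive
       addresses, so most accesses after a cache miss hit the same line. */
    t0 = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    t1 = clock();
    printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Column-major traversal: successive accesses are N * sizeof(int)
       bytes apart, so nearly every access misses the cache. */
    t0 = clock();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    t1 = clock();
    printf("column-major: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return (int)(sum & 1);  /* keep sum live so the loops are not removed */
}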
System-level Packaging Technology
Published in Yufeng Jin, Zhiping Wang, Jing Chen, Introduction to Microsystem Packaging Technology, 2017
Yufeng Jin, Zhiping Wang, Jing Chen
System on chip, usually referred to as SOC or SoC, is a concept that emerged in the 1990s, and its definition has been continuously enriched with time and technical advancements. With 65 nm, 12 inch wafer manufacturing foundries available, hundreds of millions of transistors can be integrated in one chip. Currently SOC can be defined as the monolithic integration of a complete system, including basic circuit units such as one or more processors, memories, analog circuit modules, mixed analog/digital circuit modules, programmable logic units, etc. A schematic of the concept of SOC integration is shown in Figure 8.1. If the pertinent design and process issues can be solved, SOC techniques can deliver system products with the highest degree of integration and the lowest weight. Meanwhile, packaging of such a chip or microsystem only requires the provision of conventional packaging functions such as signal transmission, power supply, and cooling. Thus the implementation of these kinds of SOC products has been a target pursued by various equipment system and semiconductor vendors.
Experimental investigation of rectangular mini channel array as an effective tool for energy efficient cooling of electronic gadgets
Published in Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 2023
Jitendra D Patil, B S Gawali, Umesh Awasarmol, Girish Kapse, Shivam R Patil
Thermal design power (TDP) of a microprocessor is the maximum theoretical amount of power, in watts, that the processor may consume and therefore dissipate as heat. TDP is very important from the point of view of the cooling requirements of a processor, since adequate cooling secures the desired life and reliability. The TDP value given in a processor's specification is always the absolute maximum. Each model of microprocessor has a thermal profile that gives a specific thermal resistance in K/W in terms of Tj, the maximum permitted temperature at the interface between the die and the heat spreader at the specified maximum output; Tamb, the maximum permitted ambient temperature at the specified output; and TDP, the maximum thermal design power rating of the microprocessor chip. The maximum allowable thermal resistance in K/W is therefore estimated as the ratio of the temperature difference (Tj - Tamb) in K to the TDP in watts (a small numerical sketch of this calculation follows this excerpt). The International Technology Roadmap for Semiconductors (ITRS) gives long-term projections for the years 2003–2016 for several important parameters in both the cost-performance and high-performance segments (Gurrum et al. 2004). (The ITRS is a set of documents produced by a group of semiconductor industry experts; from 2016 onward, the ITRS was renamed the International Roadmap for Devices and Systems, IRDS.) Cost performance segment (desktop personal computer):
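To make the thermal-resistance estimate above concrete, here is a minimal numerical sketch in C; the junction temperature, ambient temperature, and TDP values are illustrative assumptions, not taken from any particular datasheet:

#include <stdio.h>

int main(void)
{
    /* Illustrative numbers only, not from any specific datasheet. */
    double tj   = 85.0;   /* max die/heat-spreader interface temp, deg C */
    double tamb = 45.0;   /* max permitted ambient temperature, deg C    */
    double tdp  = 95.0;   /* thermal design power, W                     */

    /* A temperature difference in deg C equals the same difference in K,
       so the resistance comes out directly in K/W. */
    double r_max = (tj - tamb) / tdp;

    printf("max allowable thermal resistance: %.3f K/W\n", r_max);
    return 0;
}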
Exploration for Software Mitigation to Spectre Attacks of Poisoning Indirect Branches
Published in IETE Technical Review, 2018
Baozi Chen, Qingbo Wu, Yusong Tan, Liu Yang, Peng Zou
Modern processors use caches to fill the speed gap in the memory hierarchy. At the same time, caching introduces uncertainty into the system: the time taken by a memory access varies depending on whether the data are in the cache or not. Cache timing attacks are a specific type of side-channel attack that exploit the effects of the cache memory on the execution time of algorithms. The attacker can determine which addresses are allocated in the cache by measuring the time taken to access entries, and thereby leak information. Several techniques for exploiting the cache have already been demonstrated. In Prime+Probe [15–17], the attacker fills one or more cache lines with its own contents, waits for the victim to execute, and then probes by timing accesses to the preloaded cache lines. If the attacker observes markedly increased memory access latency, the cache lines have been evicted by the victim, which has therefore touched an address that maps to the same set. Flush+Reload [18] is the converse of Prime+Probe. The attacker first flushes the targeted cache lines, waits for the victim to execute, and then reloads the flushed cache lines by touching them, measuring the time taken. If the attacker observes a fast memory access, the cache lines have been reloaded by the victim. Evict+Time [19] compares the overall execution time of the victim, after evicting some cache lines of interest, with a baseline; the variation in overall execution time is then used to deduce whether the lines of interest have been accessed by the victim.
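As a concrete illustration of the timing primitive on which Flush+Reload rests, here is a minimal sketch assuming an x86-64 processor and the GCC/Clang intrinsics header; the buffer named target is a hypothetical stand-in for memory shared with the victim, and in a real attack the second, timed access would be performed after the victim has run:

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp (GCC/Clang, x86) */

static uint8_t target[4096];   /* hypothetical stand-in for shared memory */

/* Time a single access to addr, in cycles, using serializing timestamps. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                      /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    volatile uint8_t *line = target;

    _mm_clflush((const void *)line);  /* flush: evict the line from all cache levels */
    _mm_mfence();                     /* ensure the flush has completed */

    uint64_t miss = time_access(line);  /* slow: the line must come from RAM */
    uint64_t hit  = time_access(line);  /* fast: the line is now cached      */

    printf("miss: %llu cycles, hit: %llu cycles\n",
           (unsigned long long)miss, (unsigned long long)hit);
    return 0;
}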