Force-System Resultants and Equilibrium
Published in Richard C. Dorf, The Engineering Handbook, 2018
Two fundamental characteristics of memory systems are speed and cost. Speed reflects how long it takes the device to read or write information. The access time or read latency of a device is the time required for it to respond to a read request. The throughput or bandwidth of the device is the rate at which it transfers information. The relation between these terms can be illustrated by analogy with a photocopy machine making multiple copies of a single page. The time required for the first copy to be output is the latency, which includes the time required to scan the page and for internal processing. Subsequent copies of the page are output faster, because the machine can avoid the initial scan and can overlap the processing steps for successive copies. The throughput reflects the rate at which successive copies are produced. Device speeds vary greatly. Solid-state SRAMs have latencies of a few nanoseconds and DRAMs of tens of nanoseconds, while mechanically driven devices like magnetic tapes can have latencies on the order of seconds. A magnetic disk has a latency of tens of milliseconds but can transfer data at a rate of several million bits per second. In a computer system, the term memory usually refers to fast devices that are directly accessed by the processor, while the term storage refers to slower devices further from the processor that are accessed through specialized hardware controllers under the supervision of operating system software.
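The latency/throughput distinction can be sketched with a simple timing model: the total time for a transfer is the access latency for the first item plus the data volume divided by the sustained bandwidth. The function name and the device numbers below are illustrative, not from the text.

```python
def transfer_time(n_bytes, latency_s, bandwidth_bytes_per_s):
    """Time to complete a read of n_bytes from a device:
    fixed access latency plus the streaming transfer time."""
    return latency_s + n_bytes / bandwidth_bytes_per_s

# A disk-like device with 10 ms latency and 5 MB/s bandwidth
# (illustrative values in the spirit of the figures above).
small = transfer_time(512, 10e-3, 5e6)          # latency dominates
large = transfer_time(50_000_000, 10e-3, 5e6)   # bandwidth dominates

print(f"512 B read:  {small * 1e3:.2f} ms")     # ~10.10 ms, mostly latency
print(f"50 MB read: {large:.2f} s")             # ~10.01 s, mostly transfer
```

The small read is almost entirely latency, like the first photocopy; the large read is dominated by the bandwidth term, like the stream of subsequent copies.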
Soft-Error Mitigation Approaches for High-Performance Processor Memories
Published in Tomasz Wojcicki, Krzysztof Iniewski, VLSI: Circuits for Emerging Applications, 2017
Caches are small, fast SRAM memories to which any address can be mapped, enabling the cache to stand in for the full memory but with greatly improved access time. Cache memory is actually composed of two logically distinct (and usually physically distinct) memory arrays—the tag and data arrays, as shown in Figure 6.1. Each entry in the tag array holds a portion (the tag) of a memory address, so that the location of the data held in the cache can be found by comparing the stored tag with the requested memory address. Each tag entry has a corresponding block, or line, of associated data that is a copy of the memory contents at the corresponding address. The block or line size generally ranges from four to sixteen words of memory. To access the cache, the tag array is read; if the stored tag matches the corresponding bits of the requested address, a cache hit is signaled and the value in the associated data array entry is returned. Otherwise, a cache miss is asserted, and the needed data is fetched from main memory or the next cache level in the hierarchy.
Introduction
Published in Abdullah Al Mamun, GuoXiao Guo, Chao Bi, Hard Disk Drive, 2017
The time required to move the head to a new track position and get it ready for reading or writing is called the access time. It is the sum of the time required to find the new track (seek time), the time required to settle on it (settling time), and the latency. Latency is defined as half of the time the disk takes to make one rotation because, on average, the desired data is located 180° from the position where the head settles onto the track. One-third-stroke seek times are around 3 milliseconds on high-performance drives, making spindle latency the most significant contributor to the access time. Low access time is very important in computer applications because the number of data transfers is so high that a small increase in the time required for each transfer causes considerable overall delays in processing data or running programs.
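The access-time breakdown above can be worked through numerically. The spindle speed and settling time below are illustrative assumptions; the 3 ms seek time is the figure cited in the text.

```python
def rotational_latency_ms(rpm):
    """Average latency: half the time of one rotation, in milliseconds."""
    time_per_rev_ms = 60_000 / rpm
    return time_per_rev_ms / 2

def access_time_ms(seek_ms, settle_ms, rpm):
    """Access time = seek time + settling time + rotational latency."""
    return seek_ms + settle_ms + rotational_latency_ms(rpm)

# Illustrative drive: 7200 RPM spindle, 3 ms seek, 1 ms settle.
print(f"{rotational_latency_ms(7200):.2f} ms latency")   # 4.17 ms
print(f"{access_time_ms(3.0, 1.0, 7200):.2f} ms total")  # 8.17 ms
```

With these numbers the rotational latency (4.17 ms) indeed exceeds the 3 ms seek time, consistent with the text's observation that spindle latency is the largest contributor to access time.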
Design of an enhanced write stability, high-performance, low power 11T SRAM cell
Published in International Journal of Electronics, 2021
The read delay is estimated as the difference between the instant when WL is activated and the instant when the precharged RBL drops to 50% of VDD (Naghizadeh & Gholami, 2019). A lower read access time indicates a higher speed in accessing the data. The results in Figure 13 show that the 6T, 8TG, and 10T Diff cells, which employ differential read operation, have lower read delays. The P11T SRAM cell's read delay is 82.8% lower than that of the 11T SRAM cell. Due to increased bit-line capacitance, the read delay of the 8TG SRAM cell is slightly higher than that of the 6T cell, as its bit line takes a longer time to discharge. The 10T Diff SRAM cell has a slightly higher delay than the 6T cell due to the minimum-sized devices in its read port. The DDS11T, 11T, and P11T SRAM cells employ single-ended read operation. The P11T SRAM cell's read port is structurally similar to that of the DDS11T SRAM cell, but its use of high-Vth devices increases the read delay. The 11T SRAM cell reports the highest delay. Read power is the power consumed during the read access time. Owing to the use of high-Vth transistors, the read power of the P11T SRAM cell is lower by 58.2%, 0.82%, 60.36%, and 451% than that of the 6T, 8TG, DDS11T, and 11T SRAM cells, respectively, at the TT corner with a 0.9 V supply voltage.
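The read-delay definition above (WL assertion to the 50%-of-VDD crossing on RBL) can be sketched as a post-processing step over simulated waveform samples. The sample values and timing below are hypothetical, chosen only to illustrate the measurement; the 0.9 V supply is the value used in the text.

```python
# Sketch of the read-delay measurement: the delay is the time from WL
# assertion to the instant the precharged RBL falls to 50% of VDD.
# Waveform samples below are hypothetical, not simulation data.

VDD = 0.9  # supply voltage (V), as in the text

def read_delay(times, rbl_voltages, wl_assert_time):
    """Return the time from WL assertion until the first sample where
    RBL is at or below 0.5 * VDD, or None if it never crosses."""
    threshold = 0.5 * VDD
    for t, v in zip(times, rbl_voltages):
        if t >= wl_assert_time and v <= threshold:
            return t - wl_assert_time
    return None  # RBL never reached 50% of VDD

# Hypothetical discharge waveform: WL asserted at t = 1.0 ns.
times = [0.0, 1.0, 1.2, 1.4, 1.6, 1.8]   # ns
rbl = [0.9, 0.9, 0.7, 0.5, 0.3, 0.1]     # V
print(round(read_delay(times, rbl, 1.0), 3))  # 0.6 (ns)
```

This coarse version returns the first sample at or below the 0.45 V threshold; a real flow would interpolate between samples for a more precise crossing time.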