The Evolution of Computer Architecture
Published in S.B. Furber, VLSI RISC Architecture and Organization, 2017
written back to main memory before the location may be re-used. Whatever the write strategy, a write buffer may be used to allow the processor to continue without having to wait for a full main memory access. The data is written to the buffer, which subsequently passes it on to main memory, but in the meantime the processor continues with the following operations. The buffer could handle just one write operation at a time, in which case the processor will stall if a second write occurs before the first has completed, or it could contain a queue of write operations which will not cause a stall unless the queue becomes full. Caches are extensively used in high performance computers, and are now increasingly being used in 32-bit microprocessor systems, including those based on RISC architectures. RISC processors tend to require very high instruction bandwidths, so cache design is important for good performance.
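To make the two buffer organisations concrete, the following C sketch (an illustration of the idea, not code from the book) models the write buffer as a fixed-depth FIFO: with a depth of one, the processor stalls as soon as a second write arrives before the first has reached main memory, while a deeper queue stalls only when it fills. The structure names, depth, and addresses are illustrative assumptions.

```c
/* Sketch of the stall behaviour described above: a write buffer modelled
 * as a fixed-depth FIFO. With DEPTH == 1 the processor stalls whenever a
 * second write arrives before the first has drained to main memory; with
 * a deeper queue it stalls only when the queue is full. All names and
 * sizes here are assumptions made for the example. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEPTH 4               /* number of pending writes the buffer can hold */

typedef struct {
    uint32_t addr[DEPTH];
    uint32_t data[DEPTH];
    int      count;           /* writes currently queued */
} write_buffer_t;

/* Processor side: returns true if the write was accepted, false if the
 * processor must stall because the buffer is full. */
static bool wb_push(write_buffer_t *wb, uint32_t addr, uint32_t data)
{
    if (wb->count == DEPTH)
        return false;                      /* stall: queue full */
    wb->addr[wb->count] = addr;
    wb->data[wb->count] = data;
    wb->count++;
    return true;                           /* processor continues immediately */
}

/* Memory side: called once per (slow) main-memory write cycle to drain
 * the oldest entry to main memory. */
static void wb_drain_one(write_buffer_t *wb)
{
    if (wb->count == 0)
        return;
    printf("main memory <- [%08x] = %08x\n",
           (unsigned)wb->addr[0], (unsigned)wb->data[0]);
    for (int i = 1; i < wb->count; i++) {  /* shift the queue forward */
        wb->addr[i - 1] = wb->addr[i];
        wb->data[i - 1] = wb->data[i];
    }
    wb->count--;
}

int main(void)
{
    write_buffer_t wb = { .count = 0 };
    /* Issue more writes than the buffer can hold to show where a stall occurs. */
    for (uint32_t i = 0; i < 6; i++) {
        if (!wb_push(&wb, 0x1000 + 4 * i, i))
            printf("write %u stalls until the buffer drains\n", (unsigned)i);
    }
    while (wb.count > 0)
        wb_drain_one(&wb);
    return 0;
}
```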
Shared Memory Architecture
Published in Vivek Kale, Parallel Computing Architectures and APIs, 2019
An advantage of this approach is that other devices, such as input/output (I/O) modules with direct access to main memory, always see the newest value of a memory block. A disadvantage of write-through is that every write to the cache also causes a write to main memory, which typically takes at least 100 processor cycles to complete; this could slow the processor down if it had to wait for that write to finish. To avoid making the processor wait, a write buffer can be used to hold pending writes to main memory. After writing the data into the cache and into the write buffer, the processor continues execution without waiting for the write to main memory to complete.
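The store path described here can be sketched as follows; this is an illustration rather than code from the chapter. The store updates the cache line and queues the main-memory update in the write buffer, then returns, so the processor does not wait the roughly 100 cycles the main-memory write itself takes. The cache geometry, buffer depth, and function names are assumed for the example.

```c
/* Sketch of a write-through store path with a write buffer: the data goes
 * into the cache line and into the write buffer, and the call returns
 * without waiting for main memory. Geometry and names are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define LINES    256           /* toy direct-mapped cache, one word per line */
#define WB_DEPTH 8

typedef struct { uint32_t tag; uint32_t word; int valid; } cache_line_t;
typedef struct { uint32_t addr[WB_DEPTH]; uint32_t data[WB_DEPTH]; int n; } wbuf_t;

static cache_line_t l1[LINES];
static wbuf_t       wbuf;

/* Write-through store: update the cache (so later reads hit) and queue the
 * main-memory update in the write buffer. Only a full buffer stalls. */
static int store_word(uint32_t addr, uint32_t data)
{
    uint32_t index = (addr >> 2) % LINES;
    l1[index].tag   = addr >> 2;          /* keep the cached copy current */
    l1[index].word  = data;
    l1[index].valid = 1;

    if (wbuf.n == WB_DEPTH)
        return -1;                        /* processor must stall this cycle */
    wbuf.addr[wbuf.n] = addr;             /* main memory is updated later, */
    wbuf.data[wbuf.n] = data;             /* off the processor's critical path */
    wbuf.n++;
    return 0;                             /* execution continues immediately */
}

int main(void)
{
    if (store_word(0x2000, 0xdeadbeef) == 0)
        printf("store accepted; processor continues without waiting\n");
    return 0;
}
```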
Write energy reduction of STT-MRAM based multi-core cache hierarchies
Published in International Journal of Electronics Letters, 2019
The STT-MRAM cache memory design follows the same basic design practice as an SRAM cache, consisting of subarrays, word lines, bit lines, sense amplifiers, and H-tree routing, except that each memory cell is a 1T-1MTJ cell rather than a 6T SRAM cell. Hence an STT-MRAM-based cache works much like its SRAM counterpart; its drawback is the long latency and high energy consumption of write operations. In a conventional cache hierarchy with a write-through policy, all read and write operations access the write buffer and the L1 data cache in parallel. The write buffer (Chu & Gottipati, 1994; Skadron & Clark, 1997) plays a significant role in the cache hierarchy: it reduces the write traffic to the L2 cache and keeps the L1 data cache fast. It reduces traffic to the next level of cache by aggregating writes to the same cache block. A second advantage of the write buffer is that it absorbs writes at a rate faster than the next level of cache can accept them.
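A rough sketch of the write aggregation attributed to the buffer is given below; it is an illustration, not the paper's implementation. Each buffer entry holds one cache block, and a later write to a block that is already buffered merges into that entry, so repeated writes to the same block reach the STT-MRAM L2 only once. Block size, buffer depth, and names are assumptions.

```c
/* Sketch of write coalescing in a write buffer: entries hold one cache
 * block each, and a new write to an already-buffered block merges into
 * the existing entry instead of allocating a new one, cutting the write
 * traffic that reaches the (slow-to-write) STT-MRAM L2 cache. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK_WORDS 8              /* 32-byte block, 4-byte words (assumed) */
#define WB_ENTRIES  4

typedef struct {
    uint32_t block_addr;                   /* address of the buffered block */
    uint32_t word[BLOCK_WORDS];
    uint8_t  valid_mask;                   /* which words this entry holds */
    bool     in_use;
} wb_entry_t;

static wb_entry_t wb[WB_ENTRIES];

/* Returns true if the write was absorbed (merged or newly allocated),
 * false if the buffer is full and the write must wait. */
static bool wb_write(uint32_t addr, uint32_t data)
{
    uint32_t block  = addr / (BLOCK_WORDS * 4);
    uint32_t offset = (addr / 4) % BLOCK_WORDS;

    /* First try to merge with an entry for the same block: this is the
     * aggregation that reduces write traffic to the next cache level. */
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (wb[i].in_use && wb[i].block_addr == block) {
            wb[i].word[offset]  = data;
            wb[i].valid_mask   |= (uint8_t)(1u << offset);
            return true;
        }
    }
    /* Otherwise allocate a free entry for the new block. */
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (!wb[i].in_use) {
            wb[i].in_use       = true;
            wb[i].block_addr   = block;
            wb[i].word[offset] = data;
            wb[i].valid_mask   = (uint8_t)(1u << offset);
            return true;
        }
    }
    return false;                          /* buffer full: the write stalls */
}

int main(void)
{
    /* Two writes to the same block coalesce into a single buffer entry. */
    wb_write(0x1000, 1);
    wb_write(0x1004, 2);
    printf("entry 0 holds block %u, word mask 0x%02x\n",
           (unsigned)wb[0].block_addr, (unsigned)wb[0].valid_mask);
    return 0;
}
```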