Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
memory address register (MAR) a register inside the CPU that holds the address of the memory location being accessed while the access is taking place.

memory alignment matching data to the physical characteristics of the computer memory. Computer memory is generally addressed in bytes, while memories handle data in units of 4, 8, or 16 bytes. If the "memory width" is 64 bits, then reading or writing an 8-byte (64-bit) quantity is more efficient if data words are aligned to the 64-bit words of the physical memory. Data that is not aligned may require more memory accesses and more-or-less complex masking and shifting, all of which slow the operations. Some computers insist that operands be properly aligned, often raising an exception or interrupt on unaligned addresses. Others allow unaligned data, but at the cost of lower performance.

memory allocation the act of reserving memory for a particular process.

memory bandwidth the maximum amount of data per unit time that can be transferred between a processor and memory.

memory bank a subdivision of memory that can be accessed independently of (and often in parallel with) other memory banks.

memory bank conflict a conflict that arises when multiple memory accesses are issued to the same memory bank, leading to additional buffer delay for accesses that reach the bank while it is busy serving a previous access. See also interleaved memory.

memory block a contiguous unit of data that is transferred between two adjacent levels of a memory hierarchy. The size of a block varies with the distance from the CPU, increasing as levels get farther from the CPU, in order to make transfers efficient.
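The alignment and bank-interleaving entries above can be made concrete with a small worked example. The following C sketch is illustrative only: the 8-byte memory width, the 8-bank low-order-interleaved layout, and the helper names (is_aligned, bank_index) are assumptions chosen for the example, not definitions from the dictionary.

```c
/* Minimal sketch (assumptions, not from the dictionary): illustrates the
 * alignment and bank-interleaving ideas defined in the entries above. */
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

#define MEM_WIDTH 8u   /* assumed "memory width" of 64 bits (8 bytes)        */
#define NUM_BANKS 8u   /* assumed number of independently accessible banks   */

/* An aligned 64-bit quantity: reading it touches exactly one memory word. */
static alignas(8) uint64_t aligned_value;

/* Does this address fall on a memory-word boundary? */
static int is_aligned(const void *p) {
    return ((uintptr_t)p % MEM_WIDTH) == 0;
}

/* Low-order interleaving: consecutive memory words map to consecutive banks,
 * so unit-stride accesses spread across banks, while a stride of NUM_BANKS
 * words hits the same bank repeatedly (a bank conflict). */
static unsigned bank_index(const void *p) {
    return (unsigned)(((uintptr_t)p / MEM_WIDTH) % NUM_BANKS);
}

int main(void) {
    unsigned char buf[64];
    /* buf + 1 is almost certainly not 8-byte aligned; an unaligned 64-bit
     * access there may need two memory words plus shifting and masking. */
    printf("aligned_value aligned? %d\n", is_aligned(&aligned_value));
    printf("buf+1 aligned?         %d\n", is_aligned(buf + 1));
    printf("bank of buf[0]: %u\n", bank_index(buf));
    printf("bank of buf[8]: %u\n", bank_index(buf + 8));
    return 0;
}
```

With unit-stride traversal the computed bank index cycles through all eight banks, whereas a stride of NUM_BANKS memory words would return the same index every time, which is the bank-conflict case described in the entry above.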
Optimization strategies for GPUs: an overview of architectural approaches
Published in International Journal of Parallel, Emergent and Distributed Systems, 2023
Alessio Masola, Nicola Capodieci
As far as memory interference in CPU systems is concerned, traditional solutions rely on memory bank [53] and/or cache [54] partitioning approaches. Cache partitioning is a well-known run-time LLC management approach used in multicore processors to enhance the performance and predictability of memory access latencies: it assigns cache or bank partitions to CPU cores by giving them exclusive access to a subset of the cache ways; by doing so, the interference in shared memory hierarchies caused by simultaneous requests from competing memory clients is significantly mitigated.
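As a rough illustration of the way-partitioning idea described above, the following C sketch models an LLC in which each core may only fill and evict lines within its own subset of the cache ways, so one core's traffic cannot displace another core's lines. The set/way geometry, the two-core way masks, and the function names are assumptions made for this sketch; they do not correspond to any particular hardware or OS interface.

```c
/* Conceptual sketch (assumed model, not a vendor API): way-based LLC
 * partitioning restricts each core to its own subset of cache ways,
 * so cores cannot evict each other's lines. */
#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS   8u
#define NUM_SETS   64u
#define LINE_BYTES 64u

typedef struct {
    uint64_t tag[NUM_SETS][NUM_WAYS];
    int      valid[NUM_SETS][NUM_WAYS];
} llc_t;

/* Per-core way masks (assumed policy): core 0 owns ways 0-3, core 1 owns 4-7. */
static const uint8_t way_mask[2] = { 0x0F, 0xF0 };

/* Insert a line for `core`, touching only ways that core is allowed to use. */
static void llc_insert(llc_t *c, int core, uint64_t addr) {
    uint64_t line = addr / LINE_BYTES;
    unsigned set  = (unsigned)(line % NUM_SETS);
    uint64_t tag  = line / NUM_SETS;

    for (unsigned w = 0; w < NUM_WAYS; w++) {
        if (!(way_mask[core] & (1u << w)))
            continue;                 /* way belongs to the other partition */
        if (!c->valid[set][w] || c->tag[set][w] == tag) {
            c->valid[set][w] = 1;     /* fill an empty or matching way      */
            c->tag[set][w]   = tag;
            return;
        }
    }
    /* All allowed ways full: evict within the core's own partition only
     * (simplistic first-way replacement; a real cache would use LRU). */
    for (unsigned w = 0; w < NUM_WAYS; w++) {
        if (way_mask[core] & (1u << w)) {
            c->tag[set][w] = tag;
            return;
        }
    }
}

int main(void) {
    static llc_t cache = {0};
    /* Core 1 streams through far more data than the cache holds, yet it
     * cannot evict the line core 0 cached in its own partition. */
    llc_insert(&cache, 0, 0x1000);
    for (uint64_t a = 0; a < 64u * NUM_SETS * LINE_BYTES; a += LINE_BYTES)
        llc_insert(&cache, 1, a);
    printf("core 0 line still cached in its partition: %d\n",
           cache.valid[(0x1000 / LINE_BYTES) % NUM_SETS][0]);
    return 0;
}
```

In this toy model, removing the way masks would let core 1's streaming accesses evict core 0's line, which is exactly the inter-core interference that way-based partitioning is meant to prevent.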
Cache performance of NV-STT-MRAM with scale effect and comparison with SRAM
Published in International Journal of Electronics, 2022
Zitong Zhang, Wenjie Wang, Pingping Yu, Yanfeng Jiang
Above all, the STT-MRAM memory bank shows better performance at large capacities than the SRAM-based cache. In addition, the write delay and leakage power of a large-capacity (> 32 MB) STT-MRAM cache can also be optimised through a reasonable configuration scheme (see Section II).