Dynamic Random Access Memory (DRAM)
Published in Shimeng Yu, Semiconductor Memory Devices and Circuits, 2022
Embedded DRAM (eDRAM) is a variant of DRAM integrated on the same chip as the logic, primarily serving as a last-level cache with ultra-large capacity (100 MB–1 GB). Unlike standalone DRAM, which uses a separate fabrication process, eDRAM is fully compatible with the logic process. In terms of cell area (and hence capacity) and access speed, eDRAM sits between SRAM and standalone DRAM. As aforementioned, SRAM has a typical cell area ranging from 150 F² to 300 F², and standalone DRAM has a typical cell area of 6 F², while eDRAM has a typical cell area ranging from 30 F² to 90 F². SRAM can be accessed within 1 ns and standalone DRAM within several tens of ns (row cycle time), while eDRAM can be accessed within a few ns. However, a notable drawback of eDRAM is its retention time, reduced to around 100 μs by the increased leakage current of the cell access transistor in the logic process.
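The cost of short retention can be made concrete with a first-order refresh-overhead calculation. The sketch below is illustrative, not from the chapter: the ~100 μs eDRAM retention comes from the text, while the ~64 ms DRAM retention, row count, and per-row refresh time are typical assumed values.

```python
# Illustrative sketch: fraction of time an array spends refreshing,
# assuming every row must be refreshed once per retention period.
# Array parameters (8192 rows, 50 ns per row refresh) are assumptions.

def refresh_overhead(retention_s, rows, row_refresh_s):
    """Fraction of time the array is busy refreshing."""
    return (rows * row_refresh_s) / retention_s

dram_overhead = refresh_overhead(64e-3, 8192, 50e-9)    # standalone DRAM, ~64 ms (assumed)
edram_overhead = refresh_overhead(100e-6, 8192, 50e-9)  # eDRAM, ~100 us (from text)

print(f"DRAM refresh overhead:  {dram_overhead:.2%}")
print(f"eDRAM refresh overhead: {edram_overhead:.2%}")
```

Under these assumed numbers the standalone-DRAM overhead is well under 1%, while the eDRAM figure exceeds 100%, suggesting that a single sequential refresh port could not keep up and hinting at why eDRAM macros typically refresh multiple subarrays concurrently.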
Dynamic Intrinsic Chip ID for Hardware Security
Published in Tomasz Wojcicki, Krzysztof Iniewski, VLSI: Circuits for Emerging Applications, 2017
Toshiaki Kirihata, Sami Rosenblatt
eDRAM employs a one-transistor, one-capacitor (1T1C) memory cell that stores a data bit for read and write operations. To reduce the cell size, the capacitor is built using either a stacked [42] or a trench [43] capacitor structure. The deep trench capacitor is the preferred structure for eDRAM because the capacitor is built before device fabrication. This facilitates a process fully compatible with logic technology: transistor performance does not degrade because of capacitor fabrication, and the design rules for the back-end-of-line (metal wiring) remain the same as those of the logic technology. This makes eDRAM an ideal technology solution for integrating DRAM on a high-performance logic chip.
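Reading a 1T1C cell works by charge sharing: the access transistor connects the cell capacitor to a precharged bitline, and a sense amplifier resolves the resulting small voltage swing. The following is a minimal first-order sketch of that swing; the capacitance and voltage values are assumed, typical-looking numbers, not figures from the chapter.

```python
# Illustrative sketch: first-order charge-sharing model of a 1T1C read.
# All component values below are assumptions for illustration.

def bitline_swing(v_cell, v_pre, c_cell, c_bl):
    """Bitline voltage change when the cell capacitor (at v_cell) is
    connected to a bitline (capacitance c_bl) precharged to v_pre."""
    return (v_cell - v_pre) * c_cell / (c_cell + c_bl)

VDD = 1.0        # assumed supply voltage
C_CELL = 20e-15  # assumed cell capacitance, 20 fF
C_BL = 100e-15   # assumed bitline capacitance, 100 fF

swing_1 = bitline_swing(VDD, VDD / 2, C_CELL, C_BL)  # cell stores '1'
swing_0 = bitline_swing(0.0, VDD / 2, C_CELL, C_BL)  # cell stores '0'
print(f"read '1': {swing_1 * 1e3:+.1f} mV, read '0': {swing_0 * 1e3:+.1f} mV")
```

The model shows why cell capacitance matters: the swing scales with c_cell / (c_cell + c_bl), so a deep trench structure that preserves capacitance without disturbing the logic process directly preserves read signal margin.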
Challenges in Design, Data Placement, Migration and Power-Performance Trade-offs in DRAM-NVM-based Hybrid Memory Systems
Published in IETE Technical Review, 2023
Sadhana Rai, Basavaraj Talawar
DRAM has been the prevalent main memory technology due to its performance benefits over competing technologies [1–5]. Technologies like big data [6,7], cloud computing [7,8], the Artificial Intelligence of Things (AIoT) [9] and multiprogram workloads [10] have high memory footprints and demand low energy consumption and high throughput [11]. The performance and energy constraints of these technologies can be met by memory systems with high capacity, low access latency and improved energy efficiency. Conventional DRAM cannot satisfy all these requirements due to its scalability issues and high static power consumption [3,9,12–14]. Previous research has shown that DRAM consumes around 40% of the energy in modern systems [15–17]. Technologies like 3D-stacked DRAM, Reduced-Latency DRAM (RLDRAM), Embedded DRAM (eDRAM) and Low-Power DRAM (LPDRAM) are alternatives built on DRAM circuit design, architectures and interfaces to serve the needs of emerging applications [18]. Apart from these DRAM alternatives, non-volatile memory (NVM) technologies like PCM, ReRAM, STT-RAM and Intel’s 3D XPoint are considered potential candidates for future main memory systems, offering operational speeds comparable with DRAM along with high capacity, low cost, non-volatility and low static energy consumption. Despite these advantages, the new memory technologies cannot completely replace existing DRAM because of their drawbacks: limited capacity and high cost (3D-stacked DRAM [18,19]), high energy consumption (RLDRAM [20]) and poor access latencies (LPDRAM [21]). eDRAM provides excellent bandwidth with reduced power consumption compared to conventional DRAM, but it offers limited control over internal behavior and has endurance roughly 10³× lower than normal DRAM [22,23], while NVMs suffer from high write energy, high write latency and limited endurance [3,7,18,24,25].
Hence, state-of-the-art systems are replacing traditional monolithic DRAM-based memory subsystems with tiered memory architectures, also known as hybrid memory systems. The key idea is to integrate DRAM with its alternatives, or to combine DRAM with emerging non-volatile memory (NVM), so that high memory capacity can be obtained without compromising cost or performance [18]. In this paper, we discuss the challenges in design, data placement, migration and power-performance trade-offs in hybrid memory comprising DRAM and NVM.
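The data-placement and migration challenge can be illustrated with a toy policy of the kind the paper surveys: keep pages in the large, slow NVM tier by default and promote a page to the small, fast DRAM tier once it has been accessed often enough. This sketch is an assumption-laden illustration, not an algorithm from the paper; the threshold value, eviction choice and flat page model are all hypothetical.

```python
# Illustrative sketch: threshold-based hot-page migration in a
# DRAM-NVM hybrid. Policy details below are assumptions.

class HybridMemory:
    def __init__(self, dram_pages, hot_threshold=4):
        self.dram = set()              # pages currently resident in fast DRAM
        self.dram_pages = dram_pages   # DRAM tier capacity, in pages
        self.hot_threshold = hot_threshold
        self.nvm_hits = {}             # per-page access counts while in NVM

    def access(self, page):
        """Serve one access; return which tier served it."""
        if page in self.dram:
            return "dram"
        self.nvm_hits[page] = self.nvm_hits.get(page, 0) + 1
        if self.nvm_hits[page] >= self.hot_threshold:
            self._migrate(page)        # promote the now-hot page to DRAM
        return "nvm"                   # this access was still served from NVM

    def _migrate(self, page):
        if len(self.dram) >= self.dram_pages:
            victim = self.dram.pop()   # arbitrary eviction; real policies track recency/frequency
            self.nvm_hits[victim] = 0  # demoted page starts cold again
        self.dram.add(page)
        self.nvm_hits.pop(page, None)
```

Even this toy version exposes the trade-offs discussed in the paper: the threshold trades migration (write) traffic to NVM-adjacent DRAM against hit latency, and the eviction policy determines how much DRAM capacity is wasted on pages that have gone cold.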