Large Graph Computing Systems
Published in Kuan-Ching Li, Hai Jiang, Albert Y. Zomaya, Big Data Management and Processing, 2017
Chengwen Wu, Guangyan Zhang, Keqin Li, Weimin Zheng
Figure 17.13 shows the architecture of FlashGraph. Edge data are stored on SSDs and accessed selectively, and a compact edge-data format is used to reduce the amount of I/O. The SSDs are managed by SAFS; to improve performance, an asynchronous user-task I/O interface is added to SAFS that allows general-purpose computation in the page cache, reducing both the overhead of accessing data in the page cache and memory consumption. It also overlaps I/O with computation. The graph engine is responsible for scheduling vertex programs; to optimize performance, the engine merges adjacent I/O requests issued by vertex programs, which not only reduces the amount of I/O but also makes the I/O sequential. FlashGraph exposes a vertex-centric interface to users, with which a variety of graph algorithms can be expressed.
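To make the programming model concrete, the following is a minimal, illustrative sketch of a vertex-centric breadth-first search in Python. It only mirrors the general style of a vertex-centric interface; the class names, message-passing scheme, and scheduling loop are assumptions for illustration and do not correspond to FlashGraph's actual (C++) API.

```python
# Illustrative vertex-centric BFS. In a system like FlashGraph, the
# neighbors() step is where edge lists would be read selectively from SSD,
# and only vertices with pending messages are scheduled in each iteration.

class BFSVertex:
    def __init__(self, vid):
        self.vid = vid
        self.visited = False

    def run(self, graph, messages):
        # Activate on the first incoming message, then notify all neighbors.
        if not self.visited and messages:
            self.visited = True
            for nbr in graph.neighbors(self.vid):
                graph.send(nbr, self.vid)        # schedule neighbor for next iteration


class Graph:
    def __init__(self, adj):
        self.adj = adj                           # adjacency lists: {vid: [neighbor, ...]}
        self.inbox = {v: [] for v in adj}
        self.next_inbox = {v: [] for v in adj}

    def neighbors(self, vid):
        return self.adj[vid]

    def send(self, vid, msg):
        self.next_inbox[vid].append(msg)

    def run(self, vertex_cls, source):
        vertices = {v: vertex_cls(v) for v in self.adj}
        self.next_inbox[source].append(source)   # seed the traversal
        while any(self.next_inbox.values()):
            self.inbox, self.next_inbox = self.next_inbox, {v: [] for v in self.adj}
            for vid, msgs in self.inbox.items():
                if msgs:
                    vertices[vid].run(self, msgs)   # only active vertices execute
        return [v for v, vx in vertices.items() if vx.visited]


# Usage: BFS from vertex 0 on a small graph.
g = Graph({0: [1, 2], 1: [3], 2: [3], 3: []})
print(g.run(BFSVertex, 0))                       # -> [0, 1, 2, 3]
```

Because only active vertices issue edge reads in each iteration, requests for adjacent edge lists tend to cluster, which is what makes merging adjacent I/O requests worthwhile in an SSD-backed engine.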
Challenges in Design, Data Placement, Migration and Power-Performance Trade-offs in DRAM-NVM-based Hybrid Memory Systems
Published in IETE Technical Review, 2023
Sadhana Rai, Basavaraj Talawar
CLOCK-based algorithms maintain a circular list of pages and track whether each page has been accessed recently using reference bits; a hand (pointer) is advanced to find the page to be evicted. Prominent CLOCK-based algorithms used in hybrid DRAM-NVM memories are CLOCK-DWF [62], CLOCK-HM [12], M-CLOCK [61], AC-CLOCK [60], and TA-CLOCK [55]; some of them include migration as well. CLOCK with Dirty bits and Write Frequency (CLOCK-DWF [62]) uses write history to predict future writes, on the basis that the frequency of past writes is a good predictor of future writes. On a read page fault the page is loaded into PCM (NVM); otherwise it is loaded into DRAM. This method reduces PCM writes compared with earlier CLOCK-based algorithms, but its major drawback is that it triggers a migration for every write to a page in NVM, thereby increasing the number of migrations [3]. CLOCK for page cache in Hybrid Memory architecture (CLOCK-HM [12]) combines frequency and recency. When a page is evicted, its information is recorded at the MRU position of a history list; if the same page is later brought back into memory, its second-chance bit is set, indicating its frequency. If a page whose second-chance bit is set is chosen for eviction, the algorithm gives it a second chance by resetting the bit instead of evicting it. On a page fault the page's history is checked: if the page is found to be write-intensive it is loaded into DRAM, otherwise it can be loaded into either memory. This method also reduces NVM writes as well as unnecessary migrations.
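As a point of reference for the variants above, the sketch below implements the basic CLOCK (second-chance) eviction they all build on, plus a simplified DRAM/NVM placement hint that puts pages with any write history into DRAM. It is an illustrative simplification, not the actual CLOCK-DWF or CLOCK-HM algorithm; the data structures and the placement rule are assumptions.

```python
# Basic CLOCK (second-chance) replacement with a simplified DRAM/NVM
# placement hint. Hybrid-memory variants such as CLOCK-DWF and CLOCK-HM
# add write-frequency tracking, history lists, and migration on top of this.

class ClockCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = []          # circular list of resident page IDs
        self.medium = {}         # placement of each resident page: "DRAM" or "NVM"
        self.ref_bit = {}        # reference ("second chance") bits
        self.hand = 0            # clock hand

    def access(self, page_id, is_write, write_history):
        write_history[page_id] = write_history.get(page_id, 0) + (1 if is_write else 0)
        if page_id in self.ref_bit:              # hit: just set the reference bit
            self.ref_bit[page_id] = 1
            return
        if len(self.pages) >= self.capacity:     # page fault with a full cache
            self._evict()
        # Simplified placement hint: any write history sends the page to DRAM,
        # read-only pages go to NVM to avoid costly NVM writes.
        self.medium[page_id] = "DRAM" if write_history[page_id] > 0 else "NVM"
        self.pages.insert(self.hand, page_id)
        self.hand = (self.hand + 1) % len(self.pages)   # new page is swept last
        self.ref_bit[page_id] = 1

    def _evict(self):
        # Sweep the hand, clearing reference bits until an unreferenced page
        # is found; pages with the bit set get a second chance instead.
        while True:
            victim = self.pages[self.hand]
            if self.ref_bit[victim] == 0:
                self.pages.pop(self.hand)
                del self.ref_bit[victim]
                del self.medium[victim]
                self.hand = self.hand % len(self.pages) if self.pages else 0
                return
            self.ref_bit[victim] = 0             # second chance given
            self.hand = (self.hand + 1) % len(self.pages)


# Usage:
cache = ClockCache(capacity=2)
history = {}
cache.access(1, is_write=False, write_history=history)   # read fault  -> placed in NVM
cache.access(2, is_write=True,  write_history=history)   # write fault -> placed in DRAM
cache.access(3, is_write=False, write_history=history)   # full cache  -> CLOCK evicts page 1
```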
Research on the load balancing strategy for original pages based on cloud storage
Published in Systems Science & Control Engineering, 2019
The storage load is measured by the hard-disk occupancy rate: each node obtains its disk usage through a system call and feeds this information back to the management node in real time. Content-page analysis is the most computation-intensive part of a storage node's work, so the computation load is measured by the amount of original-page cache data awaiting analysis on the storage node, and the corresponding buffer occupancy rate is sent to the management node in real time as the computation load (Harchol-Balter & Downey, 1997).
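The sketch below illustrates, under stated assumptions, how a storage node might compute these two metrics and feed them back to the management node. The buffer capacity, report format, and management-node address are hypothetical; the paper does not specify them.

```python
# Illustrative load reporting for a storage node: disk occupancy as the
# storage load, occupancy of the buffer of pages awaiting analysis as the
# computation load. The report format and endpoint are assumptions.

import json
import shutil
import socket

BUFFER_CAPACITY_BYTES = 256 * 1024 * 1024      # assumed analysis-buffer size

def storage_load(path="/data"):
    """Storage load = fraction of the disk that is already occupied."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def computation_load(pending_page_bytes):
    """Computation load = occupancy of the buffer of pages awaiting analysis."""
    return min(pending_page_bytes / BUFFER_CAPACITY_BYTES, 1.0)

def report_load(node_id, pending_page_bytes, mgmt_addr=("mgmt.example", 9000)):
    """Send the current load figures to the (hypothetical) management node."""
    payload = json.dumps({
        "node": node_id,
        "storage_load": storage_load(),
        "computation_load": computation_load(pending_page_bytes),
    }).encode()
    with socket.create_connection(mgmt_addr, timeout=2) as sock:
        sock.sendall(payload)
```

In practice such a report would be sent periodically or whenever the buffered amount changes significantly, so that the management node always holds a near-real-time view of each storage node's load.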