Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
page fault: an event that occurs when the processor requests a page that is not currently in main memory. When the processor tries to access an instruction or data element on a page that is not in main memory, a page fault occurs, and the system must retrieve the page from secondary storage before execution can continue.

page frame: a contiguous block of memory locations used to hold a page. See also virtual memory.

page miss penalty: when a page miss occurs, the processor must manage the loading of the requested page as well as the potential replacement of another page. The time this takes, which is entirely devoted to the page miss, is referred to as the page miss penalty.

page offset: the index of a byte or word within a page, calculated as the physical (or virtual) address modulo the page size.

page printing: a printing technique in which the information to be printed on a page is electronically composed and stored before being shipped to the printer; the printer then prints the full page nonstop. Printing speed is usually given in pages per minute (ppm).

page replacement: on a page miss, when a page is to be loaded into main memory, main memory may have no space left for it. To make room for the new page, the processor must choose a page to replace.

page table: a mechanism for translating addresses from logical to physical in a processor equipped with virtual memory capability. Each row of the page table contains a reference to a page frame.
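As a concrete illustration of the page offset entry above, the sketch below splits an address into a page number and a byte offset. The 4 KiB page size and the sample address are illustrative assumptions, not values from the dictionary.

#include <stdio.h>
#include <stdint.h>

/* Assumed page size of 4 KiB; the definitions above hold for any
 * power-of-two page size. */
#define PAGE_SIZE 4096u

int main(void) {
    uint32_t vaddr = 0x0001A2B4u;              /* hypothetical address       */
    uint32_t page_number = vaddr / PAGE_SIZE;  /* which page                 */
    uint32_t page_offset = vaddr % PAGE_SIZE;  /* byte index within the page */

    printf("page number = %u, page offset = %u\n", page_number, page_offset);
    return 0;
}

With a power-of-two page size the same split can be done with a shift and a mask, which is how hardware typically extracts the offset.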
Computer memory systems
Published in Joseph D. Dumas, Computer Architecture, 2016
Sometimes, because the entire program is not loaded into main memory at once, a reference is made to a page that is not present in main memory. This situation is known as a page fault. The memory access cannot be completed, and the MMU interrupts the operating system to ask for help. The operating system must locate the requested page in secondary memory, find an available page frame in main memory (displacing a previously loaded page if memory is full), communicate with the disk controller to cause the page to be loaded, and then restart the program that caused the page fault. To keep the entire system from stalling while the disk is accessed, the operating system will generally transfer control to another process. If this second process has some pages already loaded in main memory, it may be able to run (and thus keep the CPU busy) while the first process is waiting for its page to load. If the second process also encounters a page fault (or has to wait for I/O, etc.), then a third process will be run, and so on.
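The sequence described above can be made concrete with a toy simulation. Everything below (the frame table, the round-robin victim choice, the printed messages) is a hypothetical sketch of the steps Dumas lists, not any real operating system's handler.

#include <stdio.h>
#include <stdbool.h>

#define NUM_FRAMES 4

typedef struct { int page; bool used; } frame_t;
static frame_t frames[NUM_FRAMES];
static int next_victim = 0;

/* Find a free page frame, or -1 if main memory is full. */
static int find_free_frame(void) {
    for (int i = 0; i < NUM_FRAMES; i++)
        if (!frames[i].used) return i;
    return -1;
}

/* Handle a fault on `page`: locate a frame, displacing a previously
 * loaded page if memory is full, then "load" the page from disk. */
static void handle_page_fault(int page) {
    int f = find_free_frame();
    if (f < 0) {                              /* memory full: displace */
        f = next_victim;
        next_victim = (next_victim + 1) % NUM_FRAMES;
        printf("evict page %d from frame %d\n", frames[f].page, f);
    }
    printf("load page %d from disk into frame %d\n", page, f);
    frames[f] = (frame_t){ .page = page, .used = true };
    /* While the disk transfer is pending, a real OS would transfer
     * control to another ready process to keep the CPU busy. */
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 5, 1};
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        handle_page_fault(refs[i]);
    return 0;
}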
Memory Organisation
Published in Pranabananda Chakraborty, Computer Organisation and Architecture, 2020
Switching and manipulating a linked list on every instruction is prohibitively slow, even when implemented in hardware. However, there are other ways to implement LRU with special hardware. This method requires equipping the hardware with a 64-bit counter, C, that is automatically incremented after each instruction. Furthermore, each page table entry must also have a field large enough to store the contents of this counter C. After each memory reference, the current value of C is stored in the page table entry for the page just referenced. When a page fault occurs, the operating system examines all the counters in the page table to find the lowest one. That page is the least recently used and is selected as the victim for replacement.
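The scheme is easy to model in software. The sketch below is a minimal simulation of the counter-based LRU described above, with the 64-bit counter and the per-entry field represented as plain variables (in the real proposal these live in hardware and in the page table).

#include <stdio.h>
#include <stdint.h>

#define NUM_PAGES 4

static uint64_t C = 0;                 /* the 64-bit hardware counter       */
static uint64_t last_use[NUM_PAGES];   /* per-page-table-entry counter copy */

/* After each memory reference, store the current value of C in the
 * entry for the page just referenced. */
static void reference(int page) {
    C++;
    last_use[page] = C;
}

/* On a page fault, examine all the counters and pick the lowest one:
 * that page is the least recently used. */
static int pick_victim(void) {
    int victim = 0;
    for (int p = 1; p < NUM_PAGES; p++)
        if (last_use[p] < last_use[victim]) victim = p;
    return victim;
}

int main(void) {
    int refs[] = {0, 1, 2, 3, 1, 0, 2};
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        reference(refs[i]);
    printf("LRU victim: page %d\n", pick_victim());  /* prints page 3 */
    return 0;
}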
Integrating memory-mapping and N-dimensional hash function for fast and efficient grid-based climate data query
Published in Annals of GIS, 2021
Mengchao Xu, Liang Zhao, Ruixin Yang, Jingchao Yang, Dexuan Sha, Chaowei Yang
In LotDB, data are stored in the secondary storage system and accessed through page files using memory-mapping technology. This technology is widely used in database systems such as LMDB and MongoDB. Specifically, instead of loading the whole file into memory, the file handler maps the file to virtual memory as a big array and assigns a virtual memory address to each page file, without loading any actual data into memory other than the file's metadata. When a data access call is made for a page file, it causes a page fault and triggers a read/write of the secondary storage. In this way, bytes are copied directly to the mapped memory addresses, without passing through the disk caches that standard open/write calls use. In addition, by memory-mapping arrays, LotDB can exceed the memory cap when accessing large data files, making it possible to access big arrays without tiling. Meanwhile, when integrated with the n-dimensional hash function, array indexes can be computed virtually at low cost, greatly increasing data retrieval speed compared with traditional database solutions.
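LotDB's own code is not shown in the excerpt, but the access pattern it describes is the standard POSIX memory-mapping idiom sketched below: the file is mapped as one big array, no data is read at mapping time, and the first touch of each page triggers a page fault that pulls the bytes in. The file name and element type are illustrative assumptions.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    int fd = open("grid.dat", O_RDONLY);   /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file as a big array: only virtual addresses are
     * assigned here; no file data is loaded yet. */
    double *grid = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (grid == MAP_FAILED) { perror("mmap"); return 1; }

    /* The first access to each page causes a page fault, and the kernel
     * copies the bytes straight to the mapped addresses. */
    size_t n = st.st_size / sizeof(double);
    printf("first = %f, last = %f\n", grid[0], grid[n - 1]);

    munmap(grid, st.st_size);
    close(fd);
    return 0;
}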
Challenges in Design, Data Placement, Migration and Power-Performance Trade-offs in DRAM-NVM-based Hybrid Memory Systems
Published in IETE Technical Review, 2023
Sadhana Rai, Basavaraj Talawar
CLOCK-based algorithms maintain a circular list of pages and use reference bits to track whether pages were recently accessed. They maintain hands (pointers) to determine which page to evict. Prominent CLOCK-based algorithms used in hybrid DRAM-NVM-based memories are CLOCK-DWF [62], CLOCK-HM [12], M-CLOCK [61], AC-CLOCK [60] and TA-CLOCK [55]; some of these algorithms include migration as well.

CLOCK with Dirty bits and Write Frequency (CLOCK-DWF [62]) uses write history to predict future writes accurately, the frequency of past writes proving a better predictor of future writes. On a read page fault, the page is loaded into PCM (NVM); otherwise, into DRAM. This method reduced PCM writes compared with previous CLOCK-based algorithms, but its major drawback is that it triggers a migration for every write operation in NVM, thereby increasing the number of migrations [3].

CLOCK for page cache in Hybrid Memory architecture (CLOCK-HM [12]) combines both frequency and recency. When a page is evicted, its information is recorded at the MRU position of a history list. If the same page is brought back into memory, its second-chance bit is set, indicating its frequency. If a page whose second-chance bit is set is chosen for eviction, the algorithm gives it a second chance, resetting the bit instead of evicting it. On a page fault, the page's history is checked; if the page is found to be write-intensive, it is loaded into DRAM, otherwise it can be loaded into either of the two memories. This method also aims to reduce NVM writes as well as unnecessary migrations.
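The algorithms above differ in their DRAM/NVM placement and history logic, but all build on the same second-chance mechanism, sketched generically below. The frame contents are illustrative, and none of the CLOCK-DWF or CLOCK-HM specifics are modeled.

#include <stdio.h>
#include <stdbool.h>

#define NUM_FRAMES 4

typedef struct { int page; bool ref; } frame_t;

/* A circular list of pages with reference bits, and a hand (pointer). */
static frame_t frames[NUM_FRAMES] = {
    {10, true}, {11, false}, {12, true}, {13, false}
};
static int hand = 0;

/* Advance the hand: a frame with its reference bit set gets a second
 * chance (the bit is cleared); the first frame found with a clear bit
 * is chosen for eviction. */
static int clock_evict(void) {
    for (;;) {
        if (frames[hand].ref) {
            frames[hand].ref = false;          /* second chance */
            hand = (hand + 1) % NUM_FRAMES;
        } else {
            int victim = hand;
            hand = (hand + 1) % NUM_FRAMES;
            return victim;
        }
    }
}

int main(void) {
    int v = clock_evict();
    printf("evict page %d from frame %d\n", frames[v].page, v);
    return 0;
}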
A mathematical multi-dimensional mechanism to improve process migration efficiency in peer-to-peer computing environments
Published in Cogent Engineering, 2018
Ehsan Mousavi Khaneghah, Reyhaneh Noorabad Ghahroodi, Amirhosein Reyhani ShowkatAbad
As Figure 3 indicates, the first process is suspended. The execution and control state, together with some parts of the address space, the file descriptors, and the dirty file cache blocks, are sent to the destination machine; in this strategy, the code and the stack are not transferred. After this transmission, the migrated process resumes on the destination machine. If a page fault occurs during execution, the needed code is fetched on demand: the destination machine sends a request, and the required code and stack pages are received from the source machine.
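The demand-fetching step can be sketched as below: when the migrated process faults on a code or stack page that was left behind, the destination asks the source for it. The transport function is a hypothetical placeholder for whatever channel the migration system uses; this is a schematic of the strategy, not the paper's implementation.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stub standing in for a network round trip to the source
 * machine; a real system would send a request and receive the page. */
static void request_page_from_source(int page, unsigned char *buf) {
    memset(buf, 0, PAGE_SIZE);          /* pretend the source replied */
    printf("fetched code/stack page %d from source machine\n", page);
}

/* Destination-side handler for a fault on a not-yet-transferred page. */
static void on_remote_page_fault(int page, unsigned char *frame) {
    request_page_from_source(page, frame);
    /* ...map the frame into the process and resume execution... */
}

int main(void) {
    static unsigned char frame[PAGE_SIZE];
    on_remote_page_fault(7, frame);     /* fault on an illustrative page */
    return 0;
}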