Cache and Memory
Published in Heqing Zhu, Data Plane Development Kit (DPDK), 2020
In a computing system, software uses virtual memory addresses, not physical memory addresses. Memory management and paging have long been used for address translation. System memory is organized in pages, and the traditional page size is 4 KB. Huge pages were introduced later; Linux supports huge page sizes of 2 MB or 1 GB. Address translation is in fact a multilevel page table lookup. The TLB (translation lookaside buffer) is part of the CPU: a cache of virtual-to-physical translations that speeds up address translation. For a given virtual address, if the entry resides in the TLB, the physical address is found immediately; such a match is known as a TLB hit. If the address is not in the TLB (a TLB miss), the CPU must perform a page walk to complete the translation, which can take many cycles because it may require multiple memory references. If the page table entries are not in cache, the miss leads to memory accesses whose latency can reach hundreds of cycles.
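The hit/miss behaviour described above can be sketched in a few lines. This is a minimal illustrative model, not real hardware: the names (PAGE_SIZE, page_table, tlb, translate) are assumptions for the example, and the page table is flattened to a single-level dictionary rather than the multilevel structure the text describes.

```python
# Toy model of virtual-to-physical translation with a TLB, assuming 4 KB pages.
PAGE_SIZE = 4096  # traditional 4 KB page

page_table = {0x0: 0x5, 0x1: 0x9}  # virtual page number -> physical frame number
tlb = {}                           # small cache of recent translations

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                 # TLB hit: physical address found immediately
        pfn = tlb[vpn]
    else:                          # TLB miss: walk the page table (slow path)
        pfn = page_table[vpn]
        tlb[vpn] = pfn             # cache the translation for next time
    return pfn * PAGE_SIZE + offset

print(hex(translate(0x1004)))  # virtual page 1, offset 4 -> 0x9004
```

A second call to `translate(0x1004)` hits the TLB and skips the page walk, which is exactly the latency saving the text attributes to the TLB.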
Microcontroller Hardware
Published in Syed R. Rizvi, Microcontroller Programming, 2016
Previously, we defined the terms on-chip and off-chip memory. Recall that on-chip memory refers to any memory that physically exists on the microcontroller itself, while memory used externally when the microcontroller works in expanded mode for special cases is called expanded off-chip memory. But what is a memory? A memory refers to computer components and recording media that retain digital data used for computing for some interval of time. As any storage has a location and an address for that location, a memory also has its location and address in the hardware. A memory address is an identifier for a memory location at which a microcontroller program or a hardware device can store data and later retrieve it. Generally, this is a binary number from a finite, monotonically ordered sequence that uniquely identifies a memory location. The HC11 uses a 16-bit address, and the number of unique addresses equals 2^n, where n is the number of address bits in the system. Thus, with n = 16, 64K unique memory locations are possible. Figure 3.18 illustrates the memory map on the HC11.
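The address-space arithmetic above can be checked directly; this small sketch simply evaluates 2^n for a 16-bit bus like the HC11's.

```python
# With n address bits there are 2**n unique memory locations,
# so a 16-bit address bus spans 64K addresses (0x0000 through 0xFFFF).
n = 16
locations = 2 ** n
highest_address = locations - 1

print(locations)            # 65536, i.e. 64K
print(hex(highest_address)) # 0xffff
```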
Integrating memory-mapping and N-dimensional hash function for fast and efficient grid-based climate data query
Published in Annals of GIS, 2021
Mengchao Xu, Liang Zhao, Ruixin Yang, Jingchao Yang, Dexuan Sha, Chaowei Yang
In LotDB, data are stored in the secondary storage system and accessed through page files using memory-mapping technology. This technique is widely used in database systems such as LMDB and MongoDB. Specifically, instead of loading the whole file into memory, the file handler maps the file into virtual memory as a big array and assigns a virtual memory address to each page file, without loading any actual data into memory other than the file's metadata. When a data access call is made for a page file, it causes a page fault and triggers a read/write of the secondary storage. In this way, bytes are copied directly to the target memory addresses, without going through the disk cache as standard read/write calls do. In addition, by memory-mapping arrays, LotDB can exceed the memory cap when accessing large data files, making it possible to access big arrays without tiling. Meanwhile, when integrated with the n-dimensional hash function, array indexes can be computed virtually at low cost, which can increase data retrieval speed substantially compared with traditional database solutions.
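The memory-mapping pattern described above can be sketched with Python's standard mmap module. This is not LotDB's implementation, only an illustration of the mechanism: the file name is hypothetical, and the operating system pages data in on demand when the mapping is touched, rather than copying the file into the process up front.

```python
import mmap
import os

path = "pages.bin"                      # hypothetical page file
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)             # one 4 KB page of zeros

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:  # map the whole file as a byte array
        mm[0:4] = b"GRID"               # write through the mapping
        data = mm[0:4]                  # first touch faults the page in

os.remove(path)
print(data)  # b'GRID'
```

Because the mapping behaves like a big array, an n-dimensional index can be turned into a byte offset arithmetically, which is the property the hash-function integration relies on.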
Parallel computing in railway research
Published in International Journal of Rail Transportation, 2020
Qing Wu, Maksym Spiryagin, Colin Cole, Tim McSweeney
Once one is able to program for parallel computing, two other factors also need attention: reliability and flexibility. Compared with conventional serial code, the main reliability issue for parallel code is the race condition. A race condition arises because, once a parallelised computing task is commenced, each computing unit tries to finish its task as soon as possible [6]; without proper control of the computing process, bugs that do not exist in serial computing can appear. For example, two computing units may try to write to the same memory address (the same shared parameter), or different computing units may write to and read the same memory address in a random sequence rather than the specific sequence required. Despite the existence of race conditions, they can be well controlled by using good synchronisation methodologies [6] and special programming techniques [45].
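The shared-write hazard described above can be sketched with Python threads, one of the synchronisation approaches the text alludes to. This is an illustrative sketch, not code from the cited works: two workers increment a shared counter, and the lock forces the read-modify-write steps into a specific sequence; removing it allows the interleavings that lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # synchronisation: without this, updates can be lost
            counter += 1     # read, add, write: not atomic on its own

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock in place
```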