F
Published in Phillip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
fetch policy: a policy that determines when a block should be moved from one level of a hierarchical memory into the next level closer to the CPU. There are two main types of fetch policies: "fetch on miss" or "demand fetch" brings in an object only when the object is not found in the top-level memory and is required; "prefetch" or "anticipatory fetch" brings in an object before it is required, exploiting the principle of locality. With a "fetch on miss" policy, the process requiring the objects must frequently wait when the objects it requires are not in the top-level memory. A "prefetch" policy may reduce this wait time, but it risks bringing in objects that are never used, and it can evict useful objects from the top-level memory to make room for objects that will not be used. Prefetching may bring data directly into the relevant memory level or into an intermediate buffer. See also cache and virtual memory.

fetch-and-add instruction: for a multiprocessor, an instruction that reads the content of a shared memory location and then adds a constant specified in the instruction, all in one indivisible operation. It can be used to implement multiprocessor synchronization.

fetch-execute cycle: the sequence of steps that implements each instruction in a computer's instruction set. A particular instruction is executed …
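As an aside not drawn from the dictionary itself, the fetch-and-add primitive described above is exposed in C11 as atomic_fetch_add. The sketch below uses it to build a ticket lock, a classic fetch-and-add synchronization construct; the type and function names are illustrative choices:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Minimal ticket lock built on fetch-and-add: each arriving thread
   atomically takes the next ticket; the lock is held by whichever
   thread's ticket matches now_serving. */
typedef struct {
    atomic_uint next_ticket;
    atomic_uint now_serving;
} ticket_lock;

static void lock(ticket_lock *l) {
    /* One indivisible read-and-increment, as in the definition above. */
    unsigned my = atomic_fetch_add(&l->next_ticket, 1);
    while (atomic_load(&l->now_serving) != my)
        ;  /* spin until it is our turn */
}

static void unlock(ticket_lock *l) {
    atomic_fetch_add(&l->now_serving, 1);
}

int main(void) {
    ticket_lock l = {0};   /* both counters start at zero */
    lock(&l);
    puts("in critical section");
    unlock(&l);
    return 0;
}
```

Because every thread receives a unique ticket from the single indivisible read-and-increment, entry is granted in arrival order without any further coordination.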
F
Published in Phillip A. Laplante, Dictionary of Computer Science, Engineering, and Technology, 2017
There are two main types of fetch policies: “fetch on miss” or “demand fetch” brings in an object only when the object is not found in the top-level memory and is required; “prefetch” or “anticipatory fetch” brings in an object before it is required, exploiting the principle of locality. With a “fetch on miss” policy, the process requiring the objects must frequently wait when the objects it requires are not in the top-level memory. A “prefetch” policy may reduce this wait time, but it risks bringing in objects that are never used, and it can evict useful objects from the top-level memory to make room for objects that will not be used. Prefetching may bring data directly into the relevant memory level or into an intermediate buffer. See also cache and virtual memory.
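To make the contrast concrete, the following toy simulation (an illustration, not from either dictionary; the cache size, the one-block-ahead heuristic, and all names are assumptions) shows how an anticipatory fetch policy hides the misses that a pure demand policy would take on a sequential scan:

```c
#include <stdio.h>
#include <stdbool.h>

#define NSETS 8   /* toy direct-mapped cache: one block per set */
static long tag[NSETS];
static bool valid[NSETS];
static int demand_misses;

static bool lookup(long block) {
    return valid[block % NSETS] && tag[block % NSETS] == block;
}

static void fill(long block) {          /* may evict the set's current block */
    valid[block % NSETS] = true;
    tag[block % NSETS] = block;
}

/* Demand fetch: bring a block in only when it misses; the requester stalls. */
static void access_demand(long block) {
    if (!lookup(block)) { demand_misses++; fill(block); }
}

/* Anticipatory fetch: additionally fetch the next block, betting on
   spatial locality; the extra fill may be wasted or evict useful data. */
static void access_prefetch(long block) {
    access_demand(block);
    if (!lookup(block + 1)) fill(block + 1);
}

int main(void) {
    for (long b = 0; b < 16; b++) access_prefetch(b);  /* sequential scan */
    /* Only the first access stalls; every later block was fetched ahead. */
    printf("demand misses with prefetch: %d\n", demand_misses);  /* prints 1 */
    return 0;
}
```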
Towards Multicores: Technology and Software Complexity
Published in Marcello Coppola, Miltos D. Grammatikakis, Riccardo Locatelli, Giuseppe Maruccia, Lorenzo Pieralisi, Design of Cost-Efficient Interconnect Processing Units, 2020
However, the use of aggressively relaxed consistency models is debatable: with the advent of speculative execution, these models do not give a performance boost large enough to justify exposing their complexity to low-level software authors [148]. Even without instruction reordering, at least three compiler-based optimization methods exist: prefetching, multithreading, and caching. Data prefetching based on look-ahead or on long cache lines exploits spatial locality, i.e. memory accesses that refer to nearby memory words, and thus reduces application latency. Unlike multithreading, prefetching relies on the compiler or the application programmer to insert explicit prefetch instructions for data cache lines that would otherwise cause a miss.
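Such explicit prefetch instructions can also be written by hand. The sketch below uses the GCC/Clang __builtin_prefetch intrinsic; the look-ahead distance DIST is an arbitrary illustrative value, not a tuned one:

```c
#include <stddef.h>
#include <stdio.h>

/* Sum an array while prefetching data some distance ahead of the current
   index: far enough ahead to hide memory latency, not so far that lines
   are evicted before they are used. */
#define DIST 16

static double sum_prefetched(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + DIST < n)
            __builtin_prefetch(&a[i + DIST], /*rw=*/0, /*locality=*/1);
        s += a[i];
    }
    return s;
}

int main(void) {
    double a[1024];
    for (size_t i = 0; i < 1024; i++) a[i] = (double)i;
    printf("%f\n", sum_prefetched(a, 1024));
    return 0;
}
```

The distance embodies exactly the trade-off the dictionary entries above attribute to prefetching: hiding latency versus fetching, or evicting, data that is never used in time.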
Trends in data replication strategies: a survey
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Stavros Souravlas, Angelo Sifaleras
For evaluation, the authors used a trie, a tree-based data structure. They ran a series of tests on spatial access patterns which indicate that predictive prefetching can significantly reduce I/O latencies and total runtime, at least for the benchmarks used. However, there were two drawbacks: (1) the tests represented a system with many limitations compared to actual computer workloads, and (2) the tests repeatedly used exactly the same data patterns.
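As a minimal sketch of how a trie can drive predictive prefetching (an assumed design for illustration, not the surveyed authors' exact scheme): level-1 nodes key on the last block accessed, level-2 nodes count the successors observed after it, and the most frequent successor becomes the prefetch hint.

```c
#include <stdio.h>
#include <stdlib.h>

#define FANOUT 8   /* illustrative bound; a real predictor would evict */

typedef struct Node {
    long block;                  /* block ID on the edge into this node */
    int count;                   /* times this continuation was observed */
    struct Node *child[FANOUT];
    int nchild;
} Node;

static Node *find(Node *p, long block) {
    for (int i = 0; i < p->nchild; i++)
        if (p->child[i]->block == block) return p->child[i];
    return NULL;
}

static Node *find_or_add(Node *p, long block) {
    Node *n = find(p, block);
    if (n || p->nchild == FANOUT) return n;
    n = calloc(1, sizeof *n);
    n->block = block;
    return p->child[p->nchild++] = n;
}

/* Most frequently observed successor of `block`, or -1 if none yet. */
static long predict(Node *root, long block) {
    Node *ctx = find(root, block);
    long best = -1;
    int best_count = 0;
    for (int i = 0; ctx && i < ctx->nchild; i++)
        if (ctx->child[i]->count > best_count) {
            best_count = ctx->child[i]->count;
            best = ctx->child[i]->block;
        }
    return best;
}

int main(void) {
    Node root = {0};
    long trace[] = {3, 7, 3, 7, 3, 7, 3};   /* repeating access pattern */
    long prev = -1;
    for (int i = 0; i < 7; i++) {
        long b = trace[i];
        if (prev >= 0) {                     /* learn transition prev -> b */
            Node *ctx = find_or_add(&root, prev);
            Node *succ = ctx ? find_or_add(ctx, b) : NULL;
            if (succ) succ->count++;
        }
        long hint = predict(&root, b);
        if (hint >= 0)
            printf("after block %ld, prefetch block %ld\n", b, hint);
        prev = b;
    }
    return 0;
}
```

Note that the second drawback above (repeating exactly the same data patterns, as in this trace) is precisely the situation in which such a predictor looks best.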