Multicores in Embedded Systems
Published in Gedare Bloom, Joel Sherrill, Tingting Hu, Ivan Cibrario Bertolotti, Real-Time Systems Development with RTEMS and Multicore Processors, 2020
Gedare Bloom, Joel Sherrill, Tingting Hu, Ivan Cibrario Bertolotti
The cache coherency logic is responsible for maintaining cache coherency by observing all cache and memory transactions and altering the state of cache lines according to a well-defined protocol. Here, we will briefly discuss the MESI protocol, first described in Reference [94], because it is used in various forms and variants in several processor architectures for general-purpose and embedded computing: for instance, the MOESI protocol adopted in the ARMv8-A architecture [14] and the MESIF protocol developed by Intel [67].
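As a purely illustrative sketch, and not something taken from the text above, the following C fragment models how the state of a single cache line could evolve under a simplified MESI protocol in response to local accesses and snooped bus transactions. The state names follow the protocol, but the type and function names (mesi_state_t, mesi_next) and the reduced transition rules are assumptions made only for this example.

#include <stdio.h>

/* The four MESI states of a single cache line. */
typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_state_t;

/* Events observed by the coherency logic for this line. */
typedef enum {
    LOCAL_READ,    /* the owning core reads the line                  */
    LOCAL_WRITE,   /* the owning core writes the line                 */
    SNOOP_READ,    /* another cache reads the line over the bus       */
    SNOOP_WRITE    /* another cache writes or requests ownership      */
} mesi_event_t;

/* Simplified transition function: returns the next state of the line.
 * 'shared_elsewhere' says whether another cache also holds a copy,
 * which decides between EXCLUSIVE and SHARED on a read miss.         */
static mesi_state_t mesi_next(mesi_state_t s, mesi_event_t e, int shared_elsewhere)
{
    switch (e) {
    case LOCAL_READ:
        return (s == INVALID) ? (shared_elsewhere ? SHARED : EXCLUSIVE) : s;
    case LOCAL_WRITE:
        return MODIFIED;                 /* gain ownership, dirty the line */
    case SNOOP_READ:
        return (s == INVALID) ? INVALID : SHARED;  /* downgrade to SHARED  */
    case SNOOP_WRITE:
        return INVALID;                  /* another cache takes ownership  */
    }
    return s;
}

int main(void)
{
    static const char *name[] = { "M", "E", "S", "I" };
    mesi_state_t s = INVALID;

    s = mesi_next(s, LOCAL_READ, 0);   printf("after local read : %s\n", name[s]); /* E */
    s = mesi_next(s, LOCAL_WRITE, 0);  printf("after local write: %s\n", name[s]); /* M */
    s = mesi_next(s, SNOOP_READ, 1);   printf("after snoop read : %s\n", name[s]); /* S */
    s = mesi_next(s, SNOOP_WRITE, 1);  printf("after snoop write: %s\n", name[s]); /* I */
    return 0;
}

Real implementations also handle write-backs of MODIFIED lines and the extra owned or forwarding states of MOESI and MESIF, which this sketch deliberately omits.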
Safety Certification of Mixed-Criticality Systems
Published in Hamidreza Ahmadian, Roman Obermaisser, Jon Perez, Distributed Real-Time Architecture for Mixed-Criticality Systems, 2018
I. Martinez, G. Bouwer, F. Chauvel, Ø. Haugen, R. Heinen, G. Klaes, A. Larrucea Ortube, C. F. Nicolas, P. Onaindia, K. Pankhania, J. Perez, A. Vasilevskiy
As described in [323, 374], cache coherency is the consistency of shared-resource data that ends up stored in multiple local caches (e.g., the L1 and L2 caches). A coherency mechanism keeps track of the copies of the data held in the various caches. When one copy of the data is modified, the other copies must also be updated or invalidated; otherwise, an inconsistency arises.
Parallel Computing Programming Basics
Published in Vivek Kale, Parallel Computing Architectures and APIs, 2019
When each processor has its own cache memory, it is also necessary to maintain cache coherence between the individual processors. This is done by using a directory scheme similar to the method described in Subsection 10.4.2. Thus, a more accurate name for DSM computers today is cache-coherent non-uniform memory access (CC-NUMA) parallel computer.
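To give a rough idea of how a directory scheme works, the sketch below assumes a simple full-bit-vector directory: one entry per memory block records which processors hold a copy, a write invalidates all other sharers before granting ownership, and a read forces a dirty owner to write its copy back first. The names (directory_entry_t, dir_read, dir_write) and the four-processor configuration are hypothetical and not taken from the cited chapter.

#include <stdio.h>

#define NPROCS 4   /* number of processors, chosen arbitrarily for the sketch */

/* One directory entry per memory block: a presence bit per processor
 * plus the index of a single dirty (exclusive) owner, if any.         */
typedef struct {
    unsigned char present[NPROCS];  /* 1 if processor i caches the block    */
    int dirty_owner;                /* index of exclusive owner, -1 if none */
} directory_entry_t;

/* Processor 'p' writes the block: invalidate every other cached copy,
 * then record 'p' as the sole dirty owner.                             */
static void dir_write(directory_entry_t *d, int p)
{
    for (int i = 0; i < NPROCS; i++) {
        if (i != p && d->present[i]) {
            d->present[i] = 0;          /* send invalidation to processor i */
            printf("invalidate copy in P%d\n", i);
        }
    }
    d->present[p] = 1;
    d->dirty_owner = p;
}

/* Processor 'p' reads the block: if another processor holds a dirty
 * copy, it must write it back first; then add 'p' as a sharer.         */
static void dir_read(directory_entry_t *d, int p)
{
    if (d->dirty_owner >= 0 && d->dirty_owner != p) {
        printf("write-back from P%d\n", d->dirty_owner);
        d->dirty_owner = -1;
    }
    d->present[p] = 1;
}

int main(void)
{
    directory_entry_t d = { {0}, -1 };
    dir_read(&d, 0);   /* P0 and P2 come to share the block            */
    dir_read(&d, 2);
    dir_write(&d, 1);  /* P1 writes: copies in P0 and P2 are invalidated */
    dir_read(&d, 3);   /* P3 reads: P1's dirty copy is written back first */
    return 0;
}

Unlike bus snooping, this scheme sends invalidations only to the processors listed in the entry, which is what makes it attractive for CC-NUMA machines with many nodes.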
A parallel computing framework for solving user equilibrium problem on computer clusters
Published in Transportmetrica A: Transport Science, 2020
Xinyuan Chen, Zhiyuan Liu, Inhi Kim
Apart from the problem decomposition, the implementation of parallel algorithms is also an important aspect of parallel UE computation that has not been well investigated. Most existing studies are executed on a workstation with a limited number of computing cores in shared-memory mode, which cannot fully exploit parallel-computing performance. Compared with a shared-memory architecture, a distributed-memory architecture is advantageous in three respects: (1) it scales with the number of processors; (2) it makes full use of commodity, off-the-shelf processors and networking; and (3) each processor can rapidly access its own memory without interference and without the overhead incurred in trying to maintain global cache coherency (Barney 2019). Therefore, it is timely to discuss the implementation of parallel algorithms in a distributed computing environment to achieve a higher level of concurrency and better performance.
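To make the distributed-memory point concrete, here is a minimal MPI sketch in C, not drawn from the cited paper; the partial_cost function is only a placeholder for a per-process share of the UE computation. Each process works exclusively on its own private memory, and partial results are combined through explicit messages, so no global cache coherency has to be maintained across nodes.

#include <mpi.h>
#include <stdio.h>

/* Placeholder for the per-process share of work, e.g. the cost of one
 * subproblem; here it simply sums a cyclic slice of integers.          */
static double partial_cost(int rank, int size, int n)
{
    double sum = 0.0;
    for (int i = rank; i < n; i += size)   /* cyclic distribution of items */
        sum += (double)i;
    return sum;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process computes on its own local memory ...                */
    double local = partial_cost(rank, size, 1000);

    /* ... and results are combined only through explicit messages.     */
    double total = 0.0;
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total cost = %f over %d processes\n", total, size);

    MPI_Finalize();
    return 0;
}

Such a program would typically be compiled with mpicc and launched with mpirun across the cluster nodes, with communication cost, rather than cache coherency traffic, becoming the main scalability concern.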