Shared Memory Architecture
Published in Vivek Kale, Parallel Computing Architectures and APIs, 2019
In a multiprocessor system with several cores or processors, each of which has a separate local cache, the same memory block can be held as a copy in the local caches of multiple processors. If a processor updates the copy of a memory block in its local cache, the other copies become stale and hold inconsistent values. Cache coherence protocols capture the behavior of a memory system for read and write accesses performed by different processors to the same memory location, using the order of the memory accesses as a relative time measure rather than the physical point in time at which the accesses are executed by the processors.
S
Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
snoop — in hardware systems, a process of examining values as they are transmitted in order to possibly expedite some later activity.
snooping bus — a multiprocessor bus that is continuously monitored by the cache controllers to maintain cache coherence.
snow noise — noise composed of small, white marks randomly scattered throughout an image. Television pictures exhibit snow noise when the reception is poor.
SNR — See signal-to-noise ratio.
Safety Certification of Mixed-Criticality Systems
Published in Hamidreza Ahmadian, Roman Obermaisser, Jon Perez, Distributed Real-Time Architecture for Mixed-Criticality Systems, 2018
I. Martinez, G. Bouwer, F. Chauvel, Ø. Haugen, R. Heinen, G. Klaes, A. Larrucea Ortube, C. F. Nicolas, P. Onaindia, K. Pankhania, J. Perez, A. Vasilevskiy
As described in [323, 374], cache coherency is the consistency of shared-resource data that ends up stored in multiple local caches (e.g., the L1 and L2 caches). A coherency mechanism tracks the copies of the data held in the various caches: when one copy of the data is modified, the other copies must also be updated or invalidated; otherwise, a coherency inconsistency arises.
Linear approximation fuzzy model for fault detection in cyber-physical system for supply chain management
Published in Enterprise Information Systems, 2021
The network on a chip (NoC) is an important part of a cyber-physical system-on-chip tiled distributed architecture. The scheme uses three NoCs in which tiles are connected in a 2D mesh topology, allowing communication between tiles for cache-coherence, input/output, and memory traffic. Furthermore, the NoCs maintain point-to-point ordering so that traffic destined for other clusters remains consistent across the NIC chipset bridge. The implementation of NoCs in supply chain management involves physical networks, credit-based flow control, and wormhole routing for deadlock-free operation. Such on-chip sensing networks for evolving multi-core architectures are crucial to satisfy diverse applications such as supply chain management, with requirements covering throughput, fault predictability, recovery, and power consumption at runtime, over a broad range of qualitative specifications. More precisely, networks with on-chip sensors should be considered in terms of flexible topologies, network management and sensor placement, sensor and network performance, on-chip data-network cooperation, and bandwidth, latency, and area-power trade-offs. Figure 7 shows the 2D mesh topology.
Optimised memory allocation for less false abortion and better performance in hardware transactional memory
Published in International Journal of Parallel, Emergent and Distributed Systems, 2020
Hardware transactional memory is built on the existing cache coherence protocol [4]. A cache line is cached in a thread’s private L1 cache upon its first access. If other threads then access the same cache line, the cache coherence protocol invalidates the line in the first thread’s private L1 cache. Hardware transactional memory achieves both memory-access monitoring and conflict detection from this information with almost no overhead. However, hardware transactional memory has certain limitations. For example, a cache line may be evicted even when there is no conflict (e.g., under cache-capacity pressure or a process switch), and this also aborts the hardware transaction. Thus a current hardware transaction is not guaranteed to succeed even in the absence of conflicts, and it often serves as a fast path in front of traditional software transactional memory. Figure 1(c) shows code that uses a hardware transaction. The attempts parameter in the code specifies how many times the hardware transaction is tried before falling back to software transactional memory. If the hardware transaction succeeds, there is no overhead for memory-access monitoring or conflict detection.
Profile-guided optimisation for indirect branches in a binary translator
Published in Connection Science, 2022
Jyun-Siang Huang, Wuu Yang, Yi-Ping You
In profile-guided platform-dependent hyperchaining, the TPC is patched into a code cave of the target binary at execution time. The RISC-V board we use provides an instruction cache that holds instructions; however, the hardware does not keep the instruction cache coherent with stores to memory. This creates an (instruction) cache coherence problem in our binary translator.