Inter-Task Synchronization and Communication (IPC) Based on Shared Memory
Published in Gedare Bloom, Joel Sherrill, Tingting Hu, Ivan Cibrario Bertolotti, Real-Time Systems Development with RTEMS and Multicore Processors, 2020
Gedare Bloom, Joel Sherrill, Tingting Hu, Ivan Cibrario Bertolotti
A barrier is a synchronization object that enables a number of tasks to wait until all of them have reached a programmer-defined milestone in their execution, and then continue concurrently. This is useful in many circumstances, for instance:

- A set of cooperating tasks may need to go through an initialization phase that they perform independently of each other before starting normal operation. A barrier can be used to ensure that all these tasks have completed their initialization (their milestone) before any of them enters normal operational mode.
- Especially on multicore systems, it may be fruitful to split a time-consuming, computation-intensive job into chunks to be performed concurrently by N tasks running on different cores. A barrier can guarantee that all tasks have completed their share of the job before they continue with further processing.
L
Published in Philip A. Laplante, Comprehensive Dictionary of Electrical Engineering, 2018
…localization, such as when an electron is trapped by an ionized donor or other ionized potential, or "weak" localization, in which it is induced by a "self" interference effect. See also weak localization.

lock: a synchronization variable, used in shared-memory multiprocessors, that allows only one processor to hold it at any one time, thus enabling processors to guarantee that only one has access to key data structures or critical sections of code at any one time.

lock range: the range of frequencies in the vicinity of the voltage-controlled oscillator (VCO) free-running frequency over which the VCO will, once locked, remain synchronized with the signal frequency. Lock range is sometimes called tracking bandwidth.

lock-in amplifier: a system for detecting weak, noisy periodic signals based on synchronous detection, incorporating all the other components necessary for recording the amplitude profile of the weak incoming signal, including an input AC amplifier, diode or other detectors, a low-pass filter, a DC amplifier, and any special filters. Such instruments are nowadays constructed with increasing amounts of digital and computerized circuitry, depending on the frequency of operation.

lock-out: phenomenon exhibited during channel switching that results from a fast automatic gain control (AGC) system interacting with the horizontal automatic frequency control (AFC), thereby reducing the pull-in range of the AFC system.

lock-up-free cache: See nonblocking cache.

locked-rotor torque: the torque produced in an induction motor when the rotor is locked and rated AC voltage is applied to the stator.

locking: See bus locking.
Next Generation Wireless Technologies
Published in K. R. Rao, Zoran S. Bojkovic, Bojan M. Bakmaz, Wireless Multimedia Communication Systems, 2017
K. R. Rao, Zoran S. Bojkovic, Bojan M. Bakmaz
Some distributed applications require the computational processes on different nodes in a WDC network to be synchronized with one another. Synchronization is particularly important when the processes have to interact with one another while executing the application. Synchronization involves the establishment of a temporal relationship between these processes. In WDC networks, synchronization is a challenge when the computational processes that have to be synchronized are heterogeneous in terms of their execution times. The execution time is uncertain when there are other computational processes contending for limited computational resources within each node.
Periodic distributed delivery routes planning subject to operation uncertainty of vehicles travelling in a convoy
Published in Journal of Information and Telecommunication, 2022
Bocewicz Grzegorz, Nielsen Peter, Smutnicki Czeslaw, Pempera Jaroslaw, Banaszak Zbigniew
Also, to avoid blockages of concurrently executed delivery processes, it is necessary to introduce appropriate mechanisms, e.g. dispatching rules that synchronize the processes. In the context of deadlock-free cyclic flow, the NP-hard problem of deadlock handling may be treated as equivalent to the problem of synchronizing cyclically executed local processes, simply because cyclical delivery excludes the appearance of vehicle congestion. Thus the periodicity of the overall distributed delivery network depends on the periodicities of the processes carried out by individual platoons. Conversely, the delivery period in a network formed by a set of periodically routed platoons depends on the cycle of this network. Consequently, the throughput of a delivery network is maximized by minimizing its cycle time.
Asynchronous Wrapper-Based Low-Power GALS Structural QDMA
Published in IETE Journal of Research, 2022
B.K. Vinay, S. Pushpa Mala, S. Deekshitha
Synchronization is achieved using a D-latch followed by a T flip-flop to avoid metastability and thus circumvent system failure. The signal reaching the D-latch is asynchronous, and it does not reach the T flip-flop while it is metastable. Once the signal resolves from the metastable state to a valid logic level, it passes through the T flip-flop, which produces the output with respect to the synchronized signal. Such circuits, called synchronizer circuits, combine a D-latch and a T flip-flop to convert an asynchronous signal into a synchronous one, thus eliminating the issue of metastability. These synchronizers are low-power strategies: they consume little area, are highly reliable with a high MTBF (Mean Time Between Failures), and have low latency. However, synchronization between wrappers is accomplished using handshake signals. In the proposed methodology, the synchronizer circuit of [19] is replaced with a FIFO-based synchronizer, which reduces bandwidth and ensures reliable communication. The FIFO-based synchronizer ensures matching of the frequency rates.
Algorithmic Improvements to MCNP5 for High-Resolution Fusion Neutronics Analyses
Published in Fusion Science and Technology, 2018
Scott W. Mosher, Stephen C. Wilson
In a multithreaded application, each executing thread has the same view of memory except for the thread’s local call stack and any data that are explicitly declared to be private to each thread. This is a key advantage of multithreading over MPI-based parallelism. Multithreading enables memory-efficient algorithms where large data structures are shared by all threads. In MCNP, for example, all threads access a shared copy of the continuous-energy cross sections and space- and energy-dependent weight-window parameters. When using shared memory for mutable data structures, such as mesh tally data, care must be taken to avoid race conditions. A race condition occurs when two or more threads access the same memory location concurrently and at least one of the accesses changes the stored value. The result of reading the data is then dependent on the order in which the instructions of the various operating threads happen to be executed. This type of condition can produce unexpected and incorrect results and can even lead to memory corruption. Care must also be taken to avoid thread synchronization errors, which can cause the program to deadlock when two or more threads are blocked waiting for each other to perform some action.