Multithreading in LabVIEW
Published in LabVIEW™ Advanced Programming Techniques, 2017
Rick Bitter, Taqi Mohiuddin, Matt Nawrocki
When multiple CPUs are available to the system, the scheduler determines which threads run on which CPU. Symmetric Multiprocessing (SMP), as used in Windows XP Professional, allows threads of the same process to run on different CPUs, although this is not guaranteed: a dual-CPU machine may instead have threads of different processes running on the pair of CPUs, depending on the scheduling algorithm. Some UNIX implementations restrict all of a process’s threads to a single CPU.
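A minimal C++ sketch of this behavior outside LabVIEW (illustrative only, not from the chapter): several threads of one process report which CPU the scheduler placed them on. The use of std::thread, the POSIX sched_getcpu() call, and the thread count are assumptions for the example; on an SMP system the threads may land on different CPUs, but the scheduler is free to keep them on one, which is the point made above.

```cpp
// Minimal sketch (not from the chapter): spawn a few threads in one process
// and report which CPU each one is scheduled on. Assumes a Linux/glibc
// toolchain for sched_getcpu(); build with: g++ -O2 -pthread smp_demo.cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>      // sched_getcpu()
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([i] {
            // Burn a little CPU so the scheduler has a reason to spread the threads out.
            volatile unsigned long spin = 0;
            for (unsigned long j = 0; j < 100000000UL; ++j) spin += j;
            std::printf("thread %d ran on CPU %d\n", i, sched_getcpu());
        });
    }
    for (auto& t : workers) t.join();
    return 0;
}
```

On a multi-core machine the output typically shows the threads spread across several CPU numbers; restricting the process to one core (for example with taskset) reproduces the single-CPU behavior some UNIX implementations impose.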
Parallel algorithms for reducing derivation time of distinguishing experiments for nondeterministic finite state machines
Published in International Journal of Parallel, Emergent and Distributed Systems, 2018
Khaled El-Fakih, Gerassimos Barlas, Mustafa Ali, Nina Yevtushenko
In this paper, we focus on reducing the construction time of i-o-successors for each pair of states of a machine using current state-of-the-art parallel technologies. Towards this end we utilize multicore and many-core architectures in the form of Symmetric MultiProcessing (SMP) CPUs and Graphics Processing Units (GPUs), as well as combinations of these platforms [23] in the form of an MPI-communicating network of workstations (NoWs) [24,25]. Our experiments show consistent performance gains, ranging from a 3.7x speedup on an SMP platform up to an 8.28x average speedup on a network of three heterogeneous PCs equipped with three GPUs, relative to a single CPU core. Our analytical load-partitioning framework also allows effective use of all the computational resources by accounting for the communication overhead incurred, as shown in Section 4. For example, our NoW test platform involves a PC with a Core 2 Q8200 CPU and a PC with an i7-4820K CPU, machines that are several generations apart.

GPUs are a disruptive technology in the sense that they offer computational capabilities beyond the realm of contemporary CPUs [26]. The trade-off is that they require specially designed algorithms in order to take full advantage of their raw hardware resources. In this paper we use two software platforms for GPUs: NVIDIA’s CUDA [27] and Thrust [28]. Thrust is a lesser-known C++ template library that operates in a fashion similar to the C++ Standard Template Library (STL). Thrust programs require less development effort than CUDA programs, and they can target multiple hardware platforms (called back-ends), including SMP CPUs. Thrust is not a free lunch, though: it typically underperforms in comparison to programmer-optimized CUDA programs.
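A minimal sketch of the STL-like style the authors attribute to Thrust (illustrative only, not code from the paper): square the elements of a vector and sum the results. The square functor and vector size are assumptions made for the example; the same source can target a CUDA GPU when built with nvcc or an SMP CPU back-end such as OpenMP by selecting Thrust's device system at compile time.

```cpp
// Minimal Thrust sketch (not from the paper): transform-and-reduce in an
// STL-like style. With nvcc it runs on the GPU; with a host compiler and
// -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP it runs on SMP CPU cores.
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/reduce.h>
#include <thrust/sequence.h>
#include <cstdio>

// Functor applied element-wise; name and body are illustrative.
struct square {
    __host__ __device__ int operator()(int x) const { return x * x; }
};

int main() {
    thrust::device_vector<int> v(1024);
    thrust::sequence(v.begin(), v.end());                  // fill with 0, 1, 2, ...
    thrust::transform(v.begin(), v.end(), v.begin(), square());
    int sum = thrust::reduce(v.begin(), v.end(), 0);       // parallel reduction
    std::printf("sum of squares = %d\n", sum);
    return 0;
}
```

The brevity relative to an equivalent hand-written CUDA kernel illustrates the lower development effort mentioned above, while also hinting at why hand-optimized CUDA can still outperform it: the library chooses the launch configuration and memory access pattern on the programmer's behalf.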