Parallel Programming Languages and Techniques
Published in Hojjat Adeli, Parallel Processing in Computational Mechanics, 2020
Prasad R. Vishnubhotla, Hojjat Adeli
An example of a parallel processing support package is the Encore Parallel Threads (EPT) package available on the Encore Multimax shared-memory machines (Encore, 1988; Adeli and Kamal, 1989). A thread is a unit of execution that is independent of other similar units (threads), yet can execute concurrently with them. The concept of threads was first developed by Doeppner (1987). The notion of a thread is quite different from, and independent of, that of a processor: one can have many threads running on one processor or concurrently on several processors. This gives the programmer a high level of abstraction, hiding details such as how many processors are available; the programmer need only be concerned with creating an appropriate number of threads. Encore Parallel Threads provides the constructs necessary for implementing threads on an Encore Multimax and can be used with the C programming language under the UMAX operating system. EPT provides groups of constructs that support the creation of threads, synchronization of threads through monitors or semaphores, creation of thread control blocks, raising of exceptions, handling of interrupts, and shared I/O. Adeli and Kamal (1990a, 1990b, 1991a, 1991b) developed parallel algorithms for partitioning, analysis, and optimization of large structures and implemented them in C on an Encore Multimax using EPT.
Multithreading in LabVIEW
Published in Rick Bitter, Taqi Mohiuddin, Matt Nawrocki, LabVIEW™ Advanced Programming Techniques, 2017
Rick Bitter, Taqi Mohiuddin, Matt Nawrocki
Many LabVIEW programmers are familiar with the concept of a “race condition,” a problem to which multithreaded code in general is susceptible. A race condition in multithreaded code occurs when one thread uses data before another thread has finished modifying it. Fortunately, a LabVIEW programmer cannot create a race condition with LabVIEW’s own threads: the dedicated folks at National Instruments built the threading model used by LabVIEW’s execution engine to properly synchronize and protect data, so LabVIEW execution systems are not susceptible to this problem. Additional information on LabVIEW race conditions can be found in the LabVIEW documentation and training course materials. Note, however, that while LabVIEW programmers cannot cause thread-based race conditions, there is still plenty of room for them to create race conditions in their own code.
Information Technologies of Randomized Machine Learning
Published in Yuri S. Popkov, Alexey Yu. Popkov, Yuri A. Dubnov, Alexander Yu. Mazurov, Entropy Randomization in Machine Learning, 2023
Yuri S. Popkov, Alexey Yu. Popkov, Yuri A. Dubnov, Alexander Yu. Mazurov
For safe execution, the processes of an operating system are isolated from each other in computer memory. In particular, this isolation imposes restrictions on their interaction, which is implemented through special mechanisms. These mechanisms carry a computational cost, since they must guarantee the safe functioning of all processes. A process is therefore a “heavy” object, consuming considerable computational resources for its creation, operation, and elimination. A thread, in contrast, is a “light” program object that operates within the resource-sharing model of a process (memory, file descriptors, etc.); hence, threads require few resources for their functioning. A process contains at least one thread.
Parallel computing in railway research
Published in International Journal of Rail Transportation, 2020
Qing Wu, Maksym Spiryagin, Colin Cole, Tim McSweeney
To better comprehend multithreading, one should understand the differences between a process and a thread. A simple explanation is that a thread is part of a process, and a process can have multiple threads. Parallel computing can be conducted among a number of processes as well as among a number of threads; MPI and OpenMP can deal with both the thread and process levels of parallel computing. In this section, the term ‘multithreading’ refers only to thread-level parallel computing. There are a number of techniques that can be used to manipulate threads; two fundamental ones are Pthreads [47] on Unix operating systems (OS) and the Windows Application Programming Interface (WinAPI) threads [48] on Windows OS. As multithreading deals with multiple threads within the same process, it understandably uses the shared-memory parallel computing model shown in Figure 3(b). Applications using Pthreads have been reported in [49,50], and Reference [15] used WinAPI threads. Besides Pthreads and WinAPI threads, Java and MATLAB also provide high-level multithreading APIs; several applications using Java [8,51–53] have been reported, along with a few using MATLAB [54–56]. Understandably, higher-level APIs are easier to use, which can help to cut down development cost.
Algorithmic Improvements to MCNP5 for High-Resolution Fusion Neutronics Analyses
Published in Fusion Science and Technology, 2018
Scott W. Mosher, Stephen C. Wilson
In a multithreaded application, each executing thread has the same view of memory except for the thread’s local call stack and any data that are explicitly declared to be private to each thread. This is a key advantage of multithreading over MPI-based parallelism. Multithreading enables memory-efficient algorithms where large data structures are shared by all threads. In MCNP, for example, all threads access a shared copy of the continuous-energy cross sections and space- and energy-dependent weight-window parameters. When using shared memory for mutable data structures, such as mesh tally data, care must be taken to avoid race conditions. A race condition occurs when two or more threads access the same memory location concurrently and at least one of the accesses changes the stored value. The result of reading the data is then dependent on the order in which the instructions of the various operating threads happen to be executed. This type of condition can produce unexpected and incorrect results and can even lead to memory corruption. Care must also be taken to avoid thread synchronization errors, which can cause the program to deadlock when two or more threads are blocked waiting for each other to perform some action.
A multi-skilled workforce optimisation in maintenance logistics networks by multi-thread simulated annealing algorithms
Published in International Journal of Production Research, 2021
Hasan Hüseyin Turan, Fuat Kosanoglu, Mahir Atmis
At the beginning, the current temperature T is set to the initial temperature found by the method described in Section 4.1.3, and initial solutions are randomly generated. A thread pool is initialised with the number of worker threads equal to the number of cores available in the system. Each thread is submitted the task of finding better solutions by calling the ThreadNG function described in Algorithm 2. The ThreadNG function generates a neighbour solution from the current solution by uniformly choosing one of the four algorithms described in Section 4.1.2. If the cost of the neighbour solution is not smaller than the cost of the current solution, then the neighbour solution is accepted as the current solution with probability e^(−Δ/T), where Δ is the cost difference between the current and neighbour solutions and T is the current temperature. If the cost of the neighbour solution is smaller than the cost of the current solution, the neighbour solution is set as the current solution; if it is also smaller than the candidate global minimum cost, it is set as the candidate global best solution.