Parallel Programming Languages and Techniques
Published in Hojjat Adeli, Parallel Processing in Computational Mechanics, 2020
Prasad R. Vishnubhotla, Hojjat Adeli
An example of a parallel processing support package is the Encore Parallel Threads (EPT) package available on the Encore Multimax shared-memory machines (Encore, 1988; Adeli and Kamal, 1989). A thread is a unit of execution that is independent of other similar units (threads), yet can execute concurrently with them. The concept of threads was first developed by Doeppner (1987). The notion of a thread is quite different from, and independent of, that of a processor: one can have many threads running on one processor or concurrently on several processors. This provides a high level of abstraction for the programmer, hiding details such as how many processors are available; the programmer's concern is limited to creating an appropriate number of threads. Encore Parallel Threads provides a set of constructs necessary for implementing threads on an Encore Multimax. It can be used with the C programming language under the UMAX operating system. EPT provides groups of constructs that support the creation of threads, synchronization of threads through monitors or semaphores, creation of thread control blocks, raising of exceptions, handling of interrupts, and shared I/O. Adeli and Kamal (1990a, 1990b, 1991a, 1991b) developed parallel algorithms for partitioning, analysis, and optimization of large structures and implemented them in C on an Encore Multimax using EPT.
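EPT itself is specific to the Encore Multimax, so as an illustration only, the following minimal C sketch expresses the same idea (creating several independent units of execution and joining them at a synchronization point) using POSIX threads rather than EPT's actual constructs; the worker function is hypothetical.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* Each thread is an independent unit of execution; how the NTHREADS
   threads map onto the available processors is left to the scheduler,
   which is exactly the abstraction the text describes. */
static void *worker(void *arg) {
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);   /* simple synchronization point */
    return 0;
}
```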
System executions
Published in Uri Abraham, Models for Concurrency, 2020
At this stage, programs and procedures are understood intuitively. A process is an execution of a program (or a protocol), and several processes can run concurrently. Among the program's instructions we may find read and write operations on registers, and each register has an associated type of values that can be read from and written to it. We distinguish between variables and registers. If v is a variable and τ an expression, then "v := τ" is the instruction to assign the value of τ to v; but if V is the name of a register, then a special instruction such as Write_V(τ) (rather than "V := τ") is used to write τ onto V. Similarly, a special instruction such as Read_V(x) is used to assign to a (local) variable x a value obtained by reading register V. A formal definition of protocol languages and a more careful distinction between external operations (such as read/write operations on registers) and assignment instructions are given in Chapter 3. Here we are interested in the semantics of registers independently of their usage in programming. We ask: what does it mean to say that a register (or any communication device) is operating correctly?
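To make the variable/register distinction concrete, here is an informal C sketch in which C11 atomic operations stand in for the special Write_V and Read_V instructions; the use of atomics is our assumption for illustration, not part of the chapter's formal model.

```c
#include <stdatomic.h>
#include <stdio.h>

/* V is a shared register: it is accessed only through the special
   operations Write_V and Read_V, never by plain assignment. */
static _Atomic int V;

static void Write_V(int tau) { atomic_store(&V, tau); }  /* write tau onto V */
static int  Read_V(void)     { return atomic_load(&V); } /* read V's value   */

int main(void) {
    int x;          /* an ordinary (local) variable              */
    x = 42;         /* plain assignment: x := tau                */
    Write_V(x);     /* external operation on the register V      */
    x = Read_V();   /* assign to x a value obtained by reading V */
    printf("x = %d\n", x);
    return 0;
}
```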
Digital Systems
Published in Wai-Kai Chen, Analog and VLSI Circuits, 2018
Festus Gail Gray, Wayne D. Grover, Josephine C. Chang, Bing J. Sheu, Roland Priemer, Rung Yao, Flavio Lorenzelli
Real-time, high-throughput processing constitutes one of the most demanding aspects of modern digital signal processing. In order to achieve the desired throughput rate, various forms of concurrent operation are needed. "Concurrency" denotes the ability of a processing system to perform more than one operation at a given time. Concurrency can be achieved through parallelism, pipelining, or both. "Parallelism" addresses concurrency by replicating a desired processing function many times; high throughput is achieved by having these replicated functions operate simultaneously on different parts of the problem. "Pipelining," on the other hand, tackles concurrency by breaking a demanding part of the task into many smaller, simpler pieces, each handled by a corresponding processing element (PE), so that processing proceeds in assembly-line fashion. Once full, the digital pipe delivers instructions and data at a rate essentially independent of the number of PEs in the pipe, so high throughput is achieved by making the individual PEs fast. As we shall see, a "systolic array" can exploit both the parallelism and the pipelining capability of some algorithms.
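As a rough illustration of how a pipeline overlaps work, the C sketch below simulates a three-stage pipe sequentially: at "cycle" t, stage s operates on item t - s, so once the pipe fills, one finished item emerges every cycle regardless of the pipe's depth. The stage function is a placeholder for a PE's operation; in a real system the stages would run on separate hardware.

```c
#include <stdio.h>

#define N      6   /* items flowing through the pipe */
#define STAGES 3   /* processing elements (PEs)      */

/* Placeholder for the piece of work done by PE number s. */
static int stage(int s, int x) { return x + s; }

int main(void) {
    int item[N];
    for (int i = 0; i < N; i++) item[i] = 10 * i;

    /* One iteration per "cycle": at cycle t, stage s holds item t - s,
       so up to STAGES items are in flight simultaneously. */
    for (int t = 0; t < N + STAGES - 1; t++) {
        printf("cycle %d:", t);
        for (int s = 0; s < STAGES; s++) {
            int i = t - s;             /* item currently in stage s */
            if (i >= 0 && i < N) {
                item[i] = stage(s, item[i]);
                printf("  PE%d->item%d", s, i);
            }
        }
        printf("\n");
    }
    return 0;
}
```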
Parallel co-simulation of heavy-haul train braking dynamics with strong nonlinearities
Published in Mechanics Based Design of Structures and Machines, 2023
Qing Wu, Colin Cole, Maksym Spiryagin, Pengfei Liu
Parallel computing uses multiple computer cores to process multiple computing tasks concurrently. Because multiple cores are used and can communicate information with one another, parallel computing also offers the capability of co-simulation (Spiryagin et al. 2019). Many computing techniques can be used to achieve parallel computing, and they can be grouped into distributed memory methods and shared memory methods (Wu et al. 2020). Distributed memory methods, such as the Message Passing Interface (MPI) technique, give each computer core its own memory space; inter-core communication is achieved by sending and receiving information among the different cores. Shared memory methods, such as Open Multi-Processing (OpenMP), instead communicate information through the same memory space: different computer cores read from and write to the same memory locations and are therefore able to observe the status of the other cores.
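As a minimal sketch of the shared-memory style, the C/OpenMP fragment below has all threads read and write the same array, with a reduction clause giving each thread a private partial sum so that the shared accumulator is not raced on; the array and loop are illustrative, not taken from the paper. (A distributed-memory MPI version would instead exchange partial results between separate address spaces with explicit send/receive calls.)

```c
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double x[N];   /* one array, visible to every thread */
    double sum = 0.0;

    /* Shared-memory parallelism: each thread writes its own slice of x,
       and reduction(+:sum) merges the threads' private partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        x[i] = 0.5 * i;
        sum += x[i];
    }

    printf("sum = %.0f computed by %d threads sharing one address space\n",
           sum, omp_get_max_threads());
    return 0;
}
```

Compile with, e.g., `gcc -fopenmp` to enable the OpenMP pragmas.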
User-Defined Foot Gestures for Eyes-Free Interaction in Smart Shower Rooms
Published in International Journal of Human–Computer Interaction, 2022
Zhanming Chen, Huawei Tu, Huiyue Wu
In recent years, gesture-based interaction techniques have attracted significant interest from research communities and commercial sectors worldwide. However, most existing gesture studies focus on hand gesture input methods, leaving foot-gesture-based interfaces underexplored. Foot gestures can be very useful, especially in scenarios where users need to perform concurrent tasks while their hands are occupied with other interaction tasks. A concurrent task is one that combines two or more tasks in such a manner that each component task is performed independently and in parallel (Wu et al., 2021). Another advantage of foot-gesture-based interaction is that it does not require engaging the user's visual system in the interaction. A typical example is the shower scenario, in which users' hands are occupied washing their hair while their eyes are closed against water and shampoo running from the head toward the eyes, yet they still need to perform other interactive tasks (e.g., controlling water volume and/or temperature). One possible solution to this problem is to use eyes-free (Findlater et al., 2011; Wu et al., 2021; Yan et al., 2018) (rather than eyes-engaged) foot-gesture-based interaction techniques that allow users to interact with the smart shower system without any visual involvement.
Algorithmic Improvements to MCNP5 for High-Resolution Fusion Neutronics Analyses
Published in Fusion Science and Technology, 2018
Scott W. Mosher, Stephen C. Wilson
In a multithreaded application, each executing thread has the same view of memory except for the thread’s local call stack and any data that are explicitly declared to be private to each thread. This is a key advantage of multithreading over MPI-based parallelism. Multithreading enables memory-efficient algorithms where large data structures are shared by all threads. In MCNP, for example, all threads access a shared copy of the continuous-energy cross sections and space- and energy-dependent weight-window parameters. When using shared memory for mutable data structures, such as mesh tally data, care must be taken to avoid race conditions. A race condition occurs when two or more threads access the same memory location concurrently and at least one of the accesses changes the stored value. The result of reading the data is then dependent on the order in which the instructions of the various operating threads happen to be executed. This type of condition can produce unexpected and incorrect results and can even lead to memory corruption. Care must also be taken to avoid thread synchronization errors, which can cause the program to deadlock when two or more threads are blocked waiting for each other to perform some action.
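The C/OpenMP sketch below illustrates the kind of race condition described above and one standard remedy: many threads accumulate scores into a shared tally array, and an atomic directive makes each read-modify-write indivisible. The tally and score here are stand-ins for illustration and do not reflect MCNP's internal data structures.

```c
#include <omp.h>
#include <stdio.h>

#define HISTORIES 100000
#define BINS      8

int main(void) {
    double tally[BINS] = {0.0};   /* shared, mutable data structure */

    #pragma omp parallel for
    for (int h = 0; h < HISTORIES; h++) {
        int    bin   = h % BINS;  /* stand-in for a mesh-tally index */
        double score = 1.0;       /* stand-in for a particle's score */

        /* Unprotected, `tally[bin] += score` is a read-modify-write that
           two threads can interleave (a race condition). The atomic
           directive makes the update indivisible. */
        #pragma omp atomic
        tally[bin] += score;
    }

    printf("tally[0] = %.0f (expected %d)\n", tally[0], HISTORIES / BINS);
    return 0;
}
```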