Parallel and Distributed Processing
Published in David R. Martinez, Robert A. Bond, M. Michael Vai, High Performance Embedded Computing Handbook, 2018
Albert I. Reuther, Hahn G. Kim
The wide availability of Pthreads can make it an attractive option for parallel programming. Pthreads was not designed explicitly for parallel programming, however, but rather to provide a general-purpose threading capability. This generality results in a lack of structure for managing threads, which makes parallel programming with Pthreads very complex. Because the programmer is responsible for explicitly creating and destroying threads, partitioning data between threads, and coordinating access to shared data, Pthreads and other general-purpose thread technologies are seldom used to write parallel programs. Consequently, other thread technologies designed explicitly for parallel programming, such as OpenMP, have been developed to address these issues.
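To make that bookkeeping concrete, the sketch below (not taken from the handbook) sums an array with a fixed number of Pthreads: the programmer explicitly creates and joins each thread, partitions the index range by hand, and guards the shared accumulator with a mutex. The thread count, the Worker struct, and the problem size are illustrative assumptions; build with -pthread.

```cpp
#include <pthread.h>
#include <cstdio>
#include <vector>

// Illustrative worker argument: the programmer must partition the data manually.
struct Worker {
    const double* data;
    size_t begin, end;      // half-open slice assigned to this thread
    double* total;          // shared accumulator
    pthread_mutex_t* lock;  // programmer-managed coordination
};

static void* partial_sum(void* arg) {
    Worker* w = static_cast<Worker*>(arg);
    double local = 0.0;
    for (size_t i = w->begin; i < w->end; ++i) local += w->data[i];
    pthread_mutex_lock(w->lock);    // explicit coordination of shared data
    *w->total += local;
    pthread_mutex_unlock(w->lock);
    return nullptr;
}

int main() {
    const size_t n = 1000000, nthreads = 4;   // assumed sizes
    std::vector<double> data(n, 1.0);
    double total = 0.0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    pthread_t tid[nthreads];
    Worker w[nthreads];
    for (size_t t = 0; t < nthreads; ++t) {
        w[t] = {data.data(), t * n / nthreads, (t + 1) * n / nthreads, &total, &lock};
        pthread_create(&tid[t], nullptr, partial_sum, &w[t]);  // explicit creation
    }
    for (size_t t = 0; t < nthreads; ++t)
        pthread_join(tid[t], nullptr);                          // explicit join/cleanup
    std::printf("sum = %f\n", total);
    return 0;
}
```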
Modeling Multiprocessor Real-Time Systems at Transaction Level
Published in Katalin Popovici, Pieter J. Mosterman, Real-Time Simulation Technologies, 2017
Giovanni Beltrame, Gabriela Nicolescu, Luca Fossati
Pthreads are a well-known concurrent application programming interface (API) and, as part of the POSIX standard, are available for most operating systems (either natively or as a compatibility layer). The Pthread API provides extensions for managing real-time threads, in the form of two scheduling classes, SCHED_FIFO and SCHED_RR.
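A minimal sketch of those extensions, assuming a Linux-like POSIX system (this code is not from the chapter): the two real-time policies defined by POSIX, SCHED_FIFO and SCHED_RR, are selected through the thread-attribute interface. Running with a real-time policy typically requires elevated privileges, so the return code is checked.

```cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static void* rt_task(void*) {
    // Real-time work would go here.
    return nullptr;
}

int main() {
    pthread_attr_t attr;
    sched_param prm{};
    pthread_attr_init(&attr);
    // Do not inherit the creator's policy; use the attributes set below.
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    // SCHED_FIFO: fixed-priority, run-to-completion; SCHED_RR adds round-robin time slicing.
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    prm.sched_priority = sched_get_priority_min(SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &prm);

    pthread_t tid;
    // Without real-time privileges, creation with SCHED_FIFO usually fails.
    if (pthread_create(&tid, &attr, rt_task, nullptr) != 0)
        std::fprintf(stderr, "pthread_create with SCHED_FIFO failed\n");
    else
        pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
    return 0;
}
```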
Parallel Computing Programming Basics
Published in Vivek Kale, Parallel Computing Architectures and APIs, 2019
POSIX is a standard for Unix-like operating systems such as Linux and Mac OS X. In particular, it specifies an API for multithreaded programming called POSIX Threads (Pthreads). The Pthreads API is available only on POSIX systems (Linux, Mac OS X, Solaris, HP-UX, and so on).
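A minimal sketch, assuming a POSIX system and a file name of hello.cpp (both assumptions): the _POSIX_THREADS feature-test macro from <unistd.h> indicates whether the Pthreads API is available, and the program is built with the -pthread flag, e.g. g++ -pthread hello.cpp.

```cpp
#include <unistd.h>     // defines _POSIX_THREADS on systems that provide Pthreads
#include <pthread.h>
#include <cstdio>

static void* hello(void*) {
    std::puts("hello from a POSIX thread");
    return nullptr;
}

int main() {
#ifdef _POSIX_THREADS
    pthread_t tid;
    pthread_create(&tid, nullptr, hello, nullptr);
    pthread_join(tid, nullptr);
#else
    std::puts("Pthreads is not available on this system");
#endif
    return 0;
}
```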
Choice of parallelism: multi-GPU driven pipeline for huge academic backbone network
Published in International Journal of Parallel, Emergent and Distributed Systems, 2021
Ruo Ando, Youki Kadobayashi, Hiroki Takakura
For handling huge session data on our pipeline, we adopt different kinds of parallelism. Our key finding is that choosing the appropriate parallelism level for each phase of the pipeline is important. Figure 1 depicts our choice of parallelism, with the constraints shown on the left. We have adopted four kinds of parallelism: machine level, process level, thread level, and vector level. In this section, we introduce the Thrust template library (vector level), Intel TBB (thread level), and POSIX threads (Pthreads, thread level).
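The article's pipeline code is not reproduced here; the sketch below only illustrates what thread-level parallelism with Intel TBB looks like, using made-up per-session data. TBB splits the index range across worker threads and load-balances the chunks, whereas the vector-level stages of the paper would run on the GPU through Thrust (not shown). Build with -ltbb.

```cpp
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstdint>
#include <cstdio>

int main() {
    // Stand-in for per-session records; the real pipeline parses network session data.
    std::vector<uint64_t> session_bytes(1000000, 1500);
    std::vector<uint64_t> session_bits(session_bytes.size());

    // Thread-level parallelism: TBB partitions [0, n) into chunks and hands each
    // chunk to a worker thread.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, session_bytes.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                session_bits[i] = session_bytes[i] * 8;   // purely illustrative work
        });

    std::printf("first element: %llu\n",
                static_cast<unsigned long long>(session_bits.front()));
    return 0;
}
```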
Parallel computing in railway research
Published in International Journal of Rail Transportation, 2020
Qing Wu, Maksym Spiryagin, Colin Cole, Tim McSweeney
According to Barney [47], the Pthreads technique has the advantage of being lightweight, which means less overhead cost (computing time in this case) for coordination and communication in parallel computing. One inconvenience of Pthreads, however, is that it is provided as a C/C++ API. For applications in other programming languages, such as FORTRAN, an additional interface must be developed.
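As a hypothetical illustration of such an interface (not from the cited survey), the following C-linkage shim exposes two entry points that a FORTRAN program could call through ISO_C_BINDING; the names start_worker and wait_worker are invented for this sketch, and it manages only a single worker thread for brevity.

```cpp
// Hypothetical C-linkage shim around Pthreads, sketching the kind of
// "additional interface" a FORTRAN application would need.
#include <pthread.h>

namespace {
pthread_t worker;                       // single worker managed by the shim
void (*user_fn)(void) = nullptr;        // routine supplied by the caller

void* trampoline(void*) {
    user_fn();                          // run the caller's routine in the new thread
    return nullptr;
}
}  // namespace

extern "C" {

// Launch one worker thread running fn; returns the Pthreads error code (0 = success).
int start_worker(void (*fn)(void)) {
    user_fn = fn;
    return pthread_create(&worker, nullptr, trampoline, nullptr);
}

// Block until the worker finishes.
int wait_worker(void) {
    return pthread_join(worker, nullptr);
}

}  // extern "C"
```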