Multithreading in LabVIEW
Published in LabVIEW™ Advanced Programming Techniques, 2017
Rick Bitter, Taqi Mohiuddin, Matt Nawrocki
Priority and scheduling work differently for Pthreads, which define several scheduling policies: round robin; first-in, first-out (FIFO); and others. The FIFO policy lets a thread execute until it completes or becomes blocked. This policy is, in effect, cooperative multitasking, because no preemption is involved. The round-robin policy is preemptive multithreading: each thread is allowed to execute for a maximum amount of time, a unit referred to as a “quantum.” The length of a quantum is defined by the vendor’s implementation. The “other” policy has no formal definition in the POSIX standard; it is an option left up to individual vendors.

Pthreads expand on a concept used in UNIX called “forking.” A UNIX process may duplicate itself using the fork command, and many UNIX daemons, such as the Telnet daemon, use forking. Forking is not available to the Win32 programmer. The process that issues the fork is called the Parent process, while the process created as a result of the fork command is referred to as the Child process. The Child process handles a specific task, and the Parent process typically does nothing but wait for another job request to arrive. This type of multitasking has been used for years in UNIX systems.
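A minimal sketch of the parent/child pattern described above, using Python's os.fork on a POSIX system (the handle_request task and the request list are illustrative, not from the text; as noted, fork has no Win32 equivalent):

```python
import os

def handle_request(request_id):
    # Work delegated to the Child process (illustrative task).
    print(f"child {os.getpid()} handling request {request_id}")

def serve(requests):
    for request_id in requests:
        pid = os.fork()          # duplicate the current (Parent) process
        if pid == 0:
            # Child: handle one specific task, then exit without
            # returning to the Parent's loop.
            handle_request(request_id)
            os._exit(0)
        else:
            # Parent: reap the Child, then go back to waiting for the
            # next job request to arrive.
            os.waitpid(pid, 0)

if __name__ == "__main__":
    serve([1, 2, 3])
```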
A product-process-resource based formal modelling framework for customized manufacturing in cyber-physical production systems
Published in International Journal of Computer Integrated Manufacturing, 2022
Ge Wang, Di Li, Yuqing Tu, Chunhua Zhang, Fang Li, Shiyong Wang
In the FD, products represent substances that have market demand, as well as intermediate forms in the production process and purchased raw materials. Products refer not only to product types, but can also represent individual artefacts (Pfrommer, Schleipen, and Beyerer 2013). Processes are associated with a set of attributes that describe a single process realization of a product and the demand for it, and refer to manufacturing, logistics, or other production-related processes, such as information exchange and reprocessing. Processes can form a hierarchical structure: a parent process contains child processes, and each child process is a subset of its parent process. Resources are entities involved in process execution, which can be individual machines, such as packaging devices, specific machines bundled together, such as robots with fixtures, or a combination of them (Jarvenpaa, Siltala, and Lanz 2016). Moreover, a resource can also refer to the equipment provided for a certain process.
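One way to picture the product-process-resource triplet with a hierarchical process structure is the small data-structure sketch below; the class and attribute names are illustrative assumptions, not taken from the cited framework:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    # An entity involved in process execution, e.g. a packaging device
    # or a robot bundled with a fixture.
    name: str

@dataclass
class Process:
    # A single process realization (manufacturing, logistics, etc.)
    # with the resources it uses and its child processes.
    name: str
    resources: List[Resource] = field(default_factory=list)
    children: List["Process"] = field(default_factory=list)

@dataclass
class Product:
    # A market product, intermediate form, or raw material,
    # realized by a (possibly hierarchical) process.
    name: str
    process: Process

# Example: a parent "assemble" process containing two child processes.
packaging = Process("package", resources=[Resource("packaging device")])
assembly = Process("assemble", children=[Process("pick and place"), packaging])
widget = Product("widget", process=assembly)
```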
Parallel computing solutions for Markov chain spatial sequential simulation of categorical fields
Published in International Journal of Digital Earth, 2019
Weixing Zhang, Weidong Li, Chuanrong Zhang, Tian Zhao
Figure 6 and Table 2 show the elapsed times of all parallel solutions on the land cover post-classification case, with the elapsed time of the nonparallel solution normalized to 1. The MP solution was used to test the effect of parallel computing at the realization level using multiple CPU cores. With an increasing number of CPU cores, the computational performance of the coMCRF model improved gradually from 1.88× (at 2 CPU cores) to a peak of 16.54× (at 20 CPU cores). Further increasing the number of CPU cores (i.e. increasing the size of the parallel computing group) yielded little benefit, which can be explained by slow process spawning and I/O issues (Figure 6(c)). In MP-based parallel computing solutions (including MP and MP-GNNS), the parent process sequentially starts a new child process for each realization in the current parallel computing group, which involves both starting a fresh interpreter process and inheriting the necessary resources from the parent process. The program cannot start simulating the next group until the last realization in the current group has completed. As a result, increasing the number of CPU cores beyond a certain point may actually increase the total execution time of a stochastic simulation. This effect is worse for the MP-GNNS solution because data transfer between the CPU host and the GPU device for each realization also costs time (Figure 6(c)).

The GNNS solution was used to examine the effect of parallel nearest neighbor searching at the node level. Because nearest neighbor searching accounts for less than half of the elapsed time in the nonparallel sequential simulation, this solution improved the total computational efficiency by only 1.8×. More practical parallel computing solutions are MP-GNNS and GA-GNNS. The MP-GNNS solution combined the MP and GNNS solutions, and it obtained an optimal speedup of 22.76× when 20 CPU cores were used. The GA-GNNS solution combined the GA algorithm with the GNNS solution, both realizing parallel computing at the node level, and with GA it obtained an optimal speedup of 83.79× when the number of threads was set to 512.

These speedups may not appear ideal relative to the number of CPU cores or GPU threads used, because the coMCRF-based sequential simulation process contains some unavoidably nonparallel components and communication overheads. However, such improvements in computation speed, especially the speedup achieved by the GA-GNNS solution, are already very helpful for applying the coMCRF model to land cover post-classification.
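A simplified sketch of the realization-level (MP-style) group behavior described above, assuming a Python multiprocessing implementation; simulate_realization, the seeds, and the group size are placeholders, not the authors' code:

```python
import multiprocessing as mp

def simulate_realization(seed):
    # Placeholder for one coMCRF sequential-simulation realization
    # (nearest neighbor searching plus node-by-node simulation).
    return seed

def run_group(seeds):
    # One child process per realization in the current group; starting
    # each child launches a fresh interpreter and inherits resources
    # from the parent, which is part of the spawning overhead.
    procs = [mp.Process(target=simulate_realization, args=(s,)) for s in seeds]
    for p in procs:
        p.start()
    for p in procs:
        p.join()   # the next group cannot start until this group finishes

def run_realizations(n_realizations, group_size):
    seeds = list(range(n_realizations))
    for start in range(0, n_realizations, group_size):
        run_group(seeds[start:start + group_size])

if __name__ == "__main__":
    # The group size corresponds to the number of CPU cores in use.
    run_realizations(n_realizations=100, group_size=20)
```

The group barrier in run_group illustrates why adding cores eventually helps little: each group still pays the per-child spawning cost, and one slow realization holds up the whole group.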