Interprocess Communication Primitives in POSIX/Linux
Published in Ivan Cibrario Bertolotti, Gabriele Manduchi, Real-Time Embedded Systems, 2017
Ivan Cibrario Bertolotti, Gabriele Manduchi
If we compare this with the much richer pthreads API, we might be surprised by the fact that there is no way to specify a program to be executed or to pass arguments to it. What fork() actually does is create an exact clone of the calling process by replicating the memory content of the process and the associated structures, including the current values of the processor registers. When fork() returns, two identical processes at the same point of execution are present in the system (one of the duplicated processor registers is in fact the Program Counter, which holds the address of the next instruction to be executed). There is only one difference between the two: the return value of fork() is set to 0 in the created process, and to the identifier of the new process in the original process. This allows the code to discriminate between the calling and the created process, as shown by the following code snippet:

#include <sys/types.h>   //Required include files
#include <unistd.h>
...
pid_t pid;
...
pid = fork();
if(pid == 0)
{
  //Actions for the created process
}
else
{
  //Actions for the calling process
}
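For reference, a minimal self-contained sketch built around the same return-value check might look as follows; the printed messages, the error handling, and the call to waitpid() are additions for illustration and are not part of the excerpt above:

/* Minimal sketch: fork() and discriminate parent from child via the return value */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {                /* fork() failed, no process was created */
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {               /* created (child) process: fork() returned 0 */
        printf("child:  pid=%d\n", (int)getpid());
        exit(EXIT_SUCCESS);
    }
    /* calling (parent) process: fork() returned the child's identifier */
    printf("parent: pid=%d, child pid=%d\n", (int)getpid(), (int)pid);
    waitpid(pid, NULL, 0);        /* wait for the child to terminate */
    return 0;
}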
Multithreading in LabVIEW
Published in Rick Bitter, Taqi Mohiuddin, Matt Nawrocki, LabVIEW™ Advanced Programming Techniques, 2017
Rick Bitter, Taqi Mohiuddin, Matt Nawrocki
Priority and scheduling are different for Pthreads; Pthreads have defined scheduling policies: round robin; first-in, first-out; and others. The FIFO policy lets a thread execute until it completes its execution or becomes blocked. This policy is multitasking by any other name, because there is no preemption involved. The round-robin policy is preemptive multithreading. Each thread is allowed to execute for a maximum amount of time, a unit referred to as a “quantum.” The time of a quantum is defined by the vendor’s implementation. The “other” policy has no formal definition in the POSIX standard. This is an option left up to individual vendors. Pthreads expand on a concept used in UNIX called “forking.” A UNIX process may duplicate itself using a fork command. Many UNIX daemons such as Telnet use forking. Forking is not available to the Win32 programmer. A process that generates the fork is called the Parent process, while the process that is created as a result of the fork command is referred to as the Child process. The Child process is used to handle a specific task, and the Parent process typically does nothing but wait for another job request to arrive. This type of multitasking has been used for years in UNIX systems.
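As an illustration of the scheduling policies mentioned above, the following sketch creates a Pthread with an explicit round-robin (SCHED_RR) policy; the function name worker() and the chosen priority are illustrative assumptions, and on many systems selecting a real-time policy requires elevated privileges:

/* Sketch: creating a thread with an explicit round-robin scheduling policy */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* ... thread body ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    /* Do not inherit the creator's policy; use the attributes set below */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);          /* round robin */
    param.sched_priority = sched_get_priority_min(SCHED_RR); /* illustrative priority */
    pthread_attr_setschedparam(&attr, &param);

    int err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0)
        fprintf(stderr, "pthread_create failed: %d\n", err);
    else
        pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}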
Blockchain in food supply chains: a literature review and synthesis analysis of platforms, benefits and challenges
Published in International Journal of Production Research, 2023
Kunpeng Li, Jun-Yeon Lee, Amir Gharehgozli
Blockchain technology is still in the early stages and so is the study of it. The current research is primarily conducted through synthesis analysis, literature review, case study, and survey research. A promising research direction would be to incorporate blockchain in analytical models. Benefits of blockchain were qualitatively introduced and discussed in this paper. An interesting extension would be to collect and evaluate quantitative evidence of the realised blockchain benefits from pioneer adopters. Blockchain-based platforms in food supply chains are mainly in the process of food production and distribution, but not in food delivery or food leftover sharing even though there are apparent benefits in these areas. Studies on how to integrate blockchain throughout the whole process in food supply chains from farm to fork would be another important research direction.
Generative design of conformal cubic periodic cellular structures using a surrogate model-based optimisation scheme
Published in International Journal of Production Research, 2022
The comparison of the three problems again reveals three different optimisation outcomes for lighter parts:

(1) The optimised lighter part achieves similar performance in comparison to the solid counterpart. The optimised lighter bracket has an MCS value of 360.5989 psi (2.4862 MPa), which is almost the same as the MCS value (358.5230 psi/2.4719 MPa) of its solid counterpart. Meanwhile, the part weight is effectively reduced by 30.99%.

(2) The optimised lighter part does not perform as well as the solid counterpart. By restricting the volume fraction to a lower range, the non-uniformly offset connecting rod can no longer maintain a similar performance compared to its solid counterpart. The connecting rod attains an MCS value of 250.5975 psi (1.7278 MPa) while achieving a higher weight reduction of 19.46% (compared to 10.38%). However, the lighter weight comes at the cost of degraded performance.

(3) The optimised lighter part gains significant weight reduction with further improvement in its functional performance. By repeating the optimisation process, the optimised lighter fork end achieves not only a significant weight reduction (38.85%) but also a further improvement in performance. The MCS is further optimised to a value of 280.4817 psi (1.9338 MPa).
Parallel computing in railway research
Published in International Journal of Rail Transportation, 2020
Qing Wu, Maksym Spiryagin, Colin Cole, Tim McSweeney
OpenMP is a programming extension that is designed primarily for parallel computing with the shared memory model as shown in Figure 3(b). Compared with the distributed memory model, the shared memory model used by OpenMP makes it more flexible when initialising and finalising parallel processes. A good illustration of this flexibility is shown in Figure 6 where, for example, one computing process contains a number of parallelisable matrix operations and parallelisable for-loops. When using OpenMP, as the memory is shared among all computing units, the computing program can easily fork into multiple parallelised processes or threads that operate on the shared memory (shared data). Upon finalisation of the parallelised work, all forked processes easily rejoin the master process. For railway research, OpenMP has found applications in [11,19,32,44,45]. More information regarding OpenMP itself can be found in Barney's other online book [46].
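As a concrete illustration of this fork/join pattern, the sketch below parallelises a simple for-loop over shared arrays with OpenMP; the array size and names are illustrative, and the code assumes an OpenMP-capable compiler (e.g. gcc -fopenmp):

/* Sketch of the fork/join pattern: the master thread forks into a team that
   works on shared data inside a parallel for-loop, then joins back. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];            /* shared data (zero-initialised) */

    #pragma omp parallel for             /* fork: loop iterations split among threads */
    for (int i = 0; i < N; i++)
        b[i] = 2.0 * a[i] + 1.0;         /* each thread handles its own chunk */
    /* implicit join (barrier) at the end of the parallel for-loop */

    printf("threads available: %d\n", omp_get_max_threads());
    return 0;
}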