Parallel and Distributed Processing
Published in David R. Martinez, Robert A. Bond, M. Michael Vai, High Performance Embedded Computing Handbook, 2018
Albert I. Reuther, Hahn G. Kim
The Parallel Virtual Machine (PVM) is an outgrowth of a research project to design a solution for heterogeneous parallel computing (Geist et al. 1994; Parallel Virtual Machine webpage). The PVM library and software tools present the programmer with an abstract “parallel virtual machine,” which represents a cluster of homogeneous or heterogeneous computers connected by a network as a single parallel computer. Machines can enter or leave the virtual machine at runtime. PVM uses a leader-worker paradigm: programs start as a single leader process, which dynamically spawns worker processes onto multiple processors, all executing the same program. Every process is assigned a unique identification (ID), known as a task ID, which is used for communication. PVM supplies functions for message passing, for adding and removing processors to and from the virtual machine, and for managing and monitoring processes executing on remote machines.
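The leader-worker structure and task-ID-based messaging described above can be sketched with the standard PVM 3 C API. This is a minimal sketch, not the chapter's own example: the executable name "pvmwork", the message tags, and the worker count are illustrative choices, and error handling is omitted.

/* Single-program leader-worker sketch using the PVM 3 C API.
 * The executable name "pvmwork", the message tags, and the worker
 * count are illustrative assumptions; error handling is minimal.
 */
#include <stdio.h>
#include <pvm3.h>

#define NWORKERS 4
#define TAG_WORK 1
#define TAG_DONE 2

int main(void)
{
    int mytid  = pvm_mytid();   /* task ID of this process           */
    int parent = pvm_parent();  /* task ID of the spawning process   */

    if (parent == PvmNoParent) {
        /* Leader: spawn workers onto the virtual machine. */
        int tids[NWORKERS];
        int n = pvm_spawn("pvmwork", NULL, PvmTaskDefault, "", NWORKERS, tids);

        for (int i = 0; i < n; i++) {
            pvm_initsend(PvmDataDefault);   /* new send buffer        */
            pvm_pkint(&i, 1, 1);            /* pack one work item     */
            pvm_send(tids[i], TAG_WORK);    /* address by task ID     */
        }
        for (int i = 0; i < n; i++) {
            int result;
            pvm_recv(-1, TAG_DONE);         /* any worker, done tag   */
            pvm_upkint(&result, 1, 1);
            printf("leader %x received result %d\n", mytid, result);
        }
    } else {
        /* Worker: receive a work item, compute, reply to the leader. */
        int item, result;
        pvm_recv(parent, TAG_WORK);
        pvm_upkint(&item, 1, 1);
        result = item * item;               /* stand-in computation   */
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&result, 1, 1);
        pvm_send(parent, TAG_DONE);
    }
    pvm_exit();                             /* leave the virtual machine */
    return 0;
}

Because every process runs the same program, the pvm_parent() test is what separates the leader role from the worker role at runtime.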
High-Performance Computing, High-Speed Networks, and Configurable Computing Environments: Progress Toward Fully Distributed Computing
Published in Theo C. Pilkington, Bruce Loftis, Joe F. Thompson, Savio L-Y. Woo, Thomas C. Palmer, Thomas F. Budinger, High-Performance Computing in Biomedical Research, 2020
William E. Johnston, Van L. Jacobson, Stewart C. Loken, David W. Robertson, Brian L. Tierney
PVM is a publicly available software package that allows a heterogeneous network of parallel and serial computers to be used as a single computational resource. It provides facilities for the initiation, communication, and synchronization of processes over a network of heterogeneous machines.
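The synchronization facilities mentioned here can be sketched with PVM's group library (libgpvm3), which lets processes join a named group and wait at a barrier. The group name "workers" and the expected member count below are illustrative assumptions, not taken from the excerpt.

/* Synchronization sketch using the PVM group library (libgpvm3).
 * The group name "workers" and the count of 4 are illustrative.
 */
#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    int mytid = pvm_mytid();

    /* Join a named group; the call returns this task's instance number. */
    int inum = pvm_joingroup("workers");

    /* ... local work would happen here ... */

    /* Block until 4 members of "workers" have reached the barrier. */
    pvm_barrier("workers", 4);
    printf("task %x (instance %d) passed the barrier\n", mytid, inum);

    pvm_lvgroup("workers");   /* leave the group            */
    pvm_exit();               /* leave the virtual machine  */
    return 0;
}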
P-HS-SFM: a parallel harmony search algorithm for the reproduction of experimental data in the continuous microscopic crowd dynamic models
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2018
Khalid Mohammad Jaber, Osama Moh’d Alia, Mohammed Mahmod Shuaib
The architecture of a parallel environment must connect all of the nodes involved, for example through shared or distributed memory. Communication between processors forms the basis of parallel computation. Fortunately, many message-passing libraries have been developed that can initiate and configure the messaging environment as well as send and receive packets of data between processors (Leopold, 2001). The two most popular message-passing libraries are the Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI), while the most popular execution models for the shared address space paradigm are POSIX Threads and OpenMP (Leopold, 2001).
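For contrast with the PVM sketches above, the other message-passing library named here, MPI, can be illustrated with a minimal point-to-point exchange. This is a sketch assuming a standard MPI implementation launched with mpirun; the message tags and the squaring payload are illustrative.

/* Minimal MPI point-to-point sketch: rank 0 sends an integer to each
 * other rank and collects a reply. Tags and payload are illustrative.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Leader: hand one work item to every other rank. */
        for (int r = 1; r < size; r++) {
            int work = r;
            MPI_Send(&work, 1, MPI_INT, r, 0, MPI_COMM_WORLD);
        }
        /* Collect replies in whatever order they arrive. */
        for (int r = 1; r < size; r++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d\n", result);
        }
    } else {
        /* Worker: receive, compute, and reply to rank 0. */
        int work, result;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = work * work;               /* stand-in computation */
        MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Unlike the PVM example, MPI processes are typically all started by the launcher rather than spawned dynamically, and they are addressed by rank within a communicator rather than by task ID.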