Parallel Programming Using Message Passing Interface (MPI)
Published in Subrata Ray, Fortran 2018 with Parallel Programming, 2019
MPI is the abbreviation for Message Passing Interface. It consists of a library of Fortran subroutines (and C functions) that the programmer calls explicitly to write parallel programs. A parallel computation using MPI routines is performed by a number of processes, each having its own local memory; every process executes the task assigned to it and communicates with the others through messages. A process cannot directly access the data of another process, and sharing of data is achieved only through messages. Processes usually run on different processors, but this is not always the case, and the processors may be heterogeneous. The source code is usually portable across processors. The number of processes may be prescribed when the program is executed. The basic principle of parallelization is to divide the job among the processes so that proper load balancing is maintained and the program runs efficiently.
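As a minimal sketch of this model (using the C bindings and standard MPI calls; the book itself works with the Fortran interface), one process can make its data visible to another only by sending an explicit message:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* identity of this process     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes    */

    if (rank == 0 && size > 1) {
        value = 42;                          /* lives only in rank 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* rank 1 cannot read rank 0's memory; it must receive a message */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

The number of processes is prescribed at launch, for example with a command such as mpirun -np 4 ./a.out (the exact launcher depends on the MPI installation).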
Introducing NovaGenesis as a Novel Distributed System-Based Convergent Information Architecture
Published in Phan Cong Vinh, Nature-Inspired Networking: Theory and Applications, 2018
Antonio Marcos Alberti, Marco Aurelio Favoreto Casaroli, Rodrigo da Rosa Righi, Dhananjay Singh
MPI is the standard API for developing high-performance applications for cluster and grid computing. It was first introduced in 1994 by the MPI Forum as a reaction against the wide variety of proprietary message-passing approaches then in use in this field. Because many applications written to this standard for clusters and grids already exist, MPI is also used to run message-passing applications in cloud environments. Two popular implementations of the MPI standard are OpenMPI [46] and MPICH [47]. Both provide support for TCP/IP sockets, as well as transport over other high-speed networks such as InfiniBand [48], Ethernet [49], iWARP [50], and Myrinet [51]. In general, it is possible to bypass the TCP/IP stack, or even the operating system kernel, and thereby avoid its overhead.
OOP, MPI and Parallel Computing
Published in James J Y Hsu, Nanocomputing, 2017
The Message Passing Interface (MPI) is the de facto standard for implementing programs on multiple processors in a distributed-memory environment. If all CPUs share the full memory, it is in fact easier to implement a shared-memory model using OpenMP, since the communication bottleneck is alleviated. MPI bindings are commonly defined for the C and Fortran languages for communication among different CPUs. As PC clusters make substantial inroads into the scientific computing arena, MPI will prove well worth the effort, since PC clusters are rather cost-effective. The industry trend towards multiple cores per CPU may also make OpenMP indispensable. Hybrid approaches that mix the two are receiving more attention lately.
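A minimal sketch of such a hybrid program (assuming an MPI library built with thread support and an OpenMP-capable C compiler): MPI distributes work across distributed-memory nodes, while OpenMP threads share memory within each process.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* request an MPI library that tolerates OpenMP threads
       (only the main thread makes MPI calls in this sketch) */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* shared-memory parallelism inside each MPI process */
    #pragma omp parallel
    printf("MPI rank %d, OpenMP thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}

With a GCC-based toolchain this would typically be built with mpicc -fopenmp and launched with one MPI process per node so that OpenMP can use the cores within each node; the exact flags and process placement depend on the system.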
Parallel Multiphysics Coupling: Algorithmic and Computational Performances
Published in International Journal of Computational Fluid Dynamics, 2020
G. Houzeaux, M. Garcia-Gasulla, J. C. Cajas, R. Borrell, A. Santiago, C. Moulinec, M. Vázquez
In the following, for the sake of simplicity, we neglect the time to exchange the transmission conditions. The basic question is: how should the different MPI processes be distributed to optimise the computing time or, equivalently, to maximise the efficiency of the run? To answer this question, we will first develop some simple performance indicators of the block coupling algorithms: timing, parallel efficiency and scalability. The answer to the question is given in Section 4.4.
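For orientation, the usual single-code indicators from which such measures are typically built (the coupled-case definitions developed in the article may refine these) are, for an elapsed time T(N) on N processes:

speedup      S(N) = T(1) / T(N)
efficiency   E(N) = S(N) / N = T(1) / (N × T(N))

Scalability then describes how E(N) behaves as N grows, with ideal scaling corresponding to E(N) close to 1.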
Programming models and systems for Big Data analysis
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Loris Belcastro, Fabrizio Marozzo, Domenico Talia
MPI is a general-purpose distributed-memory system for parallel programming, commonly used for developing iterative parallel applications in which nodes must exchange data and synchronise to proceed. A generic MPI program can be written using APIs available for many programming languages (e.g. Java, Fortran, C, C++, Perl, Python).
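As an illustrative sketch in C, such an iterative pattern, in which each node contributes a partial result and synchronises before the next step, can be expressed with a collective reduction (the variable names are only for illustration):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = rank + 1.0;               /* this node's piece of the data */
    for (int iter = 0; iter < 10; iter++) {
        double global = 0.0;
        /* every process contributes its value and receives the sum;
           the collective call also acts as a synchronisation point  */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        local = global / size;               /* next iteration uses the exchanged result */
    }

    if (rank == 0)
        printf("value after 10 iterations: %f\n", local);
    MPI_Finalize();
    return 0;
}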