Introduction
Published in Statistical Computing in C++ and R, 2011
Randall L. Eubank, Ana Kupresanin
Parallel processing has often been viewed as an esoteric side of computing. However, now that multicore processors are the industry standard, the operating systems of desktop machines and laptops are all actively engaged in parallel processing. The problem for the typical code developer lies in obtaining access to the multithreading capabilities of the machines at their disposal. This capability becomes available through an application programming interface, or API, that contains a set of commands (usually referred to as bindings) linking a language such as C++ with a code library that manages communication to and between individual processors. OpenMP is a simple yet sophisticated API for shared memory settings that is amenable to use even in a desktop environment. The industry standard for parallel programming in distributed memory environments is the Message Passing Interface, or MPI. OpenMP and MPI are not competing interfaces; they are designed to solve very different communication problems. For MPI the task is sending messages between processors in different physical locations that have no access to a common block of memory. With OpenMP, in contrast, processors communicate directly through memory that they share; part of the function of the API in this instance is to control memory usage in a way that prevents data corruption.
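To make the shared memory model concrete, here is a minimal sketch (not drawn from the book; it assumes a compiler with OpenMP support, e.g. g++ -fopenmp) in which a single pragma distributes a loop over the available threads, and a reduction clause protects the shared accumulator from exactly the kind of data corruption mentioned above:

    #include <iostream>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<double> x(n, 1.0);
        double sum = 0.0;

        // Each thread receives a chunk of the loop iterations and a
        // private copy of sum; OpenMP combines the private copies
        // safely once the loop finishes.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i) {
            sum += x[i];
        }

        std::cout << "sum = " << sum << std::endl;
        return 0;
    }

Compiled without OpenMP support, the pragma is simply ignored and the loop runs serially, which is part of what makes the API convenient for incremental parallelisation of existing code.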
Distributed memory
Published in Phillip A. Laplante, Dictionary of Computer Science, Engineering, and Technology, 2017
distributed memory denotes a multiprocessor system in which main memory is distributed across the processors, as opposed to being equally accessible to all. Each processor has its own local main memory (positioned physically "close" to it), and access to the memory of other processors takes place through the passing of messages over a bus. The term "loosely coupled" can also be used to describe this type of multiprocessor architecture, to contrast it with shared memory architectures, which can be described as "strongly coupled".
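To illustrate the message passing that this definition refers to, here is a minimal MPI sketch (an illustrative example, not part of the dictionary entry; it assumes an MPI installation and would typically be built with mpic++ and launched with mpirun -np 2) in which one process sends a value to another, since neither can read the other's local memory directly:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            // Rank 0 cannot write into rank 1's memory; the value
            // must travel as an explicit message over the interconnect.
            double value = 3.14;
            MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            double value = 0.0;
            MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::printf("rank 1 received %f\n", value);
        }

        MPI_Finalize();
        return 0;
    }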
Distributed and Parallel Computing
Published in Cloud Computing, 2021
Sunilkumar Manvi, Gopal K. Shyam
A distributed computer (also known as a distributed memory multiprocessor) is a computer system in which each processing element has its own local memory and the elements are connected by a network. Distributed computers are highly scalable, since each added node contributes its own processor and memory.
Development and Optimisation of a DNS Solver Using Open-source Library for High-performance Computing
Published in International Journal of Computational Fluid Dynamics, 2021
Hamid Hassan Khan, Syed Fahad Anwer, Nadeem Hasan, Sanjeev Sanghi
Application programming interfaces (APIs) such as OpenMP and MPI can be used to parallelise the present solver on CPUs, and the appropriate API is selected based on the solver's characteristics and the discussion above. The current solver relies on array (loop-free) operations, as discussed in the preceding Section 3.1, so its memory (RAM) usage per processor is high. A distributed memory system, parallelised using MPI, has the advantage of reducing the memory usage along with the computational time. Therefore, the present solver is parallelised using MPI, and the parallelisation is achieved through one of MPI's inherent concepts, known as domain decomposition.
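As a schematic illustration of domain decomposition (a hypothetical sketch, not the authors' solver code; it assumes a 1-D array whose length is divisible by the number of MPI ranks), each process allocates and works on only its own block, which is why per-process memory usage drops as ranks are added:

    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int N = 1024;                 // illustrative global problem size
        const int local_n = N / size;       // assumes size divides N evenly
        const int offset = rank * local_n;  // start of this rank's block

        // Each rank allocates only its own slice of the global array;
        // this is how decomposition reduces memory usage per process.
        std::vector<double> u(local_n);
        for (int i = 0; i < local_n; ++i)
            u[i] = static_cast<double>(offset + i);

        // A local partial sum followed by a global reduction stands in
        // for the solver's real per-block computation.
        double local_sum = 0.0;
        for (double v : u) local_sum += v;

        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("global sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }

In a full flow solver, neighbouring blocks would additionally exchange boundary (halo) values between updates, but the memory-saving principle is the same: each process holds only its own subdomain.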