Large-Scale Finite Element Analysis of the Beating Heart
Published in Theo C. Pilkington, Bruce Loftis, Joe F. Thompson, Savio L-Y. Woo, Thomas C. Palmer, Thomas F. Budinger, High-Performance Computing in Biomedical Research, 2020
Andrew McCulloch, Julius Guccione, Lewis Waldman, Jack Rogers
There are two main forms of parallel processing systems. Multicomputer (or “distributed”) systems consist of heterogeneous processors, typically physical computer nodes with their own local memory, linked by a network. We will concentrate on “multiprocessor” systems, which are faster and more specialized because they consist of more tightly coupled, homogeneous processors whose memory may be local to the processing element, shared between elements, or a combination of the two. Multiprocessor architectures may be classified according to their processor and memory organization or by the degree of parallelism of the instruction and data streams. Following the latter approach, Flynn [18] suggested four categories of computer architecture: single-instruction single-data (SISD), single-instruction multiple-data (SIMD), multiple-instruction single-data (MISD), and multiple-instruction multiple-data (MIMD).
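To make these categories concrete, the following minimal Python sketch (an illustration added here, not from the chapter) contrasts a sequential SISD loop, a SIMD-style vectorized operation, and MIMD-style independent workers; the use of NumPy and the worker count of four are assumptions of the example.

```python
# Illustrative sketch of Flynn's categories (hypothetical example).
import numpy as np
from multiprocessing import Pool

# SISD: one instruction stream, one data element at a time.
def sisd_square(xs):
    out = []
    for x in xs:          # a single processor walks the data sequentially
        out.append(x * x)
    return out

# SIMD style: one instruction (multiply) applied to many data elements at once.
def simd_square(xs):
    a = np.asarray(xs)
    return a * a          # vectorized: the same operation over the whole array

# MIMD style: independent instruction streams on separate data partitions.
def mimd_square(xs, workers=4):
    chunks = np.array_split(np.asarray(xs), workers)
    with Pool(workers) as pool:   # each worker runs its own instruction stream
        parts = pool.map(simd_square, chunks)
    return np.concatenate(parts)

if __name__ == "__main__":
    data = list(range(16))
    assert sisd_square(data) == simd_square(data).tolist() == mimd_square(data).tolist()
```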
Parallel Computing
Published in Udo W. Pooch, Alan D. George, Lois Wright Hawkes, Microprocessor-Based Parallel Architecture for Reliable Digital Signal Processing Systems, 2018
Alan D. George, Lois Wright Hawkes
The need for parallel processing in future computing systems is becoming increasingly evident. Non-parallel, or sequential, computers are quickly approaching the upper limit of their computational potential. This performance envelope is dictated by the speed of light, which restricts the maximum signal transmission speed in silicon to 3 × 10⁷ meters per second. Thus, a chip which is 3 centimeters in diameter requires at least 10⁻⁹ seconds to propagate a signal, thereby restricting such a chip to at most 10⁹ floating-point operations per second (i.e., 1 GFLOPS). Since existing supercomputer processors are quickly approaching this limit, the future of sequential processors appears limited, as illustrated in Figure 3.1. And while other chip technologies such as gallium arsenide (GaAs) offer faster signal propagation than silicon, they represent only a minor delay of the inevitable. Thus, sequential processors are quickly approaching their upper bound, making parallel and distributed computing the wave of the future [DECE89]. In its most general form, parallel processing or parallel computing may be thought of as an efficient form of information processing which emphasizes and exploits those events in the algorithm or computing process that are concurrent. The exploitation of these concurrent events may take place at many different processing levels [HWAN84]. The presence of multiple processors in the system is a necessary, but not sufficient, condition for it to be considered a parallel system.
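Spelled out, the quoted bound follows from simple arithmetic, under the implicit assumption that each operation requires at least one signal traversal of the chip:

```latex
t_{\min} = \frac{d}{v} = \frac{3 \times 10^{-2}\,\mathrm{m}}{3 \times 10^{7}\,\mathrm{m/s}} = 10^{-9}\,\mathrm{s},
\qquad
f_{\max} \leq \frac{1}{t_{\min}} = 10^{9}\ \mathrm{ops/s} = 1\ \mathrm{GFLOPS}.
```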
Clustering in Big Data
Published in Kuan-Ching Li, Hai Jiang, Albert Y. Zomaya, Big Data Management and Processing, 2017
Min Chen, Simone A. Ludwig, Keqin Li
In this age of data explosion, parallel processing is essential for processing massive volumes of data in a timely manner. Because data sizes are growing much faster than memory and processor capacities, single-machine clustering techniques, with one processor and one memory, cannot handle the tremendous amount of data; algorithms that can run on multiple machines are needed. Unlike single-machine techniques, multiple-machine clustering techniques divide the huge amount of data into small pieces. These pieces can be loaded onto different machines, and the large problem can then be solved using the combined processing power of those machines. Parallel processing applications include conventional parallel applications and data-intensive applications. Conventional parallel applications assume that the data can fit into the memory of the distributed machines. Data-intensive applications are I/O bound and devote the largest fraction of their execution time to the movement of data. OpenMP, MPI [13], and MapReduce are common parallel processing models for data-intensive computing. Here, we only discuss the conventional parallel and MapReduce clustering algorithms.
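As a concrete illustration of the MapReduce pattern applied to clustering, the pure-Python sketch below (a hypothetical example, not from the chapter; a real deployment would run on a framework such as Hadoop or Spark) performs one k-means iteration: the map step assigns each point to its nearest centroid, and the reduce step averages the points grouped under each centroid.

```python
# One k-means iteration in the MapReduce style (illustrative sketch).
from collections import defaultdict
import math

def map_assign(point, centroids):
    """Map step: emit (index of nearest centroid, point)."""
    dists = [math.dist(point, c) for c in centroids]
    return dists.index(min(dists)), point

def reduce_recenter(assigned):
    """Reduce step: average the points grouped under each centroid index."""
    groups = defaultdict(list)
    for idx, p in assigned:
        groups[idx].append(p)
    return {idx: tuple(sum(col) / len(pts) for col in zip(*pts))
            for idx, pts in groups.items()}

points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5)]
centroids = [(0.0, 0.0), (10.0, 10.0)]

# The map phase is embarrassingly parallel: each point is processed
# independently, so a framework can shard `points` across many machines.
assigned = [map_assign(p, centroids) for p in points]
print(reduce_recenter(assigned))   # {0: (1.25, 1.5), 1: (8.5, 8.75)}
```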
Conditional strong matching preclusion of the pancake graph
Published in International Journal of Parallel, Emergent and Distributed Systems, 2023
Parallel processing uses computers made up of many separate processors to overcome the limitations of computers with a single processor. When parallel processing is used, one processor may need output generated by another processor, so these processors must be interconnected. The interconnection network of these processors is usually modelled by a graph. Brigham et al. [1] introduced the concept of matching preclusion as a measure of robustness in the event of link failures in interconnection networks. A matching preclusion set of G is a set of edges whose deletion results in an unmatchable graph, that is, a graph with neither a perfect matching nor an almost-perfect matching [1]. The matching preclusion number of G, denoted by mp(G), is the minimum size over all matching preclusion sets of G. Any such optimal set is called an optimal matching preclusion set. If mp(G) is large, the network will be robust in the event of link failures. If G is unmatchable, then mp(G) = 0.
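For intuition, mp(G) can be computed by brute force on very small graphs, as in the Python sketch below (an illustration added here, not from the paper; it enumerates all edge subsets of each size, so it is infeasible beyond toy instances). The networkx library is an assumption of the example.

```python
# Brute-force computation of the matching preclusion number mp(G)
# (illustrative sketch; exponential in the number of edges).
from itertools import combinations
import networkx as nx

def is_matchable(G):
    """True if G has a perfect matching (even order)
    or an almost-perfect matching (odd order)."""
    m = nx.max_weight_matching(G, maxcardinality=True)
    n = G.number_of_nodes()
    return 2 * len(m) >= n - (n % 2)

def matching_preclusion_number(G):
    if not is_matchable(G):
        return 0                      # unmatchable graphs have mp(G) = 0
    edges = list(G.edges())
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            H = G.copy()
            H.remove_edges_from(subset)
            if not is_matchable(H):
                return k              # smallest edge set whose deletion precludes a matching
    return len(edges)

print(matching_preclusion_number(nx.cycle_graph(4)))  # 2: delete two edges sharing a vertex
```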
Experimental Evaluation of Agent-based Approaches to Solving Multi-mode Resource-Constrained Project Scheduling Problem
Published in Cybernetics and Systems, 2018
The idea of the A-Team was used to develop a software environment called JADE-based A-Team (JABAT), dedicated to solving different computationally hard optimization problems (Jędrzejowicz and Wierzbowska 2006, Barbucha et al. 2009). The JABAT system supports the construction of dedicated A-Team architectures. The agents used in JABAT ensure decentralization of computation across multiple hardware platforms. Parallel processing leads to more effective use of the available resources and, ultimately, a reduction in computation time. The JABAT environment has been successfully used for solving different NP-hard optimization problems, including the Euclidean planar traveling salesman problem (Jędrzejowicz and Wierzbowska 2011), the vehicle routing problem (Barbucha et al. 2013), the clustering problem (Czarnowski and Jędrzejowicz 2009), the resource availability cost problem (Jędrzejowicz and Ratajczak-Ropel 2012), as well as the single- and multi-mode resource-constrained project scheduling problems (RCPSP and MRCPSP) (Jędrzejowicz and Ratajczak-Ropel 2016a, 2016b). Optimization agents for solving these problems, as well as static and dynamic strategies controlling the interactions between agents and memories, have been proposed and experimentally validated. The influence of such interaction strategies on the A-Team was investigated by Barbucha et al. (2010) and Jędrzejowicz and Ratajczak-Ropel (2014).
Iterated local search for the vehicle routing problem with a private fleet and a common carrier
Published in Engineering Optimization, 2020
John F. Castaneda L., Eliana M. Toro, Ramon A. Gallego R.
Parallel processing could also be used to exploit the characteristics of the problem and of the solution method. Compared with sequential processing, it would allow a larger region of the solution space to be explored in less computing time, which translates into improved performance, especially for instances of high mathematical complexity and when short computation times are required.
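One simple way to realize this idea is to run several independent iterated-local-search trajectories concurrently and keep the best solution found, as in the deliberately simplified Python sketch below (the one-dimensional objective and the perturbation scheme are placeholders added for illustration; the authors' routing model is not reproduced here).

```python
# Parallel solution-space exploration via independent ILS restarts
# (toy objective; each worker explores from its own random seed).
import random
from multiprocessing import Pool

def objective(x):
    return (x - 3.0) ** 2 + 1.0          # placeholder cost function

def iterated_local_search(seed, iters=1000, step=0.5):
    rng = random.Random(seed)
    best = rng.uniform(-10, 10)
    for _ in range(iters):
        cand = best + rng.uniform(-step, step)   # perturbation + greedy acceptance
        if objective(cand) < objective(best):
            best = cand
    return objective(best), best

if __name__ == "__main__":
    with Pool(4) as pool:                # four searches run concurrently
        results = pool.map(iterated_local_search, range(4))
    print(min(results))                  # best solution found by any worker
```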