Distributed and Parallel Computing
Published in Sunilkumar Manvi, Gopal K. Shyam, Cloud Computing, 2021
Sunilkumar Manvi, Gopal K. Shyam
Parallel computing is a type of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing further frequency scaling, the long-standing practice of raising a microprocessor's clock frequency to improve performance, which has stalled because higher frequencies disproportionately increase power consumption and the amount of heat generated by the chip. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
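Of the forms listed above, data parallelism is the easiest to sketch: the same operation is applied to independent slices of the data at the same time. A minimal illustration using Python's standard multiprocessing module (the function names are illustrative, not drawn from the chapter):

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    # each worker sums its own slice of the data independently
    return sum(chunk)

def parallel_sum(data, workers=4):
    # divide the large problem into smaller ones (data parallelism)
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # the smaller problems are solved at the same time
        partials = pool.map(chunk_sum, chunks)
    # combine the partial results into the final answer
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

The same fork-join shape underlies bit-level and instruction-level parallelism as well, except that there the splitting is done by the hardware rather than by the programmer.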
Overview of Basic Numerical Methods and Parallel Computing
Published in Sourav Banerjee, Cara A.C. Leckey, Computational Nondestructive Evaluation Handbook, 2020
Sourav Banerjee, Cara A.C. Leckey
Parallel computing is essentially a type of computation in which multiple calculations are performed at the same time. This operation is performed on the basis that a big sequential or serial problem can be split up into smaller parallel problems such that the overall problem can be solved simultaneously in a more efficient manner [6]. The development of computers over the past few decades has advanced the technological frontier and enabled scientific breakthroughs, allowing humans to simulate natural phenomena that were beyond reach even a hundred years ago by expanding computing capabilities and computational bounds [7]. Even more so, the development of parallel computing has dramatically increased the speed with which information can be processed. This has made it possible to tackle and solve previously unsolvable complex research problems in fields such as science, engineering, and information technology. Parallel computing has been able to turn a new leaf with the appearance of hardware with multicore designs [8]. The use of parallel hardware is now ubiquitous: newly developed laptops, desktops, and servers all use multicore processors. These new platforms require software to be developed in a new manner, one that can fully exploit the benefits of multiple cores.
Trauma Outcome Prediction in the Era of Big Data: From Data Collection to Analytics
Published in Ervin Sejdić, Tiago H. Falk, Signal Processing and Machine Learning for Biomedical Big Data, 2018
Shiming Yang, Peter F. Hu, Colin F. Mackenzie
Many programming languages and scientific data analysis libraries support parallel computing, such as OpenMP (Open Multi-Processing) and MPI (Message Passing Interface) for CPU parallelism, and Compute Unified Device Architecture (CUDA) for graphics processing unit (GPU) parallelism. Moreover, cloud computing services now provide scalable, on-demand use of shared computing resources, including CPUs/GPUs, memory, storage, and security. End users at hospitals or medical research institutes can be freed from the burden of building and maintaining expensive high-performance data processing equipment and infrastructure. Platforms from major vendors, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), are finding more and more applications in medical data mining and machine learning.
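OpenMP, MPI, and CUDA target C, C++, and Fortran code, but the fork-join pattern they provide can be sketched with Python's standard concurrent.futures module as a rough stand-in. The example below, with illustrative function names not taken from the chapter, splits a CPU-bound task (counting primes) across worker processes:

```python
import math
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    # CPU-bound kernel: count primes in the half-open range [lo, hi)
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, math.isqrt(n) + 1)):
            count += 1
    return count

def parallel_count(limit, workers=4):
    # partition the search range and process the parts simultaneously,
    # analogous to an OpenMP parallel-for over loop chunks
    step = max(1, limit // workers)
    ranges = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(count_primes, ranges))

if __name__ == "__main__":
    print(parallel_count(100_000))
```

In a cloud setting the same decomposition applies, except that the worker pool is replaced by rented virtual machines or managed batch services.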
Integration of ridesharing and activity travel pattern generation
Published in Transportmetrica A: Transport Science, 2023
Ali Najmi, Travis Waller, Wei Liu, Taha H. Rashidi
We did not include the computation time of different variants in Table 2. The reason is that DAM includes extensive pre-processing computational time, adversely affecting its computation time. Including the computation time in the table could distract the reader from the message of this paper, which is introducing a prototype for an advanced rideshare system covering the ATP-based announcements to enhance participants' mobility. However, computation time is not a problem in practice when advanced infrastructure and computation facilities are available. For example, parallel computing and quantum computing offer promising solutions to these challenges. Parallel computing can break down a problem into smaller, more manageable parts that can be processed simultaneously, greatly reducing the time required for calculations. Similarly, quantum computing uses the principles of quantum mechanics to perform calculations that are intractable on classical computers, offering the potential for exponential speedups in certain types of problems. By leveraging these advanced computing methods, it is possible to address the complexity of computational models and enable more efficient and accurate data analysis.
Development and computational performance improvement of the wheel-rail coupling for heavy haul locomotive traction studies
Published in Vehicle System Dynamics, 2022
Maksym Spiryagin, Edwin Vollebregt, Mark Hayman, Ingemar Persson, Qing Wu, Chris Bosomworth, Colin Cole
Modern railway research advancements have resulted in more and more advanced simulation models which are also computationally expensive. For these cases, the practicalities of the research outcomes also depend on the computing time that is needed to execute the simulations. Parallel computing [53] is a good approach to increase the computing speeds of simulation models. The concept of parallel computing is simple: multiple computer cores are used to process multiple computing tasks simultaneously. Obviously, the primary objective of parallel computing is to save computing time. A number of techniques, such as Message Passing Interface [54] and OpenMP [55], can be used to facilitate parallel computing; the choice depends on the algorithm structure and the programming platform. This paper further improves the approach published in [56], which used the OpenMP technique and was designed to parallelise the computing of wheel-rail contact simulations.
A parallel equivalence algorithm based on MPI for GNSS data processing
Published in Journal of Spatial Science, 2021
Chunhua Jiang, Tianhe Xu, Yujun Du, Zhangzhen Sun, Guochang Xu
In order to make better use of the equivalence algorithm, it is suggested to combine it with parallel computing technology. Parallel computing is a type of computation in which many calculations or the execution of processes can be carried out simultaneously (Almasi and Gottlieb 1989). Usually, large problems can be divided into smaller ones and then solved simultaneously. MPI is a parallel programming technique based on message passing (Forum 1994). In order to facilitate calculation, the Intel Math Kernel Library (MKL) is used in the program. The prerequisite for program parallelisation is that the results are equivalent; only then can the algorithms be parallelised. The advantage of the equivalence theory is that the nuisance parameters are eliminated, while L and P remain unchanged. The reduced equivalent observation equations contain only the parameters of interest, which can be solved independently without considering the correlation problem. On the one hand, the new observation equation is theoretically equivalent to the original equation, which makes the decomposition of the solution tasks and the allocation of computational resources possible. On the other hand, the consistency of the equivalent observation equation with the original observation equation can simplify the parallel scheme and reduce the complexity of data transmission and message passing. The numerical precision of the original solution and that of the equivalent equation solution with MPI parallel technology are analysed in the following part.
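The decomposition idea described above, where each reduced equivalent observation equation involves only its own parameters and can therefore be solved without inter-block communication, can be sketched in miniature. The following is a hypothetical illustration only (it is not the authors' GNSS implementation, and the single-parameter weighted least-squares solver is a deliberate simplification); Python's multiprocessing stands in for MPI's scatter/solve/gather pattern:

```python
from multiprocessing import Pool

def solve_block(block):
    # hypothetical per-block solver: after equivalence reduction, each
    # block's observation equation involves only its own parameter, so
    # blocks are mutually independent. For a single scalar parameter
    # with a design column of ones, the weighted least-squares estimate
    # x = (A^T P A)^{-1} A^T P L reduces to a weighted mean of L.
    L, P = block  # observations and their weights for this block
    return sum(p * l for p, l in zip(P, L)) / sum(P)

def parallel_solve(blocks, workers=2):
    # stand-in for MPI: scatter blocks to processes, solve each
    # independently, then gather the per-block estimates
    with Pool(workers) as pool:
        return pool.map(solve_block, blocks)

if __name__ == "__main__":
    blocks = [([1.0, 3.0], [1.0, 1.0]),
              ([2.0, 2.0], [2.0, 2.0])]
    print(parallel_solve(blocks))
```

Because no block needs another block's nuisance parameters, the only communication is the final gather, which is what keeps the data-transmission cost of the parallel scheme low.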