Applications of Parallel Processing in Structural Engineering
Published in Hojjat Adeli, Parallel Processing in Computational Mechanics, 2020
Advanced computer architectures are centered on the concept of parallel processing. The basic idea is that a program running on parallel processors should complete much faster than an otherwise identical program running on a single processor. In the past decade, many new parallel computers have emerged in research institutes as well as on the commercial market. Scientists and engineers now face the challenge of exploiting the ever-increasing capability of parallel hardware and software, which allows them to solve larger and more complex problems than hitherto possible. However, the parallel processing environment also increases the complexity of software design and algorithmic strategy. Developing a parallel algorithm requires decomposing the solution of a problem into concurrently executable tasks (or processes), balancing the computational load across processors, scheduling task execution, synchronizing operations, organizing data access and movement, and planning communication among processors. Indeed, the main stumbling block to the use of parallel computers is widely believed to be the difficulty of formulating algorithms for them.
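The steps listed above can be illustrated with a minimal sketch in Python. The function names and the sum-of-squares workload are illustrative assumptions, not from the text; the point is the pattern of decomposing a problem into chunks, scheduling them on a pool of workers, and combining the partial results.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """One concurrently executable task: sum of squares over a slice of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Decompose the problem into roughly equal chunks to balance the load.
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # The pool schedules the tasks on the workers and synchronizes the
    # results; combining the partial sums yields the final answer.
    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```

Even in this toy example, the programmer must decide how to split the data, how many workers to use, and where the synchronization point lies, which is exactly the added design burden the paragraph describes.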
Genetic Algorithm (GA)
Published in Paresh Chra Deka, A Primer on Machine Learning Applications in Civil Engineering, 2019
The main criteria used to classify optimization algorithms are as follows: continuous/discrete, constrained/unconstrained, and sequential/parallel. Although discrete and continuous problems differ clearly, it is instructive to note that continuous methods are sometimes used to solve inherently discrete problems and vice versa. Parallel algorithms are usually employed to speed up processing. In some cases, however, running several searches in parallel is preferable for reasons beyond speed; in particular, when each individual search run has a high probability of getting stuck in a local extremum, multiple parallel runs started from different points are more likely to find the global one.
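The local-extremum argument can be sketched as a parallel multi-start search. The objective function, step size, and iteration budget below are illustrative assumptions, not from the text; each run is an independent local search, and keeping the best result across runs is what makes the parallel strategy robust.

```python
import math
import random
from concurrent.futures import ProcessPoolExecutor

def hill_climb(seed, steps=2000):
    """One independent local search on a multimodal 1-D objective
    (illustrative); it may stall in a local minimum."""
    rng = random.Random(seed)
    f = lambda x: (x - 2) ** 2 + 3 * math.sin(5 * x)  # many local extrema
    x = rng.uniform(-10, 10)
    fx = f(x)
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.1)   # small random step
        if f(cand) < fx:               # accept only improvements
            x, fx = cand, f(cand)
    return fx, x

def parallel_multistart(n_runs=8):
    # Run several searches concurrently from different starting points;
    # the best of all runs is at least as good as any single run.
    with ProcessPoolExecutor() as ex:
        results = ex.map(hill_climb, range(n_runs))
    return min(results)  # (best objective value, best x)
```

A single sequential run with the same step rule can remain trapped near its starting basin, whereas the minimum over several independent runs is far more likely to approximate the global optimum.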
High-performance attribute reduction on graphics processing unit
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2020
Algorithm parallelisation is often used to accelerate the computational process. Generally, there are two ways to design parallel algorithms: task parallelism and data parallelism. Task parallelism decomposes a task into several separate sub-tasks and runs them in parallel. Susmaga (2004) decomposed the tasks of attribute reduction/construction into sub-tasks and proposed a constrained tree-like manner of parallel execution. However, it is only appropriate for small data sets. Data parallelism is much more effective than task parallelism when a data set is large. For example, Deng, Yan, and Wang (2010) cut a decision table into many blocks and proposed an approach for parallel reduction from a series of decision sub-tables. Liang, Wang, Dang, and Qian (2012) regarded a sub-table of a data set as a small granularity; the reducts of each sub-table can be computed separately and finally combined to generate the reduct of the entire table. However, the resulting reduct is approximate and cannot be guaranteed to be the same as that calculated by the serial algorithm.
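The block-wise data-parallel pattern can be sketched as follows. This is not the cited authors' algorithm; it is a hedged illustration (function names and the use of indiscernibility-class counting are assumptions) of cutting a decision table into blocks, computing a partial result per block, and merging the partials. For this particular statistic the merge is exact, unlike the approximate reducts discussed above.

```python
from collections import Counter
from multiprocessing import Pool

def block_classes(block):
    """Partial result for one block: counts of indiscernibility classes,
    i.e. rows that agree on all selected attribute values."""
    return Counter(tuple(row) for row in block)

def parallel_classes(table, n_blocks=4):
    # Data parallelism: cut the decision table into blocks, process each
    # block independently, then merge the per-block counters.
    size = max(1, len(table) // n_blocks)
    blocks = [table[i:i + size] for i in range(0, len(table), size)]
    with Pool(min(n_blocks, len(blocks))) as pool:
        partials = pool.map(block_classes, blocks)
    total = Counter()
    for c in partials:
        total += c          # counters merge exactly, block by block
    return total
```

Statistics that decompose additively over blocks (counts, sums) merge exactly; reducts do not, which is why the block-wise reduct in the text is only approximate.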
The internet of things for smart manufacturing: A review
Published in IISE Transactions, 2019
Hui Yang, Soundar Kumara, Satish T.S. Bukkapatnam, Fugee Tsung
Because serial algorithms often lead to prohibitive computation time in large-scale IoMT, there is an urgent need to scale up the algorithms and use large-scale machine learning in cloud computing to complete the optimization task collaboratively. Parallel algorithms distribute the overall computing task across multiple computers (or processors) for collaborative processing. As shown in Figure 10, each computer carries out part of the computation and works simultaneously with the others, combining results through virtual machine networks and reducing the computing time significantly. Nowadays, the availability of multi-core CPUs, many-core processors (e.g., GPUs), and cloud computing makes parallel algorithms easily implementable with off-the-shelf strategies such as multi-threading and single-instruction-multiple-data.
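The off-the-shelf multi-threading strategy mentioned above can be sketched in a few lines. The doubling transform stands in for a real IoMT workload and the function names are assumptions; the pattern is simply that each worker processes one part of the data simultaneously with the others, and the partial results are combined in order.

```python
from concurrent.futures import ThreadPoolExecutor

def process_part(part):
    """Part of the overall computation handled by one worker
    (a placeholder transform; a real workload would go here)."""
    return [x * 2 for x in part]

def threaded_pipeline(data, n_workers=4):
    # Off-the-shelf multi-threading: split the data, let the workers run
    # simultaneously, then combine the partial results in order.
    size = max(1, len(data) // n_workers)
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        results = ex.map(process_part, parts)   # preserves part order
    out = []
    for r in results:
        out.extend(r)
    return out
```

For CPU-bound Python code, a process pool or a SIMD-style vectorized library would be the faster choice; the thread-pool version is shown because it is the simplest off-the-shelf strategy and the same split/process/combine structure carries over.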