Distributed and Parallel Computing
Published in Sunilkumar Manvi, Gopal K. Shyam, Cloud Computing, 2021
Task parallelism is the characteristic of a parallel program that "entirely different calculations can be performed on either the same or different sets of data." This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism involves decomposing a task into sub-tasks and allocating each sub-task to a processor for execution. The processors then execute these sub-tasks simultaneously and often cooperatively, communicating with one another as needed. Figure 2.8 shows task-level parallelism, in which a program or computing solution is organized into a set of processes/tasks/threads for simultaneous execution. Various nodes in the network participate, and the result of the task is obtained by exchanging messages.
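As a minimal sketch of this idea (assuming a shared-memory setting; the function names and two-worker pool below are hypothetical illustrations, not from the chapter), two entirely different calculations can be submitted to separate workers and executed concurrently on the same data set:

```python
from concurrent.futures import ThreadPoolExecutor

# Two entirely different calculations (hypothetical sub-tasks).
def compute_sum(data):
    return sum(data)

def compute_max(data):
    return max(data)

data = list(range(1_000_000))

# Task parallelism: each worker runs a *different* sub-task,
# here on the same set of data, at the same time.
with ThreadPoolExecutor(max_workers=2) as pool:
    sum_future = pool.submit(compute_sum, data)
    max_future = pool.submit(compute_max, data)
    total, largest = sum_future.result(), max_future.result()

print(total, largest)
```

The key point is that the decomposition is by *computation*, not by data: each worker could just as well have received a different data set.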
High-performance attribute reduction on graphics processing unit
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2020
Algorithm parallelisation is commonly used to accelerate the computational process. Generally, there are two ways to design parallel algorithms: task parallelism and data parallelism. Task parallelism decomposes a task into several separate sub-tasks and runs them in parallel. Susmaga (2004) decomposed the tasks of attribute reduction/construction into sub-tasks and proposed a constrained tree-like manner for parallel execution. However, this approach is only appropriate for small data sets. Data parallelism is much more effective than task parallelism when a data set is large. For example, Deng, Yan, and Wang (2010) cut a decision table into many blocks and proposed an approach for parallel reduction from a series of decision sub-tables. Liang, Wang, Dang, and Qian (2012) regarded a sub-table of a data set as a small granularity. The reducts of each sub-table can be estimated separately and then combined to generate the reduct of the entire data set. However, the resulting reduct is approximate and cannot be guaranteed to match the one calculated by the serial algorithm.
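A minimal sketch of this cut-and-merge style of data parallelism (the block partitioning and the `reduce_block` stand-in below are hypothetical and only illustrate the pattern, not the cited reduction algorithms):

```python
from multiprocessing import Pool

# Stand-in for per-sub-table processing: the *same* calculation
# is applied to every block of the data set.
def reduce_block(block):
    return sum(block)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_blocks = 4
    size = len(data) // n_blocks
    blocks = [data[i * size:(i + 1) * size] for i in range(n_blocks)]

    # Data parallelism: identical work on each block, run in parallel,
    # then partial results are combined into the final answer.
    with Pool(n_blocks) as pool:
        partials = pool.map(reduce_block, blocks)

    print(sum(partials))
```

In the attribute-reduction setting, the combination step is where the approximation arises: merging per-block results is not guaranteed to reproduce what a serial pass over the whole table would compute.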