Distributed and Parallel Computing
Published in Sunilkumar Manvi, Gopal K. Shyam, Cloud Computing, 2021
Sunilkumar Manvi, Gopal K. Shyam
Parallel computing is closely related to concurrent computing: they are frequently used together, and often confused, though the two are distinct: it is possible to have parallelism without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as multi-tasking by time-sharing on a single-core CPU). In parallel computing, a computational task is typically broken down into several, often many, very similar subtasks that can be processed independently and whose results are combined afterward, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution. Figure 2.6 shows a parallel system in which each processor has direct access to a shared memory.
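As an illustration of the decompose, process-independently, and combine pattern described above, here is a minimal sketch in Python; the task, chunk size, and function names are illustrative and not taken from the chapter.

```python
# Minimal sketch of the pattern: break a task into many similar subtasks,
# process them independently, and combine the results afterward.
from multiprocessing import Pool

def partial_sum(chunk):
    """Subtask: each chunk is processed independently of the others."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4, chunk_size=250_000):
    # Break the computation into several very similar subtasks.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=n_workers) as pool:
        partials = pool.map(partial_sum, chunks)   # subtasks run in parallel
    return sum(partials)                           # results combined upon completion

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(1_000_000)))
```

Each worker process computes its partial result without communicating with the others, which is what distinguishes this pattern from the communicating processes typical of concurrent or distributed computing.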
Introduction to Artificial Intelligence and Soft Computing
Published in Konar Amit, Artificial Intelligence and Soft Computing, 2018
During the developmental phase of AI, machines used for conventional programming were also used for AI programming. However, since AI programs deal more with relational operators than with number crunching, the need for special architectures for the execution of AI programs was felt. Gradually, it was discovered that, owing to the non-determinism in AI problems, they support a high degree of concurrent computing. The architecture of an AI machine should therefore allow symbolic computation in a concurrent environment. Further, to minimize possible corruption of program resources (say, variables or procedures), concurrent computation may be realized in a fine-grain distributed environment. Currently, PROLOG and LISP machines are active areas of AI research, where the emphasis is on incorporating the above issues at the hardware and software levels. Most of these architectures are designed for research laboratories and are not available in the open commercial market to date. We hope for a better future for AI, when these special architectures will find extensive commercial exploitation.
Parallel Computing
Published in Sanjay Saxena, Sudip Paul, High-Performance Medical Image Processing, 2022
Biswajit Jena, Pulkit Thakar, Gopal Krishna Nayak, Sanjay Saxena
Clearly, “Parallel Computing” comprises two words: Parallel and Computing [1]. The term parallel here means simultaneous, and the term computing means executing a program. This gives us an idea of what parallel computing means. Parallel computing is now one of the fields being researched most vigorously and enthusiastically. There is a fine line between parallel computing and concurrent computing. Concurrent computing refers to work that is “in progress at the same time,” whereas parallel computing refers to programs that actually execute simultaneously. The difference is that concurrent computing is usually done on uniprocessor machines, while parallel computing is used on multi-processor machines, hence allowing many processes to run simultaneously.
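The uniprocessor-versus-multiprocessor distinction can be made concrete with a small sketch, assuming CPython, where threads time-share a single interpreter: the threaded half is merely concurrent (tasks in progress at the same time, interleaved), while the process-based half may run subtasks truly simultaneously on separate cores. The function and label names are illustrative.

```python
# Concurrency via time-sharing threads vs. parallelism via separate processes.
import threading
import multiprocessing

def count(label, n=3):
    for i in range(n):
        print(f"{label}: step {i}")   # output from the two tasks may interleave

if __name__ == "__main__":
    # Concurrent: two threads time-share the same interpreter (one CPU core at a time).
    threads = [threading.Thread(target=count, args=(f"thread-{k}",)) for k in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Parallel: two processes can execute on different cores at the same instant.
    procs = [multiprocessing.Process(target=count, args=(f"process-{k}",)) for k in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```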
Sleptsov Net Computing resolves problems of modern supercomputing revealed by Jack Dongarra in his Turing Award talk in November 2022
Published in International Journal of Parallel, Emergent and Distributed Systems, 2023
The paper also gives an impartial historical view of the development of parallel software schemata. Parallel process schemata first appeared in the early works of Frank and Lillian Gilbreth dated 1921 and were standardized in 1947. In 1958, Gill started using bipartite directed graphs to specify parallel computations. In 1962, Petri further developed the model into place-transition nets, adding tokens and the transition firing rule. Agerwala and Hack extended the model with inhibitor arcs in the 1970s. In the 1980s, Salwicki and Sleptsov generated the ideas of maximal parallel and multiple transition firing, further developed and published in the works of Burkhard and Zaitsev. Turing-complete place-transition nets represent a perfect graphical language for concurrent computing, though they run exponentially slower than a Turing machine. Finally, the Sleptsov net mends this flaw, running fast and opening prospects for the hardware implementation of a homogeneous massively parallel supercomputer. This prospective direction is implemented in prototypes awaiting investment for full-scale implementation. Let us apply at least 10% of the wasted 99.2% of investment in modern USA supercomputers (the number taken from Jack Dongarra's Turing Award talk) to the SNC implementation project to obtain a new record of real-life efficiency of computations.
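For readers unfamiliar with the firing rule mentioned above, the following minimal sketch illustrates the classical place-transition firing rule and, for contrast, a multiple-firing mode in the spirit of a Sleptsov net, where a transition fires in as many copies at once as the marking allows. The net, weights, and names are illustrative and not taken from the paper.

```python
# Classical single firing vs. multiple firing of one transition in a place-transition net.

def enabled_degree(marking, pre):
    """How many copies of the transition the current marking can fire at once."""
    return min(marking.get(p, 0) // w for p, w in pre.items())

def fire(marking, pre, post, multiple=False):
    """Fire a transition once (classical rule) or maximally (multiple firing)."""
    k = enabled_degree(marking, pre)
    if k == 0:
        return marking                      # transition not enabled, marking unchanged
    k = k if multiple else 1
    new = dict(marking)
    for p, w in pre.items():
        new[p] -= k * w                     # consume tokens from input places
    for p, w in post.items():
        new[p] = new.get(p, 0) + k * w      # produce tokens in output places
    return new

# Example: transition t moves tokens from place "a" to place "b".
m0 = {"a": 5, "b": 0}
print(fire(m0, pre={"a": 1}, post={"b": 1}))                 # {'a': 4, 'b': 1}
print(fire(m0, pre={"a": 1}, post={"b": 1}, multiple=True))  # {'a': 0, 'b': 5}
```

The multiple-firing mode compresses what would take several sequential steps under the classical rule into a single step, which is the intuition behind the speed-up claimed for Sleptsov nets.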
(Smart CPS) Integrated application in intelligent production and logistics management: technical architectures concepts and business model analyses for the customised facial masks manufacturing
Published in International Journal of Computer Integrated Manufacturing, 2019
Chang Liu, Yunzhu Zhou, Yutong Cen, Dongtao Lin
Jazdi (2014) claimed that under the Fourth Industrial Revolution in Cyber-Physical Systems (CPS) there would be new business models, work processes and development methods that were unthinkable at that time. Navickas, Kuznetsova, and Gruzauskas (2017) believed that the development of the concept of the Internet of Things (IoT) and big data has improved the productivity of various businesses and affected the outlook for new business models. Lee, Bagheri, and Kao (2015) claimed that CPS information is closely monitored between physical factories and the cyber computing space from all related angles, and they proposed a unified, five-level architecture as a guideline for the implementation of CPS. Furthermore, networked machines will be able to perform more effectively, collaboratively and resiliently through the use of advanced information analytics. Mosterman and Zander (2016) presented a system that includes a physical environment, a wireless network, concurrent computing resources and computing functions such as service arbitration, various forms of control, and processing of streaming video. Wan et al. (2016) introduced mobile services and cloud computing technology into an intelligent manufacturing environment based on the CPS concept. They designed a customisation manufacturing system for individual demands and flexible production mechanisms. Pacaux-Lemoine et al. (2017) presented an intelligent manufacturing system based on artificial self-organizing systems (ASO).
MarineMAS: A multi-agent framework to aid design, modelling, and evaluation of autonomous shipping systems
Published in Journal of International Maritime Safety, Environmental Affairs, and Shipping, 2019
Zhe Xiao, Xiuju Fu, Liye Zhang, Wanbing Zhang, Manu Agarwal, Rick Siow Mong Goh
Besides, concurrent computing naturally fits MAS for modelling multiple autonomous agents’ behaviour and their interactions. Nowadays, most mainstream servers are multi-core and multi-threaded, so tasks are run in concurrent mode to maximize system capacity. In the meantime, some emerging technologies have been proposed to make efficient use of the computing resources of multi-core servers and to achieve higher throughput and lower overhead and latency for highly concurrent system implementations. For instance, Akka is an actor-model-based framework that facilitates the design and development of highly concurrent applications. Compared with thread-based concurrency, Akka has a much more lightweight resource unit called an “actor” and well-designed message communication between actors. Thus, each individual agent can be attached to a launched actor for its corresponding processing tasks and operations. Communication between different agents (station–vessel and vessel–vessel) is implemented by sending/receiving messages, while method calls and direct access are applied for functionally integral entities between vessel, radar, and navigator, as well as between shore station agent, predictor, and planner.
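The actor pattern described here can be sketched in a language-agnostic way as follows; Akka itself runs on the JVM, and the Actor and VesselAgent classes and message fields below are hypothetical illustrations, not the MarineMAS or Akka API. Each agent owns a lightweight actor with a mailbox and a single processing loop, and agents interact only by exchanging messages.

```python
# Minimal mailbox-based actor sketch: asynchronous message passing between agents.
import queue
import threading

class Actor:
    """One thread per actor; messages are processed sequentially from a mailbox."""
    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)          # asynchronous, non-blocking delivery

    def stop(self):
        self.mailbox.put(None)         # sentinel: finish queued messages, then exit
        self._thread.join()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                break
            self.receive(msg)          # subclasses define how a message is handled

class VesselAgent(Actor):
    def receive(self, msg):
        print(f"{self.name} handled {msg['type']} from {msg['from']}")

if __name__ == "__main__":
    vessel = VesselAgent("vessel-001")
    station = VesselAgent("shore-station")
    # Station-vessel interaction goes only through messages, never direct method calls.
    vessel.send({"from": "shore-station", "type": "route-advice"})
    station.send({"from": "vessel-001", "type": "position-report"})
    vessel.stop()
    station.stop()
```

Because each actor processes its mailbox sequentially, shared-state locking between agents is avoided, which is the main design advantage the paragraph attributes to actor-based over thread-based concurrency.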