Published in Carlos Alberto Vélez Quintero, Optimization of Urban Wastewater Systems using Model Based Design and Control, 2020
Following the development of multi-core processors and the ability to connect computers in clusters or grids, parallel computing techniques have emerged as an alternative for speeding up the computation of optimization problems (Martins et al. 2001). However, parallelism requires new algorithms, specifically designed to run simultaneously on different processors, together with parallel computing resources suitable for running them. Parallel computing has traditionally been done on expensive mainframe computers that require skilled support personnel. More recently, clusters of standard computers connected by Ethernet (Beowulf clusters) have become widely used, but a dedicated cluster requires significant time and effort to construct and maintain. A variation on the dedicated cluster is the Network of Workstations (NOW), which operates part time as a cluster. A NOW cluster depends on the availability of idle workstations, so the speedup gained from parallelization can be reduced when jobs must migrate because a machine is no longer available.
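To illustrate the kind of algorithmic restructuring that parallelism demands, the following is a minimal C sketch (not drawn from the source) of evaluating candidate solutions of an optimization problem in parallel with MPI, as might run on a Beowulf or NOW cluster. The evaluate function and the one-candidate-per-process decomposition are hypothetical placeholders.

```c
/* Minimal sketch: parallel evaluation of optimization candidates with MPI.
 * The objective function and decomposition are hypothetical placeholders. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical objective: each rank evaluates one candidate solution. */
static double evaluate(double x) {
    return (x - 3.0) * (x - 3.0);  /* placeholder cost function */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process evaluates a different candidate concurrently. */
    double candidate = (double)rank;     /* assumed decomposition */
    double local_cost = evaluate(candidate);

    /* Reduce to find the best (minimum) cost across all processes. */
    double best_cost;
    MPI_Reduce(&local_cost, &best_cost, 1, MPI_DOUBLE,
               MPI_MIN, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("best cost over %d candidates: %f\n", size, best_cost);

    MPI_Finalize();
    return 0;
}
```

Launched with, e.g., mpirun -np 8 ./a.out, each process evaluates its candidate simultaneously; this per-process decomposition is exactly the kind of redesign the paragraph above describes.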
Parallel and high-performance systems
Published in Joseph D. Dumas, Computer Architecture, 2016
Multicomputer systems date back to the early to mid-1980s. The Caltech Cosmic Cube is generally acknowledged to have been the first multicomputer, and it significantly influenced many of the others that came after it. Other historical examples of multicomputers include Mississippi State University’s Mapped Array Differential Equation Machine (MADEM), nCUBE Company’s nCUBE, the Intel iPSC and Paragon, the Ametek 2010, and the Parsys SuperNode 1000. During the fifth generation, the Beowulf class of cluster computers (introduced by Donald Becker and Thomas Sterling in 1994) quickly became the most widely used type of multicomputer. Beowulf clusters are parallel machines constructed of commodity computers (often inexpensive PCs) and inexpensive network hardware; to further minimize cost, they typically run the open-source Linux operating system. Although each individual computer in such a cluster may exhibit only moderate performance, the low cost of each machine (and the scalability of the connection mechanism) means that a highly parallel, high-performance system can be constructed for much less than the cost of a conventional supercomputer.
Next-Generation Technologies to Enable Sensor Networksa
Published in Syed Ijlal Ali Shah, Mohammad Ilyas, Hussein T. Mouftah, Pervasive Communications Handbook, 2017
Joel I. Goodman, Albert I. Reuther, David R. Martinez
Simply put, RTOSs give priority to computational tasks. They usually do not offer as many operating-system features (virtual memory, threaded processing, etc.), because the interrupt processing behind such features can disrupt time-critical work [22]. However, an RTOS can ensure that real-time-critical tasks are guaranteed to meet streamed-processing deadlines. An RTOS need not run only on typical embedded processors; it can also be deployed on Intel and AMD Pentium-class or Motorola G-series processor systems, including Beowulf clusters of standard desktop personal computers and commodity servers. This is an important benefit, providing a wide range of candidate heterogeneous computing resources.
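To make prioritized, deadline-oriented scheduling concrete, here is a small C sketch (our illustration, not from the source) that requests the POSIX SCHED_FIFO fixed-priority policy on a Linux node such as one in a Beowulf cluster. This only approximates RTOS behavior; a general-purpose kernel does not provide the hard guarantees described above.

```c
/* Sketch: requesting real-time priority for a task on a POSIX system
 * (e.g., a Linux node in a Beowulf cluster). This approximates, but does
 * not fully provide, the deadline guarantees of a true RTOS.
 * Requires root or the CAP_SYS_NICE capability on Linux. */
#include <sched.h>
#include <stdio.h>

int main(void) {
    struct sched_param sp;
    sp.sched_priority = 50;  /* mid-range real-time priority (1..99 on Linux) */

    /* SCHED_FIFO: fixed-priority scheduling; the task runs until it blocks
     * or yields, preempting all normal (SCHED_OTHER) tasks. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    /* ... time-critical streamed-processing loop would go here ... */
    puts("running with SCHED_FIFO real-time priority");
    return 0;
}
```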
Review on algorithms of dealing with depressions in grid DEM
Published in Annals of GIS, 2019
Yi-Jie Wang, Cheng-Zhi Qin, A-Xing Zhu
Parallel computing based on various parallel computing platforms (e.g. the graphics processing unit (GPU), the symmetrical multiprocessor (SMP), and the Beowulf cluster) has been widely adopted not only to speed up digital terrain analysis algorithms (e.g. Tesfa et al. 2011; Qin and Zhan 2012; Qin et al. 2017) but also to make those algorithms applicable to massive DEMs that would overflow the limited memory of a personal computer. To exploit the power of a specific parallel computing platform, parallel algorithms should be designed and implemented with the parallel programming model available on that platform (e.g. Wallis et al. 2009; Qin and Zhan 2012; Zhou, Sun, and Fu 2016). Examples of parallel programming models include Open Multi-Processing (OpenMP), a widely used multithreading model for SMP devices (such as the multiprocessors in standard personal computers); the compute unified device architecture (CUDA) for GPUs; and the message passing interface (MPI) for distributed-memory parallel machines (such as the Beowulf cluster) (Qin et al. 2014a).
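As a minimal illustration of the OpenMP model named above, the following C sketch (ours, not from the cited papers) parallelizes a per-cell operation over a DEM grid across the threads of a shared-memory (SMP) machine. The grid dimensions and the operation itself (a unit conversion) are hypothetical stand-ins for a real terrain analysis step.

```c
/* Minimal OpenMP sketch: a per-cell operation over a DEM grid, parallelized
 * across threads on a shared-memory (SMP) machine.
 * Compile with: gcc -fopenmp dem_demo.c -o dem_demo */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define ROWS 1000
#define COLS 1000

int main(void) {
    double *dem = malloc(sizeof(double) * ROWS * COLS);
    for (long i = 0; i < (long)ROWS * COLS; i++)
        dem[i] = (double)(i % 100);      /* synthetic elevations */

    /* Rows are distributed among threads; cells are independent,
     * so no synchronization is needed inside the loop. */
    #pragma omp parallel for
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            dem[r * COLS + c] *= 0.3048; /* e.g., feet -> metres */

    printf("processed %d x %d cells with up to %d threads\n",
           ROWS, COLS, omp_get_max_threads());
    free(dem);
    return 0;
}
```

A CUDA version would map cells to GPU threads instead, and an MPI version would partition the grid across the distributed memories of a Beowulf cluster's nodes; the choice of model follows the platform, as the excerpt notes.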