The Ethernet Advantage in Networking
Published in James Aweya, Designing Switch/Routers, 2023
To some, Grid computing is a general-purpose distributed computing model in which heterogeneous systems on an intranet or the Internet are pooled together to offer services (such as computational services, data services, or other types of service) that can be accessed and utilized by other systems participating in the Grid. In this model, the group of heterogeneous computers forming the grid donates spare cycles to accomplish a computing task. The Global Grid Forum (GGF) and the Enterprise Grid Alliance (EGA) view Grid computing from different perspectives:

GGF: The Grid concept from the GGF is modeled on the idea that ubiquitous shared computing services can turn the entire Internet into a single large virtual computer. The GGF middleware is a standards-based initiative that promotes Internet-wide resource sharing and collaborative problem solving by multi-institutional “virtual organizations”.

EGA: The EGA approaches Grid computing from the perspective of pooling the computing resources within the enterprise network for sharing and collaborative problem solving. The EGA focuses on business applications of the Grid rather than technical or scientific supercomputing applications. Its particular interest is in static grids located within the confines of the enterprise data center, with the goal of building as much as possible on the enterprise's existing infrastructure of applications, servers, SANs, and networks.

The US NSF-funded TeraGrid project is another Grid computing initiative; it allows distributed computers, data storage systems, networks, and other resources to be harnessed and used as if they were a single massive system, regardless of physical location, thereby creating “virtual supercomputers”.
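As a minimal sketch of the pooled spare-cycles model described above (all names, thread counts, and timings are illustrative assumptions, not taken from GGF/EGA middleware or TeraGrid), independent work units can be placed on a shared queue from which a heterogeneous set of simulated donor machines pull and execute them:

```python
import queue
import threading
import time

# Hypothetical illustration of the grid model: independent work units sit on
# a shared queue, and heterogeneous "donor" machines (simulated here as
# threads running at different speeds) pull and execute them.

tasks = queue.Queue()
results = {}
results_lock = threading.Lock()

def compute(task_id: int) -> int:
    """Stand-in for a CPU-bound work unit (here: a trivial sum)."""
    return sum(range(task_id * 1000))

def donor(name: str, delay: float) -> None:
    """A participating machine donating spare cycles until no work remains."""
    while True:
        try:
            task_id = tasks.get_nowait()
        except queue.Empty:
            return  # queue drained; this donor leaves the pool
        time.sleep(delay)  # simulate slower or faster hardware
        value = compute(task_id)
        with results_lock:
            results[task_id] = value
        tasks.task_done()

# Enqueue 20 independent work units, then start a heterogeneous pool.
for i in range(20):
    tasks.put(i)

workers = [
    threading.Thread(target=donor, args=(f"node-{n}", 0.01 * (n + 1)))
    for n in range(4)
]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(f"completed {len(results)} of 20 tasks")
```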
A geospatial hybrid cloud platform based on multi-sourced computing and model resources for geosciences
Published in International Journal of Digital Earth, 2018
Qunying Huang, Jing Li, Zhenlong Li
Traditionally, high-performance computing clusters and large-scale grid computing infrastructures, such as TeraGrid (Beckman 2005) and the Open Science Grid (Pordes et al. 2007), have been widely used to support geoscience applications and models with demanding computational requirements (Bernholdt et al. 2005; Fernández-Quiruelas et al. 2011; Yang, Wu, et al. 2011). However, these infrastructures require dedicated hardware and software, demand large investment and long-term maintenance, and take a long time to build. Alternatively, geoscientists can run applications (e.g. weather forecasting models; Massey et al. 2015) on a loosely coupled pool of computing resources contributed by citizens, a paradigm known as citizen, or volunteer, computing (Anderson and Fedak 2006). While such an infrastructure offers a potential solution for high-throughput tasks, its computing resources are unreliable because citizens may terminate their assigned tasks at any time. It is also challenging to collect Big Data (e.g. global climate simulation output) from the citizens, owing to the limited bandwidth between the public clients and the centralized servers responsible for storing and managing the model output (Li et al. 2017). Volunteer computing is therefore unsuitable for time-critical tasks that must be completed within a tight time frame.
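To make the reliability problem concrete, the following is a minimal sketch, assuming a simple lease-and-reassign coordinator (the names, dropout probability, and timings are illustrative, not drawn from the paper): a task whose assigned volunteer never reports back is re-issued after its lease expires, which inflates total completion time and makes deadlines hard to guarantee.

```python
import random
import time

# Hypothetical illustration of why volunteer computing struggles with
# time-critical work: the coordinator cannot trust that an assigned task
# will ever come back, so it re-issues tasks whose "lease" has expired.

random.seed(42)
LEASE_SECONDS = 0.05        # how long to wait before assuming a dropout
DROPOUT_PROBABILITY = 0.4   # chance a volunteer silently abandons a task

pending = set(range(10))    # task ids still needing a result
leases = {}                 # task_id -> deadline of the current assignment
completed = {}

start = time.monotonic()
while pending:
    now = time.monotonic()
    for task_id in sorted(pending):
        deadline = leases.get(task_id)
        if deadline is None or now > deadline:
            # (Re)assign the task to some volunteer with a fresh lease.
            leases[task_id] = now + LEASE_SECONDS
            if random.random() > DROPOUT_PROBABILITY:
                # The volunteer finishes and reports a result.
                completed[task_id] = task_id ** 2  # stand-in computation
                pending.discard(task_id)
    time.sleep(0.01)  # coordinator polling interval

print(f"all {len(completed)} tasks done after "
      f"{time.monotonic() - start:.2f}s despite dropouts")
```

Even in this toy setting, a task abandoned by several volunteers in a row takes multiple lease periods to land, which is precisely why a hard completion deadline is difficult to guarantee on volunteered resources.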