Bioinformatics and Applications in Biotechnology
Published in Ram Chandra, R.C. Sobti, Microbes for Sustainable Development and Bioremediation, 2019
With the enormous amount of data being thrown up by powerful experimental and sequencing techniques, the enabling technologies to analyze them call for very-high-end computational capabilities. A computer cluster is assembled to work as one machine, harnessing the power of each computer synergistically. High-speed networks coupled with software for distributed computing have made it possible to link a large number of computers to work as one machine. As of November 2016, the Chinese Sunway TaihuLight is the world's most powerful supercomputer, reaching 93.015 petaFLOPS (10^15 floating-point operations per second). It consists of 40,960 processors, each containing 256 processing cores, for a total of about 10 million CPU cores across the entire system (Fu et al., 2016). The Blue Gene high-performance computing system was developed by IBM in collaboration with the Department of Energy's Lawrence Livermore National Laboratory in California. It was built specifically to observe the process of protein folding and gene development. Blue Gene/L uses 131,000 processors to perform 280 trillion operations every second. The power of the system can be gauged from the fact that one would have to work nonstop on a calculator for 177,000 years to perform the operations that Blue Gene can do in 1 s (http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/bluegene/).
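As a quick sanity check, the core count and the calculator comparison quoted above can be reproduced with a few lines of arithmetic (a rough sketch; the calculator rate is inferred from the quoted figures rather than stated in the source):

```python
# Back-of-the-envelope check of the figures quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Sunway TaihuLight: 40,960 processors x 256 processing cores each
cores = 40_960 * 256
print(f"TaihuLight cores: {cores:,}")  # 10,485,760 -- "about 10 million"

# Blue Gene/L: 280 trillion operations per second
bluegene_ops_per_s = 280e12

# Calculator rate implied by "177,000 years per Blue Gene/L second"
# (an inferred value, not given explicitly in the source)
implied_rate = bluegene_ops_per_s / (177_000 * SECONDS_PER_YEAR)
print(f"Implied calculator rate: {implied_rate:.0f} operations/s")  # ~50
```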
Introduction
Published in Jerry J. Battista, Introduction to Megavoltage X-Ray Dose Computation Algorithms, 2019
The late Hans Meuer of the University of Mannheim developed a standardized way of assessing the speed of supercomputers, specifically for scientific and engineering applications. On an annual basis (www.top500.org), high-performance computers are ranked in terms of their FLOP rate (FLOP/s – floating-point operations per second) while solving a system of linear equations (i.e. the Linpack benchmark, Rmax parameter). The term floating point refers to arithmetic operations on real numbers with fractional values. In the TOP500 competition, these numbers are computed in double precision with approximately 15 significant digits – ample accuracy for radiotherapy applications! Supercomputers broke through the PFLOP/s barrier in 2008, where P denotes peta or 10^15. The fastest machine as of November 2017 is the Sunway TaihuLight installed at the National Supercomputing Centre in Wuxi, China; it boasts a benchmark performance of almost 100 PFLOP/s. However, supercomputers cost several hundred million dollars (USD) and consume significant electrical power. More down-to-earth examples highlight gains in the computational power of affordable consumer products (https://pages.experts-exchange.com/processing-power-compared/). Early Nintendo game consoles (NES, circa 1983) had a clock speed comparable to that of the Apollo guidance computer that landed humans on the moon for the first time (1969). An Apple iPhone 4 (2010) has the calculation rate of a Cray-2 supercomputer of 1985, i.e. 1.6 GFLOP/s, where G denotes giga or 10^9. An Apple Watch doubles this rate to 3 GFLOP/s. An updated Nintendo Wii console yields 12 GFLOP/s, matched by a Sony SmartWatch 3. We will further explore the specific consequences of such advances on dose computation in Chapter 7.
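For readers who want to see a FLOP rate measured rather than quoted, the sketch below (an illustrative approximation, not the official HPL/Linpack code) times a dense double-precision linear solve with NumPy and counts the leading-order (2/3)n^3 floating-point operations of the underlying LU factorization:

```python
# Minimal, Linpack-style FLOP/s estimate for the local machine.
import time
import numpy as np

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))   # dense random system, double precision
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)         # LU factorization + triangular solves
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n**3        # leading-order operation count for LU
print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s on this machine")
```

A typical laptop reports tens of GFLOP/s on this test, which puts the consumer-device figures quoted above in perspective.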
Complex Systems
Published in Pier Luigi Gentili, Untangling Complex Systems, 2018
These problems are intractable when N is large. Examples are the Schrödinger equation, for which f(N) = O(2^N) holds (Bolotin 2014), and the TSP, for which f(N) = O(N!) ≈ O((N/e)^N). A few calculations suffice to understand why exponential problems cannot be solved accurately and in a reasonable time, even with the best supercomputers in the world at our disposal. Let us consider the Schrödinger equation. For ten interacting particles, the maximum number of computational steps needed to determine the energy of the system is 2^10 = 1024; if N = 20, the number of steps is 2^20 ≈ 1×10^6. According to the TOP500 list, updated in November 2017, the fastest supercomputer in the world is the Chinese Sunway TaihuLight, which reaches the astonishing computational rate of 93 PFlop/s. With TaihuLight at our disposal, we would need just about ten femtoseconds and ≈10 picoseconds to solve the Schrödinger equation for a system with 10 and 20 particles, respectively. But if our system consists of 500 particles, the number of computational steps becomes so huge, 2^500 ≈ 3.3×10^150, that even TaihuLight would require an unreasonable amount of time to find the exact solution: ≈1×10^126 years. This amount of time is much, much longer than the age of the Universe, which has been estimated to be 14×10^9 years.
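The quoted times follow directly from dividing the step counts by TaihuLight's benchmark rate; a short script makes the scaling explicit (assuming, as the text does, one floating-point operation per computational step):

```python
# Time to execute 2^N steps at TaihuLight's quoted 93 PFLOP/s.
RATE = 93e15                      # floating-point operations per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for n in (10, 20, 500):
    seconds = 2**n / RATE
    if seconds < 1:
        print(f"N = {n:3d}: {seconds:.1e} s")          # fs / ps regime
    else:
        print(f"N = {n:3d}: {seconds / SECONDS_PER_YEAR:.1e} years")
# N =  10: 1.1e-14 s   (~10 femtoseconds)
# N =  20: 1.1e-11 s   (~10 picoseconds)
# N = 500: 1.1e+126 years
```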
HMMN: a cost-effective derivative of midimew-connected mesh network
Published in International Journal of Computers and Applications, 2021
The demand for computational power is witnessing a manifold increase on a day-to-day basis, providing hope for solving grand challenge problems. It has now become almost certain that a solution to a grand challenge problem can be expected within a reasonable time, meaning the time taken to arrive at a solution will be as short as possible, not weeks or months. Grand challenge problems include the following areas: scientific research, such as the origins of matter and the universe; scientific discovery, such as atomic-level simulations of new materials and the analysis of supernovae in astronomy; energy research, such as the development of new sources of energy and the complexities of a thermonuclear warhead explosion; improved health care, such as the use of artificial intelligence and machine learning for a better understanding of human diseases and the development of new medicines; enhancing national security by developing strategies for disaster prevention and mitigation and for nuclear weapons safety and security; and many more [1]. Oak Ridge National Laboratory launched the fastest computer, capable of performing exascale computations [2], which recently superseded the Sunway TaihuLight [3]. However, whenever a supercomputer or a massively parallel computer (MPC) is invented, scientists and engineers yearn for more and want an even more powerful computer in the near future. To construct an MPC capable of going beyond exascale computing, we have to develop an MPC system consisting of millions or tens of millions of nodes [4], as the arithmetic sketched below suggests.
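The node counts cited from [4] can be motivated with simple division; the per-node rates below are assumptions chosen only for illustration, not figures from the cited work:

```python
# Why beyond-exascale machines imply millions of nodes:
# nodes needed = target rate / sustained per-node rate.
target_flops = 1e18                        # one exaFLOP/s
for node_tflops in (0.1, 1.0, 10.0):       # hypothetical per-node rates
    nodes = target_flops / (node_tflops * 1e12)
    print(f"{node_tflops:4.1f} TFLOP/s per node -> {nodes:12,.0f} nodes")
# At sub-TFLOP/s nodes, an exaFLOP/s machine needs millions to
# tens of millions of nodes, consistent with the claim in [4].
```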