High-Performance Computing and Its Requirements in Deep Learning
Published in Sanjay Saxena, Sudip Paul, High-Performance Medical Image Processing, 2022
Biswajit Jena, Gopal Krishna Nayak, Sanjay Saxena
High-performance computing relies on supercomputers rather than general-purpose computers, exploiting the distinctive features, properties, and characteristics of supercomputer architecture. The performance of general-purpose computers is measured in MIPS, which stands for millions of instructions per second. With supercomputers, performance is instead measured in FLOPS, which stands for floating-point operations per second. A supercomputer is defined as one that can far outperform general-purpose computers in terms of speed, reliability, efficiency, and problem-solving capacity. Some supercomputers can perform up to around a hundred quadrillion FLOPS, and the majority of the fastest machines run Linux as their operating system. A high-performance computing system does not necessarily contain any components that you would not find in a general-purpose computer. The difference is mainly in quantity, as HPCs are composed of computing clusters configured to work together. Whereas a general-purpose computer typically contains a single processor, a supercomputer contains several processors, each comprising anywhere between two and four cores. Each individual computer within an HPC cluster is referred to as a node, so a supercomputer with 64 nodes may have up to 256 cores, all working in tandem. When large numbers of individual nodes work efficiently together, they can often solve problems that would be too complex for a single computer to solve by itself.
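The node-and-core arithmetic above can be sketched in a short back-of-the-envelope calculation. This is only an illustration: the cluster size matches the 64-node, 4-cores-per-node example in the text, but the clock rate and FLOPs-per-cycle figures are assumed, not taken from any real machine.

```python
# Illustrative HPC cluster sizing; clock_hz and flops_per_cycle are assumptions.

def cluster_cores(nodes: int, cores_per_node: int) -> int:
    """Total cores available when every node contributes fully."""
    return nodes * cores_per_node

def peak_flops(total_cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak FLOPS: cores x clock rate x FLOPs issued per cycle."""
    return total_cores * clock_hz * flops_per_cycle

cores = cluster_cores(nodes=64, cores_per_node=4)  # 256 cores, as in the text
peak = peak_flops(cores, clock_hz=2.5e9, flops_per_cycle=16)
print(cores)          # 256
print(f"{peak:.3e}")  # 1.024e+13, i.e. ~10 TFLOPS for these assumed figures
```

Note that this is a *peak* figure; as the discussion of FLOPS elsewhere on this page points out, sustained rates on real applications are usually far lower.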
Managing End-User Development
Published in Steven F. Blanding, Enterprise Operations Management, 2020
John Windsor, Leon A. Kappelman, Carl Stephen Guynes
Client/server systems allow organizations to put applications on less expensive workstations, using them instead of expensive mainframe and midrange systems as clients or servers. Existing mainframes can be used as enterprise data management and storage systems, while most daily activity is moved to lower-cost networked platforms. Server performance is also cheaper than equivalent mainframe performance. Microcomputer MIPS (i.e., millions of instructions per second, a performance measurement) can provide a cost advantage of several hundred to one, compared with mainframe MIPS. Another cost advantage is that client/server database management systems are less expensive than mainframe DBMSs. Moreover, the client/server model provides faster performance for CPU-intensive applications, because much of the processing is done locally, and the applications do not have to compete for mainframe central processing unit time.
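The "several hundred to one" cost advantage is simply a ratio of cost per MIPS on each platform. A minimal sketch, with entirely made-up prices and MIPS ratings chosen only to land in that range:

```python
# Hypothetical cost-per-MIPS comparison; all figures are invented for illustration.

def cost_per_mips(platform_cost: float, mips: float) -> float:
    """Dollars of platform cost per delivered MIPS."""
    return platform_cost / mips

mainframe = cost_per_mips(platform_cost=2_000_000, mips=200)  # $10,000 per MIPS
micro = cost_per_mips(platform_cost=2_500, mips=100)          # $25 per MIPS
print(round(mainframe / micro))  # 400 -- a several-hundred-to-one ratio
```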
Introduction to computer architecture
Published in Joseph D. Dumas, Computer Architecture, 2016
MIPS is normally a measure of millions of integer instructions that a processor can execute per second. Some programs, particularly scientific and engineering applications, place a much heavier premium on the ability to perform computations with real numbers, which are normally represented on computers in a floating-point format. If the system is to be used to run that type of code, it is much more appropriate to compare CPU (or floating-point unit [FPU]) performance by measuring millions of floating-point operations per second (MFLOPS) rather than MIPS. (Higher-performance systems may be rated in gigaflops [GFLOPS], billions of floating-point operations per second, or teraflops [TFLOPS], trillions of floating-point operations per second.) The same caveats mentioned with regard to MIPS measurements apply here. Beware of peak (M/G/T) FLOPS claims; they seldom reflect the machine’s performance on any practical application, let alone the one you are interested in. Vector- or array-oriented machines may only be able to approach their theoretical maximum FLOPS rate when running highly vectorized code; they may perform orders of magnitude worse on scalar floating-point operations. The best comparison, if it is feasible, is to see what FLOPS rate can be sustained on the actual application(s) of interest, or at least some code with a similar mix of operations.
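The recommendation above, measuring the sustained rate on representative code rather than trusting peak claims, can be sketched as a simple timed loop. This is a methodological illustration only: it times a known count of floating-point operations and divides by elapsed time, and in interpreted Python the result mostly reflects interpreter overhead, which is itself a reminder of how far sustained rates can fall below peak.

```python
import time

def measured_flops(n: int = 1_000_000) -> float:
    """Sustained-rate sketch: time n iterations of a multiply-add loop."""
    x = 1.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.000001 + 0.000001  # one multiply + one add per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # 2 floating-point operations per iteration

rate = measured_flops()
print(f"sustained: {rate:.2e} FLOPS")
```

In practice one would time the actual application, or a kernel with a similar operation mix, exactly as the text advises.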
DALMIG: Matching-Based Data Center Allocation and Dual Live VM Migration in Cluster-Based Federated Cloud
Published in IETE Journal of Research, 2023
Jeny Varghese, Jagannatha Sreenivasaiah
Definition 1: MIPS - Million Instructions Per Second (MIPS) is used to measure the speed of the processor residing in a virtual machine. VMs with the same MIPS rating are grouped together, so it is important to cluster the VMs within the CSP. This metric is beneficial in terms of reducing the task processing time.
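Definition 1 can be sketched as grouping VMs by identical MIPS rating, with task processing time estimated as instruction count divided by MIPS. The VM names and ratings below are hypothetical, and this is only one plausible reading of the clustering step, not the paper's algorithm:

```python
from collections import defaultdict

def cluster_by_mips(vms: dict[str, float]) -> dict[float, list[str]]:
    """Group VM names by identical MIPS rating."""
    clusters: defaultdict[float, list[str]] = defaultdict(list)
    for name, mips in vms.items():
        clusters[mips].append(name)
    return dict(clusters)

def processing_time(task_million_instructions: float, mips: float) -> float:
    """Seconds to finish a task of the given length on a VM of the given MIPS."""
    return task_million_instructions / mips

vms = {"vm1": 500.0, "vm2": 1000.0, "vm3": 500.0, "vm4": 2000.0}
print(cluster_by_mips(vms))            # vm1 and vm3 land in the same cluster
print(processing_time(10_000, 500.0))  # 20.0 seconds on a 500-MIPS VM
```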
Improving flexibility in cloud computing using optimal multipurpose particle swarm algorithm with auction rules
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
Seyed Ebrahim Dashti, Mohammad Zolghadri, Fatemeh Moayedi
There are four types of virtual machines, whose specifications are shown in Table 3. The processor frequency of each server is mapped to MIPS. MIPS is a unit of computer performance measurement indicating the rate, in millions, of instructions executed by a program per unit of time.
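One plausible way to map a processor frequency to a MIPS rating is to scale the clock rate by an average instructions-per-cycle (IPC) figure. The paper does not state its exact mapping, so the convention and IPC values below are assumptions for illustration only:

```python
# Assumed frequency-to-MIPS mapping: MIPS = clock_hz * IPC / 1e6.
# The IPC values are hypothetical; real CPUs vary widely by workload.

def frequency_to_mips(clock_hz: float, ipc: float = 1.0) -> float:
    """MIPS from clock cycles per second and average instructions per cycle."""
    return clock_hz * ipc / 1e6

print(frequency_to_mips(2.4e9))           # 2400.0 MIPS at 2.4 GHz with IPC = 1
print(frequency_to_mips(2.4e9, ipc=2.0))  # 4800.0 MIPS if two instructions retire per cycle
```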