Scalability
Published in Vivek Kale, Digital Transformation of Enterprise Architecture, 2019
The ideal scale-out behavior of a query is a linear relationship between the number of nodes and the amount of data that can be processed in a given time. This theoretical linear scale-out is rarely achieved, because some fraction of the query processing is normally not parallelizable, such as the coordinated startup of the processing or exclusive access to shared data structures. The serial part of the computation limits its parallel scalability; this relationship has been formulated as Amdahl's Law: let σ be the portion of the program that is sequential, and p be the number of processors (or nodes). The maximal speedup S(p) is then given by S(p) = p / (1 + σ(p − 1)).
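The diminishing returns that Amdahl's Law predicts can be seen by evaluating the formula above directly; a minimal sketch (the function name is illustrative):

```python
def amdahl_speedup(p, sigma):
    """Maximal speedup S(p) = p / (1 + sigma * (p - 1)) for p processors
    when a fraction sigma of the program is sequential (Amdahl's Law)."""
    return p / (1 + sigma * (p - 1))

# With a 10% sequential portion, adding nodes yields diminishing returns;
# the speedup can never exceed 1/sigma = 10, no matter how many nodes:
for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(p, 0.10), 2))
```

Running this prints speedups of about 1.82, 4.71, 8.77, and 9.91, showing how the serial fraction caps the achievable scale-out.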
Algorithm/Architecture Coexploration
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
Gwo Giun (Chris) Lee, He Yuan Lin, Sun Yuan Kung
Amdahl’s law introduced a theoretical maximum speedup for parallelizing a software program [21]. The theoretical upper bound is determined by the ratio of the sequential part within the program, since the sequential part cannot be parallelized due to its high data dependencies. Amdahl’s law provided an initial means of characterizing parallelism. In a similar manner, the instruction-level parallelism (ILP), which is more specific to processor-oriented platforms, is quantified at a coarser data granularity based on graph theory [22]. The parallelization potential, defined as the ratio between the computational complexity and the critical path length, is also capable of estimating the degree of parallelism [23]. The computational complexity is measured as the total number of operations, and the critical path length is defined as the largest number of operations that have to be performed sequentially. The parallelization potential based on the number of operations reveals more intrinsic parallelism at a finer data granularity than Amdahl’s law and the ILP method.
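The parallelization potential described above can be sketched as a longest-path computation over a dataflow DAG; this is a minimal illustration (the function name and the graph encoding are assumptions, not from [23]):

```python
from collections import defaultdict, deque

def parallelization_potential(edges, num_ops):
    """Estimate the degree of parallelism as total operation count divided
    by the critical path length (the longest chain of dependent operations)
    in a dataflow DAG.

    edges:   list of (u, v) pairs meaning operation v depends on operation u
    num_ops: total number of operations, identified as 0 .. num_ops - 1
    """
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Longest path in a DAG via Kahn's topological ordering;
    # each node contributes one operation to the chain through it.
    depth = {n: 1 for n in range(num_ops)}
    queue = deque(n for n in range(num_ops) if indeg[n] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            depth[v] = max(depth[v], depth[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    critical_path = max(depth.values())
    return num_ops / critical_path

# Four multiplies feeding a two-level adder tree: 7 operations,
# critical path of 3 sequential operations (multiply, add, add).
edges = [(0, 4), (1, 4), (2, 5), (3, 5), (4, 6), (5, 6)]
print(parallelization_potential(edges, 7))  # 7 / 3 ≈ 2.33
```

The example makes the contrast with Amdahl's law concrete: the estimate is derived per operation from the dependency structure, rather than from a single program-level sequential fraction.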
High-Performance Computing for Fluid Flow and Heat Transfer
Published in W.J. Minkowycz, E.M. Sparrow, Advances in Numerical Heat Transfer, 2018
When parallelization is performed at the local level (e.g., as in auto-parallelizing compilers), Amdahl’s law, which states that overall speed is essentially determined by the least efficient part of the code, becomes important. To achieve high efficiency, the portion of the code that cannot be parallelized must be very small.
A new model for cloud elastic services efficiency
Published in International Journal of Parallel, Emergent and Distributed Systems, 2019
Sasko Ristov, Roland Mathá, Dragi Kimovski, Radu Prodan, Marjan Gusev
Many researchers have used the typical definition of efficiency, based on Amdahl’s Law, which defines efficiency as the ratio of the speedup to the number of processors. For example, Tsai et al. [24] use the same definition, but without a direct connection to the cloud elastic and heterogeneous environment. However, in this paper we present several deficiencies of the traditional efficiency definition. It assumes a homogeneous environment (6), which does not reflect today’s computing resources: while early clusters and symmetric multiprocessors were homogeneous, most of today’s high performance computing systems are distributed and heterogeneous. To address these issues, Hwang et al. [25] define efficiency as the percentage of the maximal achievable performance. They assume a heterogeneous environment and extend the definition to the ratio of the achieved speedup to the total heterogeneous cluster computing units (ECUs).
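The contrast between the two efficiency definitions can be sketched in a few lines; the function names and the cluster figures below are illustrative assumptions, not values from [24] or [25]:

```python
def traditional_efficiency(speedup, p):
    """Classical (Amdahl-style) efficiency: speedup per processor,
    implicitly assuming all p processors are identical."""
    return speedup / p

def heterogeneous_efficiency(speedup, total_ecus):
    """Efficiency in the style of Hwang et al.: achieved speedup as a
    fraction of the cluster's total capacity, expressed in equivalent
    compute units (ECUs) so that unequal nodes are weighted properly."""
    return speedup / total_ecus

# Hypothetical heterogeneous cluster: 4 nodes of 1 ECU and 2 nodes of
# 3 ECUs, i.e. 6 nodes but 10 ECUs of total capacity.
speedup = 7.5
print(traditional_efficiency(speedup, 6))     # 1.25 — nonsensically > 1
print(heterogeneous_efficiency(speedup, 10))  # 0.75
```

Counting raw nodes overstates efficiency (here it even exceeds 1), whereas normalizing by total ECUs yields a meaningful fraction of the maximal achievable performance.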