Computational Complexity Analysis for Problems in Elastic Optical Networks
Published in Bijoy Chand Chatterjee, Eiji Oki, Elastic Optical Networks: Fundamentals, Design, Control, and Management, 2020
In the following, we classify algorithms from best to worst performance in terms of big O notation. A logarithmic algorithm, O(log n): runtime grows logarithmically in n. A linear algorithm, O(n): runtime grows in direct proportion to n. A superlinear algorithm, O(n log n): runtime grows in proportion to n log n, slightly faster than linear. A polynomial algorithm, O(n^c): runtime grows as a power of n, faster than all of the preceding classes. An exponential algorithm, O(c^n): runtime grows even faster than any polynomial in n. A factorial algorithm, O(n!): runtime grows the fastest and quickly becomes unusable even for small values of n. Examples of each type are as follows. Logarithmic, O(log n): binary search. Linear, O(n): linear search. Superlinear, O(n log n): heap sort and merge sort. Polynomial, O(n^c): Strassen's matrix multiplication, bubble sort, selection sort, insertion sort, and bucket sort. Exponential, O(c^n): the Tower of Hanoi. Factorial, O(n!): determinant expansion by minors, and brute-force search for the traveling salesman problem.
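As a concrete illustration of the logarithmic class, the following is a minimal Python sketch of binary search (the function name and parameters are ours, for illustration): each comparison halves the remaining search range, so a sorted list of n items is searched in at most about log2(n) steps.

    def binary_search(sorted_items, target):
        # O(log n): every iteration halves the remaining range [lo, hi].
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid          # found
            if sorted_items[mid] < target:
                lo = mid + 1        # discard the lower half
            else:
                hi = mid - 1        # discard the upper half
        return -1                   # not found

    binary_search([2, 3, 5, 7, 11, 13], 7)   # returns 3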
Parametric Mesh Generation for Optimization
Published in S. Ratnajeevan H. Hoole, Yovahn Yesuraiyan R. Hoole, Finite Elements-based Optimization, 2019
Sorting is a technique that arranges elements in a certain order. There are many sorting algorithms, such as counting sort, bucket sort, radix sort, merge sort, heapsort, and quicksort (Cormen et al., 2011). Each algorithm has its own advantages. Merge sort is one of the best sorting algorithms, with O(n log n) time complexity in the best, average, and worst cases (ibid.). Another very important reason for selecting merge sort is that the algorithm is easily parallelizable; parallelization on the GPU is the main theme taken up in a subsequent chapter. Figure 7.21 (Merge Sort, n.d.) describes merge sort graphically. First the list is split into two pieces, and then each piece is split into two further pieces. This process is repeated until singleton lists are reached. Then we work our way back up the recursion, merging the short arrays into larger ones. Algorithms 7.2 and 7.3 describe merge sort.
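Algorithms 7.2 and 7.3 themselves are not reproduced in this excerpt; the following is a minimal Python sketch of the top-down merge sort that Figure 7.21 describes (names are ours, for illustration). Note that the two recursive calls are independent, which is what makes the algorithm easy to parallelize.

    def merge_sort(a):
        # O(n log n) in the best, average, and worst cases.
        if len(a) <= 1:               # a singleton (or empty) list is sorted
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])    # the two halves are independent,
        right = merge_sort(a[mid:])   # so they can be sorted in parallel
        return merge(left, right)

    def merge(left, right):
        # Merge two sorted lists into one sorted list in linear time.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])          # append whichever side has leftovers
        out.extend(right[j:])
        return out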
S
Published in Phillip A. Laplante, Dictionary of Computer Science, Engineering, and Technology, 2017
sort the problem of arranging items in a predetermined order. There are dozens of algorithms, the choice of which depends on factors such as the number of items relative to working memory, knowledge of the orderliness of the items or the range of the keys, the cost of comparing keys vs. the cost of moving items, etc. See also quicksort, insertion sort, radix sort, bucket sort, merge sort, shell sort, permutation.
A Strategy for the Linear Time Implementation of the SMVQ Algorithm
Published in IETE Technical Review, 2022
This property allows the SCB generation step to use any integer sorting algorithm. For example, the bucket sort algorithm has an average runtime complexity of O(N + K), where K is the number of buckets. Using bucket sort to build ϕ and sort the codewords by their MSMD reduces the runtime complexity of the SCB generation step to O(3N + 2K). The next sections discuss the details of two SMVQ implementations (i.e., DV = SMD1 and DV = SMD2).
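The article's ϕ construction and MSMD values are not shown in this excerpt; purely as an illustration of the kind of integer bucket sort it invokes, here is a minimal Python sketch (the function name and parameters are ours). The scatter pass costs O(N) and the gather pass costs O(N + K), matching the average complexity quoted above when the keys spread evenly across the buckets.

    def bucket_sort_ints(keys, num_buckets):
        # Bucket sort for non-negative integer keys:
        # O(N + K) on average, with N = len(keys) and K = num_buckets.
        if not keys:
            return []
        hi = max(keys) + 1                          # key range, computed once
        buckets = [[] for _ in range(num_buckets)]
        for k in keys:                              # scatter: O(N)
            buckets[k * num_buckets // hi].append(k)
        out = []
        for b in buckets:                           # gather: O(N + K)
            out.extend(sorted(b))                   # each bucket is small on average
        return out

    bucket_sort_ints([29, 3, 77, 41, 5, 18], num_buckets=4)   # [3, 5, 18, 29, 41, 77]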
Big data processing with 1D-Crosspoint Arrays
Published in International Journal of Parallel, Emergent and Distributed Systems, 2023
While the tables provide a basis for comparing the execution speed of the listed sorting algorithms, it must be noted that the hardware complexity is different for each algorithm. Particularly for Model 2-C, it is unreasonable to compare without normalizing the time complexities, because the model deliberately introduces a variable (k) to limit the number of PEs. Thus, compared with Models 2-A and 2-B, where each algorithm is allowed to utilize as many PEs as it needs, in Model 2-C each algorithm cannot utilize more than the imposed number of PEs. Therefore, we normalized the time complexity of each algorithm in Table 1(b) and listed the result in Table 2. To normalize, k was expressed in terms of N and n and then substituted into the time complexity of each algorithm. The normalized complexity is more meaningful for comparing the representative algorithms, because N is the same for all of them, and n can be considered the only variable with which the efficacy of a sorting algorithm can be assessed. Overall, Valiant's sorting algorithm, the Reconfigurable Mesh, and the 1D-Crosspoint Array all have the same time complexity. Parallel versions of the merge, column, and bucket sort algorithms have the smallest time complexity, followed by Preparata's and the AKS sorting algorithms, and those are followed by Batcher's sorting algorithms, when the number of PEs is kept the same across all representative sorting algorithms. Again, this is without taking into account the large constants in the complexity notations. If the constants are incorporated into the sorting times, the 1D-Crosspoint Array outperforms Batcher's sorters in terms of the number of comparison steps beyond a certain value of n, and outperforms AKS sorters for much larger values of n due to the large constant factor in the AKS sorting network's depth. A comprehensive numerical analysis of the representative sorting algorithms, including the 1D-Crosspoint Array, taking into account the actual constants in their time complexity formulas, will be reported elsewhere.
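The PE-based models compared above are hardware designs, so software cannot reproduce them directly; purely as an intuition for why the parallel bucket sort mentioned above scales well, here is a shared-memory Python sketch (names ours) in which the buckets, once scattered, are sorted as independent tasks.

    from concurrent.futures import ProcessPoolExecutor

    def parallel_bucket_sort(keys, num_buckets):
        # Scatter keys into buckets, sort the buckets independently
        # (here, one process-pool task per bucket), then concatenate.
        if not keys:
            return []
        hi = max(keys) + 1
        buckets = [[] for _ in range(num_buckets)]
        for k in keys:
            buckets[k * num_buckets // hi].append(k)
        with ProcessPoolExecutor() as pool:
            sorted_buckets = list(pool.map(sorted, buckets))
        out = []
        for b in sorted_buckets:
            out.extend(b)
        return out

    if __name__ == "__main__":
        print(parallel_bucket_sort([29, 3, 77, 41, 5, 18], num_buckets=4))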