Parametric Mesh Generation for Optimization
Published in Finite Elements-based Optimization, 2019
S. Ratnajeevan H. Hoole, Yovahn Yesuraiyan R. Hoole
Sorting is a technique that arranges elements in a certain order. There are many sorting algorithms, such as counting sort, bucket sort, radix sort, merge sort, heapsort, and quicksort (Cormen et al., 2011). Each algorithm has its own advantages. Merge sort is one of the best sorting algorithms, with O(n log n) time complexity in the best, average, and worst cases alike (ibid.). Another very important reason for selecting merge sort is that the algorithm is easily parallelizable; parallelization on the GPU is the main theme taken up in a subsequent chapter. Figure 7.21 (Merge Sort, n.d.) describes merge sort graphically. First, the list is split into two pieces, and each piece is then split into two further pieces. This process is repeated until singleton lists are reached. We then work our way back up the recursion, merging the short arrays into progressively larger sorted arrays. Algorithms 7.2 and 7.3 describe merge sort.
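For concreteness, the split-and-merge scheme just described can be rendered in a few lines of Python; the sketch below follows the recursive structure of Figure 7.21 rather than transcribing Algorithms 7.2 and 7.3.

    def merge_sort(a):
        """Recursively split the list, then merge the sorted halves."""
        if len(a) <= 1:              # singleton (or empty) lists are already sorted
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])   # sort each half independently
        right = merge_sort(a[mid:])
        return merge(left, right)

    def merge(left, right):
        """Merge two sorted lists into one sorted list in linear time."""
        out = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])         # at most one of these tails is non-empty
        out.extend(right[j:])
        return out

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]

Note that the two recursive calls operate on disjoint halves and are entirely independent, which is precisely the property that makes merge sort easy to parallelize.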
A Case Study: Vision Control
Published in Real-Time Embedded Systems, 2017
Ivan Cibrario Bertolotti, Gabriele Manduchi
The big-O notation provides a very important measure of the efficiency of computer algorithms, whose cost normally becomes unmanageable as the dimension of the problem increases. Take as an example the algorithms for sorting a given array of values. Elementary sorting algorithms such as Bubble Sort or Insertion Sort require a number of operations proportional to N², where N is the dimension of the array to be sorted, and are therefore O(N²). Other sorting algorithms, such as Shell Sort and Quick Sort, are considerably faster: Quick Sort, for instance, is O(N log N) on average. This implies that for very large arrays only the latter algorithms can be used in practice, because their number of operations is orders of magnitude lower.
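The practical gap between the two growth rates is easy to see numerically; the short Python check below tabulates N² against N log₂(N) for a few array sizes (illustrative figures only).

    import math

    # Compare the operation counts N^2 and N*log2(N) as N grows.
    for n in (10, 1_000, 1_000_000):
        print(f"N={n:>9,}  N^2={n*n:>15,}  N*log2(N)={n*math.log2(n):>15,.0f}")

At N = 1,000,000 the two counts differ by almost five orders of magnitude, which is why the quadratic algorithms are ruled out for large arrays.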
Parallel Structures
Published in M. Michael Vai, Vlsi Design, 2017
Before the end of this chapter, we present a parallel algorithm that sorts n numbers with n PEs connected in a linear array. The objective of this algorithm is to sort a series of n numbers into ascending or descending order. Sorting algorithms implemented on a sequential architecture have complexities ranging from O(n log₂ n) to O(n²). This parallel algorithm has a complexity of O(n).
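One classic algorithm with exactly these characteristics (which may or may not be the one presented in the chapter) is odd-even transposition sort, which sorts n numbers on n linearly connected PEs in n parallel steps. The Python sketch below simulates it sequentially; each inner loop stands in for one simultaneous compare-exchange step across the array.

    def odd_even_transposition_sort(a):
        """Sort in n phases; in each phase every PE compares with one neighbor.

        On a linear array of n PEs, all compare-exchanges of a phase happen
        simultaneously, so the parallel time is O(n).
        """
        n = len(a)
        for phase in range(n):
            start = phase % 2                 # even phases: pairs (0,1),(2,3),...; odd phases: (1,2),(3,4),...
            for i in range(start, n - 1, 2):  # these exchanges are independent -> one parallel step
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a

    print(odd_even_transposition_sort([5, 2, 8, 1, 4]))  # [1, 2, 4, 5, 8]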
Adaptive Order-of-Addition Experiments via the Quick-Sort Algorithm
Published in Technometrics, 2023
Dennis K. J. Lin, Jianbin Chen
On a seemingly unrelated issue, the sorting problem, that of ordering a sequence of elements, arises constantly in programming practice. Quick sort is considered the fastest of all sorting algorithms (Hossain et al. 2020). The classical quick sort works by selecting a "pivot" element and partitioning all the other elements into two subsets: the left subset holds all elements smaller than the pivot, and the right subset all elements larger than it. The algorithm then recursively sorts the left and right subsets separately. For a comprehensive discussion of quick sort, one may refer to Hoare (1962), Philippas and Zhang (2003), Yaroslavskiy (2009), Kushagra et al. (2014), and the references therein.
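A minimal Python rendering of this classical scheme follows; the simple first-element pivot is only one of many pivot-selection strategies discussed in the references above.

    def quick_sort(a):
        """Classical quick sort: pick a pivot, partition, recurse on both sides."""
        if len(a) <= 1:
            return a
        pivot = a[0]                               # simplest pivot choice; many variants exist
        left = [x for x in a[1:] if x < pivot]     # elements smaller than the pivot
        right = [x for x in a[1:] if x >= pivot]   # elements larger (ties go right here)
        return quick_sort(left) + [pivot] + quick_sort(right)

    print(quick_sort([9, 3, 7, 1, 8, 2]))  # [1, 2, 3, 7, 8, 9]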
A Strategy for the Linear Time Implementation of the SMVQ Algorithm
Published in IETE Technical Review, 2022
Recently, in the VQ-compressed domain, many researchers have exploited the Vector Quantization (VQ) [12] and Side Match Vector Quantization (SMVQ) [13] image compression algorithms to embed secret information. The main weakness of SMVQ-based data hiding techniques is their lengthy execution times. For each SMVQ index, the SMVQ algorithm builds a state codebook SCB by sorting N codewords in ascending order of their side match distortion (SMD) values. Currently, this step is implemented with comparison-based sorting algorithms (e.g. mergesort), which limit the SMVQ algorithm's runtime complexity to O(N log₂ N).
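As a rough illustration of the step being described, the sketch below builds a toy state codebook by comparison-sorting codewords on their SMD values; the side_match_distortion function and the two-pixel "codewords" are hypothetical stand-ins, not the actual SMVQ computation of [13].

    def side_match_distortion(codeword, border):
        """Hypothetical SMD: squared distance between a codeword's border
        pixels and the border predicted from neighboring blocks (stand-in only)."""
        return sum((c - b) ** 2 for c, b in zip(codeword, border))

    def build_state_codebook(codebook, border, size):
        # Comparison-based sort of all N codewords by SMD: the O(N log2 N)
        # step that dominates the SMVQ runtime per index.
        ranked = sorted(codebook, key=lambda cw: side_match_distortion(cw, border))
        return ranked[:size]   # state codebook SCB: the best-matching codewords

    codebook = [(10, 20), (8, 19), (50, 60), (9, 21)]   # toy 2-pixel "codewords"
    print(build_state_codebook(codebook, border=(9, 20), size=2))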
Big data processing with 1D-Crosspoint Arrays
Published in International Journal of Parallel, Emergent and Distributed Systems, 2023
Overall Execution Time: Adding all the terms together shows that reducing the searching or sorting time below the memory-transfer term offers no benefit. For searching, a serial algorithm suffices to keep the first term at O(n), since searching a data set of n elements takes O(n) time, and the overall execution time is therefore O(n) in this case. In the case of sorting, a parallel algorithm is required, as a serial sorting algorithm is not sufficient, taking O(n log n) time at best. Parallel sorting algorithms with O(log n) or O(log² n) time complexity are well known (e.g., see [9, 26]), and sorting n elements with such algorithms takes O(log n) or O(log² n) time. It is easy to verify that both of these terms are less than the O(n) transfer term, which therefore still dominates. Although it may seem that the pursuit of a faster sorting algorithm is no longer needed, this assumes a fixed-width data channel between the memory and the big data, over which moving n elements takes O(n) time. If the data channel width can be increased, the transfer term can be reduced, to O(log n) or even O(1) with proportionally wider channels, in which case the searching and sorting terms become the dominating terms of the overall execution time. Under these circumstances, a parallel sorting algorithm of suitably low time complexity, such as the 1D-Crosspoint Array sorting algorithm, suffices to keep the overall execution time at this reduced bound.
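The trade-off can be checked with a toy numeric model; in the Python sketch below, the transfer term n/w for channel width w and the O(log² n) Batcher-style sorter are illustrative assumptions, not the paper's exact terms.

    import math

    # Toy model: transfer time n/w versus parallel sorting time.
    def overall_time(n, width, sort_steps):
        transfer = n / width
        return max(transfer, sort_steps(n))   # the larger term dominates

    n = 1_000_000
    log2n = math.log2(n)
    for width in (1, 1_000, n):
        t = overall_time(n, width, lambda m: math.log2(m) ** 2)   # O(log^2 n) sorter
        print(f"width={width:>9,}  transfer={n/width:>12,.0f}  "
              f"sort={log2n**2:>8,.0f}  dominant={t:>12,.0f}")

With a unit-width channel the transfer term dominates everything; once the channel is wide enough, the transfer term drops below the sorting term and a faster parallel sorter pays off again.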