Mathematical Background
Published in Alfred J. Menezes, Paul C. van Oorschot, Scott A. Vanstone, Handbook of Applied Cryptography, 2018
Alfred J. Menezes, Paul C. van Oorschot, Scott A. Vanstone
Roughly speaking, polynomial-time algorithms can be equated with good or efficient algorithms, while exponential-time algorithms are considered inefficient. There are, however, some practical situations when this distinction is not appropriate. When considering polynomial-time complexity, the degree of the polynomial is significant. For example, even though an algorithm with a running time of O(n^{ln ln n}), n being the input size, is asymptotically slower than an algorithm with a running time of O(n^{100}), the former algorithm may be faster in practice for smaller values of n, especially if the constants hidden by the big-O notation are smaller. Furthermore, in cryptography, average-case complexity is more important than worst-case complexity: a necessary condition for an encryption scheme to be considered secure is that the corresponding cryptanalysis problem is difficult on average (or more precisely, almost always difficult), and not just for some isolated cases.
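A quick numerical comparison (a minimal Python sketch added here for illustration, not part of the excerpt) makes the point concrete: the superpolynomial running time n^{ln ln n} stays below the polynomial n^{100} until n exceeds e^(e^100), far beyond any practical input size.

import math

# Compare n^(ln ln n) (superpolynomial) with n^100 (polynomial) for moderate n.
# Work with logarithms to avoid overflow: ln of n^(ln ln n) is ln(ln n) * ln n.
for n in [10, 10**3, 10**6, 10**9]:
    log_super = math.log(math.log(n)) * math.log(n)   # ln of n^(ln ln n)
    log_poly = 100 * math.log(n)                      # ln of n^100
    winner = "superpolynomial smaller" if log_super < log_poly else "polynomial smaller"
    print(n, round(log_super, 1), round(log_poly, 1), winner)

# The crossover only occurs once ln ln n > 100, i.e. n > e^(e^100).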
High-Level Modeling and Design Techniques
Published in Soumya Pandit, Chittaranjan Mandal, Amit Patra, Nano-Scale CMOS Analog Circuits, 2018
Soumya Pandit, Chittaranjan Mandal, Amit Patra
The time complexity of an algorithm is calculated from the number of elementary computational steps the algorithm takes to compute the function it was developed for. The number of steps is interpreted as a function of the input size. In most cases, however, the total number of elementary computational steps varies from input to input because of the presence of conditional statements such as an if-else statement. Average-case complexity is therefore considered a more meaningful characterization of the algorithm. However, accurately determining the average-case complexity of an algorithm is not an easy task, which is why the worst-case complexity metric is often used instead. The worst-case complexity of an algorithm is its complexity with respect to the worst possible inputs, and it gives an upper bound on the average-case complexity.
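As an illustration (a hypothetical Python sketch, not taken from the chapter), consider linear search: the conditional inside the loop makes the step count depend on where the key occurs, so the average over random inputs sits well below the worst-case upper bound.

import random

def linear_search_steps(items, key):
    """Return the number of comparisons performed by a linear search."""
    steps = 0
    for x in items:
        steps += 1
        if x == key:          # conditional branch: cost depends on the input
            break
    return steps

n = 1000
data = list(range(n))
trials = [linear_search_steps(data, random.randrange(n)) for _ in range(10_000)]

print("estimated average-case steps:", sum(trials) / len(trials))   # about n/2
print("worst-case steps:", linear_search_steps(data, n - 1))         # n, an upper bound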
Resource Allocation for Scalable Videos over Cognitive Radio Networks
Published in Ce Zhu, Yuenan Li, Advanced Video Communications over Wireless Networks, 2017
We next solve the LP relaxation iteratively. During each iteration, we find the $l_{\hat{g},\hat{m}}$ that has the minimum value of $(\lceil l_{\hat{g},\hat{m}} \rceil - l_{\hat{g},\hat{m}})$ or $(l_{\hat{g},\hat{m}} - \lfloor l_{\hat{g},\hat{m}} \rfloor)$ among all fractional $l_{g,m}$'s, and round it up or down to the nearest integer. We next reformulate and solve a new LP with $l_{\hat{g},\hat{m}}$ fixed. This procedure repeats until all the $l_{g,m}$'s are fixed. The complete SF algorithm is given in Table 7.1. The complexity of SF depends on the specific LP algorithm (e.g., the simplex method can be used with a polynomial-time average-case complexity) [41].
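The following Python sketch illustrates the generic sequential-fixing idea with scipy.optimize.linprog; the tiny problem and all variable names are hypothetical and do not correspond to the resource-allocation LP of the chapter.

import numpy as np
from scipy.optimize import linprog

def sequential_fixing(c, A_ub, b_ub, bounds, tol=1e-6):
    """Round an LP relaxation to integers by repeatedly fixing the fractional
    variable whose value is closest to an integer, then re-solving the LP."""
    bounds = list(bounds)
    while True:
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success:
            raise RuntimeError("LP infeasible or unbounded")
        x = res.x
        frac = [(min(np.ceil(v) - v, v - np.floor(v)), i, v)
                for i, v in enumerate(x)
                if min(np.ceil(v) - v, v - np.floor(v)) > tol]
        if not frac:                 # all variables are (near-)integral: done
            return np.round(x)
        _, i, v = min(frac)          # fractional variable closest to an integer
        fixed = round(v)             # round up or down to the nearest integer
        bounds[i] = (fixed, fixed)   # fix it and re-solve the reduced LP

# Illustrative problem: maximize x0 + x1 subject to 3*x0 + 2*x1 <= 7, 0 <= x <= 3.
print(sequential_fixing(c=[-1, -1], A_ub=[[3, 2]], b_ub=[7],
                        bounds=[(0, 3), (0, 3)]))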
An improved algorithm for dynamic nearest-neighbour models
Published in Journal of Spatial Science, 2022
This section discusses two algorithms to generate an instance of the Mocnik model: a naïve one in the first subsection, and an improved, much more efficient version in subsequent sections. The improved efficiency of the latter is achieved by partitioning space into cells and using these cells as a spatial index, which is a common approach (Finkel and Bentley 1974, Mattson and Rice 1999). By choosing a suitable cell size, the overall average-case complexity of the algorithm can thus be reduced. Both the naïve and the improved algorithm are part of NetworKit (Staudt et al. 2016), an open-source toolkit for large-scale network analysis. Further improvements in running time can be achieved by minor changes, among other things by using global variables instead of local ones, by changing the order of conditions, etc. These improvements do not, however, change the complexity of the algorithms and depend strongly on the programming language, the compiler, and the hardware being used, which is why a discussion of these improvements is not included in this article. The algorithms in NetworKit have been further optimized.
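A minimal sketch of the cell-based spatial-index idea (illustrative Python, not the NetworKit implementation; the cell size and radius are assumed parameters):

from collections import defaultdict
from math import floor, hypot

def build_grid(points, cell_size):
    """Partition the plane into square cells and index each point by its cell."""
    grid = defaultdict(list)
    for p in points:
        key = (floor(p[0] / cell_size), floor(p[1] / cell_size))
        grid[key].append(p)
    return grid

def neighbours_within(grid, cell_size, q, radius):
    """Return points within `radius` of q, inspecting only nearby cells."""
    cx, cy = floor(q[0] / cell_size), floor(q[1] / cell_size)
    reach = int(radius // cell_size) + 1
    result = []
    for dx in range(-reach, reach + 1):
        for dy in range(-reach, reach + 1):
            for p in grid.get((cx + dx, cy + dy), []):
                if hypot(p[0] - q[0], p[1] - q[1]) <= radius:
                    result.append(p)
    return result

# With a cell size of the order of the query radius, each query only touches a
# constant number of cells on average instead of scanning all points.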
Diagnostic based on estimation using linear programming for partially observable petri nets with indistinguishable events
Published in International Journal of Systems Science: Operations & Logistics, 2020
Amira Chouchane, Philippe Declerck, Atef Khedher, Anas Kamoun
In this article, the fault detection needs two ILP problems for the observed sequence, while the fault isolation is established by the resolution of a bounded number of ILP problems. An ILP problem is NP-hard and its resolution takes exponential time in the worst case (it can have polynomial complexity in particular cases). Among the classical algorithms for solving ILP problems, we can cite the cutting-plane method and the branch-and-bound method, where each iteration is based on a specific linear programming problem. In a similar way, the ILP problems of diagnosis can be solved more efficiently if a relaxation over the real numbers is made. We can then use efficient standard algorithms of linear programming, which can be applied to relatively large full matrices (at least of sizes that can be handled with Scilab or Matlab on ordinary computers): the simplex algorithm is known to be efficient in practice, as it has polynomial-time average-case complexity in fairly general settings. Modern linear programming algorithms are polynomial in the worst case (the ellipsoid algorithm of Khachiyan and the interior-point algorithm of Karmarkar have worst-case complexities that are polynomial in the number of variables n and in the number of bits L needed to store the data).
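As a rough illustration of how each branch-and-bound iteration rests on a linear programming relaxation (a hypothetical Python sketch using scipy.optimize.linprog, unrelated to the diagnosis ILPs of the article):

import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, tol=1e-6):
    """Minimal branch-and-bound for min c@x s.t. A_ub@x <= b_ub, x integer.
    Every node of the search tree solves one LP relaxation."""
    best_val, best_x = math.inf, None
    stack = [list(bounds)]
    while stack:
        node_bounds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds, method="highs")
        if not res.success or res.fun >= best_val:
            continue                      # infeasible node, or pruned by the bound
        frac = [(i, v) for i, v in enumerate(res.x) if abs(v - round(v)) > tol]
        if not frac:                      # integral solution: update the incumbent
            best_val, best_x = res.fun, [round(v) for v in res.x]
            continue
        i, v = frac[0]                    # branch on the first fractional variable
        lo, hi = node_bounds[i]
        left, right = list(node_bounds), list(node_bounds)
        left[i] = (lo, math.floor(v))     # branch x_i <= floor(v)
        right[i] = (math.ceil(v), hi)     # branch x_i >= ceil(v)
        stack.extend([left, right])
    return best_val, best_x

# Example: max x0 + x1 (i.e. min -x0 - x1) with 3*x0 + 2*x1 <= 7 and 0 <= x <= 3.
print(branch_and_bound([-1, -1], [[3, 2]], [7], [(0, 3), (0, 3)]))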
Quadtree-based ancestry tree maps for 2D scattered data SLAM
Published in Advanced Robotics, 2018
Ilari Vallivaara, Katja Poikselkä, Anssi Kemppainen, Juha Röning
In this section we will analyze the average-case complexity of the algorithm under some simplifying assumptions. The analysis below is adapted from the complexity results presented for DP-SLAM [12] and for spatial search hierarchies [33]. Although the assumptions do not hold in the general case, they often hold in practice. Further, experimental data suggest that in the MF-SLAM context the analysis predicts the actual behavior quite closely (see Section 5). We do not address the worst-case complexity here, but refer to [12] and [33] for details on the relevant parts.