Value Engineering of Requirements
Published in Phillip A. Laplante, Mohamad H. Kassab, Requirements Engineering for Software and Systems, 2022
Phillip A. Laplante, Mohamad H. Kassab
On the other hand, algorithmic complexity reflects the complexity of the algorithm used to solve the problem. A key distinction between computational complexity theory and the analysis of algorithms is that the latter is devoted to analyzing the amount of resources needed by a particular algorithm to solve a concrete problem, whereas the former asks a more general question: it aims to classify problems according to whether they can, or cannot, be solved with appropriately restricted resources. A mathematical notation called big-O notation is used to define an order relation on functions. The big-O form of a function is derived by identifying its dominating term f(n); big-O notation thus captures the asymptotic behavior of the function. Using this notation, algorithm A has efficiency O(f(n)) when, for input size n, it requires at most on the order of f(n) operations in the worst case.
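As a rough illustration of that worst-case counting (not from the chapter), the sketch below tallies the element comparisons made by insertion sort on reverse-sorted input; the counts grow with the dominating term n^2, matching the algorithm's O(n^2) worst-case bound.

```python
# Illustrative sketch: counting the basic operations of insertion sort
# to observe its O(n^2) worst-case behavior empirically.

def insertion_sort_comparisons(data):
    """Sort a list in place and return the number of element comparisons."""
    comparisons = 0
    for i in range(1, len(data)):
        key = data[i]
        j = i - 1
        while j >= 0:
            comparisons += 1           # one comparison per inner-loop test
            if data[j] > key:
                data[j + 1] = data[j]  # shift the larger element right
                j -= 1
            else:
                break
        data[j + 1] = key
    return comparisons

# Worst case: reverse-sorted input forces the inner loop to run fully,
# giving n(n-1)/2 comparisons, i.e., the dominating term is n^2.
for n in (10, 100, 1000):
    worst = insertion_sort_comparisons(list(range(n, 0, -1)))
    print(f"n={n:5d}  comparisons={worst:8d}  n^2={n * n}")
```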
Computational Complexity
Published in Craig A. Tovey, Linear Optimization and Duality, 2020
Computational complexity theory presumes that the problem to be analyzed is well-posed. But in practice, it is often the fuzziness and open-endedness of a problem that makes it hard. If you can categorize or formalize all the possibilities, you may be able, in principle, to capture this hardness as computational hardness.

Summary: Computational complexity provides other classifications besides NP-hardness. Some problems, including program and model verification, are unsolvable. This partly explains the scarcity of research results and tools to aid in formal modeling. The PSPACE-hard problems are a step up in complexity from the NP-complete ones; they include several fundamental problems involving uncertainty, such as stochastic scheduling, queueing networks, and Markov decision processes. The class co-NP helps explain why optimization software spends so much time verifying optimality after it has found the optimal solution. Finally, a problem may be hard in a non-computational sense.
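A minimal sketch (not from the chapter) of the NP/co-NP asymmetry behind that verification cost: a claimed solution to a SAT instance is a short certificate checkable in polynomial time, whereas no comparably short certificate is known for the claim that no solution exists.

```python
# Illustrative sketch: verifying a certificate is easy; certifying
# that NO solution exists is the hard co-NP side of the coin.

# A CNF formula as a list of clauses; each literal is (variable, is_negated).
formula = [
    [("x", False), ("y", False)],   # (x OR y)
    [("x", True),  ("z", False)],   # (NOT x OR z)
    [("y", True),  ("z", True)],    # (NOT y OR NOT z)
]

def check_certificate(formula, assignment):
    """Polynomial-time check that an assignment satisfies every clause."""
    return all(
        any(assignment[var] != negated for var, negated in clause)
        for clause in formula
    )

print(check_certificate(formula, {"x": True, "y": False, "z": True}))  # True

# Proving the formula UNsatisfiable would, absent a better idea, mean
# ruling out all 2^n assignments -- the same asymmetry that makes an
# optimizer's "no better solution exists" check expensive.
```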
A Look at Top 35 Problems in the Computer Science Field for the Next Decade
Published in Durgesh Kumar Mishra, Nilanjan Dey, Bharat Singh Deora, Amit Joshi, ICT for Competitive Strategies, 2020
Deepti Goyal, Amit Kumar Tyagi
Computational complexity theory is closely associated with the relative computational difficulty of computable functions, yet it does not cover problems whose solutions are not yet known. Since most problems in AI are not accompanied by a formalisation, conventional complexity theory does not define AI-completeness. A complexity theory for AI has been proposed in (Eyal et al., 2007) to address this problem, in which the computational process is divided between the computer and a human: one portion is done by the computer while the other is performed by the human. This is formalised by a human-assisted Turing machine. The formalisation defines algorithm complexity, problem complexity, and reducibility, which in turn allow equivalence classes to be defined.
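As a loose sketch of the idea (not the formal model of Eyal et al., 2007), a human-assisted computation can be pictured as a program that delegates the subproblems it cannot solve to a human oracle; the names classify_images and human_oracle below are hypothetical illustrations.

```python
# Illustrative sketch: the machine does the routine work and issues
# oracle queries to a human for the cases it cannot decide. Complexity
# is then measured in two coordinates: machine steps AND oracle queries.

from typing import Callable, List

def classify_images(
    images: List[str],
    human_oracle: Callable[[str], str],
) -> List[str]:
    """Label each image, asking the human only for ambiguous cases."""
    labels = []
    for image in images:
        if "cat" in image or "dog" in image:    # trivial machine-side rule
            labels.append("cat" if "cat" in image else "dog")
        else:
            labels.append(human_oracle(image))  # one human (oracle) query
    return labels

print(classify_images(
    ["cat_01.png", "blurry_07.png", "dog_12.png"],
    human_oracle=lambda img: "bird",  # stand-in for a human's answer
))
# -> ['cat', 'bird', 'dog']
```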
Improved Approximation Algorithm for the Fault-Tolerant Facility Placement Problem with Rejection
Published in American Journal of Mathematical and Management Sciences, 2020
The classical, uncapacitated facility location problem (UFL) is one of the most fundamental models in operations research and management science. In the UFL problem, we are given a set of potential locations F and a set of customers D. Each location i in F has a facility opening cost f_i, and there is a connection cost c_ij for assigning customer j to the facility at location i. The goal is to choose a subset of locations at which to open facilities and to connect each customer to its closest open facility, such that the sum of the facility opening costs and the connection costs is minimized. The UFL problem is NP-hard. In computational complexity theory, a problem is defined as NP-hard when every problem in NP (non-deterministic polynomial time) can be reduced to it in polynomial time (Garey and Johnson, 1979). For an NP-hard problem, no exact algorithm running in polynomial time can be designed unless P = NP (Garey and Johnson, 1979). Thus, much attention has been focused on designing approximation algorithms with good performance ratios. An approximation algorithm with performance ratio r is a polynomial-time algorithm that finds a solution whose objective function value is within a factor of r of that of the optimal solution; r is called the performance ratio of the algorithm. For the non-metric version, Hochbaum (1982) presented a greedy algorithm with a performance ratio of O(log n). For the metric version, Shmoys, Tardos, and Aardal (1997) presented the first constant-factor approximation algorithm, with a performance ratio of 3.16; currently, the best performance ratio is 1.488, due to Li (2013).
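To make the objective concrete, here is a minimal sketch (with made-up instance data, not from the article) that solves a tiny UFL instance exactly by enumerating all facility subsets; the 2^|F| - 1 subsets make this exponential, which is exactly why, UFL being NP-hard, the literature pursues polynomial-time approximation algorithms instead.

```python
# Illustrative sketch: brute-force UFL on a toy instance. Opening costs
# f_i and connection costs c_ij follow the notation above.

from itertools import combinations

opening_cost = [3, 4, 5]       # f_i for locations 0..2
connection_cost = [            # c_ij: rows = customers, columns = locations
    [1, 4, 6],
    [5, 2, 3],
    [7, 6, 1],
    [2, 5, 4],
]

best_cost, best_open = float("inf"), None
locations = range(len(opening_cost))
for k in range(1, len(opening_cost) + 1):
    for subset in combinations(locations, k):
        # Total cost = opening costs plus each customer's cheapest
        # connection to an open facility.
        cost = sum(opening_cost[i] for i in subset) + sum(
            min(row[i] for i in subset) for row in connection_cost
        )
        if cost < best_cost:
            best_cost, best_open = cost, subset

print(best_open, best_cost)    # -> (0, 2) 15 for this toy instance
```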