Computational Complexity
Published in Craig A. Tovey, Linear Optimization and Duality, 2020
How big a table is required to solve the knapsack problem by dynamic programming? If the data are integers and the knapsack capacity is K, the table has n rows and K+1 columns, requiring O(nK) work. It suffices to store one row of the table at a time, but in the worst case there is no way to avoid the factor-of-K work requirement. Dynamic programming is fast only if K is not too large. This DP algorithm is called pseudo-polynomial because its running time is polynomial in the numeric value K rather than in the length of K's encoding, so it is fast only when K is small. This ties back to our discussion in Section 9.1 about length. In that section we mentioned that the natural parameters of a problem are usually good surrogates for the input length. If the dynamic programming algorithm is polynomial in those natural parameters, we say the algorithm runs in pseudo-polynomial time.
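As an illustration, the following minimal sketch (the function name and data are hypothetical, not from the text) implements the table-based knapsack DP described above, keeping only one row of the n-by-(K+1) table in memory, so the work is O(nK) and the space O(K).

```python
# Minimal sketch of the O(nK) knapsack dynamic program; only one row of the
# n x (K+1) table is stored at a time, as noted in the excerpt.

def knapsack_max_value(weights, values, K):
    """Return the maximum total value of items fitting in capacity K."""
    row = [0] * (K + 1)            # row[c] = best value so far with capacity c
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(K, w - 1, -1):
            row[c] = max(row[c], row[c - w] + v)
    return row[K]

# Illustrative instance: 4 items, capacity 10 -> optimal value 11.
print(knapsack_max_value([4, 6, 3, 5], [5, 6, 2, 4], 10))
```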
Disassembly Line Balancing
Published in Surendra M. Gupta, A. J. D. (Fred) Lambert, Environment Conscious Manufacturing, 2007
Seamus M. McGovern, Surendra M. Gupta
NP-completeness provides strong evidence that a problem cannot be solved with a polynomial time algorithm (the theory of NP-completeness applies only to decision problems). Many problems that are solvable in polynomial time differ only slightly from other problems that are NP-complete. While 3-satisfiability and 3-dimensional matching are NP-complete, the related 2-satisfiability and 2-dimensional matching problems can be solved in polynomial time (polynomial time algorithm design is detailed in Refs. 1 and 46). If a problem is NP-complete, some subproblem (created by placing additional restrictions on the allowed instances) may be solvable in polynomial time. Certain encoding schemes and mathematical techniques can be used on some size-limited cases of NP-complete problems (an example is a dynamic programming approach to PARTITION whose search space size equals the number of elements in the instance multiplied by the sum of the numbers in the instance), enabling solution by what is known as a "pseudopolynomial time algorithm." In the PARTITION case, this yields a polynomial time solution as long as the instance does not contain extremely large numbers. Problems for which no pseudopolynomial time algorithm exists (assuming P ≠ NP) are known as "NP-complete in the strong sense" (also called unary NP-complete, referring to a nonconcise, unary encoding scheme in which a string of n 1s represents the number n; the alternative is binary NP-complete). For an NP-complete problem to admit a pseudopolynomial time algorithm, it typically must be a number problem (versus a graph problem, logic problem, etc.) whose instances are characterized both by the number of elements (n in DLBP) and by the size of the elements (PRTk in DLBP). Formally, an algorithm that solves a problem L is called a pseudopolynomial time algorithm for L if its time complexity function is bounded above, for every instance I, by a polynomial function of the two variables Length[I] and Max[I]. A general NP-completeness result implies that the problem cannot be solved in polynomial time in all the chosen parameters.
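To make the PARTITION example concrete, here is a minimal sketch (illustrative instance, hypothetical function name) of the pseudopolynomial dynamic program alluded to above: it tabulates the subset sums reachable up to half the total, so the work grows with the number of elements times the sum of the numbers and is polynomial only when the numbers themselves are not extremely large.

```python
# Minimal sketch of the pseudopolynomial DP for PARTITION: tabulate which
# subset sums up to half the total are reachable. Work is roughly
# (number of elements) x (sum of the numbers), hence pseudopolynomial.

def can_partition(numbers):
    """Return True if the numbers can be split into two parts of equal sum."""
    total = sum(numbers)
    if total % 2:
        return False
    half = total // 2
    reachable = [True] + [False] * half   # reachable[s]: some subset sums to s
    for x in numbers:
        for s in range(half, x - 1, -1):  # downward so each number is used once
            if reachable[s - x]:
                reachable[s] = True
    return reachable[half]

# Illustrative instance: {3, 2} and {1, 1, 2, 1} both sum to 5.
print(can_partition([3, 1, 1, 2, 2, 1]))   # True
```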
Batching and scheduling in a continuous-discrete hybrid flowshop: Lagrangian relaxation-based heuristic algorithms
Published in International Journal of Production Research, 2023
In the inner loop, each family-level subproblem is transformed into a set-partitioning problem and solved optimally using branch and price. Since the constraint on the number of machines is relaxed, the numbers of parallel machines in both stages can be assumed to be infinite, so the batches and jobs can be processed in parallel. As a consequence, each family-level subproblem is well structured as an out-tree, as illustrated in Figure 3, where each sub-tree represents the schedule of a batch: the batch is processed in the first stage, followed by the jobs contained in the batch being processed in the second stage. Moreover, each sub-tree can be handled separately. The batch-level subproblem thus becomes a knapsack problem that can be solved by a pseudo-polynomial time dynamic programming algorithm.
An approximate dynamic programming approach for collaborative caching
Published in Engineering Optimization, 2021
Collaborative offline wireless caching (Golrezaei et al. 2013) is an efficient mechanism that exploits the spatial correlation of the requests. BSs collaborate with wireless devices that offer their caching space to store popular content. Coded caching can further improve the performance of these systems, as content reuse and the cache hit ratio increase while content delivery delay decreases (Maddah-Ali and Niesen 2014). Poularakis, Iosifidis, and Tassiulas (2014), Poularakis et al. (2016) and Khreishah, Chakareski, and Gharabeih (2016) propose using a large number of SCBSs to lower the data retrieval delays in mobile networks. The cached content in the SCBSs is decided centrally by the MNO. The SCBSs work in concert with an MCBS: content that is not available at the SCBSs is requested from the MCBS. Poularakis, Iosifidis, and Tassiulas (2014) first cast the problem as an unsplittable hard capacitated metric facility location problem and solve it with an approximation algorithm. Cache-assisted delivery of scalable video data (Thomos et al. 2015) is studied by Poularakis et al. (2016), where multiple MNOs collaborate by offering part of their SCBS cache space to other MNOs. It is shown that the cache optimization is an instance of the multiple-choice knapsack problem and admits a pseudo-polynomial-time optimal solution. The relation between partial caching and users' retention rate for video is investigated by Maggi et al. (2018).
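To illustrate the complexity claim, here is a minimal, self-contained sketch of a pseudo-polynomial dynamic program for the multiple-choice knapsack problem (exactly one item chosen from each class, total weight at most the capacity); the classes, weights, and values are illustrative and not the formulation used in the cited papers.

```python
# Minimal sketch of a pseudo-polynomial DP for the multiple-choice knapsack
# problem: pick exactly one item per class, total weight at most C.
# Running time is O(C * total number of items), i.e. pseudo-polynomial in C.

def mckp_max_value(classes, C):
    """classes: list of classes, each a list of (weight, value) pairs."""
    NEG = float("-inf")
    dp = [0] * (C + 1)                      # zero classes processed: value 0
    for items in classes:
        new_dp = [NEG] * (C + 1)
        for c in range(C + 1):
            for w, v in items:              # try each candidate item of this class
                if w <= c and dp[c - w] != NEG:
                    new_dp[c] = max(new_dp[c], dp[c - w] + v)
        dp = new_dp
    return dp[C]                            # -inf means no feasible selection

# Illustrative instance: two classes, capacity 7 -> optimal value 10.
print(mckp_max_value([[(2, 3), (4, 5)], [(3, 4), (5, 7)]], 7))
```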
Multitasking scheduling problems with two competitive agents
Published in Engineering Optimization, 2020
Shi-Sheng Li, Ren-Xia Chen, Ji Tian
In this section, the problem is studied. Note that this problem is clearly NP-hard, because the classical problem without multitasking is already NP-hard. Recall that the tardy tasks should be executed after all on-time tasks, instead of being discarded. An exact dynamic programming algorithm that runs in pseudo-polynomial time is designed to solve the problem. The algorithm also indicates that the unweighted problem is polynomially solvable. The following result characterizes an important property of an optimal solution.
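For context, the sketch below shows the classical pseudo-polynomial dynamic program for minimizing the weighted number of tardy jobs on a single machine, the kind of DP-over-processing-time recursion alluded to here; it is not the authors' two-agent multitasking algorithm, and the instance data are illustrative.

```python
# Minimal sketch of the classical pseudo-polynomial DP for minimizing the
# weighted number of tardy jobs on one machine (1 || sum w_j U_j). Shown only
# to illustrate the DP-over-processing-time idea; NOT the two-agent
# multitasking algorithm of the excerpt. Data are illustrative.

def min_weighted_tardy(jobs):
    """jobs: list of (processing_time, due_date, weight) triples."""
    jobs = sorted(jobs, key=lambda j: j[1])          # earliest-due-date order
    P = sum(p for p, _, _ in jobs)
    NEG = float("-inf")
    dp = [0] + [NEG] * P   # dp[t]: max on-time weight, on-time jobs finish at t
    for p, d, w in jobs:
        new_dp = dp[:]                               # option 1: job is tardy
        for t in range(p, min(d, P) + 1):            # option 2: on time, ends at t
            if dp[t - p] != NEG:
                new_dp[t] = max(new_dp[t], dp[t - p] + w)
        dp = new_dp
    total_weight = sum(w for _, _, w in jobs)
    return total_weight - max(v for v in dp if v != NEG)

# Three jobs (p, d, w); the optimum leaves only the job of weight 2 tardy.
print(min_weighted_tardy([(2, 3, 5), (3, 5, 4), (4, 6, 2)]))   # prints 2
```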