Methodological and Tool Projects
Published in Peter Hoschka, Computers as Assistants, 2021
That looks pretty easy, but a “real” representation of the graphic domain would have to list a tremendous number of different operators, and draft graphics usually consist of more than just two objects; both facts cause the search space for a planner to grow to a considerable size. It has in fact been shown (Bylander, 1991) that even severely restricted instances of classical planning problems are PSPACE-complete in terms of the problem size, which involves the number of objects and operators in the domain. That is, planning, even when it is decidable, falls into one of the nastiest complexity categories.
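The combinatorial blow-up can be made concrete with a back-of-the-envelope calculation. The following sketch (all parameter values are illustrative, not from the text) bounds the number of plan prefixes a naive planner might enumerate, assuming each operator is grounded with one object per step:

```python
# Hypothetical illustration of planning search-space growth.
# Assumption: each step of a plan chooses one of num_operators operators
# applied to one of num_objects objects, so the branching factor is
# num_operators * num_objects.

def search_space_size(num_operators: int, num_objects: int, depth: int) -> int:
    """Upper bound on the number of plan prefixes of a given depth."""
    branching = num_operators * num_objects
    return branching ** depth

# Even a modest domain explodes quickly:
print(search_space_size(num_operators=10, num_objects=5, depth=4))
# -> 6250000 (50^4)
print(search_space_size(num_operators=20, num_objects=10, depth=6))
# -> 64000000000000 (200^6)
```

The exponent is the plan depth, so adding operators or objects multiplies the base of an exponential, which is why even restricted planning lands in PSPACE.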
Worst-case analysis for a leader–follower partially observable stochastic game
Published in IISE Transactions, 2022
Yanling Chang, Chelsea C. White
The entire solution procedure was performed on an Intel 3.10 GHz processor with 6.00 GB of memory. The total computation time for T = 30 was 156.31 seconds, of which the PURGE, DOMINANCE, and APPROXIMATION operations accounted for 4.78%, 11.86%, and 83.45%, respectively. Since at least one witness point was associated with each γ-vector in the concave approximation MIP (2), the MIPs in the APPROXIMATION step are significantly larger than those in the PURGE and DOMINANCE steps (they could be 10–20 times larger). Note that this worst-case problem is not polynomially solvable (it is at least PSPACE-hard), and the scalability of the proposed algorithm hinges on efficiently solving the MIPs (NP-hard) in the three steps, especially in the APPROXIMATION step (Section 8 discusses how to construct good initial feasible solutions to solve these MIPs faster). The developed solution procedure has computational advantages over the original problem and can leverage state-of-the-art MIP solvers for improved computational performance.
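Converting the reported percentage shares back into wall-clock time per step is simple arithmetic; a minimal sketch using the figures quoted above:

```python
# Per-step wall-clock time from the reported shares of the 156.31 s total.
# The shares sum to slightly over 100% due to rounding in the source.

total = 156.31  # seconds, T = 30
shares = {"PURGE": 0.0478, "DOMINANCE": 0.1186, "APPROXIMATION": 0.8345}

times = {step: total * share for step, share in shares.items()}
for step, t in times.items():
    print(f"{step}: {t:.2f} s")
# APPROXIMATION alone takes roughly 130 s of the 156 s total.
```

The dominance of the APPROXIMATION step in the timing profile is consistent with its MIPs being 10–20 times larger than those of the other two steps.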
Multi-model Markov decision processes
Published in IISE Transactions, 2021
Lauren N. Steimle, David L. Kaufman, Brian T. Denton
We show that, in general, optimal policies that maximize the weighted value function may actually be history-dependent, making the problem of maximizing the weighted value function more challenging to solve in certain cases. With the aim of designing policies that are easily translated to practice, we distinguish between two important variants: (i) a case where the DM is limited to policies determined by the current state of the system, which we refer to as the Weighted Value Problem (WVP), and (ii) a more general case in which the DM attempts to find an optimal history-dependent policy based on all previously observed information, which we refer to as the adaptive counterpart to the WVP. We show that the adaptive counterpart is a special case of a Partially Observable MDP (POMDP) and is PSPACE-hard, and we show that the WVP is NP-hard.
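The weighted value objective itself is easy to evaluate for a fixed stationary policy, even though optimizing it is hard. The sketch below (model sizes, numbers, and function names are illustrative assumptions, not from the paper) evaluates one deterministic policy exactly in each candidate MDP model and takes the weighted sum:

```python
import numpy as np

def policy_value(P, R, policy, gamma=0.95):
    """Exact value of a deterministic stationary policy in one MDP model.

    P[a, s, s2] = transition probability, R[a, s] = reward,
    policy[s] = chosen action. Solves (I - gamma * P_pi) v = r_pi.
    """
    n = R.shape[1]
    P_pi = np.array([P[policy[s], s] for s in range(n)])
    r_pi = np.array([R[policy[s], s] for s in range(n)])
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

def weighted_value(models, weights, policy, init_dist, gamma=0.95):
    """Weighted sum over models of the policy's expected discounted value."""
    return sum(w * init_dist @ policy_value(P, R, policy, gamma)
               for w, (P, R) in zip(weights, models))

# Two hand-made 2-state, 2-action models that disagree on the dynamics:
P1 = np.array([[[0.9, 0.1], [0.2, 0.8]], [[0.5, 0.5], [0.6, 0.4]]])
P2 = np.array([[[0.6, 0.4], [0.3, 0.7]], [[0.8, 0.2], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.5, 0.8]])
policy = [0, 1]  # action 0 in state 0, action 1 in state 1

print(weighted_value([(P1, R), (P2, R)], [0.5, 0.5], policy,
                     init_dist=np.array([1.0, 0.0])))
```

Note that this only scores a given state-based (WVP-feasible) policy; finding the best such policy is NP-hard, and allowing history-dependent policies leads to the PSPACE-hard adaptive counterpart.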
Lattice logic as a fragment of (2-sorted) residuated modal logic
Published in Journal of Applied Non-Classical Logics, 2019
Recall that, by Ladner's theorem (Ladner, 1977; see also Blackburn et al., 2001), every normal modal logic between K and S4 has a pspace-hard satisfiability problem. Our frame condition (3), rephrased by using the complement R of the Galois relation of the frame, is a seriality-like constraint, hence ML is a D-like system, and we expect that a pspace bound can be established for the complexity of the satisfiability problem of ML, hence of PLL. However, since this article focuses on PLL alone, a pspace complexity bound for PLL is obtained by a direct application of Ladner's theorem, via the single-sorted reduction of Theorem 4.5.