Parallel Algorithms
Published in Vivek Kale, Parallel Computing Architectures and APIs, 2019
In computational complexity theory, problems are divided into several complexity classes according to their running times on a single-processor system or a deterministic Turing machine. Tractable problems: Problems whose running times are upper bounded by polynomials in n are said to belong to the class P and are generally considered tractable. Even if the polynomial is of a high degree, such that a large problem requires years of computation on the fastest available supercomputer, there is still hope that, with improvements in the algorithm or in computer performance, a reasonable running time may be obtained. Intractable problems: Problems for which the best-known deterministic algorithm runs in exponential time are intractable. A problem of this kind for which, given a solution, the correctness of the solution can be verified in polynomial time is said to belong to the non-deterministic polynomial (NP) class.
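To make the tractable/intractable distinction concrete, the sketch below (subset sum is used here purely as a standard NP-complete example; it is not drawn from the chapter) contrasts a brute-force search over all 2^n subsets with the polynomial-time check that a proposed solution is correct:

```python
from itertools import chain, combinations

def verify_subset_sum(candidate, numbers, target):
    """Polynomial-time check: is `candidate` a sub-multiset of `numbers`
    whose elements sum to `target`?  This is the NP-style verifier."""
    pool = list(numbers)
    for x in candidate:
        if x in pool:
            pool.remove(x)          # O(n) per element -> O(n^2) overall
        else:
            return False
    return sum(candidate) == target

def solve_subset_sum(numbers, target):
    """Brute-force search: tries all 2^n subsets (exponential time)."""
    for subset in chain.from_iterable(
            combinations(numbers, r) for r in range(len(numbers) + 1)):
        if sum(subset) == target:
            return list(subset)
    return None

numbers, target = [3, 34, 4, 12, 5, 2], 9
solution = solve_subset_sum(numbers, target)                   # exponential search
print(solution, verify_subset_sum(solution, numbers, target))  # fast check
```

The search loop grows exponentially with the number of items, while the verifier runs in polynomial time in the input size, which is exactly the verifiability property that places the problem in NP.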
Computational Complexity
Published in Craig A. Tovey, Linear Optimization and Duality, 2020
Computational complexity is the assessment of how much effort is required to solve different problems. It provides a classification tool useful in tackling problems, especially discrete deterministic problems. Use it to tell, in advance, whether a problem is easy or hard. Knowing this won't solve your problem, but it will help you decide what kind of solution method is appropriate. If the problem is easy, you can probably solve it as a linear program or network model, or with other readily available software. If the problem is hard, you usually try solving it as an IP. If IP tools don't work, you will probably have to develop a specialized large-scale method, or seek an approximate solution obtained with heuristics.
High-Performance Computing for Advanced Smart Grid Applications
Published in Stuart Borlase, Smart Grids, 2018
Yousu Chen, Zhenyu (Henry) Huang
Computational complexity used in this context describes the order of computational operations required to solve a specific modeling equation. “Big-O” notation is used as the measure of computational complexity that reflects how much computational capacity is required to solve a specific problem. Note that it does not consider computer memory requirements. Some key equations used in power grid modeling and analysis include power flow solution and contingency analysis, SE, dynamic simulation and dynamic contingency analysis, small signal stability, and dynamic SE.
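As a generic illustration of what Big-O operation counting captures (using dense Gaussian elimination as a stand-in for the linear solves inside a power flow solution is an assumption of this sketch, not code from the chapter), the snippet below counts multiply/add operations and compares them with the leading 2n³/3 term:

```python
import numpy as np

def gaussian_elimination_opcount(n, seed=0):
    """Forward elimination on a random dense n x n system, counting
    multiply/add operations to make the O(n^3) growth visible."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned matrix
    b = rng.standard_normal(n)
    ops = 0
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i, k] / a[k, k]
            a[i, k:] -= factor * a[k, k:]
            b[i] -= factor * b[k]
            ops += 2 * (n - k) + 2       # rough multiply/add count per row update
    return ops

for n in (50, 100, 200):
    # Counted operations grow roughly like the leading 2n^3/3 term.
    print(n, gaussian_elimination_opcount(n), round(2 * n**3 / 3))
```

Doubling n multiplies the operation count by roughly eight, which is what the O(n³) label summarises; as the excerpt notes, memory requirements are not captured by this measure.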
Configuring products with natural language: a simple yet effective approach based on text embeddings and multilayer perceptron
Published in International Journal of Production Research, 2022
Yue Wang, Xiang Li, Linda L. Zhang, Daniel Mo
Additionally, the proposed approach has an advantage over LSTM and CNN in terms of computational complexity. In computer science, computational complexity is used to quantify the amount of resources, time in particular, required to run an algorithm. The complexities of CNN and LSTM have been derived in terms of K, the dimension of the word embeddings; n, the filter width of the CNN; d, the dimension of the final output sequence; and L, the number of words in the text corpus (Shen et al. 2018). Because HAN and ELMo are based on bidirectional LSTMs, their complexity is higher than that of LSTM. The complexity of our text-embedding and MLP-based approach is much lower than that of the four deep learning-based approaches. Furthermore, being non-deep-learning-based, our approach requires far fewer parameters to be estimated in the classifier-training phase. Thus, the embeddings with the MLP achieve comparable performance with much lower complexity and fewer computational resources. Based on the analysis above, we conclude that the proposed text-embedding-based approach is both effective (thanks to its good performance) and efficient (thanks to its low computational complexity).
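The excerpt does not reproduce the complexity expressions or the architecture details, but a minimal sketch of a mean-pooled text-embedding plus MLP classifier of the kind described (the toy vocabulary, dimensions, random weights, and mean pooling here are illustrative assumptions, not the authors' exact design) looks as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: K-dimensional embeddings for a toy vocabulary.
K = 8
vocab = {"touch": 0, "screen": 1, "long": 2, "battery": 3, "life": 4, "camera": 5}
embeddings = rng.standard_normal((len(vocab), K))

def embed(text):
    """Mean-pool word embeddings: roughly K * L work for a text of L known words."""
    ids = [vocab[w] for w in text.lower().split() if w in vocab]
    return embeddings[ids].mean(axis=0) if ids else np.zeros(K)

# A tiny one-hidden-layer MLP forward pass (weights random, for illustration only).
W1, b1 = rng.standard_normal((K, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 3)), np.zeros(3)   # 3 hypothetical configuration classes

def mlp_predict(x):
    h = np.maximum(0, x @ W1 + b1)                   # ReLU hidden layer
    logits = h @ W2 + b2
    return int(np.argmax(logits))

print(mlp_predict(embed("long battery life")))       # predicted class index
```

Mean pooling touches each of the L word vectors exactly once, which is where the low complexity relative to sequential or convolutional encoders comes from.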
Fitness-distance balance based artificial ecosystem optimisation to solve transient stability constrained optimal power flow problem
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
Yusuf Sonmez, Serhat Duman, Hamdi T. Kahraman, Mehmet Kati, Sefa Aras, Ugur Guvenc
One of the performance metrics used for MHS algorithms is algorithm (or computation) complexity. Computational complexity is an important indicator of the usability and functionality of algorithms. Algorithm complexity can generally be expressed as the amount of time an algorithm takes to perform its calculations in a programme. This section presents the investigation of the effect of the FDB method on the computational complexity of the AEO algorithm. The IEEE CEC 2014 Conference (J. J. Liang et al., 2013) contributed significantly to the standardisation of computational complexity measurement. Using the conditions, programmes, and formulas defined for this conference, the computational complexities of different MHS algorithms were compared with one another. Accordingly, using the F18 benchmark test function and the CEC14 program defined in the IEEE report (J. J. Liang et al., 2013), the values of three parameters were calculated for algorithm complexity, with T0 representing the time used to evaluate the F18 problem, T1 the calculation time of the algorithm for the test program, and T2 the time taken by the algorithm to evaluate the F18 problem 200,000 times. The algorithm complexities calculated for the AEO and the FDBAEO variations based on these definitions are given in Table 7.
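The following is a rough timing harness in the spirit of that protocol, reporting the commonly used quantity (T2 − T1)/T0. Here T0 is taken as the time of a fixed arithmetic test loop, T1 as the time for 200,000 bare evaluations of the objective, and T2 as the time for the optimiser to perform 200,000 evaluations; these roles, the placeholder objective, and the random-search stand-in for AEO/FDBAEO are assumptions of this sketch rather than the exact protocol of Liang et al. (2013):

```python
import time
import numpy as np

def objective(x):
    """Placeholder objective standing in for the CEC2014 F18 benchmark."""
    return float(np.sum(x * x) + np.prod(np.cos(x)))

def placeholder_optimiser(evals, dim):
    """Random search used here only as a stand-in for AEO / FDBAEO."""
    best = np.inf
    for _ in range(evals):
        best = min(best, objective(np.random.uniform(-100.0, 100.0, dim)))
    return best

EVALS, DIM = 200_000, 10

# T0: baseline test-program time (a fixed loop of elementary operations).
t = time.perf_counter()
s = 0.0
for i in range(1_000_000):
    s += (i * 0.55 + 2.0) / 3.0
T0 = time.perf_counter() - t

# T1: time for 200,000 bare evaluations of the objective.
x = np.random.uniform(-100.0, 100.0, DIM)
t = time.perf_counter()
for _ in range(EVALS):
    objective(x)
T1 = time.perf_counter() - t

# T2: time for the optimiser to perform 200,000 evaluations.
t = time.perf_counter()
placeholder_optimiser(EVALS, DIM)
T2 = time.perf_counter() - t

print("algorithm complexity (T2 - T1) / T0 =", (T2 - T1) / T0)
```

The ratio isolates the overhead the optimiser adds on top of the function evaluations themselves, normalised by machine speed, which is what makes complexities reported by different groups comparable.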
Beyond geometric complexity: a critical review of complexity theory and how it relates to architecture engineering and construction
Published in Architectural Science Review, 2019
Evangelos Pantazis, David Jason Gerber
Based on the definition of a Turing machine, Kolmogorov defined algorithmic complexity as the length of the description that must be provided to a computing system in order for it to complete a task (Kolmogorov 1965). This highly compressed description of the regularities in the observed system, also called a 'schema', can be used to define the complexity of a system or an artificial intelligence (AI) computing machine (Minsky 1961). Algorithmic complexity is also called descriptive or Kolmogorov complexity in the literature, depending on the scientific community, and is defined as finding the universally shortest description of an object or process (Chaitin 1990). If we consider a computer with specific hardware and software specifications, then algorithmic complexity is defined as the length of the shortest program that describes all the necessary steps for performing a process, e.g. printing a string. Algorithmic complexity in many cases fails to match our intuitive feeling about what is complex and what is not. For instance, if we compare Aristotle's works to an equally long passage written by the proverbial monkeys, the latter is likely to be more random and therefore to have much greater complexity. Bennett introduced 'logical depth' as a way to extend algorithmic complexity; it averages the number of computational steps over the relevant programs using a natural weighting procedure that heavily favours shorter programs (Bennett 1995; Lloyd and Pagels 1988). Suppose you want to perform a task of trivial algorithmic complexity, such as printing a message consisting only of 0s; then the depth is very small. If the example of the random passage from the monkeys is considered instead, the algorithmic complexity is very high but the depth is still low, since the shortest program is simply 'print' followed by the input message as a string of characters. In the field of mathematical problem solving, computational complexity is defined as the difficulty of executing a task in terms of computational resources (Cover and Thomas 2012) (Figure 8).
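A toy way to see the 'string of 0s' versus 'monkey text' contrast is to use compressed length as a crude, computable stand-in for description length (an assumption of this sketch; true Kolmogorov complexity is uncomputable):

```python
import os
import zlib

n = 100_000
regular = b"0" * n            # a string with an obvious short description ("print n zeros")
random_bytes = os.urandom(n)  # effectively incompressible "monkey text"

# Compressed length is a rough upper bound on description length; the regular
# string shrinks to almost nothing, the random one barely shrinks at all.
for name, s in [("all zeros", regular), ("random bytes", random_bytes)]:
    print(f"{name}: length {len(s)}, compressed description ~{len(zlib.compress(s))} bytes")
```

The all-zeros string has a description far shorter than itself, while the random string essentially has no description shorter than the string itself, matching the intuition in the passage that randomness maximises algorithmic complexity without making the object interesting.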