Computational Complexity
Published in Craig A. Tovey, Linear Optimization and Duality, 2020
Computational complexity is the assessment of how much effort is required to solve different problems. It provides a classification tool useful in tackling problems, especially discrete deterministic problems. Use it to tell, in advance, whether a problem is easy or hard. Knowing this won't solve your problem, but it will help you decide what kind of solution method is appropriate. If the problem is easy, you can probably solve it as a linear program (LP) or network model, or with other readily available software. If the problem is hard, you usually try solving it as an integer program (IP). If IP tools don't work, you will probably have to develop a specialized large-scale method, or seek an approximate solution obtained with heuristics.
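To make the easy/hard distinction concrete, here is a minimal sketch (not from Tovey's text) comparing how polynomial and exponential operation counts grow with problem size; the 10^12-operation practicality budget is an arbitrary assumption, roughly a few minutes on modern hardware.

```python
# Illustrative sketch: why "easy" (polynomial) problems stay tractable
# while "hard" (exponential) ones quickly exceed any realistic budget.
BUDGET = 1e12  # assumed operation budget (arbitrary)

for n in (10, 20, 40, 80, 160):
    poly = n ** 3   # e.g., a dense linear-algebra step inside an LP solver
    expo = 2 ** n   # e.g., brute-force enumeration for a hard IP
    print(f"n={n:4d}  n^3={poly:.1e} ({'ok' if poly <= BUDGET else 'impractical'})"
          f"  2^n={expo:.1e} ({'ok' if expo <= BUDGET else 'impractical'})")
```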
C
Published in Phillip A. Laplante, Dictionary of Computer Science, Engineering, and Technology, 2017
complexity (1) a measure of how complicated a chunk (typically of code or design) is. It represents how complex the chunk is to understand (although this also involves cognitive features of the person doing the understanding) and/or how complex it is to execute (for instance, its computational complexity). The complexity evaluation can be performed by considering the computational complexity of the functional part of the system, i.e., the dominant instructions in the most iterative parts of the system. The complexity may also be a measure of the amount of memory used or the time spent executing an algorithm.
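As a minimal illustration of the "dominant instructions" idea (not part of the dictionary entry), the sketch below counts the inner-loop operation of an insertion sort; that single instruction dominates the total work and yields the algorithm's O(n^2) worst-case complexity.

```python
# Counting the dominant operation of a simple algorithm: in this
# insertion sort, the shift inside the inner loop executes O(n^2)
# times in the worst case and dominates all other instructions.
def insertion_sort(a):
    ops = 0                      # count of the dominant inner-loop operation
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]      # dominant instruction
            ops += 1
            j -= 1
        a[j + 1] = key
    return a, ops

_, ops = insertion_sort(list(range(100, 0, -1)))  # worst case: reversed input
print(ops)  # 4950 = 100*99/2, matching the n(n-1)/2 worst-case bound
```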
High-Performance Computing for Advanced Smart Grid Applications
Published in Stuart Borlase, Smart Grids, 2018
Yousu Chen, Zhenyu (Henry) Huang
Computational complexity, as used in this context, describes the order of the number of computational operations required to solve a specific modeling equation. “Big-O” notation is used as the measure of computational complexity, reflecting how much computational capacity is required to solve a specific problem; note that it does not account for computer memory requirements. Key computations in power grid modeling and analysis include the power flow solution and contingency analysis, state estimation (SE), dynamic simulation and dynamic contingency analysis, small-signal stability, and dynamic SE.
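As a hedged illustration (not from the chapter), the dominant cost of a Newton-style power flow iteration is the linear solve against the Jacobian; for a dense n-by-n system this is O(n^3), which the sketch below checks empirically. The system sizes are arbitrary stand-ins, not real grid dimensions.

```python
# Empirical check of the O(n^3) dense linear-solve cost that dominates
# a Newton power-flow iteration. Memory use is deliberately ignored,
# mirroring the Big-O convention described above.
import time
import numpy as np

rng = np.random.default_rng(0)
for n in (500, 1000, 2000):          # stand-in system sizes (assumptions)
    J = rng.standard_normal((n, n))  # dense Jacobian-like matrix
    b = rng.standard_normal(n)       # mismatch vector
    t0 = time.perf_counter()
    np.linalg.solve(J, b)            # LU factorization + solve: O(n^3)
    print(n, time.perf_counter() - t0)
# Doubling n should multiply the runtime by roughly 8 (= 2^3).
```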
Configuring products with natural language: a simple yet effective approach based on text embeddings and multilayer perceptron
Published in International Journal of Production Research, 2022
Yue Wang, Xiang Li, Linda L. Zhang, Daniel Mo
Additionally, the proposed approach has an advantage over LSTM and CNN in terms of computational complexity. In computer science, computational complexity quantifies the amount of resources, time in particular, required to run an algorithm. The complexities of CNN and LSTM have been shown to be O(n·K·d·L) and O(d^2·L + K·d·L), respectively, where K is the dimension of word embeddings, n is the filter width in CNN, d is the dimension of the final outputted sequence, and L is the number of words in the text corpus (Shen et al. 2018). Because HAN and ELMo are based on bidirectional LSTM, their complexity is higher than that of LSTM. The complexity of our text-embedding and MLP-based approach is only O(K·L), which is much lower than that of the four deep learning-based approaches. Furthermore, being non-deep-learning-based, our approach requires a much smaller number of parameters to be estimated in the classifier-training phase. Thus, the embeddings with the MLP achieve comparable performance with much less complexity and fewer computational resources. Based on the analysis above, we conclude that the proposed text-embedding-based approach is both effective (thanks to its good performance) and efficient (thanks to its low computational complexity).
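To make the comparison tangible, the sketch below plugs illustrative parameter values (assumptions, not taken from the paper) into the three asymptotic costs.

```python
# Leading-order operation counts: O(n·K·d·L) for CNN,
# O(d^2·L + K·d·L) for LSTM, and O(K·L) for the text-embedding + MLP
# approach. The parameter values are illustrative assumptions only.
K, n, d, L = 300, 5, 256, 100   # embedding dim, filter width, output dim, words

cnn  = n * K * d * L            # ~3.8e7 operations
lstm = d * d * L + K * d * L    # ~1.4e7 operations
emb  = K * L                    # ~3.0e4 operations

for name, cost in (("CNN", cnn), ("LSTM", lstm), ("embedding+MLP", emb)):
    print(f"{name:>13}: {cost:.1e} operations")
```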
Condition and criticality-based predictive maintenance prioritisation for networks of bridges
Published in Structure and Infrastructure Engineering, 2022
Georgios M. Hadjidemetriou, Manuel Herrera, Ajith K. Parlikad
Optimising a group maintenance policy has a time complexity of O(2^n), using Landau’s ‘big-O’ notation, where n is the number of maintenance activities. The time complexity can be decreased to O(n^2) if every group of elements consists of consecutive activities, as proved by Wildeman et al. (1997). The time complexity of a given computational process can be defined as the amount of time taken by an algorithm to run as a function of the length of the input, and thus depends on the number of operations needed to solve or approximate the process (Rosen, 1999). ‘Big-O’ notation has been extensively used to approximate the number of operations an algorithm uses as its input grows; hence, this notation indicates whether a particular algorithm is practical for solving a problem. Therefore, for optimising the maintenance schedule of stochastically deteriorating bridge elements, a genetic algorithm is used to provide robust results with limited computational capability, as illustrated by the sketch below. The genetic algorithm effectively avoids local optima of the overall maintenance cost function.
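A small sketch (illustrative only, not from the paper) of why the O(2^n)-to-O(n^2) reduction matters: exhaustive evaluation of all groupings becomes infeasible well before realistic numbers of maintenance activities, which also motivates resorting to a heuristic such as a genetic algorithm.

```python
# Operation counts for exhaustively evaluating all 2^n groupings of n
# maintenance activities, versus the O(n^2) consecutive-activities
# restriction of Wildeman et al. (1997).
for n in (10, 20, 30, 50, 100):
    print(f"n={n:3d}  2^n={2**n:.1e}  n^2={n**2:.1e}")
# Beyond roughly n = 50, the exhaustive search is hopeless even at
# billions of operations per second, while n^2 remains trivial.
```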
Fitness-distance balance based artificial ecosystem optimisation to solve transient stability constrained optimal power flow problem
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
Yusuf Sonmez, Serhat Duman, Hamdi T. Kahraman, Mehmet Kati, Sefa Aras, Ugur Guvenc
One of the performance metrics used for MHS algorithms is algorithm (or computational) complexity. Computational complexity is an important indicator of the usability and functionality of algorithms. Algorithm complexity can generally be expressed as the amount of time an algorithm takes to perform its calculations in a programme. This section presents the investigation of the effect of the FDB method on the computational complexity of the AEO algorithm. The IEEE CEC 2014 Conference (J. J. Liang et al., 2013) contributed significantly to the standardisation of computational complexity measurement. Using the conditions, programmes, and formulas defined for this conference, the computational complexities of different MHS algorithms can be compared with each other. Accordingly, using the F18 benchmark test function and the standard test programme defined in IEEE CEC 2014 (J. J. Liang et al., 2013), the values of three parameters were calculated for algorithm complexity: T0 represents the time taken to run the standard arithmetic test programme, T1 the time taken to evaluate the F18 problem 200,000 times, and T2 the time taken by the algorithm to run with 200,000 evaluations of the F18 problem; the complexity is then reported as (T2 − T1)/T0. The algorithm complexities calculated for the AEO and the FDBAEO variations according to these definitions are given in Table 7.
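For readers unfamiliar with the CEC 2014 protocol, the following sketch outlines how T0, T1, and T2 are typically measured. The placeholder objective and the random-search stand-in for the optimiser are assumptions; the real F18 benchmark and the AEO/FDBAEO implementations are not reproduced here.

```python
# Sketch of the CEC 2014 algorithm-complexity protocol: T0 times a
# standard arithmetic loop, T1 times 200,000 evaluations of the
# benchmark function, and T2 times a full optimiser run with a budget
# of 200,000 evaluations. Complexity is reported as (T2 - T1) / T0.
import math
import random
import time

D = 30  # problem dimension (an assumption; CEC14 defines several)

def f18(x):
    # Placeholder objective standing in for the real CEC14 F18 benchmark.
    return sum(xi * xi for xi in x)

def standard_test_loop():
    # Standardised arithmetic loop used to obtain T0 (per the CEC14 spec).
    for i in range(1, 1_000_001):
        x = 0.55 + i
        x = x + x; x = x / 2; x = x * x
        x = math.sqrt(x); x = math.log(x); x = math.exp(x); x = x / (x + 2)

t = time.perf_counter()
standard_test_loop()
T0 = time.perf_counter() - t

point = [0.5] * D
t = time.perf_counter()
for _ in range(200_000):
    f18(point)
T1 = time.perf_counter() - t

# T2 should time the AEO/FDBAEO run itself; a dummy random search
# stands in for the optimiser here.
t = time.perf_counter()
best = float("inf")
for _ in range(200_000):
    best = min(best, f18([random.uniform(-100, 100) for _ in range(D)]))
T2 = time.perf_counter() - t

print(f"T0={T0:.3f}s  T1={T1:.3f}s  T2={T2:.3f}s  complexity={(T2 - T1) / T0:.2f}")
```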