Why Is It So Difficult?
Published in Nicolas Sabouret, Lizete De Assis, Understanding Artificial Intelligence, 2020
The complexity of an algorithm is simply the number of operations necessary for the algorithm to solve the problem in question. For example, consider the "preparing an omelet" algorithm below:

For each guest:
    If the carton of eggs is empty:
        Open a new carton of eggs.
    Take an egg.
    Break open the egg on the bowl.
Scramble.
Pour in the frying pan.
Repeat 60 times:
    Let cook five seconds.
    Scramble.
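To make the counting concrete, here is a minimal Python sketch (not taken from the book) that tallies the operations performed by the omelet algorithm; the carton size of 12 and the exact set of counted steps are illustrative assumptions.

    # Illustrative sketch: count the operations of the "preparing an omelet"
    # algorithm as a function of the number of guests.
    def omelet_operation_count(guests, carton_size=12):
        ops = 0
        eggs_left = 0
        for _ in range(guests):
            ops += 1                 # check whether the carton is empty
            if eggs_left == 0:
                ops += 1             # open a new carton of eggs
                eggs_left = carton_size
            ops += 2                 # take an egg, break it open on the bowl
            eggs_left -= 1
        ops += 2                     # scramble, pour into the frying pan
        for _ in range(60):
            ops += 2                 # let cook five seconds, scramble
        return ops

    for g in (1, 4, 8):
        print(g, "guests ->", omelet_operation_count(g), "operations")

Because the 60-iteration cooking loop contributes a fixed number of operations, the total grows linearly with the number of guests.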
Computational Complexity
Published in Craig A. Tovey, Linear Optimization and Duality, 2020
Computational complexity is the assessment of how much effort is required to solve different problems. It provides a classification tool useful in tackling problems, especially discrete deterministic problems. Use it to tell, in advance, whether a problem is easy or hard. Knowing this won't solve your problem, but it will help you decide what kind of solution method is appropriate. If the problem is easy, you can probably solve it as a linear program or network model, or with other readily available software. If the problem is hard, you usually try solving it as an IP. If IP tools don't work, you will probably have to develop a specialized large-scale method, or seek an approximate solution obtained with heuristics.
C
Published in Phillip A. Laplante, Dictionary of Computer Science, Engineering, and Technology, 2017
complexity (1) a measure of how complicated a chunk (typically of code or design) is. It represents how complex it is to understand (although this also involves cognitive features of the person doing the understanding) and/or how complex it is to execute the code (for instance, the computational complexity). The complexity evaluation can be performed by considering the computational complexity of the functional part of the system, i.e., the dominant instructions in the most iterative parts of the system. The complexity may also be a measure of the amount of memory used or the time spent in execution of an algorithm.
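As a hedged illustration of this last point (not part of the dictionary entry), the Python routine below has its time complexity set by the dominant instruction in its most iterative part, and its memory use set by the list it builds:

    # Hypothetical example: the dominant instruction sits in the innermost loop.
    def pairwise_distances(points):
        n = len(points)
        dists = []                       # memory grows with n*(n-1)/2 stored values
        for i in range(n):
            for j in range(i + 1, n):
                # dominant instruction: executed about n^2/2 times, so the
                # routine runs in O(n^2) time even though its setup is O(1)
                dists.append(abs(points[i] - points[j]))
        return dists

    print(len(pairwise_distances([1.0, 4.0, 9.0, 16.0])))   # 6 pairs for n = 4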
Configuring products with natural language: a simple yet effective approach based on text embeddings and multilayer perceptron
Published in International Journal of Production Research, 2022
Yue Wang, Xiang Li, Linda L. Zhang, Daniel Mo
Additionally, the proposed approach has an advantage over LSTM and CNN in terms of computational complexity. In computer science, computational complexity is used to quantify the amount of resources, time in particular, required to run an algorithm. The complexities of CNN and LSTM have been proved to be O(n·L·K·d) and O(L·d^2 + L·K·d), respectively, where K is the dimension of the word embeddings; n is the filter width in CNN; d is the dimension of the final output sequence; and L is the number of words in the text corpus (Shen et al. 2018). Because HAN and ELMo are based on bidirectional LSTM, their complexity is higher than that of LSTM. The complexity of our text-embedding and MLP-based approach is only O(L·K), which is much lower than that of the four deep learning-based approaches. Furthermore, being non-deep-learning-based, our approach requires a much smaller number of parameters to be estimated in the classifier-training phase. Thus, the embeddings with the MLP achieve a comparable performance but with much lower complexity and fewer computational resources. Based on the analysis above, we conclude that the proposed text-embedding-based approach is both effective (thanks to its good performance) and efficient (thanks to its low computational complexity).
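Purely as an illustration, and assuming the complexities reconstructed above from Shen et al. (2018), the following back-of-the-envelope Python comparison plugs in hypothetical values of K, n, d and L (they are not values from the paper):

    # Rough comparison of the asymptotic operation counts, ignoring constants.
    # K = embedding size, n = filter width, d = output dimension, L = word count.
    K, n, d, L = 300, 5, 300, 10_000

    cnn  = n * L * K * d          # O(n*L*K*d)
    lstm = L * d**2 + L * K * d   # O(L*d^2 + L*K*d)
    mlp  = L * K                  # O(L*K) for the embedding + MLP approach

    print(f"CNN           : {cnn:.2e}")
    print(f"LSTM          : {lstm:.2e}")
    print(f"Embedding+MLP : {mlp:.2e}")

Under these assumed values the embedding-based count is several orders of magnitude smaller, which is the point the comparison is making.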
Condition and criticality-based predictive maintenance prioritisation for networks of bridges
Published in Structure and Infrastructure Engineering, 2022
Georgios M. Hadjidemetriou, Manuel Herrera, Ajith K. Parlikad
Optimising a group maintenance policy has a time complexity of O(2^n), using Landau's 'big-O' notation, where n is the number of maintenance activities. The time complexity can be decreased to O(n^2) if every group of elements consists of consecutive activities, as proved by Wildeman et al. (1997). The time complexity of a given computational process can be defined as the amount of time taken by an algorithm to run as a function of the length of the input, and thus it depends on the number of operations needed to solve or approach such a process (Rosen, 1999). 'Big-O' notation has been used extensively to approximate the number of operations an algorithm uses as its input grows. Hence, this notation provides an indication of whether a particular algorithm is practical for solving a problem. Therefore, for optimising the maintenance schedule of stochastically deteriorating bridge elements, a genetic algorithm is used to provide robust results with limited computational capability. The genetic algorithm effectively avoids local optima of the overall maintenance cost function.
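One hedged way to see the gap between the two complexities (an illustration, not the derivation of Wildeman et al., 1997) is to count candidate groups in Python: an arbitrary group is any non-empty subset of the n activities, roughly 2^n of them, whereas a group of consecutive activities is an interval, of which there are only n(n+1)/2, i.e. O(n^2).

    from itertools import combinations

    def arbitrary_groups(n):
        # every non-empty subset of the n activities: 2^n - 1 candidates
        return sum(1 for k in range(1, n + 1) for _ in combinations(range(n), k))

    def consecutive_groups(n):
        # every interval [i, j] of consecutive activities: n*(n+1)/2 candidates
        return sum(1 for i in range(n) for j in range(i, n))

    for n in (5, 10, 15):
        print(n, arbitrary_groups(n), consecutive_groups(n))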
Optimal scheduling of vehicle-to-Grid power exchange using particle swarm optimization technique
Published in International Journal of Computers and Applications, 2022
Time complexity is defined as the execution time of an algorithm as a function of the input size. Since the size of the input data directly affects the number of steps or instructions an algorithm performs, it also affects the execution time. Time complexity therefore describes how the execution time varies as the size of the input data increases, and it lets us compare different algorithms developed for the same objective: because such algorithms are not necessarily run on the same input data or on the same workstation, they cannot be compared by execution time alone. In computer science, time complexity is generally represented by Big O notation. It is written in the form O(n), where O denotes the order of growth and n stands for the size of the input data. There are different types of time complexity, as shown in Figure 18.
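For instance, the following sketch (an illustrative assumption, not taken from the paper) compares two algorithms for the same objective, a membership test in a sorted list, by counting their steps rather than timing them:

    # Two algorithms for the same objective with different growth in step count.
    def linear_search(sorted_values, target):
        steps = 0
        for v in sorted_values:          # up to n comparisons: O(n)
            steps += 1
            if v == target:
                break
        return steps

    def binary_search(sorted_values, target):
        steps, lo, hi = 0, 0, len(sorted_values) - 1
        while lo <= hi:                  # about log2(n) iterations: O(log n)
            steps += 1
            mid = (lo + hi) // 2
            if sorted_values[mid] == target:
                break
            elif sorted_values[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    data = list(range(1_000_000))
    print(linear_search(data, 999_999), binary_search(data, 999_999))

Counting steps in this way characterises each algorithm independently of the input data set and the workstation used.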