Methods to Predict the Performance Analysis of Various Machine Learning Algorithms
Published in K Hemachandran, Shubham Tayal, Preetha Mary George, Parveen Singla, Utku Kose, Bayesian Reasoning and Gaussian Processes for Machine Learning Applications, 2022
M. Saritha, M. Lavanya, M. Narendra Reddy
Algorithm analysis is a crucial component of the theory of computation, since it provides a theoretical estimate of the time and resources an algorithm will need to solve a given task (Design and Analysis of Algorithm – Tutorialspoint, n.d.). Many programs are designed to operate on inputs of any length. The assessment of the time and material resources necessary to execute an algorithm is known as algorithm analysis. Typically, an algorithm’s efficiency or running time is expressed as a function relating the input length to the number of steps it performs (time complexity) or the memory it occupies (space complexity) (Analysis of Algorithms | Set 1 (Asymptotic Analysis) – GeeksforGeeks, n.d.). In this section, we will learn why it is important to analyse algorithms and how to pick the best algorithm for a given problem, because a single computational problem can be handled by a variety of algorithms. By analysing an algorithm for a particular problem, we begin to develop pattern recognition, allowing the same method to handle similar sorts of problems (Analysis of Algorithms, n.d.).
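As a minimal illustration of this idea (an assumed example, not drawn from the cited texts), the sketch below counts the basic operations performed by two algorithms for the same task, so the step count can be read as a function of the input length n rather than as machine-dependent seconds.

```python
# Assumed illustration: counting basic operations of two algorithms for the
# same task -- detecting a duplicate in a list -- as a function of input length n.

def has_duplicate_quadratic(items):
    """Compare every pair: roughly n*(n-1)/2 comparisons, O(n^2) time, O(1) extra space."""
    steps = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            steps += 1
            if items[i] == items[j]:
                return True, steps
    return False, steps

def has_duplicate_linear(items):
    """Remember items already seen: about n lookups, O(n) time, O(n) extra space."""
    seen = set()
    steps = 0
    for item in items:
        steps += 1
        if item in seen:
            return True, steps
        seen.add(item)
    return False, steps

if __name__ == "__main__":
    for n in (10, 100, 1000):
        data = list(range(n))  # worst case: no duplicates present
        _, quad_steps = has_duplicate_quadratic(data)
        _, lin_steps = has_duplicate_linear(data)
        print(f"n={n:5d}  quadratic steps={quad_steps:8d}  linear steps={lin_steps:5d}")
```

Both functions solve the same problem, but the step count of the first grows quadratically with n while the second grows linearly, which is exactly the kind of comparison algorithm analysis is meant to support.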
Prelude to Artificial Intelligence: Decision-Making Techniques
Published in Rodgers Waymond, Artificial Intelligence in a Throughput Model, 2020
The domain of Artificial Intelligence has advanced many techniques to automate cognitive analysis and decision-making, with particular attention paid to situations of high uncertainty. This chapter examines basic decision-making concepts and algorithmic pathways related to topics such as logic, constraint modeling, and probabilistic modeling, as well as new research that applies these tools to predictive modeling and decision-making. An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. Machine learning, in turn, is a set of algorithms that enables software to update itself and “learn” from previous outcomes without programmer intervention. It is fed structured data in order to complete an assignment without being explicitly programmed how to do so.
High-Level Modeling and Design Techniques
Published in Soumya Pandit, Chittaranjan Mandal, Amit Patra, Nano-Scale CMOS Analog Circuits, 2018
Soumya Pandit, Chittaranjan Mandal, Amit Patra
An algorithm is a sequence of well-defined steps/instructions to be executed for completing a task or solving a problem. A major criterion for a good algorithm is its efficiency, which is measured by the amount of time and memory required to solve a particular problem. In real units, these are expressed in seconds and megabytes. However, such measurements depend on the computing power of the specific machine and on the specific data set. In order to standardize the measurement of an algorithm's efficiency, computational complexity theory was developed. This theory allows one to estimate and express the efficiency of an algorithm as a mathematical function of its input size [33, 81]. The input size of an algorithm, in general, refers to the number of items in the input data. For example, when sorting n words, the input size is n.
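To make the contrast between raw measurements and input-size-based analysis concrete, here is a small sketch (an assumed illustration, not from the chapter) that times sorting n random words in seconds for several input sizes; the absolute numbers vary from machine to machine, while the growth with n is what complexity theory describes.

```python
# Assumed illustration: measuring wall-clock time of sorting n words.
# The absolute seconds depend on the machine and data set; the trend with n
# is what computational complexity theory captures (here roughly n log n).
import random
import string
import time

def random_word(length=8):
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

for n in (1_000, 10_000, 100_000):
    words = [random_word() for _ in range(n)]
    start = time.perf_counter()
    words.sort()                      # comparison sort, O(n log n) comparisons
    elapsed = time.perf_counter() - start
    print(f"input size n={n:7d}  sort time={elapsed:.4f} s")
```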
Optimal scheduling of vehicle-to-Grid power exchange using particle swarm optimization technique
Published in International Journal of Computers and Applications, 2022
Time complexity is defined as the execution time of an algorithm as a function of input size. Because the size of the input data directly affects the number of steps or instructions an algorithm performs, it also affects the execution time. Time complexity therefore describes how the execution time varies as the size of the input data increases, and it helps us compare different algorithms developed for the same objective. Since these algorithms are not run on the same input data or on the same workstation, they cannot be compared by execution time alone. In computer science, time complexity is generally represented by Big O notation. It is written in the form O(n), where O denotes the order of growth and n stands for the input size. There are different types of time complexities, as illustrated in Figure 18.
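The following sketch (an assumed example, not taken from the cited paper) compares two algorithms for the same objective, finding a value in a sorted list, by counting comparisons; the counts expose the O(n) versus O(log n) growth independently of the workstation on which the code runs.

```python
# Assumed illustration: two algorithms for the same objective with different
# Big O time complexities.  Counting comparisons makes the O(n) vs O(log n)
# growth visible regardless of the machine used.

def linear_search(sorted_items, target):
    """O(n): scans every element in the worst case."""
    comparisons = 0
    for index, value in enumerate(sorted_items):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons

def binary_search(sorted_items, target):
    """O(log n): halves the search interval at every step."""
    comparisons = 0
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == target:
            return mid, comparisons
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

if __name__ == "__main__":
    for n in (1_000, 100_000):
        data = list(range(n))
        target = n - 1                      # worst case for linear search
        _, lin = linear_search(data, target)
        _, bin_ = binary_search(data, target)
        print(f"n={n:7d}  linear comparisons={lin:7d}  binary comparisons={bin_:3d}")
```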
Dynamic maintenance model for a repairable multi-component system using deep reinforcement learning
Published in Quality Engineering, 2022
Nooshin Yousefi, Stamatis Tsianikas, David W. Coit
Dynamic programming is an algorithmic technique used to solve complex problems. The problem is solved in distinct stages using recursive functions, and the solution of each stage or sub-problem is stored and reused to find the overall optimal solution of the problem. In this paper, dynamic programming is used to find the best policy of a Markov decision process using reinforcement learning. The Bellman equation in the Q-learning algorithm decomposes the overall optimal value into the optimal policy of the current step and the optimal value of the remaining steps. The value function can be used to store and retrieve the solution of each sub-problem. Q-learning, a well-known dynamic-programming-based method for solving reinforcement learning problems, was proposed by Watkins (Watkins 1989). In the Q-learning method, the agent takes one action in a particular state, evaluates its consequences, and by trying actions in all possible states it learns which actions yield the best long-run rewards.
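The sketch below is a generic Q-learning loop on a toy five-state chain (an assumed example, not the authors' maintenance model); it shows how the Bellman update reuses the stored value of the next state, i.e. the solution of the remaining sub-problem, to improve the value of the current state-action pair.

```python
# Generic Q-learning sketch on a toy 5-state chain (assumed example, not the
# authors' maintenance model).  The Bellman update reuses the stored estimate
# of the next state's value -- the remaining sub-problem -- to improve Q(s, a).
import random

N_STATES, ACTIONS = 5, (0, 1)          # action 0 = move left, action 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # stored sub-problem solutions

def step(state, action):
    """Toy dynamics: reward 1 only when the right end of the chain is reached."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit the stored values, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = Q[state].index(max(Q[state]))
        next_state, reward = step(state, action)
        # Bellman update: current value <- reward + discounted value of remaining steps
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Learned greedy action per state:", [q.index(max(q)) for q in Q])
```

After training, the greedy action in every state points toward the rewarding end of the chain, which is the long-run-optimal policy the excerpt describes the agent discovering by trial and error.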
Research on reducing fuzzy test sample set based on heuristic genetic algorithm
Published in Systems Science & Control Engineering, 2021
Zhihua Wang, Manman Cheng, Yongjian Wang
The evaluation index of an algorithm is a measure of its pros and cons, generally considered in terms of time complexity and space complexity. Time complexity refers to the running time of the algorithm. Suppose the size of the initial fuzz-test sample set is n and the fuzz-test time function is f(n); the time complexity of the algorithm is then written as T(n) = O(f(n)). As the number of fuzz tests increases, the growth rate of the fuzz-test execution time is positively related to the growth rate of f(n). The space complexity of the algorithm refers to the memory space consumed by the algorithm. Its calculation and representation are similar to those of time complexity, and both are generally expressed as asymptotic complexity.
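One simple way to check that measured execution time grows at the rate of a hypothesised f(n) is a doubling experiment, sketched below; the function names and the stand-in fuzz routine are assumptions for illustration only, not part of the cited method.

```python
# Assumed illustration: a doubling experiment comparing measured execution time
# against a hypothesised time function f(n).  `run_fuzz_campaign` is a
# hypothetical stand-in for a real fuzzer; here it does O(n log n) work so the
# experiment has something to measure.
import math
import time

def run_fuzz_campaign(sample_set):
    """Placeholder workload standing in for running the fuzz test over the sample set."""
    sorted(sample_set)                 # stand-in work, roughly n log n comparisons

def f(n):
    """Hypothesised time function for the algorithm, e.g. f(n) = n log n."""
    return n * math.log2(n)

previous_time = None
for n in (10_000, 20_000, 40_000, 80_000):
    samples = list(range(n, 0, -1))
    start = time.perf_counter()
    run_fuzz_campaign(samples)
    elapsed = time.perf_counter() - start
    ratio = elapsed / previous_time if previous_time else float("nan")
    print(f"n={n:6d}  time={elapsed:.4f}s  measured ratio={ratio:.2f}  "
          f"predicted ratio={f(n) / f(n // 2):.2f}")
    previous_time = elapsed
```

If the measured doubling ratios track the predicted ratios of f(n), the hypothesised asymptotic complexity is consistent with the observed behaviour.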