Explore chapters and articles related to this topic
Machine learning
Published in Catherine Dawson, A–Z of Digital Research Methods, 2019
Machine learning refers to the scientific process of enabling computers to ‘learn’ (or optimise) by themselves through self-adaptation methods and techniques. It is a subfield of artificial intelligence (the theory and development of intelligence in machines), which also includes computer vision, robotics and natural language processing, for example. Machine learning is a category of algorithm (a procedure or formula for solving a problem) that allows software applications to learn from previous computations and make repeatable decisions and predictions without the need for explicit, rules-based programming. Algorithms can identify patterns in observed data, build models and make predictions, with the aim of generalising to new settings. Practical applications include fraud detection, credit scoring, email spam and malware filtering, self-driving cars, face recognition technologies and video surveillance in crime detection. Machine learning is also a method used in data analytics (Chapter 10), in particular in predictive analytics, which makes predictions based on historical data and trends in those data. More information about predictive modelling can be found in Chapter 45.
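As a hypothetical illustration of learning from data rather than from explicit rules, the following minimal sketch (assuming the widely used scikit-learn library; the transaction data and the fraud-detection framing are invented for illustration) fits a classifier on observed examples and then predicts for an unseen case:

```python
# Minimal, hypothetical sketch of machine learning: instead of coding
# explicit fraud rules, we fit a model on past examples and let it
# predict on unseen data. Assumes scikit-learn; the data are made up.
from sklearn.ensemble import RandomForestClassifier

# Observed data: each row is [amount, hour_of_day] for a transaction,
# labelled 0 (legitimate) or 1 (fraudulent) -- illustrative values only.
X_train = [[12.5, 14], [9.99, 10], [2500.0, 3],
           [1800.0, 2], [15.0, 16], [3100.0, 4]]
y_train = [0, 0, 1, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # "learn" patterns from past data

# Predict for a new, unseen transaction; no fraud rule was ever coded.
print(model.predict([[2700.0, 3]]))  # e.g. [1] -> flagged as suspicious
```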
Secure Data Storage and Retrieval Operations Using Attribute-Based Encryption for Mobile Cloud Computing
Published in T. Ananth Kumar, T. S. Arun Samuel, R. Dinesh Jackson Samuel, M. Niranjanamurthy, Privacy and Security Challenges in Cloud Computing, 2022
S. Usharani, K. Dhanalakshmi, P. Manju Bala, R. Rajmohan, S. Jayalakshmi
In the world of data management, cloud computing is an emerging technology. Cloud computing uses computational services that are distributed as a utility over a network such as the Internet. It offers highly scalable services supplied as an external commodity on a pay-as-you-use basis over the Internet. Cloud computing’s name derives from the cloud-shaped icon used to depict a web-connected remote resource. It can be compared with an electricity utility: most consumers do not think about where the power is produced or how it is transported; they simply use the electricity.
Applied Analysis
Published in Nirdosh Bhatnagar, Introduction to Wavelet Transforms, 2020
The basics of the asymptotic behavior of functions, and of different algorithmic-complexity classes, are studied in this section. An algorithm is a finite step-by-step procedure for executing a computational task on a computer. The asymptotic behavior of functions is typically used to describe the computational complexity of algorithms, as well as the amount of computer memory needed to execute them. The study of algorithmic-complexity classes helps in classifying algorithms according to their complexity.
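As an illustrative sketch (not drawn from the chapter itself), the following compares two algorithms for the same task whose running times grow at different asymptotic rates, which is exactly the distinction that complexity classes capture:

```python
# Hypothetical sketch: two algorithms for membership search in a sorted
# list, with different asymptotic complexity.
from bisect import bisect_left

def linear_search(a, x):
    """O(n): in the worst case every element is examined."""
    for item in a:
        if item == x:
            return True
    return False

def binary_search(a, x):
    """O(log n): the search interval halves at each step (a must be sorted)."""
    i = bisect_left(a, x)
    return i < len(a) and a[i] == x

data = list(range(0, 1_000_000, 2))   # sorted even numbers
print(linear_search(data, 999_998))   # True, after roughly 500,000 comparisons
print(binary_search(data, 999_998))   # True, after about 20 comparisons
```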
Assessing the Behavioral Intention of Individuals to Use an AI Doctor at the Primary, Secondary, and Tertiary Care Levels
Published in International Journal of Human–Computer Interaction, 2023
Pelin Uymaz, Ali Osman Uymaz, Yakup Akgül
AI, often known as “computational intelligence” or “the science and engineering of constructing intelligent machines”, refers to the rapidly expanding discipline of emulating intelligent (Radanliev & De Roure, 2022b), human-like behavior in computers and other technologies (Amisha et al., 2019; Ebermann et al., 2023). In 1950, Alan Turing, one of the pioneers of contemporary computing and AI, devised the “Turing test”, based on the idea that intelligent behavior in a machine is the capacity to perform cognition-related activities at a level comparable to that of a human (Mintz & Brodie, 2019). In the 1970s, Edward Shortliffe created the MYCIN AI system, which was used to diagnose blood-borne bacterial infections and to recommend antibiotics (Guo & Li, 2018). Since then, disease diagnosis and treatment have been a focus of AI technology (Davenport & Kalakota, 2019).
Computed compatibility: examining user perceptions of AI and matchmaking algorithms
Published in Behaviour & Information Technology, 2023
The definition of AI has gone through several permutations since its coinage in 1956. Broadly, AI is understood as a computational system that can think and learn so as to match and surpass human intelligence (Russell and Norvig 2002). While AI has a multitude of applications, in this study we focus on AI’s ability to make decisions without human intervention (Almada 2019). Within this context of decision-making, AIs are commonly understood as algorithms that process vast amounts of data (or ‘big data’) using complex statistical and machine learning models to arrive at optimised decisions (Newell and Marabelli 2015). Several industries – finance, healthcare, human resource management, the judiciary, and more – are delegating their decision-making tasks to these AI algorithms instead of relying on human judgment (for a review, see Araujo et al. 2020). This can be attributed to AI’s capacity to interpret large amounts of information at unparalleled speed and to make predictions unaffected by human fragilities such as fatigue (Moreira 2020). However, despite the computational superiority of AI systems, they have often been criticised for being biased (Kaushal, Altman, and Langlotz 2020).
On continuous health monitoring of bridges under serious environmental variability by an innovative multi-task unsupervised learning method
Published in Structure and Infrastructure Engineering, 2023
Alireza Entezami, Hassan Sarmadi, Bahareh Behkamal, Carlo De Michele
Machine learning is a key area of artificial intelligence that aims to develop computational models that learn from data (i.e. features) and make accurate decisions for tasks such as classification, prediction, clustering and anomaly detection (Alpaydin, 2014). The great advantage of machine learning in SHM is that it provides an automated strategy for the safety and health assessment of any kind of civil structure in long- and short-term monitoring programs. Machine learning-aided SHM is often divided into three main categories: supervised, semi-supervised, and unsupervised learning. In each category, one attempts to learn a model from training data and to make decisions on test data. However, the application of these algorithms depends strongly on the availability of labels for the training samples.
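As a minimal sketch of the train-then-decide pattern described above, the following uses a generic unsupervised anomaly detector (IsolationForest from scikit-learn) on synthetic, vibration-like features; it illustrates label-free learning only and is not the paper’s multi-task method:

```python
# Sketch of unsupervised learning for anomaly detection: fit a model on
# (unlabelled) baseline features, then decide on test data. Uses a generic
# detector (IsolationForest) and synthetic data -- NOT the paper's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Training features from the healthy (baseline) structure: no labels needed.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

# Test features: mostly healthy, plus a few shifted samples mimicking damage.
X_test = np.vstack([rng.normal(0.0, 1.0, (45, 4)),
                    rng.normal(4.0, 1.0, (5, 4))])

detector = IsolationForest(random_state=0).fit(X_train)
decisions = detector.predict(X_test)   # +1 = normal, -1 = anomaly
print(f"{np.sum(decisions == -1)} of {len(X_test)} test samples flagged as anomalous")
```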