Artificial Intelligence Software and Hardware Platforms
Published in Mazin Gilbert, Artificial Intelligence for Autonomous Networks, 2018
Rajesh Gadiyar, Tong Zhang, Ananth Sankaranarayanan
Once the expected accuracy of the model's predictions has been verified on a validation data set, the next step in the workflow pipeline is model inference. Depending on the solution under development, the models are deployed either on the server side or on edge devices, such as phones or smart cameras. Depending on the processor architecture of the edge device, techniques such as quantization may be applied to the trained model to shrink its size to fit within a limited memory footprint without losing prediction accuracy. An inference step feeds a new data sample into the model and derives a prediction score with an associated accuracy and confidence level. As more data is processed, the model is refined and becomes more accurate. For instance, as an autonomous car drives, the application pulls in real-time information through sensors, GPS, 360-degree video capture, and more, which it can then use to optimize future predictions.
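As an illustration of the quantization step mentioned above, the following sketch (in Python with NumPy; the layer shape and function names are hypothetical, not from the chapter) shows symmetric 8-bit post-training quantization, which stores weights in a quarter of the float32 memory at the cost of a small reconstruction error:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 [-127, 127] with a single scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

# Hypothetical layer weights: the int8 copy needs 1/4 of the memory.
w = np.random.randn(256, 128).astype(np.float32)
q, scale = quantize_int8(w)

print("size fp32:", w.nbytes, "bytes; size int8:", q.nbytes, "bytes")
print("max reconstruction error:", np.max(np.abs(w - dequantize(q, scale))))
```

Production frameworks add refinements such as per-channel scales and calibration data, but the memory-for-precision trade-off is the same.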
Healthcare Applications Using Biomedical AI System
Published in Saravanan Krishnan, Ramesh Kesavan, B. Surendiran, G. S. Mahalakshmi, Handbook of Artificial Intelligence in Biomedical Engineering, 2021
S. Shyni Carmel Mary, S. Sasikala
ML is a technique that must be understood in order to assist AI. Its algorithms represent the structure of the data for the clustering and classification of abnormalities, and they draw on statistical inference to achieve greater accuracy in prediction and classification. ML algorithms are categorized as supervised or unsupervised. A supervised algorithm learns from paired inputs and outputs, producing prediction and classification results by mapping one to the other; based on the training data, results for new data can then be produced. An unsupervised algorithm, in contrast, is suited to exploring the distribution of the data in order to learn more about it: there is no output, and the algorithm by itself has to discover and deliver the interesting patterns in the data.
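The supervised/unsupervised distinction can be made concrete with a small sketch. Assuming scikit-learn and toy two-dimensional data (none of this comes from the chapter itself), a classifier learns the input-to-output mapping from labelled examples, while a clustering algorithm must discover the grouping on its own:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy 2-D data: two groups, e.g. "normal" vs "abnormal" measurements.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: learn the input -> output mapping from labelled examples,
# then predict the class of new, unseen data.
clf = LogisticRegression().fit(X, y)
print("predicted label for a new sample:", clf.predict([[0.0, 4.0]]))

# Unsupervised: no labels are given; the algorithm has to discover
# the grouping (pattern) in the data by itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("discovered cluster assignments:", km.labels_[:10])
```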
Introduction
Published in William M. Mendenhall, Terry L. Sincich, Statistics for Engineering and the Sciences, 2016
William M. Mendenhall, Terry L. Sincich
On the other hand, suppose one petrochemical plant experienced a steel alloy failure rate of 81%. Is this an unusually high failure rate, given the sample rate of 47%? The theory of statistics uses probability to measure the uncertainty associated with an inference. It enables engineers and scientists to calculate the probabilities of observing specific samples or data measurements, under specific assumptions about the population. These probabilities are used to evaluate the uncertainties associated with sample inferences; for example, we can determine whether the plant’s steel alloy failure rate of 81% is unusually high by calculating the chance of observing such a high rate given the sample information.
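To make that calculation concrete, suppose (hypothetically; the excerpt gives no sample size) that the plant's 81% rate was observed over 100 specimens. Under the assumption that the true failure rate equals the sample rate of 47%, the chance of seeing at least 81 failures follows from the binomial distribution:

```python
from scipy.stats import binom

# Hypothetical numbers for illustration: the plant's 81% rate is taken
# to come from 100 steel alloy specimens, and the population failure
# rate is assumed to equal the sample rate of 47%.
n, p = 100, 0.47
failures_observed = 81

# P(X >= 81) when X ~ Binomial(n=100, p=0.47); sf(k) gives P(X > k).
p_value = binom.sf(failures_observed - 1, n, p)
print(f"P(at least {failures_observed} failures out of {n}) = {p_value:.2e}")
```

The resulting probability is vanishingly small, which is exactly the sense in which the 81% rate would be judged unusually high given the sample information.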
ABAFT: an adaptive weight-based fusion technique for travel time estimation using multi-source data with different confidence and spatial coverage
Published in Journal of Intelligent Transportation Systems, 2023
Sara Respati, Edward Chung, Zuduo Zheng, Ashish Bhaskar
Drawing on probability theory, Choi and Chung (2002) proposed an estimation model using a Bayesian pooling technique to fuse travel time data from loop detectors and GPS. Westerman et al. (1996) developed a model using a Bayesian estimator to estimate link mean speed by combining data from probe vehicles and loop detectors in the California Partners for Advanced Transit and Highways (PATH) project. Also, Liu et al. (2016) proposed an improvement of the Bayesian method, an iterative Bayesian approach, to improve the fusion of loop detector and GPS data. Additionally, Mil and Piantanakulchai (2018) developed a Bayesian-based model that incorporates two factors: the different traffic conditions, classified by a Gaussian Mixture (GM) model, and the bias in the individual sensor estimates, introduced through a non-zero-mean Gaussian distribution; the non-zero mean represents noise together with the bias of a sensor's estimate. Bayesian inference, a probability-based method, relies on probability/density functions to express data uncertainty. The method requires prior knowledge about an event to calculate the posterior probability of a hypothesis, and it updates beliefs as new evidence arrives (Box, 1992).
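A minimal sketch of the underlying idea (a generic Gaussian update, not the cited authors' exact models): treating one sensor's travel-time estimate as the prior and the other's as the likelihood yields a precision-weighted posterior, so the more confident source dominates the fused estimate. All numbers below are illustrative:

```python
def gaussian_fusion(mu1, var1, mu2, var2):
    """Fuse two noisy estimates of the same travel time.

    With a Gaussian prior (mu1, var1) and a Gaussian likelihood
    (mu2, var2), the posterior is Gaussian with precision-weighted mean.
    """
    precision = 1.0 / var1 + 1.0 / var2
    mu_post = (mu1 / var1 + mu2 / var2) / precision
    return mu_post, 1.0 / precision

# Illustrative values: loop-detector estimate 120 s (variance 400),
# GPS probe estimate 100 s (variance 100). The source with the smaller
# variance pulls the fused estimate towards itself.
mu, var = gaussian_fusion(120.0, 400.0, 100.0, 100.0)
print(f"fused travel time: {mu:.1f} s, variance: {var:.1f}")
```

Here the fused estimate (104.0 s) lies much closer to the GPS value, reflecting that source's higher confidence; the posterior variance (80.0) is smaller than either input's, which is the payoff of fusion.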
Trip chaining of bicycle and car commuters: an empirical analysis of detours to secondary activities
Published in Transportmetrica A: Transport Science, 2022
Florian Schneider, Winnie Daamen, Serge Hoogendoorn
The objective of this paper is to analyse the effects of different types of activities on the detours that are caused by including a secondary activity in a commute tour. To answer the related research questions, inference about the relationships between the type of secondary activity and commute tour extensions is necessary. Unlike machine learning techniques (where prediction is often the principal interest), statistical modelling has inference conceptually at its core (Bzdok, Altman, and Krzywinski 2018). For this reason, this section describes how the postulated conceptual model (see Figure 2) was translated into a statistical model that reveals the effect of a set of explanatory variables on the calculated commute tour extensions. First, we specify the included explanatory variables in Section 4.1. Then, we explain how these variables are coded in Section 4.2. Finally, we describe the statistical procedure chosen to estimate the effect of each explanatory variable.
Survey of computational intelligence as basis to big flood management: challenges, research directions and future work
Published in Engineering Applications of Computational Fluid Mechanics, 2018
Farnaz Fotovatikhah, Manuel Herrera, Shahaboddin Shamshirband, Kwok-wing Chau, Sina Faizollahzadeh Ardabili, Md. Jalil Piran
Bayesian inference uses probability theory to represent all forms of uncertainty. An initial formulation of subjective knowledge about a problem is defined by a model expressing the qualitative aspects of interest, and a prior probability distribution over the unknown parameters is associated with this model. A posterior probability distribution is then obtained by combining the likelihood of the parameters (given the data) with this prior knowledge. In this way, as much of the available information as possible is incorporated into the models, the objective being to reduce the uncertainty associated with the different sources of information (Merz & Blöschl, 2008a, 2008b). Flood frequency analysis is one of the most common challenges in which a Bayesian framework has shown its suitability (Viglione, Merz, Salinas, & Blöschl, 2013).
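As a minimal illustration of this prior-to-posterior updating in a flood-frequency setting (all values hypothetical, using a textbook conjugate beta-binomial model rather than any model from the cited works):

```python
from scipy.stats import beta

# Illustrative sketch: the annual probability that a flood exceeds a
# design threshold. Prior knowledge (e.g. regional information) is
# encoded as a Beta(2, 38) prior, i.e. a prior mean of 0.05.
a_prior, b_prior = 2.0, 38.0

# New evidence: 4 exceedance years observed in a 50-year record.
k, n = 4, 50

# Conjugate update: Beta prior + binomial likelihood -> Beta posterior.
a_post, b_post = a_prior + k, b_prior + (n - k)

posterior = beta(a_post, b_post)
print(f"posterior mean exceedance probability: {posterior.mean():.3f}")
print("95% credible interval:", posterior.interval(0.95))
```

The posterior blends the prior with the observed record, and its credible interval quantifies the remaining uncertainty, which is exactly the reduction-of-uncertainty role the excerpt attributes to the Bayesian framework.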