Deep Learning in Transportation Cyber-Physical Systems
Published in M.Z. Naser, Leveraging Artificial Intelligence in Engineering, Management, and Safety of Infrastructure, 2023
Zadid Khan, Sakib Mahmud Khan, Mizanur Rahman, Mhafuzul Islam, Mashrur Chowdhury
An automatic encoder, or autoencoder, is a combination of an encoder and a decoder network. The encoder network extracts the latent features from the training data, and the decoder network uses these features to reconstruct the input (Haghighat et al., 2020). The reconstruction loss is minimized during training to obtain the best representation of the encoded features of the input. As the learning process is automatic, it is called an automatic encoder. Autoencoders were first used as denoisers; more recently, they have become very popular for dimensionality reduction and feature extraction. Autoencoders are also useful when labeled data is not available, since the training process is automatic and does not require output labels (Haghighat et al., 2020). Another type of autoencoder is the variational autoencoder, in which a variational inference method is used to minimize the input reconstruction loss.
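As a rough illustration of this encoder-decoder structure, the following minimal PyTorch sketch trains an autoencoder by minimizing the reconstruction loss; the layer sizes, names, and data shapes are illustrative assumptions, not taken from the chapter:

```python
import torch
import torch.nn as nn

# Minimal autoencoder: the encoder compresses the input to a latent
# vector, and the decoder reconstructs the input from that vector.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),          # latent features
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),           # reconstruction
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()             # reconstruction loss; no labels needed

x = torch.randn(64, 784)           # stand-in batch of unlabeled data
reconstruction = model(x)
loss = loss_fn(reconstruction, x)  # the input itself serves as the target
loss.backward()
optimizer.step()
```

Note that the training target is the input itself, which is why no output labels are required.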
Machine Learning and Projection-Based Model Reduction in Hydrology and Geosciences
Published in Anuj Karpatne, Ramakrishnan Kannan, Vipin Kumar, Knowledge-Guided Machine Learning, 2023
Mojtaba Forghani, Yizhou Qian, Jonghyun Lee, Matthew Farthing, Tyler Hesser, Peter K. Kitanidis, Eric F. Darve
The variational autoencoder (VAE) is a type of AE that projects complex spatial models to a low-dimensional latent space parameterized with simple probability density functions (e.g., Gaussian) [36]. The architecture of a VAE is similar to that of an AE, except that its latent space consists of two parallel layers that provide the mean and variance of a multivariate Gaussian distribution, from which the variable fed into the decoder is sampled. This probabilistic structure imposes a strong regularization effect on the VAE architecture that leads to better generalization. VAEs have been used in recent works for low-dimensional representation of geological patterns. Specifically, Jiang et al. [36] used a VAE in conjunction with stochastic gradient-based inversion methods to perform history matching on a reservoir model. They present history matching results for training data based on both a single geologic scenario and multiple geologic scenarios, the latter leading to more diverse features in the training data. Their comparison shows better performance of the VAE compared to PCA, and they report that the performance difference between the two methods becomes more significant when multiple geologic scenarios are present. Laloy et al. [50] used a VAE to construct a parametric low-dimensional model of complex binary geological media. They found that their approach outperforms PCA for unconditional geostatistical simulation of a channelized prior model, and they report achieving compression ratios of 200-500. Probabilistic inversion of 2D steady-state and 3D transient hydraulic tomography data is also used to demonstrate their method. For the 2D case study, they observe superior performance of the VAE compared to the state-of-the-art sequential geostatistical resampling (SGR), and they report encouraging results for their 3D case study as well.
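The two parallel latent layers described above can be sketched as a generic VAE encoder head; this is a simplified illustration (dimensions and variable names are our assumptions), not the specific architectures used in [36] or [50]:

```python
import torch
import torch.nn as nn

class VAEEncoderHead(nn.Module):
    """Maps encoder features to the mean and variance of a Gaussian
    latent distribution, then samples the decoder input from it."""
    def __init__(self, feature_dim=128, latent_dim=16):
        super().__init__()
        self.mean_layer = nn.Linear(feature_dim, latent_dim)    # mu
        self.logvar_layer = nn.Linear(feature_dim, latent_dim)  # log sigma^2

    def forward(self, h):
        mu = self.mean_layer(h)
        logvar = self.logvar_layer(h)
        # Reparameterization trick: z = mu + sigma * eps keeps the
        # sampling step differentiable for gradient-based training.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return z, mu, logvar
```

Predicting the log-variance rather than the variance itself is a common choice because it keeps the predicted quantity unconstrained while guaranteeing a positive variance.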
Cancer Diagnosis from Histopathology Images Using Deep Learning: A Review
Published in Ranjeet Kumar Rout, Saiyed Umer, Sabha Sheikh, Amrit Lal Sangal, Artificial Intelligence Technologies for Computational Biology, 2023
Vijaya Gajanan Buddhavarapu, J. Angel Arul Jothi
A variational autoencoder is an autoencoder that follows a latent variable model. A VAE is called a generative model because, during training, it learns the probability distribution of the dataset. The input is encoded into a latent distribution, a sample is then drawn from that distribution and decoded to calculate the reconstruction error. Finally, the error is backpropagated through the network [45, 73, 18]. Figure 9.10 shows the structure of a VAE.
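The objective implied by this encode-sample-decode procedure is commonly written as the (negative) evidence lower bound; the following standard formulation follows Kingma and Welling and is not spelled out in this chapter:

```latex
% Standard VAE objective (negative ELBO) for input x and latent z,
% with encoder q_phi(z|x), decoder p_theta(x|z), and prior p(z):
% the first term is the expected reconstruction error, the second
% regularizes the latent distribution toward the prior.
\mathcal{L}(\theta, \phi; x)
  = -\,\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
    + D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\middle\|\,p(z)\right)
```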
Variational Autoencoder for Classification and Regression for Out-of-Distribution Detection in Learning-Enabled Cyber-Physical Systems
Published in Applied Artificial Intelligence, 2022
Feiyang Cai, Ali I. Ozdagli, Xenofon Koutsoukos
The Variational AutoEncoder (VAE), introduced by Kingma and Welling (2014), is an autoencoder that models the relationship between high-dimensional observations and representations in a latent space in a probabilistic manner. A VAE for regression is presented in Zhao et al. (2019), which aims to disentangle the representations by the regression variable; there, the latent space is conditioned by the output of an additional regression branch. Such a model can be adapted for the classification problem, and Figure 1 shows the architecture of the VAE for classification and regression model. The predictor module in the figure can be a standard classification or regression network.
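A schematic of how a latent space can be conditioned by a predictor branch might look as follows; this is a simplified sketch under our own assumptions about module names and sizes, not the exact model of Zhao et al. (2019) or of this paper:

```python
import torch
import torch.nn as nn

class RegressionVAE(nn.Module):
    """VAE with an additional predictor branch; the decoder input is
    conditioned on the predicted variable, organizing the latent space
    by that variable."""
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        # Predictor: a standard regression head (swap in a classifier
        # with class logits for the classification variant).
        self.predictor = nn.Linear(128, 1)
        # Decoder receives the latent sample concatenated with the
        # prediction, conditioning the reconstruction on it.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        y_hat = self.predictor(h)                  # regression branch
        x_hat = self.decoder(torch.cat([z, y_hat], dim=1))
        return x_hat, y_hat, mu, logvar
```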
Detection of Alfvén Eigenmodes on COMPASS with Generative Neural Networks
Published in Fusion Science and Technology, 2020
Vít Škvára, Václav Šmídl, Tomáš Pevný, Jakub Seidl, Aleš Havránek, David Tskhakaya
To this end, two approaches based on generative neural networks have been implemented. Generative models based on the variational autoencoder (VAE) paradigm⁸ have been used because they are powerful estimators of high-dimensional distributions, suitable for modeling image data. They do not require labels for training, which is a limiting factor for classification neural networks, which overfit when not supplied with enough labeled data. VAEs also possess the ability to produce a low-dimensional representation of the target data, which proved useful for our task as well.
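One generic way to exploit such a label-free model for detection (a sketch under our own assumptions, not the authors' exact scoring rule) is to flag inputs whose reconstruction error under the trained VAE is unusually high:

```python
import torch

def reconstruction_score(vae, x, n_samples=10):
    """Generic VAE-based detection score: average squared reconstruction
    error over several latent samples. Higher scores suggest the input
    is unlike the data the VAE was trained on. Assumes a hypothetical
    model whose forward() returns the reconstruction first."""
    errors = []
    with torch.no_grad():
        for _ in range(n_samples):
            x_hat = vae(x)[0]   # reconstruction from one latent sample
            errors.append(((x - x_hat) ** 2).mean(dim=1))
    return torch.stack(errors).mean(dim=0)
```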
Towards Autonomous Driving Model Resistant to Adversarial Attack
Published in Applied Artificial Intelligence, 2023
Kabid Hassan Shibly, Md Delwar Hossain, Hiroyuki Inoue, Yuzo Taenaka, Youki Kadobayashi
An autoencoder (Rumelhart, Hinton, and Williams 1986) is a type of neural network that is trained to reconstruct an input by learning to compress and then reconstruct the input data. It consists of two parts: an encoder, which maps the input data to a lower-dimensional representation called the latent space, and a decoder, which maps the latent representation back to the original input data. During training, the autoencoder is presented with a set of input samples and is asked to reconstruct them using the encoder and decoder. The goal of the autoencoder is to learn a function that can accurately reconstruct the input data from the latent representation. To measure the quality of the reconstruction, a reconstruction loss is used, which measures the deviation between the input data and the reconstructed data. One issue with the traditional autoencoder and the Variational AutoEncoder (VAE) (Kingma and Welling 2013) is that they use a "pixel-level" reconstruction error, such as a pixel-wise distance or the log-likelihood, to measure the deviation of the reconstructed samples from the input samples. This error can be too strict and result in overly smooth reconstructions. To address this issue, we add a discriminator to distinguish between the true input samples and the reconstructed samples. This helps to reduce blurriness and improve detail in the reconstruction, as the discriminator focuses on similarities in the overall distribution. The Memory Module is a key component that enables the defense mechanism to learn and retain information about the original data distribution, even in the presence of adversarial examples. By combining the autoencoder with adversarial training, we can improve the accuracy and detail of reconstructions while still being able to detect anomalies and defend against attacks.
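A minimal sketch of adding a discriminator on top of an autoencoder in the VAE-GAN style could look as follows; the architecture, loss weighting, and names are illustrative assumptions, and the paper's Memory Module is omitted:

```python
import torch
import torch.nn as nn

# Discriminator that scores whether a sample looks like a real input
# or an autoencoder reconstruction.
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)
bce = nn.BCELoss()

def generator_step(autoencoder, x, pixel_weight=1.0, adv_weight=0.1):
    """Combine the pixel-level reconstruction loss with an adversarial
    term that pushes reconstructions toward the real-data distribution,
    counteracting the over-smoothing of purely pixel-level objectives."""
    x_hat = autoencoder(x)
    pixel_loss = nn.functional.mse_loss(x_hat, x)
    # Adversarial loss: reconstructions should be scored as "real" (1).
    adv_loss = bce(discriminator(x_hat), torch.ones(x.size(0), 1))
    return pixel_weight * pixel_loss + adv_weight * adv_loss
```

In a full training loop, the discriminator would be updated in alternation with this generator step, being trained to assign 1 to real inputs and 0 to reconstructions.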