Deep Learning in Transportation Cyber-Physical Systems
Published in M.Z. Naser, Leveraging Artificial Intelligence in Engineering, Management, and Safety of Infrastructure, 2023
Zadid Khan, Sakib Mahmud Khan, Mizanur Rahman, Mhafuzul Islam, Mashrur Chowdhury
An automatic encoder, or autoencoder, is a combination of an encoder and a decoder network. The encoder network extracts latent features from the training data, and the decoder network uses these features to reconstruct the input (Haghighat et al., 2020). The reconstruction loss is minimized during training to obtain the best encoded representation of the input. As the learning process is automatic, it is called an automatic encoder. Autoencoders were first used as denoisers; more recently, they have become very popular for dimensionality reduction and feature extraction. Autoencoders are also useful when labeled data is not available, since the training process is automatic and does not require output labels (Haghighat et al., 2020). Another type of autoencoder is the variational autoencoder, in which a variational inference method is used to minimize the input reconstruction loss.
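As a concrete sketch of this encode-decode training loop, the following minimal PyTorch example (not from the chapter; the layer sizes, MSE loss, and optimizer settings are illustrative assumptions) trains an autoencoder by minimizing reconstruction loss, with no output labels required:

```python
import torch
import torch.nn as nn

# Minimal autoencoder: the encoder compresses the input into a
# low-dimensional latent code; the decoder reconstructs the input from it.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent features extracted by the encoder
        return self.decoder(z)   # reconstruction of the input

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)          # stand-in batch of unlabeled data
optimizer.zero_grad()
# Reconstruction loss: the target is the input itself, so no labels are needed.
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
optimizer.step()
```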
Detection of DeepFakes Using Local Features and Convolutional Neural Network
Published in Loveleen Gaur, DeepFakes, 2023
Shreya Rastogi, Amit Kumar Mishra, Loveleen Gaur
Yuezun Li et al. [21], in their study, explained a technique for generating DeepFakes (DF). Figure 6.4 depicts the complete workflow of simple DF generation. The subject's face is detected in an incoming clip, and facial landmarks are collected subsequently. The landmarks are used to align the facial features to a consistent pattern. The aligned faces are then resized and passed through an autoencoder to create contributor images with almost the same facial expressions as the original subject's face. The autoencoder is typically made up of two Convolutional Neural Networks (CNNs): an "encoder" and a "decoder." The encoder E turns the source face into a code vector. There is just one "encoder," ensuring that identity-independent features, such as facial expressions, are captured irrespective of the individuals' identities. Each identity i has its own decoder, Di, that uses the code to reconstruct that person's face. The encoder and decoders are trained together in an unsupervised manner, using unrelated image collections of many individuals.
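A minimal sketch of this shared-encoder, per-identity-decoder scheme is given below, assuming a PyTorch implementation; the convolutional layer shapes and the two hypothetical identities "A" and "B" are illustrative, not drawn from Li et al.'s implementation:

```python
import torch
import torch.nn as nn

# One shared encoder E captures identity-independent features (e.g. expression);
# each identity i gets its own decoder D_i that renders that person's face.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
    def forward(self, x):
        return self.net(x)  # identity-independent code

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()                             # single shared encoder E
decoders = {i: Decoder() for i in ("A", "B")}   # one decoder D_i per identity

# Training reconstructs each identity's faces through its own decoder.
# Face swapping: encode identity A's face, decode with identity B's decoder.
face_a = torch.rand(1, 3, 64, 64)
swapped = decoders["B"](encoder(face_a))
```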
A Systematic Review of Deep Learning Techniques for Semantic Image Segmentation: Methods, Future Directions, and Challenges
Published in Monika Mangla, Subhash K. Shinde, Vaishali Mehta, Nonita Sharma, Sachi Nandan Mohanty, Handbook of Research on Machine Learning, 2022
Reena, Amanpratap Singh Pall, Nonita Sharma, K. P. Sharma, Vaishali Wadhwa
The objective of autoencoders is to learn an inherent representation of the data that works better for a predefined task than the raw data. These are feed-forward networks that compress the input by reducing its dimensionality and then reconstruct it as output [25]. They are considered unsupervised models, but it is not fallacious to tag them as self-supervised, because they generate their labels from the training data on their own. An autoencoder consists of three components: encoder, code, and decoder. Before training an autoencoder, a few parameters need to be set, i.e., the size of the code, the number of layers, the number of nodes per layer, and the loss function. The basic idea of the architecture is to restrict some of the nodes in the hidden layers so that the model itself learns the important attributes of the input. In sparse autoencoders, the loss function is constructed by regularizing the activations. A more general model is the denoising autoencoder, in which the input is corrupted by adding noise while the network is still trained to output the uncorrupted data (a short sketch follows below). A contractive autoencoder is an unsupervised DL technique for encoding unlabeled training data. Applications of autoencoders include abnormality detection, denoising of data (e.g., images, audio), image inpainting, and information retrieval.
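The denoising variant mentioned above can be sketched as follows in PyTorch; the architecture, noise level, and optimizer are illustrative assumptions rather than the handbook's specification:

```python
import torch
import torch.nn as nn

# Denoising-autoencoder training step: corrupt the input with Gaussian
# noise, then train the network to recover the clean version.
model = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),   # encoder -> code of size 64
    nn.Linear(64, 784),              # decoder back to input size
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(32, 784)                     # clean training batch
noisy = clean + 0.2 * torch.randn_like(clean)   # corrupted input

optimizer.zero_grad()
# The model sees the noisy input but is trained against the clean target.
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
optimizer.step()
```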
The application of deep generative models in urban form generation based on topology: a review
Published in Architectural Science Review, 2023
Bo Lin, Wassim Jabi, Padraig Corcoran, Simon Lannon
Autoencoder models are neural networks trained to reconstruct their input as output, consisting of two parts, i.e. an encoder and a decoder. These models aim to learn the pattern and characteristics of the data distribution and generate new examples similar to the training examples. There are four kinds of autoencoder models, i.e. undercomplete autoencoders, denoising autoencoders, sparse autoencoders, and variational autoencoders (VAE) (Salakhutdinov 2015; Xu, Li, and Zhou 2015). The VAE is one of the most widely used and efficient deep generative models (Nikolaev 2018). It is a directed model that uses learned approximate inference and is trained through gradient-based methods (Nikolaev 2018). Through a VAE, an input image can be encoded as a low-dimensional representation that stores the input information.
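As a rough illustration of how a VAE encodes an input into a low-dimensional representation, the sketch below shows a minimal PyTorch encoder with the reparameterization trick and the KL term used in training; all sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Minimal VAE encoder: the input is mapped to the mean and log-variance of
# a low-dimensional Gaussian, and a latent code is drawn via the
# reparameterization trick so gradients can flow through the sampling step.
class VAEEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.hidden(x)
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterization trick
        return z, mu, logvar

# KL term of the VAE loss, pushing the code toward a standard normal prior;
# it is added to the reconstruction loss during gradient-based training.
def kl_divergence(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

x = torch.rand(8, 784)                  # stand-in batch of flattened images
z, mu, logvar = VAEEncoder()(x)         # low-dimensional codes, one per input
```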
Evaluation of nine machine learning methods for estimating daily land surface radiation budget from MODIS satellite data
Published in International Journal of Digital Earth, 2022
Shaopeng Li, Bo Jiang, Shunlin Liang, Jianghai Peng, Hui Liang, Jiakun Han, Xiuwan Yin, Yunjun Yao, Xiaotong Zhang, Jie Cheng, Xiang Zhao, Qiang Liu, Kun Jia
An autoencoder is a type of neural network with a symmetrical structure from the encoding to the decoding layers and the same input and output dimensions, trained in an unsupervised or supervised manner (Wang and Liu 2018; Zhang et al. 2020). The encoder layers, as shown in Figure 4, can adaptively learn the abstract features of the input data and represent complex data efficiently by minimizing the errors between the outputs of the decoder layer and the inputs of the encoder layer. These properties make the autoencoder not only suitable for large data samples but also effective at reducing design costs and improving the poor generalization of traditional methods. Therefore, using the autoencoder algorithm in a deep neural network can efficiently extract the implicit features and yield better estimation results with an MLP connected behind the autoencoder module. Specifically, the accuracy of the SAE model is significantly affected by the construction of the encoder/decoder, the number of neurons in the hidden layers, the activation function, and the batch size, which are determined in Section 4.1.
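A minimal sketch of this pipeline, an autoencoder pretrained on the inputs with an MLP connected behind its encoder, might look as follows in PyTorch; the layer widths, the two-stage training split, and the variable names are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Symmetric autoencoder: input and output dimensions match (10 -> 10),
# with a compressed hidden representation of size 3 in the middle.
encoder = nn.Sequential(nn.Linear(10, 6), nn.ReLU(), nn.Linear(6, 3))
decoder = nn.Sequential(nn.Linear(3, 6), nn.ReLU(), nn.Linear(6, 10))

x = torch.rand(256, 10)                 # e.g. satellite-derived predictors
# Stage 1: unsupervised pretraining, minimizing the error between the
# decoder's outputs and the encoder's inputs.
recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)

# Stage 2: the learned implicit features feed an MLP for regression.
mlp = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
y = torch.rand(256, 1)                  # e.g. surface radiation target
estimate_loss = nn.functional.mse_loss(mlp(encoder(x)), y)
```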
Surface microseismic data denoising based on sparse autoencoder and Kalman filter
Published in Systems Science & Control Engineering, 2022
Xuegui Li, Shuo Feng, Nan Hou, Ruyi Wang, Hanyang Li, Ming Gao, Siyuan Li
The sparse autoencoder uses the idea of sparse coding, introducing a sparsity penalty term on top of the basic autoencoder. Under this sparsity constraint, it can learn relatively sparser and more concise data features. One of the goals of the sparse autoencoder is to keep the hidden-layer neurons 'inactive' most of the time. Let $a_j(x)$ denote the activation of the $j$th hidden unit for input $x$. For $N$ training samples, the average activation of the $j$th hidden unit in the forward propagation process is $\hat{\rho}_j = \frac{1}{N}\sum_{i=1}^{N} a_j(x^{(i)})$. Since most of the neurons should be 'inactive', this average activation should approximate a constant $\rho$ near zero, where $\rho$ is the sparsity parameter.
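A small sketch of this sparsity constraint is shown below in PyTorch, using the KL-divergence penalty that is commonly paired with this formulation (an assumption here, since the excerpt does not show the paper's exact penalty term):

```python
import torch

# Sparsity penalty: rho_hat[j] is the average activation of hidden unit j
# over N samples; the penalty is the KL divergence between the target
# sparsity rho and each rho_hat[j], pushing units toward inactivity.
def sparsity_penalty(activations, rho=0.05):
    # activations: (N, hidden_units), e.g. sigmoid outputs in (0, 1)
    rho_hat = activations.mean(dim=0)        # average activation per unit
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

a = torch.sigmoid(torch.randn(100, 20))      # stand-in hidden activations
penalty = sparsity_penalty(a)                # added to the reconstruction loss
```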