Modern Deep Learning for Modeling Physical Systems
Published in Anuj Karpatne, Ramakrishnan Kannan, Vipin Kumar, Knowledge-Guided Machine Learning, 2023
Nicholas Geneva, Nicholas Zabaras
Fortunately, since probability is a fundamental aspect of all machine learning, integrating uncertainty estimation into DNNs is a large ongoing research effort that can be applied to UQ. Stochastic components can be added to pre-existing deterministic architectures, or complete probabilistic frameworks can encapsulate modern deterministic DNNs. Generally speaking, there are two schools of thought in modern probabilistic machine learning: Bayesian approaches and generative approaches. Bayesian DNNs provide interpretable uncertainty bounds with direct control over the probabilistic formulation, at the cost of implementation and optimization complications. Generative models, on the other hand, provide increased flexibility and the ability to learn a complex distribution, but lack complete interpretability and can require a significant amount of training data.
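As a toy illustration of adding a stochastic component to a pre-existing deterministic model, the sketch below runs repeated randomized forward passes (in the spirit of Monte Carlo dropout) and reads the spread of the outputs as an uncertainty estimate. The network, drop rate, and input are hypothetical stand-ins, not the chapter's models:

```python
import random
import statistics

def deterministic_net(x):
    # Stand-in for a trained deterministic network (hypothetical toy model).
    return 2.0 * x + 1.0

def stochastic_forward(x, drop_p=0.2):
    # Inject a stochastic component: randomly drop the learned contribution,
    # with inverse scaling, mimicking dropout kept active at prediction time.
    keep = 0.0 if random.random() < drop_p else 1.0
    return keep * deterministic_net(x) / (1.0 - drop_p)

def predict_with_uncertainty(x, n_samples=1000):
    # Repeated stochastic passes yield a predictive mean plus a spread
    # that can serve as a crude uncertainty estimate.
    draws = [stochastic_forward(x) for _ in range(n_samples)]
    return statistics.mean(draws), statistics.stdev(draws)

random.seed(0)
mean, std = predict_with_uncertainty(3.0)
```

The predictive mean stays close to the deterministic output, while the standard deviation quantifies the variability introduced by the stochastic component.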
Introduction to Deep Learning
Published in Lia Morra, Silvia Delsanto, Loredana Correale, Artificial Intelligence in Medical Imaging, 2019
Lia Morra, Silvia Delsanto, Loredana Correale
Generative models refer to a class of methods that estimate the data distribution p(X), usually by approximating it with a function p̂_θ(X) parameterized by some parameters θ. A generative model can then be used to draw new, previously unseen samples from this distribution. Many generative models have been proposed, but the most popular are Generative Adversarial Networks (GANs), first presented in 2014 by Goodfellow and colleagues [70]; they have gained great popularity because the images they generate are usually of higher quality and more realistic. Training GANs is no easy feat, as the process is highly unstable and may collapse to degenerate solutions. However, much progress has been made in stabilizing their training, and GANs are now able to achieve impressive results. So far, they have found applications in the medical domain in data augmentation, domain adaptation, image synthesis from multiple modalities, segmentation, and anomaly detection, although it must be noted that such applications are mostly at a research stage [71, 72].
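As a minimal illustration of the definition above, the sketch below fits a parametric p̂_θ(X) by maximum likelihood and then draws new, previously unseen samples from it. The dataset and the one-dimensional Gaussian model family are illustrative assumptions chosen for simplicity, not the chapter's models:

```python
import math
import random
import statistics

# Toy dataset: observations assumed to come from an unknown 1-D distribution.
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]

# "Training": estimate theta = (mu, sigma) by maximum likelihood.
mu = statistics.fmean(data)
sigma = statistics.pstdev(data)

def p_theta(x):
    # Explicit density estimate p_theta(x) for the fitted Gaussian.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def sample():
    # Draw a new, previously unseen sample from the learned distribution.
    return random.gauss(mu, sigma)

new_points = [sample() for _ in range(5)]
```

A GAN plays the same role, estimation then sampling, but replaces the closed-form density with a learned neural sampler.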
Detecting DeepFakes for future robotic systems
Published in Brij B. Gupta, Nadia Nedjah, Safety, Security, and Reliability of Robotic Systems, 2020
Eric Tjon, Melody Moh, Teng-Sheng Moh
GANs are generative models built from deep neural networks (Goodfellow et al. 2014). A generative model can take a sample of data and replicate its distribution, either explicitly, by outputting a density function, or implicitly, by outputting synthetic data. GANs primarily target the latter and generate samples that mimic the training data. Generative models are well suited to many tasks, including creating synthetic datasets and manipulating images. GANs improve on many aspects of older generative models: they are able to emulate complex data samples and, once trained, generate output efficiently. Furthermore, the output of a GAN is subjectively more realistic than that of other generative methods.
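The explicit/implicit distinction above can be sketched in a few lines: an explicit model exposes a density function, while an implicit model, like a GAN generator, only transforms noise into synthetic samples and never writes the density down. Both toy models below are hypothetical stand-ins:

```python
import math
import random

def explicit_density(x):
    # Explicit generative model: the density p(x) itself is the output
    # (here a unit Gaussian, standing in for a learned density).
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def generator(z):
    # Implicit generative model: like a GAN generator, it maps noise z to a
    # synthetic sample; the induced density is never represented explicitly.
    return 2.0 * z + 5.0  # hypothetical learned transform

random.seed(0)
synthetic = [generator(random.gauss(0.0, 1.0)) for _ in range(1000)]
```

The implicit route is what lets GANs model distributions far too complex for any tractable closed-form density.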
The application of deep generative models in urban form generation based on topology: a review
Published in Architectural Science Review, 2023
Bo Lin, Wassim Jabi, Padraig Corcoran, Simon Lannon
In general, deep generative models are models with many layers of stochastic or deterministic variables that approximate complex, high-dimensional probability distributions (Beirão 2012; Beirão, Duarte, and Stouffs 2011). According to Turhan and Bilge (2018), deep generative models can be categorized into five types: unsupervised fundamental models, Autoencoder (AE)-based models, autoregressive models, Generative Adversarial Network (GAN)-based models, and AE-GAN hybrid models. At the end of 2019, another kind of deep generative model, the diffusion model, became very popular (Dieleman 2022). These six types of deep generative models are introduced as follows.
Unsupervised fundamental models
Content Prioritization Based on Usage Pattern Analysis
Published in International Journal of Human–Computer Interaction, 2021
Based on the calculated user modeling value, we predict the usage probability of user-manual content by using InfoGAN, a generative model that can learn disentangled representations. Generative models aim to generate samples from the real distribution of observations. InfoGAN is an extension of the vanilla GAN, which consists of a generator mapping a noise input to an observation and a discriminator mapping an observation to the probability that it was sampled from the real data. GAN training optimizes a game between the generator and the discriminator (Goodfellow et al., 2014):

min_G max_D V(D, G) = E_{x~p_data}[log D(x)] + E_{z~p_z}[log(1 − D(G(z)))]
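The value function of the Goodfellow et al. (2014) minimax game, V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))], can be estimated from samples. The sketch below does so with a hypothetical toy discriminator and generator; the logistic score and the noise shift are illustrative choices, not the paper's networks:

```python
import math
import random

def D(x):
    # Toy discriminator: logistic score in (0, 1), higher for x
    # near the "real" data region around 1.0.
    return 1.0 / (1.0 + math.exp(-(x - 0.5)))

def G(z):
    # Toy generator: shifts noise toward the real data region.
    return z + 1.0

def gan_value(real, noise):
    # Monte Carlo estimate of V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))].
    term_real = sum(math.log(D(x)) for x in real) / len(real)
    term_fake = sum(math.log(1.0 - D(G(z))) for z in noise) / len(noise)
    return term_real + term_fake

random.seed(0)
real = [random.gauss(1.0, 0.1) for _ in range(500)]
noise = [random.gauss(0.0, 0.1) for _ in range(500)]
v = gan_value(real, noise)
```

The discriminator ascends this value while the generator descends it; at the ideal equilibrium D outputs 0.5 everywhere and the generated samples are indistinguishable from the real ones.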
Using topic modeling to infer the emotional state of people living with Parkinson’s disease
Published in Assistive Technology, 2021
Andrew P. Valenti, Meia Chita-Tegmark, Linda Tickle-Degnen, Alexander W. Bock, Matthias J. Scheutz
LDA is built around the intuition that documents exhibit multiple topics (Blei et al., 2003). LDA assumes that a document contains only a small set of topics and that each topic uses a small set of words frequently. The result is that words are separated according to meaning and documents can be accurately assigned to topics. LDA is a generative data model, which, as the name implies, describes how the data are generated. The idea is to treat the data as observations that arise from a generative, probabilistic process, one that includes hidden variables, which represent the structure we want to find in the data. For our data, the hidden variables represent the thematic structure (i.e., the topics) that we do not have access to in our documents. Simply put, a generative model describes how the data are generated, and inference is used to backtrack over the generative model to discover the set of hidden variables that best explains how the data were generated. To express the model as a generative probabilistic process, we start by assuming that a document contains some number of topics, and each topic is a distribution over terms (words) in the vocabulary. Every topic assigns a probability to every word in the vocabulary, and each topic is described by a set of words whose probabilities reflect their membership in the topic. The LDA generative process can be described as follows:
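The standard LDA generative process (draw per-document topic proportions from a Dirichlet prior, then for each word draw a topic assignment and then a word from that topic's distribution) can be sketched as follows, using a hypothetical two-topic toy vocabulary rather than the study's actual data:

```python
import random

random.seed(0)

# Hypothetical toy setup: 2 topics, each a distribution over a small vocabulary.
vocab = ["tremor", "gait", "mood", "sleep"]
topics = [
    [0.50, 0.40, 0.05, 0.05],  # illustrative "motor symptoms" topic
    [0.05, 0.05, 0.50, 0.40],  # illustrative "emotional state" topic
]

def sample_dirichlet(alpha):
    # Dirichlet draw via normalized Gamma variates (standard construction).
    draws = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

def sample_categorical(probs):
    # Inverse-CDF draw from a discrete distribution.
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def generate_document(n_words, alpha=(0.5, 0.5)):
    # 1. Draw the document's topic proportions theta ~ Dirichlet(alpha).
    theta = sample_dirichlet(alpha)
    words = []
    for _ in range(n_words):
        # 2. Draw a topic assignment z ~ Categorical(theta).
        z = sample_categorical(theta)
        # 3. Draw a word w ~ Categorical(topics[z]).
        words.append(vocab[sample_categorical(topics[z])])
    return words

doc = generate_document(8)
```

Inference then runs this process in reverse: given only the observed words, it recovers the hidden theta and z values that best explain how the documents were generated.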