Generative Models in Deep Learning
Sergey I. Nikolenko (Synthesis AI)
Chapter 4 in Synthetic Data for Deep Learning, 2021, pp. 97-137, Springer
Abstract:
So far, we have mostly discussed discriminative machine learning models that aim to solve a supervised problem, i.e., to learn the conditional distribution of the target variable given the input. In this chapter, we consider generative models, whose purpose is to learn the entire distribution of inputs and to sample new inputs from this distribution. We begin with a general introduction to generative models and then proceed to generative models in deep learning. First, we discuss explicit density models that represent the factors of a distribution with deep neural networks, including their important special case, normalizing flows, and explicit density models that approximate the distribution in question, represented by variational autoencoders. We then proceed to the main content, generative adversarial networks: we discuss various adversarial architectures and loss functions and give a case study of style transfer with GANs that is directly relevant to synthetic-to-real transfer.
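To make the distinction concrete, a minimal sketch (not taken from the chapter, just a hypothetical illustration): the simplest explicit density generative model fits a parametric distribution to observed inputs by maximum likelihood and then samples new inputs from it. Here a one-dimensional Gaussian stands in for the far richer neural density models the chapter covers.

```python
import numpy as np

# Toy explicit density generative model: fit a Gaussian to observed
# inputs by maximum likelihood, then sample new inputs from it.
rng = np.random.default_rng(0)

# "Real" data drawn from an unknown distribution (here, N(3, 0.5^2)).
data = rng.normal(loc=3.0, scale=0.5, size=10_000)

# Maximum-likelihood estimates of the Gaussian parameters.
mu_hat = data.mean()
sigma_hat = data.std()

# Generate new inputs from the learned model p(x) = N(mu_hat, sigma_hat^2).
samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(mu_hat, sigma_hat, samples)
```

Deep generative models replace the Gaussian with a flexible neural parameterization (autoregressive factors, a normalizing flow, a VAE decoder, or a GAN generator), but the goal is the same: model the input distribution well enough to sample from it.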
Date: 2021
Persistent link: https://EconPapers.repec.org/RePEc:spr:spochp:978-3-030-75178-4_4
Ordering information: http://www.springer.com/9783030751784
DOI: 10.1007/978-3-030-75178-4_4
Series: Springer Optimization and Its Applications (Springer)