
Casper Kaae Sønderby:
Inference and Learning in Deep Generative Models

Date: 03-09-2018    Supervisors: Ole Winther & Anders Krogh




The last decades have seen an explosion in the availability of data and computational resources. This has spurred great interest in efficient learning algorithms able to take advantage of these large amounts of data. Recently, deep neural networks have been fundamental in pushing the machine learning field forward, with remarkable results in image classification, speech analysis and machine translation. However, these models need large amounts of labelled data for learning, which can be both expensive and time-consuming to collect.

This thesis explores unsupervised and semi-supervised learning algorithms that rely solely on unlabelled data, or on only small amounts of labelled data, for learning. Here the unlabelled data, often readily available in large quantities, is leveraged to improve performance on downstream tasks. Some of the most interesting developments in unsupervised learning are models that merge the powerful function approximators provided by deep neural networks with the principled probabilistic approach of graphical models; the Variational Autoencoder (VAE) and the Generative Adversarial Network (GAN) are seminal contributions to this emerging field. In this thesis we develop extensions and improvements to the VAE framework to leverage more complex data and perform effective semi-supervised learning. We further provide new variants of the GAN framework and develop several new learning methods for implicit generative models.
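
For orientation, the VAE mentioned above is trained by maximising the standard evidence lower bound (ELBO) on the marginal likelihood; the formulation below is the textbook objective rather than a result specific to this thesis:

\log p_\theta(x) \;\geq\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right),

where the approximate posterior q_\phi(z \mid x) (the encoder) and the likelihood p_\theta(x \mid z) (the decoder) are both parameterised by deep neural networks, combining amortised probabilistic inference with flexible function approximation.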