Master's programme
2022/2023
Deep Generative Models
Type:
Elective course (Master of Computer Vision)
Area of studies:
01.04.02. Applied Mathematics and Informatics
When:
2nd year, 3rd module
Mode of studies:
with an online course
Online hours:
90
Open to:
students of one campus
Instructors:
Sergey Igorevich Nikolenko
Master's programme:
Master of Computer Vision
Language:
English
ECTS credits:
6
Contact hours:
6
Course Syllabus
Abstract
Generative models in machine learning try to capture the entire distribution of the inputs and learn to generate new instances from this distribution. Modern deep generative models draw pictures, write text, compose music, and much more, and that is exactly what we will see in this course. We will begin with basic definitions and proceed through GANs, VAEs, and Transformers, up to the latest state-of-the-art research results. The course requires an understanding of basic machine learning and deep learning.
Learning Objectives
- The objective of this course is to study generative models based on deep neural networks, starting from basic definitions and reaching the current state of the art in several different directions.
Expected Learning Outcomes
- understand the difference between discriminative and generative models
- understand the relation between naive Bayes and logistic regression
- understand the concept of generative-discriminative pairs
- understand the differences between various deep generative models
- understand the structure of explicit density models from PixelCNN to WaveNet
- understand the basic structure of GANs and the idea of adversarial training
- understand various loss functions used in modern GANs, including LSGAN and WGAN (the standard objectives are summarized after this list)
- understand modern GAN-based architectures for high-resolution generation
- understand the paired style transfer problem setting and its solutions (Gatys et al., pix2pix)
- understand the unpaired style transfer problem setting and its solutions (CycleGAN, AdaIN, StyleGAN)
- understand the idea of the latent space of a deep autoencoder-based model and how to sample from it
- understand the structure and training of variational autoencoders (the ELBO is recalled after this list)
- understand quantized versions of variational autoencoders
- have a basic understanding of attention mechanisms in deep learning
- understand the operations of a self-attention layer in Transformers (a code sketch follows this list)
- understand modern Transformers, including the BERT and GPT families
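For reference on the GAN loss functions mentioned above, here are the three standard objectives in LaTeX notation; they follow the original papers (Goodfellow et al. for the minimax GAN, Mao et al. for LSGAN, Arjovsky et al. for WGAN) and are given here as a reminder, not as official course material:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\!\left[\log\bigl(1 - D(G(z))\bigr)\right] \quad \text{(original GAN)}

L_D^{\mathrm{LSGAN}} = \tfrac{1}{2}\,\mathbb{E}_{x}\!\left[(D(x) - 1)^2\right] + \tfrac{1}{2}\,\mathbb{E}_{z}\!\left[D(G(z))^2\right], \qquad L_G^{\mathrm{LSGAN}} = \tfrac{1}{2}\,\mathbb{E}_{z}\!\left[(D(G(z)) - 1)^2\right]

\min_G \max_{D:\ \|D\|_L \le 1} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[D(x)\right] - \mathbb{E}_{z \sim p(z)}\!\left[D(G(z))\right] \quad \text{(WGAN, with a 1-Lipschitz critic)}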
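Similarly, the variational autoencoder outcome refers to training by maximizing the evidence lower bound (ELBO); the standard form from Kingma and Welling, with decoder parameters \theta, encoder parameters \phi, approximate posterior q_\phi(z \mid x), and prior p(z), is:

\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)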
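Finally, a minimal NumPy sketch of a single scaled dot-product self-attention head, the core operation of a Transformer layer. The shapes and names (Wq, Wk, Wv, an 8-dimensional toy embedding) are illustrative assumptions, not taken from the course materials:

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # project inputs to queries, keys, and values
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # pairwise similarities, scaled by sqrt(d_k) to keep softmax inputs moderate
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        # numerically stable row-wise softmax over the key dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # each output token is a convex combination of value vectors
        return weights @ V

    # toy usage: a "sentence" of 4 tokens with 8-dimensional embeddings
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)

Multi-head attention, as used in BERT and GPT, runs several such heads in parallel and concatenates their outputs.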
Course Contents
- Introduction to generative models: motivation and a naive example
- Deep generative models: general taxonomy and autoregressive models (the autoregressive factorization is recalled after this list)
- Generative adversarial networks I: introduction, basic ideas, and loss functions in GANs
- Generative adversarial networks II: modern examples of GANs; case study: GANs for style transfer
- Variational autoencoders: from the basics to VQ-VAE
- Transformers: the basic idea, BERT, and GPT; Transformer + VQ-VAE = DALL-E
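As a pointer for the autoregressive topic above: models such as PixelCNN and WaveNet fix an ordering of the components (raster-scan pixels, audio samples in time), factorize the joint density over that ordering, and model each conditional with a neural network:

p(x) = \prod_{i=1}^{n} p(x_i \mid x_1, \dots, x_{i-1})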
Interim Assessment
- 2022/2023, 3rd module: 0.1 * test (weeks 1-2) + 0.1 * test (weeks 2-3) + 0.1 * test (weeks 4-5) + 0.1 * test (week 6) + 0.6 * final programming assignment
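A hypothetical worked example of this formula (the scores are invented for illustration and assume the usual 10-point scale): a student who scores 8, 7, 9, and 6 on the four tests and 8 on the final programming assignment receives

0.1 * 8 + 0.1 * 7 + 0.1 * 9 + 0.1 * 6 + 0.6 * 8 = 0.8 + 0.7 + 0.9 + 0.6 + 4.8 = 7.8

Note that the weights sum to 1, so the final grade stays on the same scale as its components.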
Bibliography
Recommended Core Bibliography
- Goodfellow, I. (2016). NIPS 2016 Tutorial: Generative Adversarial Networks. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsarx&AN=edsarx.1701.00160
- Integrating Deep Learning Algorithms to Overcome Challenges in Big Data Analytics (2022).
Recommended Additional Bibliography
- Mescheder, L., Nowozin, S., & Geiger, A. (2017). Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsarx&AN=edsarx.1701.04722