2024/2025
Bayesian Methods for Machine Learning
Type:
Mago-Lego
Delivered by:
Big Data and Information Retrieval School
When:
Modules 2, 3
Online hours:
60
Open to:
students of one campus
Language:
English
ECTS credits:
6
Contact hours:
14
Course Syllabus
Abstract
Bayesian methods in machine learning are based on the so-called Bayesian approach to statistics, one of the possible ways to conduct mathematical reasoning under uncertainty. Applied to ML models, Bayesian methods make it possible to take user preferences into account when building decision rules for prediction, and to do so efficiently. In addition, problems of selecting a model's structure parameters (the number of clusters, the regularization coefficient, etc.) can be solved without a full combinatorial search. This 6-week course is an introduction to Bayesian methods; at the same time, it covers the topics most important in practice.
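For reference, the identity behind this approach is Bayes' theorem: a prior over parameters θ is combined with the likelihood of the observed data X, and comparing structure parameters (e.g., the number of clusters) reduces to comparing model evidences rather than performing a full combinatorial search. A minimal statement of both facts:

```latex
p(\theta \mid X) = \frac{p(X \mid \theta)\, p(\theta)}{p(X)},
\qquad
p(X \mid M) = \int p(X \mid \theta, M)\, p(\theta \mid M)\, d\theta .
```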
To complete the course, students are expected to have a background in basic mathematics (calculus, linear algebra), probability theory, programming in Python, and basic machine learning models.
Learning Objectives
- After taking this course, students should be able to:
- Build complex probabilistic models that take into account the structure of a practical machine learning task
- Perform inference in the constructed probabilistic models
- Implement efficient versions of these models in code
Expected Learning Outcomes
- Practice with Bayesian models and inference
- Learn about efficient inference for conjugate distributions (a minimal sketch follows this list)
- Learn about latent variable models, in particular the Gaussian Mixture model
- Derive the EM algorithm and its application to maximum likelihood estimation
- Practice with Variational Inference
- Learn about topic modeling, Latent Dirichlet Allocation and inference in it
- Study Monte Carlo methods for sampling and estimation
- Learn about the application of MC methods to LDA and Bayesian Neural Networks
- Learn about Gaussian processes and their application to machine learning
- Study Bayesian optimization with usage examples
- Practice with the PyTorch framework
- Learn about neural networks, optimization of their parameters, and their use for machine learning tasks
- Learn about the Variational Autoencoder and its use for modeling the distribution of images
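A minimal sketch of the conjugate inference mentioned in the list above (the coin-flip data and prior parameters are illustrative assumptions, not course materials): with a Beta prior and a Bernoulli likelihood, the posterior is again a Beta distribution, so Bayesian updating reduces to counting successes and failures.

```python
import numpy as np

def beta_bernoulli_posterior(data, alpha=1.0, beta=1.0):
    """Closed-form conjugate update: Beta prior + Bernoulli likelihood -> Beta posterior."""
    heads = int(np.sum(data))           # number of successes (x_i = 1)
    tails = len(data) - heads           # number of failures (x_i = 0)
    return alpha + heads, beta + tails  # posterior Beta(alpha', beta') parameters

# Illustrative data: 10 coin flips with 7 successes.
data = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
a_post, b_post = beta_bernoulli_posterior(data, alpha=2.0, beta=2.0)
posterior_mean = a_post / (a_post + b_post)  # E[theta | data]
print(f"Posterior: Beta({a_post:.0f}, {b_post:.0f}), mean = {posterior_mean:.3f}")
```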
Course Contents
- Introduction to Bayesian Methods and Conjugate priors
- Expectation-Maximization algorithm
- Variational Inference and Latent Dirichlet Allocation
- Markov chain Monte Carlo (a sampler sketch follows this list)
- Gaussian processes and Bayesian optimization
- Neural Networks and Variational Autoencoder
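A minimal sketch of the Markov chain Monte Carlo topic listed above (the Gaussian target, step size, and function names are illustrative assumptions, not course code): a random-walk Metropolis sampler draws approximate samples from a distribution known only up to its normalizing constant.

```python
import numpy as np

def metropolis_hastings(log_prob, x0, n_samples=5000, step=0.5, rng=None):
    """Random-walk Metropolis: Gaussian proposals accepted with prob min(1, p(x')/p(x))."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty(n_samples)
    x, lp = x0, log_prob(x0)
    for i in range(n_samples):
        x_new = x + step * rng.standard_normal()
        lp_new = log_prob(x_new)
        if np.log(rng.uniform()) < lp_new - lp:  # accept/reject step
            x, lp = x_new, lp_new
        samples[i] = x
    return samples

# Illustrative target: unnormalized log-density of a standard normal.
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0)
print(samples.mean(), samples.std())  # approximately 0 and 1
```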
Assessment Elements
- Quizzes: Every week contains graded quizzes devoted to the covered materials.
- SGA: There are two staff-graded assignments in this course. The first is theoretical and corresponds to week 1, on Bayesian inference and conjugate distributions. The second is placed in week 3 and is a manually checked programming assignment on Latent Dirichlet Allocation.
- Final project: This is essentially another staff-graded assignment, but we expect you to show all the skills you have mastered during the course. The final project is devoted to Variational Autoencoders.
Bibliography
Recommended Core Bibliography
- Barber, D. (2012). Bayesian Reasoning and Machine Learning. Cambridge: Cambridge eText. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsebk&AN=432721
- Murphy, K. P. (2012). Machine Learning : A Probabilistic Perspective. Cambridge, Mass: The MIT Press. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsebk&AN=480968
Recommended Additional Bibliography
- Bishop, C. M. (2006). Pattern Recognition and Machine Learning. New York: Springer. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsbas&AN=edsbas.EBA0C705