Course syllabus adopted 2021-02-10 by Head of Programme (or corresponding).
Overview
- Swedish name: Avancerad probabilistisk maskininlärning
- Code: SSY316
- Credits: 7.5 credits
- Owner: MPICT
- Education cycle: Second-cycle
- Main field of study: Electrical Engineering
- Department: Electrical Engineering
- Grading: TH - Pass with distinction (5), Pass with credit (4), Pass (3), Fail
Course round 1
- Teaching language: English
- Application code: 13117
- Block schedule
- Open for exchange students: Yes
Credit distribution
| Module | Sp1 | Sp2 | Sp3 | Sp4 | Summer | Not Sp | Examination dates |
|---|---|---|---|---|---|---|---|
| 0120 Project, 7.5 c, Grading: TH | 7.5 c | | | | | | |
In programmes
- MPDSC - DATA SCIENCE AND AI, MSC PROGR, Year 1 (compulsory elective)
- MPDSC - DATA SCIENCE AND AI, MSC PROGR, Year 2 (elective)
- MPICT - INFORMATION AND COMMUNICATION TECHNOLOGY, MSC PROGR, Year 1 (compulsory elective)
- MPICT - INFORMATION AND COMMUNICATION TECHNOLOGY, MSC PROGR, Year 2 (elective)
- MPSYS - SYSTEMS, CONTROL AND MECHATRONICS, MSC PROGR, Year 1 (elective)
- MPSYS - SYSTEMS, CONTROL AND MECHATRONICS, MSC PROGR, Year 2 (elective)
Examiner
- Alexandre Graell i Amat
- Full Professor, Communication, Antennas and Optical Networks, Electrical Engineering
Eligibility
General entry requirements for Master's level (second cycle).
Applicants enrolled in a programme at Chalmers where the course is included in the study programme are exempted from fulfilling the requirements above.
Specific entry requirements
English 6 (or equivalent proficiency demonstrated by other approved means).
Applicants enrolled in a programme at Chalmers where the course is included in the study programme are exempted from fulfilling the requirements above.
Course specific prerequisites
Working knowledge of probability, statistics, and linear algebra.
Aim
This course explores the connections between machine learning, probability theory, and statistics. In particular, it gives a probabilistic viewpoint on machine learning problems. Probability theory can be applied to any problem involving uncertainty, and in machine learning uncertainty comes in many forms: noise in the collected data, uncertainty about the best prediction given some past data, or uncertainty about which model is best suited to explain the data. The key idea behind the probabilistic framework for machine learning is that learning can be thought of as inferring plausible (probabilistic) models to describe the data one could observe from a system. Probabilistic models can make predictions and statements about observable data, and they can also express the uncertainty of those predictions.
The course will describe a wide variety of probabilistic models, suitable for many kinds of data and tasks, as well as a wide variety of algorithms for inference and learning with such models. The goal is to present a unified view of machine learning through the lens of probabilistic modeling and inference.
In the course, students will also learn general-purpose models and methods that are useful both in probabilistic machine learning and in other areas.
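As a concrete anchor for the "learning as inference" idea above, here is the standard Bayes' rule formulation for a model with parameters θ and observed data D; the notation is a generic textbook convention, not prescribed by the syllabus:

```latex
% Bayes' rule: the posterior over parameters \theta after observing
% data D combines the likelihood with the prior.
\[
  p(\theta \mid \mathcal{D})
    = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})},
  \qquad
  p(\mathcal{D}) = \int p(\mathcal{D} \mid \theta)\, p(\theta)\, \mathrm{d}\theta .
\]
% Predictions for a new observation x^* average over the posterior,
% which is how a probabilistic model expresses predictive uncertainty:
\[
  p(x^\ast \mid \mathcal{D})
    = \int p(x^\ast \mid \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta .
\]
```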
Learning outcomes (after completion of the course the student should be able to)
- Explain the philosophy behind Bayesian inference
- Develop an inference algorithm using the principles of Bayesian decision theory and a given cost function (see the worked example after this list)
- Understand the connections between probability theory and machine learning
- Explain similarities and differences between probabilistic and classical machine learning methods
- Interpret and explain results from probabilistic machine learning
- Derive, analyze, and implement the probabilistic methods introduced in the course
- Understand how to apply several probabilistic models to data and determine the most suitable one for a given task
- Discuss and determine whether an engineering-relevant problem can be formulated as a supervised or unsupervised machine learning problem
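As a worked illustration of the Bayesian decision theory outcome above: given a posterior p(θ | D) and a cost function L(θ, a), the optimal decision a* minimizes the posterior expected cost. This is the generic textbook recipe (see, e.g., Bishop), not a specific course assignment:

```latex
% Bayesian decision theory: pick the action a that minimizes the
% expected cost under the posterior p(\theta | D).
\[
  a^\ast = \arg\min_{a} \; \mathbb{E}_{p(\theta \mid \mathcal{D})}\!\bigl[ L(\theta, a) \bigr]
         = \arg\min_{a} \int L(\theta, a)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta .
\]
% Example: with squared-error cost L(\theta, a) = (\theta - a)^2,
% the minimizer is the posterior mean a^* = E[\theta | D];
% with absolute-error cost it is the posterior median.
```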
Content
- Bayesian inference, probabilistic modeling of data
- Supervised learning: Bayesian linear regression (see the code sketch after this list)
- Bayesian graphical models
- Monte Carlo techniques: importance sampling, Gibbs sampling, Markov Chain Monte Carlo
- Markov random fields, factor graphs
- Belief propagation, variable elimination
- Hidden Markov models
- Expectation propagation and variational inference
- Gaussian processes
- Unsupervised learning
- Generative adversarial networks and variational autoencoders: two methods for unsupervised learning
- Probabilistic deep learning
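To give a flavor of the first supervised-learning topic, here is a minimal sketch of Bayesian linear regression with a conjugate Gaussian prior and known noise precision, following the standard textbook treatment (Bishop, ch. 3). The function name, hyperparameter values, and toy data are illustrative assumptions, not course material:

```python
import numpy as np

def bayesian_linear_regression(Phi, y, alpha=1.0, beta=25.0):
    """Posterior over weights w for y = Phi @ w + Gaussian noise.

    Assumes prior w ~ N(0, alpha^{-1} I) and noise precision beta
    (the conjugate setting of Bishop, ch. 3); both values here are
    illustrative choices.
    """
    D = Phi.shape[1]
    # Posterior precision: prior precision plus data term.
    S_inv = alpha * np.eye(D) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)          # posterior covariance
    m = beta * S @ Phi.T @ y          # posterior mean
    return m, S

# Toy data: a noisy line (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 0.5 + 2.0 * x + rng.normal(scale=0.2, size=x.shape)

Phi = np.column_stack([np.ones_like(x), x])   # features: bias + x
m, S = bayesian_linear_regression(Phi, y)

# Predictive distribution at x* = 0.5 is Gaussian with
# mean phi(x*) @ m and variance 1/beta + phi(x*) @ S @ phi(x*).
phi_star = np.array([1.0, 0.5])
pred_mean = phi_star @ m
pred_std = np.sqrt(1.0 / 25.0 + phi_star @ S @ phi_star)
print(f"prediction at x=0.5: {pred_mean:.2f} +/- {pred_std:.2f}")
```

Note how the posterior covariance S enters the predictive variance: the model reports not just a point prediction but also its uncertainty, which is the central theme of the course.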
Organisation
The course comprises lectures, weekly home assignments, and tutorial sessions related to the home assignments.
Literature
We will mainly use Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
Examination including compulsory elements
The final grade (TH) is based on scores from a project, quizzes, and a written exam. The project and the literature study are mandatory in the sense that they must be passed in order to pass the course.
The course examiner may assess individual students in other ways than stated above if there are special reasons for doing so, for example if a student has a decision from Chalmers on educational support due to disability.