Course syllabus for Deep machine learning


Course syllabus adopted 2022-02-08 by Head of Programme (or corresponding).

Overview

  • Swedish name: Djup maskininlärning
  • Code: SSY340
  • Credits: 7.5 credits
  • Owner: MPSYS
  • Education cycle: Second-cycle
  • Main field of study: Automation and Mechatronics Engineering, Computer Science and Engineering, Electrical Engineering, Software Engineering, Biomedical Engineering
  • Department: Electrical Engineering
  • Grading: TH - Pass with distinction (5), Pass with credit (4), Pass (3), Fail

Course round 1

  • Teaching language: English
  • Application code: 35120
  • Maximum participants: 250
  • Block schedule
  • Open for exchange students: Yes

Credit distribution

0117 Project, 3 credits
Grading: TH
0217 Written and oral assignments, 4.5 credits
Grading: TH


Eligibility

General entry requirements for Master's level (second cycle)
Applicants enrolled in a programme at Chalmers where the course is included in the study programme are exempted from fulfilling the requirements above.

Specific entry requirements

English 6 (or by other approved means with the equivalent proficiency level)

Course specific prerequisites

Students should have a working knowledge of basic probability, linear algebra and programming. Basic knowledge of statistics and machine learning, corresponding to, e.g., ESS101 Modelling and simulation, SSY230 System identification or TDA231 Algorithms for machine learning and inference, is desirable but not a strict requirement.

Aim

The purpose of this course is to give a thorough introduction to deep machine learning, also known as deep learning or deep neural networks. Over the last few years, deep machine learning has dramatically advanced state-of-the-art performance in various fields, including speech recognition, computer vision and natural language processing. We focus primarily on the basic principles of how these networks are constructed and trained, but we also cover many of the key techniques used in different applications. The overall objective is to provide a solid understanding of how and why deep machine learning is useful, as well as the skills to apply these techniques to solve problems of practical importance.

Learning outcomes (after completion of the course the student should be able to)

  • explain the fundamental principles of supervised learning, including strategies for using validation data to avoid overfitting
  • describe the standard cost functions optimised during supervised training (in particular the cross-entropy) and the standard solution techniques (stochastic gradient descent, backpropagation, etc.)
  • explain how traditional feed-forward networks are constructed and why they can approximate “almost” any function (the universality theorem)
  • understand the vanishing-gradient problem and modern tools to mitigate it (e.g., batch normalisation and residual networks)
  • summarise the key components of convolutional neural networks (CNNs) and their key advantages
  • describe common types of recurrent neural networks (RNNs) and their applications
  • summarise how transformers are constructed and describe their key properties
  • provide an overview of some of the many modern variations of deep learning networks
  • argue for the benefits of transfer learning, self-supervised learning and semi-supervised learning when the amount of annotated/labelled data is limited
  • train and apply CNNs to image applications, and RNNs or transformers to applications involving time sequences
  • use a suitable deep learning library (primarily PyTorch) to solve a variety of practical applications
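
To give a flavour of the first few outcomes, here is a minimal illustrative sketch (not course material) of supervised training by cross-entropy minimisation with stochastic gradient descent. It uses a single-neuron logistic regression in plain Python, with the gradient derived by hand; backpropagation automates exactly this gradient computation for deeper networks.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(p, y):
    # Binary cross-entropy; eps avoids log(0).
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def train(data, lr=0.5, epochs=200, seed=0):
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:           # one sample at a time: "stochastic"
            p = sigmoid(w * x + b)  # forward pass
            grad = p - y            # d(loss)/d(logit) for sigmoid + cross-entropy
            w -= lr * grad * x      # gradient step on each parameter
            b -= lr * grad
    return w, b

# Toy separable data: label 1 iff x > 0.
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]
w, b = train(list(data))
mean_loss = sum(cross_entropy(sigmoid(w * x + b), y) for x, y in data) / len(data)
```

After training, the mean cross-entropy on this toy set is small and the learned weight separates positive from negative inputs; the course treats the multi-class version of this loss and its optimisation in full generality.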

Content

  • supervised learning by cross entropy minimisation combined with evaluations on validation data
  • back propagation and stochastic gradient descent
  • a suitable programming language for implementing deep learning algorithms
  • feedforward neural networks and convolutional neural networks
  • recurrent neural networks
  • the transformer architecture
  • techniques for efficient training such as momentum and batch normalisation
  • modern variations of neural networks (e.g., attention and residual networks)
  • self-supervised learning and semi-supervised learning
  • application of convolutional neural networks to image recognition and transformers to sequential problems
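
As a rough sketch of one item above, batch normalisation, here is an illustration (not part of the syllabus) of its core step for a single feature: activations within a mini-batch are shifted to zero mean and scaled to unit variance, then passed through an affine transform with learnable parameters gamma and beta.

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalise one feature across a mini-batch, then apply the
    # learnable affine transform gamma * x_hat + beta.
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

activations = [2.0, 4.0, 6.0, 8.0]
normed = batch_norm(activations)
```

The normalised batch has (approximately) zero mean and unit variance, which keeps activation scales stable across layers during training; the course covers why this speeds up optimisation and how gamma and beta are learned.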

Organisation

The course comprises online lectures (to watch before class), active learning sessions (where we review material from the corresponding lecture), home assignments, a project, and tutorial sessions (primarily related to the home assignments).

Literature

We mainly use Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, 2016, which is available online at http://www.deeplearningbook.org.

Examination including compulsory elements

There is no written exam in this course. Instead, students are assessed individually based on their performance in the different course activities; more specifically, the grade is obtained by weighting the results on the hand-in assignments, the project, and the degree of attendance.

The course examiner may assess individual students in other ways than what is stated above if there are special reasons for doing so, for example if a student has a decision from Chalmers on educational support due to disability.

The course syllabus contains changes

  • Changes to course rounds:
    • 2022-04-29: Course round 1 changed to open for exchange students (by PA)