Course syllabus for Parallel computer architecture

Course syllabus adopted 2021-02-26 by Head of Programme (or corresponding).

Overview

  • Swedish name: Parallell datorarkitektur
  • Code: EDA284
  • Credits: 7.5
  • Owner: MPHPC
  • Education cycle: Second-cycle
  • Main field of study: Computer Science and Engineering, Electrical Engineering, Software Engineering
  • Department: Computer Science and Engineering
  • Grading: TH - Pass with distinction (5), Pass with credit (4), Pass (3), Fail

Course round 1

  • Teaching language: English
  • Application code: 86114
  • Open for exchange students: Yes

Credit distribution

  • 0118 Project, 1.5 credits. Grading: TH
  • 0218 Examination, 4.5 credits. Grading: TH
    Exam dates: 17 Mar 2023 am J, 08 Jun 2023 pm J, 18 Aug 2023 am J
  • 0318 Laboratory, 1.5 credits. Grading: UG

Eligibility

General entry requirements for Master's level (second cycle)
Applicants enrolled in a programme at Chalmers where the course is included in the study programme are exempted from fulfilling the requirements above.

Specific entry requirements

English 6 (or by other approved means with the equivalent proficiency level)
Applicants enrolled in a programme at Chalmers where the course is included in the study programme are exempted from fulfilling the requirements above.

Course specific prerequisites

The course DAT105 Computer architecture, or equivalent, is required. The course TDA383 Principles of Concurrent Programming is recommended.

Aim

From 1975 to 2005, the computer industry accomplished a phenomenal mission: in 30 years, we put a personal computer on every desk and in every pocket. In 2005, however, mainstream computing hit a wall, and the industry undertook a new mission: to put a personal parallel supercomputer on every desk, in every home, and in every pocket. In 2011, we completed the transition to parallel computing in all mainstream form factors, with the arrival of multicore tablets and smartphones. Soon this "build out" of multicore will deliver mainstream quad- and eight-core tablets and even the last single-core gaming console will become multicore. For the first time in the history of computing, mainstream hardware is no longer a single-processor von Neumann machine.
Power and temperature have joined performance as first-class design goals. High-performance computing platforms now strive for the highest performance/watt. This course looks at the design of current multicore systems with an eye towards how those designs are likely to evolve over the next decade. We also cover the historical origins of many design strategies that have re-emerged in current systems in different forms and contexts (e.g., data parallelism, VLIW parallelism, and thread-level parallelism).

Learning outcomes (after completion of the course the student should be able to)

Knowledge and understanding
  • describe current approaches to parallel computing
  • explain the design principles of the hardware support for the shared memory and message passing programming models
  • describe the implementation of different models of thread-level parallelism, such as core multithreading, chip multiprocessors, many-core processors, and GPGPUs
Competence and skills
  • implement synchronization methods for shared memory and message passing parallel computers (a minimal shared-memory example is sketched after this list)
  • design scalable parallel software and analyze its performance
Judgement and approach
  • analyze the trade-offs of different approaches to parallel computing in terms of function, performance and cost
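
As an illustration of the synchronization outcome above, here is a minimal sketch of one classic shared-memory synchronization method, a test-and-set spinlock built on C++ std::atomic_flag. It is an invented example for this text, not part of the course material, and the class and variable names are assumptions.

```cpp
// Illustrative sketch only (not course material): a test-and-set spinlock.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

class SpinLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // Atomically set the flag; keep spinning while it was already set.
        while (flag.test_and_set(std::memory_order_acquire)) {
            // busy-wait
        }
    }
    void unlock() { flag.clear(std::memory_order_release); }
};

int main() {
    SpinLock lock;
    long counter = 0;
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&] {
            for (int i = 0; i < 100000; ++i) {
                lock.lock();   // mutual exclusion around the shared counter
                ++counter;
                lock.unlock();
            }
        });
    }
    for (auto& w : workers) w.join();
    std::cout << counter << '\n';  // 400000 with correct synchronization
}
```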

Organisation

The content is divided into several parts:
  • a review of fundamental concepts in computer architecture
  • basic multiprocessor designs for the message passing and shared memory programming models
  • interconnection networks, an essential component in chip multiprocessors and scalable parallel computer systems
  • how to correctly support parallel algorithms in shared memory hardware
  • the recent transition to chip multiprocessors (also known as "multicores")
A common thread running through all content parts is a discussion of cost tradeoffs with respect to performance, power, energy, verifiability, programmability, and maintainability. A second unifying theme is the memory bottleneck and the importance of efficient resource management.

The lectures are complemented by several exercise sessions. In three lab assignments, participants learn to develop software using models such as C++ threads and OpenMP, develop and analyze synchronization algorithms, and use performance analysis tools. The course also contains a written assignment in which participants take the role of a computer architect, surveying and discussing solutions to a particular problem in the field of parallel computing.
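
To make the lab setting concrete, the following is a minimal sketch of the OpenMP programming style mentioned above; it is an invented example (the problem size and names are assumptions), not actual lab code.

```cpp
// Illustrative sketch only (not lab code): a data-parallel dot product
// with OpenMP, the kind of loop the lab assignments parallelize.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int N = 1 << 20;                       // invented problem size
    std::vector<double> a(N, 1.0), b(N, 2.0);
    double dot = 0.0;

    // Split loop iterations across threads; the reduction clause gives each
    // thread a private partial sum, avoiding a data race on 'dot'.
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; ++i) {
        dot += a[i] * b[i];
    }

    std::printf("dot = %f (max threads: %d)\n", dot, omp_get_max_threads());
}
```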


Literature

See separate literature list.

Examination including compulsory elements

The examination comprises a written individual exam given in an examination hall, laboratory work, and a multi-week written project conducted individually or in pairs.

The course examiner may assess individual students in other ways than what is stated above if there are special reasons for doing so, for example if a student has a decision from Chalmers on educational support due to disability.