Seminar
The event has passed

Multiplicative Amplification in Deep Neural Networks

Networking event organised by the CHAIR theme AI for Scientific Data Analysis. Lunch is included.

Speaker: Prof. Bernhard Mehlig.

Overview

  • Date: 2 October 2024, 12:00–13:00
  • Location: PJ, Physics Building Origo, Campus Johanneberg
  • Language: English
  • Last sign-up date: 26 September 2024

Abstract, Bernhard Mehlig

How do neural networks learn? This can be analysed and understood, in part, using concepts from dynamical-systems theory [1].

For deep neural networks, the maximal finite-time Lyapunov exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems such as turbulent flow. Ridges of large positive exponents divide input space into different regions that the network associates with different classes in a classification task. The ridges reveal the geometry that deep networks construct in input space, and they help to quantify how learning depends on network depth and width [2].
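As a concrete illustration of the quantity in the abstract, the sketch below computes the maximal finite-time Lyapunov exponent of a network at an input x as lambda_1(x) = (1/L) ln sigma_max(J), where J is the input-to-output Jacobian accumulated over L layers by the chain rule. This is a minimal sketch, not the setup of Ref. [2]: the architecture (fully connected tanh layers with Gaussian random weights) and all sizes are illustrative assumptions.

    import numpy as np

    def max_ftle(x, weights, biases):
        # Accumulate the input-to-output Jacobian layer by layer (chain rule),
        # then return lambda_1 = (1/L) ln sigma_max(J), the maximal
        # finite-time Lyapunov exponent over L layers.
        J = np.eye(len(x))
        h = x
        for W, b in zip(weights, biases):
            h = np.tanh(W @ h + b)        # forward pass through one layer
            D = np.diag(1.0 - h**2)       # derivative of tanh at this layer
            J = D @ W @ J                 # multiply in this layer's Jacobian
        sigma_max = np.linalg.svd(J, compute_uv=False)[0]
        return np.log(sigma_max) / len(weights)

    # Random tanh network: depth L = 10, width n = 20 (illustrative values)
    rng = np.random.default_rng(0)
    n, L = 20, 10
    weights = [rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n)) for _ in range(L)]
    biases = [np.zeros(n) for _ in range(L)]
    x = rng.normal(size=n)
    print(f"maximal FTLE at x: {max_ftle(x, weights, biases):.3f}")

Evaluating lambda_1 on a grid of inputs would expose the ridge structure described above: where lambda_1 is large and positive, the network stretches nearby inputs apart, separating the class regions.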

References

[1] B. Mehlig, Machine Learning with Neural Networks, Cambridge University Press (2021).

[2] L. Storm, H. Linander, J. Bec, K. Gustavsson & B. Mehlig, Finite-time Lyapunov exponents of deep neural networks, Phys. Rev. Lett. 132, 057301 (2024).


AI for Scientific Data Analysis

This theme is about harnessing the power of AI as a tool for scientific research. AI can be applied to, and can potentially accelerate, discovery in a variety of research disciplines, such as microscopy, physics, biology, chemistry, and astronomy.