AI for Science seminar with Kirill Neklyudov, University of Montreal.
Overview
- Date: 10 April 2025, 15:00–16:30
- Seats available: 40
- Location:
- Language: English
Zoom password: ai4science

The on-site event will be followed by fika in the Analysen coffee area (fika from 16:00-16:30).
Abstract:
The classical paradigm of generative modeling, as the problem of reproducing the training data distribution, becomes less relevant for many applications, including drug discovery and text-to-image generation. In practice, generative models demonstrate the best performance when tailored to specific needs at inference time.
I will present two novel approaches that allow control over the distribution of generated samples at inference time.
Superposition of Diffusion Models (SuperDiff) combines pretrained diffusion models to sample from a mixture of distributions (logical OR) or to generate samples that are likely under all models (logical AND). SuperDiff leverages a new scalable Itô density estimator for the log-likelihood of the diffusion SDE, which incurs no additional overhead compared to the well-known Hutchinson’s estimator needed for divergence calculations.
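The OR/AND combinations above have simple naive counterparts at the level of scores: the score of a mixture is a responsibility-weighted sum of the component scores, while the score of an (unnormalized) product is a plain sum. The sketch below illustrates only this baseline intuition, not SuperDiff's actual reweighting scheme or its Itô density estimator; the function name and interface are hypothetical.

```python
import numpy as np

def superpose_scores(scores, log_probs, mode="or"):
    """Naively combine per-model scores at a point x.

    scores:    list of K arrays of shape (d,), the per-model scores grad log p_k(x)
    log_probs: list of K floats, the per-model log p_k(x)
    mode:      "or"  -> score of the equal-weight mixture (logical OR)
               "and" -> score of the unnormalized product (logical AND)
    """
    S = np.stack(scores)  # (K, d)
    if mode == "or":
        # grad log sum_k p_k = sum_k r_k * grad log p_k, with responsibilities r_k
        lp = np.array(log_probs)
        r = np.exp(lp - lp.max())
        r /= r.sum()
        return (r[:, None] * S).sum(axis=0)
    # grad log prod_k p_k = sum_k grad log p_k
    return S.sum(axis=0)
```

For two 1-D standard Gaussians centered at 0 and 2, the AND score vanishes at x = 1, the point most likely under both models; SuperDiff makes this kind of combination principled for pretrained diffusion models by tracking log-likelihoods along the SDE.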
In the second half of the talk, I will present an efficient and principled method for sampling from a sequence of annealed, geometric-averaged, or product distributions derived from pretrained score-based models. We derive a weighted simulation scheme, which we call Feynman-Kac Correctors (FKCs), based on the celebrated Feynman-Kac formula by carefully accounting for terms in the appropriate partial differential equations (PDEs).
Finally, I'll demonstrate different applications of these methods, varying from the classical image generation tasks to molecule design and sampling from Boltzmann densities.
About the speaker:
Kirill Neklyudov is an Assistant Professor at the University of Montreal and a Core Academic Member at Mila - Quebec AI Institute. He develops novel methods in generative modelling, Monte Carlo methods, and Optimal Transport, and applies them to fundamental problems in the natural sciences, e.g. finding eigenstates of the many-body Schrödinger equation, simulating molecular dynamics, predicting the development of biological cells, conformational sampling, and protein folding.
Previously, he completed two postdocs: one at the Vector Institute with Alán Aspuru-Guzik and Alireza Makhzani, and one at the University of Amsterdam with Max Welling.

Structured learning
This theme focuses on how to exploit structure in data to build machine learning (ML) and artificial intelligence (AI) systems that are safer, more trustworthy, and generalize better. Structure includes relationships between data points in time and space, and how predictions should change when data is transformed in specific ways, for example rotated or scaled. These topics are abstract and general but have a direct impact on the use of AI and ML in the sciences and in applications such as drug and materials design, or medical imaging.