The CDT Spring School in Mathematics of Random Systems 2025 is jointly organised by the Scuola Normale Superiore di Pisa and the EPSRC CDT in Mathematics of Random Systems. The Spring School will be held at the Scuola Normale. In addition to the lecture courses, there will be invited talks by guest lecturers and presentations by selected PhD students.
Event Timetable
The full timetable for the Spring School can be downloaded here; the document will be updated as details are confirmed.
Monday, 31st March
Registration and Lectures from 09:00 to 17:30
Tuesday, 1st April
Lectures from 09:00 to 17:30
Wednesday, 2nd April
Lectures from 09:00 to 16:00
Invited Lecturers
Professor Luigi Ambrosio (Scuola Normale Superiore, Pisa)
Geometry of Optimal Transport and applications to ordinary and partial differential equations
Professor Gabriel Peyré (Ecole Normale Supérieure, Paris)
Flows in Machine Learning: Sampling and Training Dynamics
Lecture Courses
Professor Luigi Ambrosio (Scuola Normale Superiore, Pisa) - Geometry of Optimal Transport and applications to ordinary and partial differential equations
The first part of the minicourse will cover the foundations of the theory of Optimal Transport. The geometric point of view pioneered by the work of Brenier, McCann and Otto will then be introduced, with a view towards remarkable applications to functional inequalities, PDEs and, specifically, gradient flows. The second part of the minicourse will show how ideas from Optimal Transport can be used to develop a good calculus in metric measure spaces and, if time allows, will illustrate the theory of well-posedness of ODEs in this context, developed in collaboration with Dario Trevisan.
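For orientation, the central object of the first part of the course is the Kantorovich formulation of optimal transport; the following is a standard statement, not taken from the course materials:

\[
W_c(\mu,\nu) \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int_{X \times Y} c(x,y)\, \mathrm{d}\pi(x,y),
\]

where \(\Pi(\mu,\nu)\) is the set of couplings of \(\mu\) and \(\nu\), i.e. probability measures on \(X \times Y\) whose marginals are \(\mu\) and \(\nu\). With the quadratic cost \(c(x,y) = |x-y|^2\) on \(\mathbb{R}^d\), the square root of this infimum is the 2-Wasserstein distance \(W_2\), the metric underlying the gradient-flow picture of Otto.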
The first part of Prof Ambrosio's lecture notes can be read here.
You can download the second part of Prof Ambrosio's lecture notes here [zip].
Professor Gabriel Peyré (Ecole Normale Supérieure, Paris) - Flows in Machine Learning: Sampling and Training Dynamics
In this course, I will review how concepts from optimal transport can be applied to analyze seemingly unrelated machine learning methods for sampling and training neural networks. The focus is on using optimal transport to study dynamical flows in the space of probability distributions. The first example will be sampling by flow matching, which regresses advection fields. In its simplest case (diffusion models), this approach exhibits a gradient structure similar to the displacement seen in optimal transport. I will then discuss Wasserstein gradient flows, where the flow minimizes a functional within the optimal transport geometry. This framework can be employed to model and understand the training dynamics of the probability distribution of neurons in two-layer networks. The final example will explore modeling the evolution of the probability distribution of tokens in deep transformers. This requires modifying the optimal transport structure to accommodate the softmax normalization inherent in attention mechanisms.
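As a rough pointer to the first two topics, the following are standard textbook formulations (not excerpts from the course). The conditional flow matching objective regresses a velocity field onto straight-line interpolations between samples,

\[
\min_{\theta}\; \mathbb{E}_{t \sim U[0,1],\; x_0 \sim \mu_0,\; x_1 \sim \mu_1} \big\| v_\theta(t, x_t) - (x_1 - x_0) \big\|^2, \qquad x_t = (1-t)\,x_0 + t\,x_1,
\]

while a Wasserstein gradient flow of a functional \(\mathcal{F}\) evolves a distribution by the continuity equation

\[
\partial_t \mu_t \;=\; \nabla \cdot \Big( \mu_t\, \nabla \frac{\delta \mathcal{F}}{\delta \mu}(\mu_t) \Big).
\]

For \(\mathcal{F}(\mu) = \int \mu \log \mu\), this recovers the heat equation, a basic instance of the link between diffusion, sampling and optimal transport geometry.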