General Information

The Second Conference on Random Matrix Theory and Numerical Linear Algebra (RMT + NLA II) will take place June 16-20, 2025, at the University of Washington in Seattle. Building on the successful first RMT + NLA conference, this second instantiation will again bring together researchers from random matrix theory, numerical linear algebra, mathematical physics, randomized algorithms, high-dimensional statistics, optimization, and theoretical computer science to discuss recent developments. A guiding philosophy of this conference series is that we are particularly interested in having attendees and speakers from one area who may have had less exposure to the others.

If you plan on attending, please fill out the registration form. We may have a handful of speaker slots available and are actively soliciting poster submissions. Junior researchers may also apply for travel funding within the registration form.

We would like to thank the NSF for support via grants NSF-DMS-2306438 and NSF-DMS-2306439, and the Department of Applied Mathematics at the University of Washington for staffing and organizational support.

Minicourse Update

Posted by Tom Trogdon on March 17, 2025
Our three minicourses are:

Giorgio Cipolloni: Wigner matrices: A toy model for Quantum Chaos

Abstract: We will begin by discussing Quantum Unique Ergodicity (QUE) and the Eigenstate Thermalization Hypothesis (ETH), which are fundamental signatures of quantum chaos. However, little is known about QUE/ETH mathematically for generic quantum systems.

Instead, we study a probabilistic version of these concepts using Wigner matrices, which are more tractable mathematically. The main inputs for proving QUE/ETH for Wigner matrices are the recently developed multi-resolvent local laws, i.e., deterministic approximations of products of resolvents.

Towards the end of the course, we will discuss some applications of these results/techniques as well as more physically relevant open problems.
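For readers newer to this area, the following is a minimal numerical sketch, not part of the course materials: assuming NumPy, with illustrative choices of the size N and the spectral parameter z, it samples a GOE Wigner matrix and checks that the normalized trace of the resolvent G(z) = (H - z)^(-1) is close to the Stieltjes transform of the semicircle law. The multi-resolvent local laws mentioned in the abstract concern products such as G(z1) A G(z2); this only illustrates the single-resolvent case.

    # Minimal sketch (assumes NumPy; N and z are illustrative choices):
    # compare (1/N) Tr G(z) for a GOE Wigner matrix with the Stieltjes
    # transform of the semicircle law.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 2000

    # GOE Wigner matrix: real symmetric, entry variance ~ 1/N,
    # so the spectrum concentrates on [-2, 2].
    A = rng.standard_normal((N, N))
    H = (A + A.T) / np.sqrt(2 * N)

    z = 0.3 + 0.05j                        # spectral parameter, Im z > 0
    G = np.linalg.inv(H - z * np.eye(N))   # resolvent G(z)

    # Stieltjes transform of the semicircle law: the root of
    # m^2 + z*m + 1 = 0 with positive imaginary part.
    roots = np.roots([1.0, z, 1.0])
    m_sc = roots[np.argmax(roots.imag)]

    print("(1/N) Tr G(z) =", np.trace(G) / N)
    print("m_sc(z)       =", m_sc)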

Govind Menon: The geometry of the deep linear network

Abstract: The deep linear network is a matrix model of deep learning. It models the effect of overparameterization in the construction of linear functions. Despite its simplicity, the model has a subtle mathematical structure that yields interesting insights into the training dynamics of deep learning.

We explain a (matrix) geometric perspective for the analysis of the DLN. The heart of the matter is an explicit description of the underlying Riemannian geometry. The use of Riemannian geometry provides unity with the theory of interior point methods for conic programs, and it is helpful to contrast the gradient flows that arise in each setting.
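To make the model concrete, here is a minimal sketch, our own illustration rather than course material: assuming NumPy, with hypothetical choices of depth, target, and step size, it trains a depth-3 deep linear network, i.e., a d x d linear map overparameterized as a product W_3 W_2 W_1, by gradient descent to match a fixed target matrix.

    # Minimal deep linear network sketch (assumes NumPy; depth, target,
    # and learning rate are hypothetical choices). The end-to-end map is
    # linear, but it is overparameterized as a product of L factors,
    # which changes the training dynamics.
    import numpy as np

    rng = np.random.default_rng(1)
    d, L = 5, 3                            # matrix dimension and depth

    # A well-conditioned target so plain gradient descent converges cleanly.
    M = rng.standard_normal((d, d))
    target = M @ M.T / d + 0.5 * np.eye(d)

    Ws = [np.eye(d) for _ in range(L)]     # factors W_1, ..., W_L

    def end_to_end(factors):
        # Product of the factors applied in order: W_L ... W_1.
        W = np.eye(d)
        for F in factors:
            W = F @ W
        return W

    lr = 0.01
    for step in range(5001):
        W = end_to_end(Ws)
        grad_W = W - target                # gradient of (1/2)||W - target||_F^2
        # Chain rule for a product of factors:
        # dLoss/dW_i = (W_L ... W_{i+1})^T grad_W (W_{i-1} ... W_1)^T
        grads = [end_to_end(Ws[i + 1:]).T @ grad_W @ end_to_end(Ws[:i]).T
                 for i in range(L)]
        for i in range(L):
            Ws[i] -= lr * grads[i]
        if step % 1000 == 0:
            print(step, np.linalg.norm(end_to_end(Ws) - target))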

Michael W. Mahoney: Random matrix theory and modern machine learning

Abstract: TBA

Minicourses

Posted by Tom Trogdon on January 31, 2025
We are pleased to announce that Giorgio Cipolloni, Michael Mahoney, and Govind Menon will be presenting minicourses. Further details will be forthcoming.

Details concerning funding and registration are also posted.

Contacts

Tom Trogdon
trogdon@uw.edu
Xiucai Ding
xcading@ucdavis.edu

Confirmed participants