General Information

The Second Conference on Random Matrix Theory and Numerical Linear Algebra (RMT + NLA II) will take place June 16-20, 2025, at the University of Washington in Seattle. Building on the successful first RMT + NLA conference, this second instantiation will again bring together researchers from random matrix theory, numerical linear algebra, mathematical physics, randomized algorithms, high-dimensional statistics, optimization, and theoretical computer science to discuss recent developments. A guiding philosophy of this conference series is that we are particularly interested in having attendees and speakers from one area who may have less exposure to the other areas.

If you plan on attending, please fill out the registration form.

We would like to thank the NSF for support via grants NSF-DMS-2306438 and NSF-DMS-2306439, and the Department of Applied Mathematics at the University of Washington for staffing and organizational support.

Schedule Posted

Posted by Tom Trogdon on April 24, 2025
The conference schedule has been posted here.

The titles and abstracts of our talks and poster presentations are also available.

Minicourse Update

Posted by Tom Trogdon on March 17, 2025
Our three minicourses are:

Giorgio Cipolloni: Wigner matrices: A toy model for Quantum Chaos

Abstract: We will begin by discussing Quantum Unique Ergodicity (QUE) and the Eigenstate Thermalization Hypothesis (ETH), which are fundamental signatures of quantum chaos. However, not much is known about QUE/ETH mathematically for generic quantum systems.

Instead, we study a probabilistic version of these concepts using Wigner matrices, which are more tractable mathematically. The main inputs for proving QUE/ETH for Wigner matrices are the recently developed multi-resolvent local laws, which give deterministic approximations of products of resolvents.

Towards the end of the course, we will discuss some applications of these results/techniques as well as more physically relevant open problems.
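To give a flavor of the objects involved, here is a minimal numerical sketch (our illustration, not part of the course materials) of the single-resolvent local law that the multi-resolvent results generalize: the normalized resolvent trace of a Wigner matrix is well approximated by the Stieltjes transform of the semicircle law. The matrix size N and spectral parameter z below are arbitrary choices.

```python
# Numerical check of the single-resolvent local law for a Wigner matrix:
# (1/N) Tr (H - z)^{-1} should be close to the Stieltjes transform m(z)
# of the semicircle law. Multi-resolvent local laws extend this to
# deterministic approximations of products like G(z1) A G(z2).
import numpy as np

rng = np.random.default_rng(0)
N = 2000  # matrix size (arbitrary choice for this demo)

# Real symmetric Wigner matrix, normalized so the spectrum fills [-2, 2].
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)

z = 0.3 + 0.05j  # spectral parameter in the upper half-plane

# Empirical quantity: (1/N) Tr G(z) = (1/N) sum_i 1/(lambda_i - z).
eigs = np.linalg.eigvalsh(H)
m_emp = np.mean(1.0 / (eigs - z))

# Deterministic prediction: m(z) = (-z + sqrt(z^2 - 4)) / 2, with the
# square-root branch chosen so that Im m(z) > 0.
m = (-z + np.sqrt(z**2 - 4)) / 2
if m.imag < 0:
    m = (-z - np.sqrt(z**2 - 4)) / 2

print(f"(1/N) Tr G(z) = {m_emp:.5f}")
print(f"m_sc(z)       = {m:.5f}")
```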

Govind Menon: The geometry of the deep linear network

Abstract: The deep linear network (DLN) is a matrix model of deep learning. It models the effect of overparameterization in the construction of linear functions. Despite its simplicity, the model has a subtle mathematical structure that yields interesting insights into the training dynamics of deep learning.

We explain a (matrix) geometric perspective for the analysis of the DLN. The heart of the matter is an explicit description of the underlying Riemannian geometry. The use of Riemannian geometry provides unity with the theory of interior point methods for conic programs, and it is helpful to contrast the gradient flows that arise in each setting.
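To make the model concrete, here is a minimal sketch (our illustration, not Menon's formulation) of gradient descent on a three-layer deep linear network fit to a fixed linear target; the dimension, depth, learning rate, and initialization are arbitrary choices.

```python
# Deep linear network: the end-to-end map is the product W = W_3 W_2 W_1.
# Gradient descent on the factors, not on W itself, is what produces the
# nontrivial training dynamics the course analyzes geometrically.
import numpy as np

rng = np.random.default_rng(1)
d, depth, steps, lr = 4, 3, 10000, 0.01

target = rng.standard_normal((d, d))  # linear map to be learned
Ws = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(depth)]

for _ in range(steps):
    # prefix[k] = W_k ... W_1 (prefix[0] = I)
    prefix = [np.eye(d)]
    for W in Ws:
        prefix.append(W @ prefix[-1])
    # suffix[k] = W_L ... W_{k+1} (suffix[L] = I)
    suffix = [np.eye(d)]
    for W in reversed(Ws):
        suffix.append(suffix[-1] @ W)
    suffix = suffix[::-1]

    # Gradient of 0.5 * ||W_L ... W_1 - target||_F^2 via the chain rule:
    # dL/dW_k = (W_L ... W_{k+1})^T E (W_{k-1} ... W_1)^T, E = product - target.
    E = prefix[-1] - target
    for k in range(depth):
        Ws[k] -= lr * suffix[k + 1].T @ E @ prefix[k].T

final = prefix[-1]
print("final loss:", 0.5 * np.linalg.norm(final - target) ** 2)
```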

Michael W Mahoney: Random matrix theory and modern machine learning

Abstract: Random matrix theory is a large area with a long history, with elegant theory and a wide range of applications. However, the challenges of modern machine learning are forcing us not only to use random matrix theory in new ways, but also to chart out new directions for theory. In this series of presentations, we'll cover several aspects of these developments. This includes challenges in training machine learning models, inference in overparameterized models, diagnostics where heavy-tailed distributions are ubiquitous, and computational-statistical tradeoffs in randomized numerical linear algebra.

Addressing these challenges leads to new directions for theory: phenomenology and semi-empirical theory to characterize performance in state-of-the-art neural networks without access to training or testing data; high-dimensional linearizations and deterministic equivalents to go beyond eigenvalue distributions of linear models; very sparse embeddings to perform "algorithmic gaussianization" to speed up core numerical linear algebra problems; new random matrix models that have heavy-tailed spectral structure without having heavy-tailed elements; and using "free compression" ideas in reverse to compute high-quality spectral distributions of so-called impalpable matrices (which we cannot form explicitly, or even evaluate with full matrix-vector products).
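As a small illustration of one randomized NLA idea mentioned above, here is a toy sketch-and-solve least-squares solver (our example, not from the talks) using a very sparse CountSketch-style embedding with one signed nonzero per row; the problem sizes and sketch dimension are arbitrary choices.

```python
# Sketch-and-solve least squares: compress a tall n x d problem to an
# m x d problem with a sparse random embedding S, then solve the small
# problem. Applying S costs O(nnz(A)) since each row of A is hashed to
# a single row of the sketch with a random sign.
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 20000, 20, 1000  # tall problem, sketch dimension (demo sizes)

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# CountSketch-style embedding: row i of A goes to bucket rows[i] with sign
# signs[i]; SA = S @ A and Sb = S @ b are accumulated without forming S.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
SA = np.zeros((m, d))
Sb = np.zeros(m)
np.add.at(SA, rows, signs[:, None] * A)
np.add.at(Sb, rows, signs * b)

x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

print("relative error vs exact solution:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```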

Minicourses

Posted by Tom Trogdon on January 31, 2025
We are pleased to announce that Giorgio Cipolloni, Michael Mahoney, and Govind Menon will be presenting minicourses. Further details will be forthcoming.

Details concerning funding and registration are also posted.

Contacts

Tom Trogdon
trogdon@uw.edu
Xiucai Ding
xcading@ucdavis.edu

Confirmed presenters