Research Overview

Universality and Average-Case Algorithm Runtimes

Let H be a positive-definite N × N matrix, and let x0 be an N-dimensional random vector. The power method to compute the largest eigenvalue λmax of H iterates:

  1. yj = xj−1 / ‖xj−1‖2
  2. xj = H yj
  3. μj = xj* yj

Then μj → λmax as j → ∞. With P. Deift, we considered H = X X*/M, a sample covariance matrix built from an N × M matrix X with independent entries of mean zero and variance one.
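The iteration above can be sketched in a few lines. This is a minimal illustration, not the code used in the research: the matrix sizes, tolerance, and halting criterion below are arbitrary choices for demonstration.

```python
import numpy as np

# Minimal sketch of the power method as enumerated above (normalize, apply H,
# form mu_j = x_j^* y_j), applied to a sample covariance matrix H = X X^T / M.
# Tolerance, sizes, and halting criterion are illustrative choices only.
def power_method(H, x0, tol=1e-12, max_iter=100_000):
    x, mu_prev = x0, np.inf
    for j in range(1, max_iter + 1):
        y = x / np.linalg.norm(x, 2)   # step 1: y_j = x_{j-1} / ||x_{j-1}||_2
        x = H @ y                      # step 2: x_j = H y_j
        mu = np.vdot(x, y).real        # step 3: mu_j = x_j^* y_j
        if abs(mu - mu_prev) < tol:    # halt when successive estimates stabilize
            return mu, j
        mu_prev = mu
    return mu, max_iter

rng = np.random.default_rng(0)
N, M = 200, 400
X = rng.standard_normal((N, M))        # iid entries, mean 0, variance 1
H = X @ X.T / M                        # sample covariance matrix
mu, halting_time = power_method(H, rng.standard_normal(N))
# mu approximates the largest eigenvalue lambda_max of H
```

The number of iterations returned here is the "halting time" whose limiting distribution is the subject of the theorem below.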

Theorem: There exists a distribution function Fβgap(t) and a constant c, depending only on d, such that the rescaled halting time of the power method converges in distribution to Fβgap. This limit is identified as the limiting distribution of the inverse of the top eigenvalue gap in the real (β = 1) and complex (β = 2) cases, establishing both universality and average-case behavior. Similar results hold for the inverse power method, the QR algorithm, and the Toda algorithm.
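The role of the top gap can be checked empirically. The sketch below (not the theorem itself; all sizes, tolerances, and trial counts are arbitrary choices) shows that the halting time co-varies with the inverse gap 1/(λ1 − λ2) across random sample covariance matrices.

```python
import numpy as np

# Empirical sanity check: power-method halting times should grow with the
# inverse of the top gap lambda_1 - lambda_2, the quantity whose limiting law
# identifies F_beta^gap. This is an illustration, not a proof.
def halting_time(H, x0, eps=1e-8, max_iter=100_000):
    x, mu_prev = x0, np.inf
    for j in range(1, max_iter + 1):
        y = x / np.linalg.norm(x)
        x = H @ y
        mu = np.vdot(x, y).real
        if abs(mu - mu_prev) < eps:
            return j
        mu_prev = mu
    return max_iter

rng = np.random.default_rng(1)
N, M, trials = 100, 200, 50
times, inv_gaps = [], []
for _ in range(trials):
    X = rng.standard_normal((N, M))
    H = X @ X.T / M
    lam = np.linalg.eigvalsh(H)
    times.append(halting_time(H, rng.standard_normal(N)))
    inv_gaps.append(1.0 / (lam[-1] - lam[-2]))
corr = np.corrcoef(times, inv_gaps)[0, 1]  # expected to be strongly positive
```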

Gibbs Phenomenon in PDEs

Consider the initial value problem i qt = ω(−i∂x) q on ℝ × (0, T), with q(x, 0) = q0(x). When q0 is piecewise continuous, with G. Biondini I found a short-time asymptotic expansion of the solution that can be computed numerically with high accuracy.

Solution q(x,t) with step initial condition and ω(k) = −k³ as t ↓ 0; the solution exhibits oscillatory overshoots near the discontinuity (animation).
Fourier series partial sums of a step function as n → ∞, exhibiting the classical Gibbs overshoot at the discontinuity (animation).

Theorem: Let qn(x, t) solve i qt = (−i∂x)ⁿ q with step initial condition. Then, as t ↓ 0, the same Wilbraham–Gibbs constant governs the overshoots and undershoots of the PDE solution near discontinuities.
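The Fourier-series side of this comparison is easy to reproduce numerically. The sketch below (classical Fourier illustration only, not the PDE computation from the theorem) evaluates partial sums of a square wave and recovers the Wilbraham–Gibbs constant (2/π) Si(π) ≈ 1.17898, i.e. roughly an 8.95% overshoot of the jump.

```python
import numpy as np

# The square wave sign(x) on [-pi, pi] has Fourier partial sums
#   S_n(x) = (4/pi) * sum_{k=0}^{n-1} sin((2k+1) x) / (2k+1).
# Their first peak (near x = pi/(2n)) tends to (2/pi) * Si(pi) ~ 1.17898,
# overshooting the jump height 1: the classical Gibbs phenomenon.
def first_peak(n_terms=2000, x_max=5e-3, n_grid=5000):
    x = np.linspace(0.0, x_max, n_grid)   # window containing the first peak
    k = np.arange(n_terms)
    S = (4.0 / np.pi) * (np.sin(np.outer(x, 2 * k + 1)) / (2 * k + 1)).sum(axis=1)
    return S.max()

overshoot = first_peak()   # ~1.17898 for large n_terms
```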

Numerical Nonlinear Steepest Descent

Effective computation of the inverse scattering transform, orthogonal polynomials, Painlevé transcendents, and other nonlinear special functions is accomplished by combining Deift–Zhou nonlinear steepest descent with a numerical Riemann–Hilbert solver. This produces uniformly and spectrally convergent approximations over large parameter ranges.

KdV solution computed via a Riemann–Hilbert problem, with the Riemann–Hilbert contours displayed alongside the evolving solution (animation).

Finite-Genus KdV

Riemann–Hilbert methods yield representations of finite-genus KdV solutions as an alternative to the classical theta function approach, by converting scalar functions on a hyperelliptic Riemann surface into vector-valued functions on a cut complex plane.

Genus-two KdV solution: a periodic wave profile evolving in time (click image for animation).

Instability of Spatial Solitons

Instabilities of spatial solitons in a (2+1)-dimensional nonlinear Schrödinger equation were studied via Hill's method, capturing growth rates in the snake, oscillatory snake, and oscillatory neck instability regions. These rates were experimentally verified by the group of M. Haelterman using a 2D waveguide array.
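Hill's method amounts to expanding a periodic-coefficient spectral problem in Fourier modes and diagonalizing the truncated matrix. The sketch below applies the idea to the classical Mathieu equation as a toy stand-in; it is not the (2+1)-dimensional NLS linearization studied in this work.

```python
import numpy as np

# Hill's method in miniature, on the Mathieu equation
#   -u'' + 2 q cos(2x) u = lambda u,  u 2pi-periodic.
# In the Fourier basis e^{ikx}, -d^2/dx^2 acts as k^2, and
# 2 q cos(2x) = q (e^{2ix} + e^{-2ix}) couples mode k to modes k +/- 2.
def hill_mathieu(q, K=32):
    k = np.arange(-K, K + 1).astype(float)
    n = 2 * K + 1
    L = np.diag(k ** 2)                              # -d^2/dx^2 in Fourier space
    L += q * (np.eye(n, k=2) + np.eye(n, k=-2))      # 2 q cos(2x) coupling
    return np.sort(np.linalg.eigvalsh(L))

eigs = hill_mathieu(q=1.0)
# eigs[0] approximates the Mathieu characteristic value a_0(1) ~ -0.4551
```

For the soliton stability problem, the same truncation is applied to the linearized NLS operator, and the eigenvalues with positive real part give the instability growth rates.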

Neck instability of a spatial soliton with periodic transverse modulations (click for animation).
Oscillatory snake instability of a spatial soliton with sinusoidal deformations (click for animation).