Research

Black-box tests for algorithmic stability

Algorithmic stability expresses the idea that small perturbations to the training data should result in only small changes to the model output. It is closely related to generalization behavior, and it is often the key sufficient condition for the validity of many distribution-free predictive inference methods. Given a black-box algorithm, one may therefore hope to verify from data that it is stable.

In this work, we study this problem and argue that, in general, black-box testing of stability is hard when only a limited amount of data is available.
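To make the setting concrete, here is a minimal sketch of the kind of naive, data-based probe one might attempt: delete one training point at a time, refit, and measure how far the predictions move. The function names and the leave-one-out perturbation are illustrative, not the tests analyzed in the manuscript.

```python
import numpy as np

def perturbation_sensitivity(fit, X, y, X_test, n_trials=50, seed=0):
    """Naive stability probe: refit after deleting one random training
    point and record the largest change in test-set predictions."""
    rng = np.random.default_rng(seed)
    base = fit(X, y).predict(X_test)
    deltas = []
    for _ in range(n_trials):
        i = rng.integers(len(y))                 # drop one training point
        mask = np.arange(len(y)) != i
        refit = fit(X[mask], y[mask]).predict(X_test)
        deltas.append(np.max(np.abs(refit - base)))
    return float(np.mean(deltas))
```

The point of the hardness result is that, with a limited amount of data, no probe of this general form can reliably certify stability, however the perturbations are chosen.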

Click to see the manuscript on arXiv

Two-sample inference for high-dimensional Markov networks

In many scientific experiments, the data are collected from two groups—say, the control and the treated—and the goal is to understand how the two groups differ. Differential network analysis can be a useful tool for such data. A differential network is a graph such that a pair of nodes are connected by an edge if and only if the variables they index have different conditional dependence relationships across the two groups, either because the variables are conditionally dependent in one but not in the other, or because the strength or the direction of the relationship has changed. It is customary to assume that the data distributions are known up to their parameters, but the problems are often high-dimensional, making accurate yet powerful inference a challenge.

In this work, we propose procedures for estimating differential networks with valid uncertainty quantification for a pair of samples from a general parametric family of Markov networks. Our approach brings together direct difference estimation, de-biasing, and bootstrap testing. Direct difference estimation refers to re-expressing the problem directly in terms of the difference of the distributional parameters; such re-parametrization is advantageous in high-dimensional settings where some form of regularization is necessary. De-biasing makes each component of the estimator approximately Gaussian and therefore easier to work with. Finally, an estimate of the differential network structure is obtained by thresholding the de-biased regularized estimate; here, estimating the threshold with the bootstrap is expected to increase power, as the threshold can adapt to the correlation structure. We show that our procedures are valid under weaker assumptions than those typically required by procedures that skip the de-biasing step. Moreover, the error level is set by the user, so uncertainty quantification is transparent.
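As a rough illustration of the last step, here is a schematic sketch of bootstrap-calibrated thresholding. It assumes the de-biased entrywise estimates, their standard errors, and bootstrap replicates of the studentized errors have already been computed; the names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def differential_edges(delta_hat, se, boot_draws, alpha=0.05):
    """Threshold a de-biased difference estimate at a bootstrap critical value.

    delta_hat  : (p, p) de-biased estimates of the parameter differences
    se         : (p, p) entrywise standard errors
    boot_draws : (B, p, p) bootstrap replicates of the studentized errors
    """
    t_stat = np.abs(delta_hat) / se
    # Bootstrap the maximum studentized statistic so the threshold can
    # adapt to the correlation structure across entries.
    max_stats = np.abs(boot_draws).reshape(len(boot_draws), -1).max(axis=1)
    crit = np.quantile(max_stats, 1 - alpha)
    return t_stat > crit  # boolean adjacency matrix of the differential network
```

Calibrating with the maximum statistic makes the same threshold valid simultaneously for all candidate edges, which is one way to keep the user-specified error level transparent.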

Click to see the paper in Journal of the Royal Statistical Society: Series B, 83(5)

Predictive inference is free with the jackknife+-after-bootstrap

Ensemble methods are popular for their excellent performance on many practical tasks, but they tend to be computationally intensive and difficult to analyze. While any of the existing distribution-free methods, such as conformal prediction or the more recent variants it has inspired, can be used to obtain prediction intervals with a marginal coverage guarantee at finite sample sizes and without distributional assumptions, they either do not make full use of the data or come with a hefty computational cost, reducing their appeal.

In this work, we propose a wrapper method for constructing prediction intervals that can be integrated with most ensembles deployed in practice. Inspired by the jackknife+ and the out-of-bag error estimation technique, our method uses the data efficiently without incurring additional model fitting costs beyond what is already necessary for the ensemble prediction itself. In contrast to similar methods using out-of-bag calibration, we are also able to guarantee a distribution-free lower bound on the coverage, albeit under the rather curious requirement that the number of models in the ensemble be random. This randomness requirement can be dropped for stable ensembles, such as those that arise from bagging bounded predictors.
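The construction is simple enough to sketch. Assuming the fitted ensemble members and their bootstrap memberships are available, a simplified version for a single test point might look as follows; note that it fixes the ensemble size, whereas the formal guarantee requires the number of models to be drawn at random.

```python
import numpy as np

def jab_interval(models, in_bag, X, y, x_test, alpha=0.1):
    """Simplified jackknife+-after-bootstrap-style interval at one test
    point, reusing only the models already fitted for the ensemble.

    models : list of B fitted regressors with .predict()
    in_bag : (B, n) boolean array; in_bag[b, i] is True if training
             point i was in the bootstrap sample behind models[b]
    """
    lo, hi = [], []
    for i in range(len(y)):
        oob = [m for b, m in enumerate(models) if not in_bag[b, i]]
        if not oob:
            continue                             # point i appears in every bag
        mu_i = np.mean([m.predict(x_test.reshape(1, -1))[0] for m in oob])
        r_i = abs(y[i] - np.mean([m.predict(X[i:i + 1])[0] for m in oob]))
        lo.append(mu_i - r_i)
        hi.append(mu_i + r_i)
    m = len(lo)
    k = min(int(np.ceil((1 - alpha) * (m + 1))), m)  # jackknife+ quantile rank
    return np.sort(lo)[m - k], np.sort(hi)[k - 1]
```

All predictions come from models that were fitted anyway, so the interval costs no extra training beyond the ensemble itself.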

Click to see the paper in Advances in Neural Information Processing Systems 33

Enhancement-constrained acceleration: a robust reconstruction framework in breast DCE-MRI

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) refers to the acquisition of MRI images following the introduction of a contrast agent into a tissue of interest; it is often used to diagnose many types of cancer. While it is desirable to have images with high spatial and temporal resolution, the nature of MRI acquisition makes it difficult to achieve both.

In this work, we propose a new method for reconstructing images at high temporal resolution while preserving important spatial details, under the assumption that each voxel evolves smoothly as a function of time. Typically, each MRI image is reconstructed from the k-space (i.e., spatial-frequency domain) data via the inverse discrete Fourier transform (IDFT), under the implicit assumption that the object being imaged is stationary throughout the duration of each sweep through k-space. A sweep can take over a second on clinical scanners, so this assumption may be unrealistic for diagnostic applications. However, if the changes in each voxel are smooth, it is possible to share information across time points by introducing a penalty that encourages smoothness. We demonstrate that our method can produce images that faithfully capture details unfolding on a faster timescale, outperforming some popular baselines.
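The role of the smoothness penalty can be illustrated with a toy problem. The sketch below uses real-valued measurement operators and plain gradient descent as stand-ins for the actual Fourier sampling and solver; it is not the paper's algorithm, only the penalized least-squares idea behind it.

```python
import numpy as np

def reconstruct_smooth(A_list, y_list, lam=1.0, n_iter=500, lr=1e-3):
    """Toy joint reconstruction of a sequence of frames x_1, ..., x_T from
    undersampled linear measurements y_t = A_t x_t, solving

        minimize  sum_t ||A_t x_t - y_t||^2 + lam * sum_t ||x_{t+1} - x_t||^2

    so that information is shared across time points through the penalty.
    """
    T, p = len(y_list), A_list[0].shape[1]
    x = np.zeros((T, p))
    for _ in range(n_iter):  # plain gradient descent, for illustration only
        g = np.stack([A.T @ (A @ xt - yt)
                      for A, xt, yt in zip(A_list, x, y_list)])
        g[:-1] += lam * (x[:-1] - x[1:])   # pull each frame toward the next
        g[1:] += lam * (x[1:] - x[:-1])    # and toward the previous one
        x -= lr * g
    return x
```

Setting lam to zero recovers frame-by-frame least squares, the analogue of independent IDFT reconstructions; increasing it trades data fidelity for temporal smoothness.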

Click to see the paper in PLOS ONE, 16(10)