Michael McNeil Forbes
Fellow
Research Assistant Professor, Institute for Nuclear Theory
Research Interests
Many-body Theory
Our universe is an incredible place. Despite its diversity and apparent
complexity, an amazing amount of it can be described by relatively simple
physical laws referred to as the Standard Model of particle physics. Much of
this complexity “emerges” from the interaction of many simple components.
Characterizing the behaviour of “many-body” systems forms the focus of much of
my research.
Unfortunately, “many” typically begins with three! With the help of computers,
“many” can be extended to four or five if “exact” solutions are desired.
Approximate methods such as ab initio Monte Carlo techniques can extend this
to a few hundred, and classical particle-dynamics simulations have pushed this
to trillions; even so, this falls well short of Avogadro’s number (~6×10²³) of
molecules found in a typical macroscopic sample of matter.
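To make this scaling concrete, the following toy calculation shows how quickly the exact many-body basis outgrows any computer. The choice of 20 single-particle orbitals per spin is an arbitrary illustration, not a specific model:

```python
# Minimal sketch (assumptions: N spin-1/2 fermions distributed over M = 20
# single-particle orbitals per spin, so the many-body basis has binom(2M, N)
# states; the numbers are illustrative only).
from math import comb

M = 20  # single-particle orbitals per spin (assumed for illustration)
for N in (2, 3, 4, 5, 10):
    # The basis dimension grows combinatorially with particle number.
    print(f"N = {N:2d}: dimension = {comb(2 * M, N):,}")
```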
A solution to the “many-body” problem is imperative for
fundamental physics to systematically predict and explore new properties of
matter without requiring serendipitous experimental discovery. Many cases
preclude experiments altogether (consider stellar matter), and a practical
many-body solution is then the only recourse.
Finding methods to study fermionic many-body systems has been the underlying
focus of much of my research. QCD is a rich fundamental theory, and the
mean-field analyses used to study it have been a source of inspiration for
novel phases in other many-body systems. Dense quark matter may also have
observable consequences -- topological defects such as vortices, for example,
could play an important role in explaining puzzling neutron-star phenomena
like glitches and kicks -- and may even hold the key to understanding
baryogenesis and dark matter. A problem with these possibilities is that they
lie in regimes of strong coupling where theoretical techniques are at best
qualitative, and sometimes misleading.
Fortunately, the relevant physics of interacting fermions appears in many
other contexts, including cold atoms, superconductors, and nuclei. Cold-atom
experiments provide a controllable environment for studying the simplest of
these systems, and have already demonstrated that many
“mean-field” models are insufficient for understanding novel
states of fermionic matter. One of my long-term research objectives is to
develop and benchmark quantitative techniques for modelling strongly
interacting fermions, starting with the simplest cold-atom systems and
generalizing to nuclear matter. (This was the focus of
our 2011
INT program.) For example, cold-atom systems have established that
variational and quantum Monte Carlo (QMC) calculations provide a useful
description of highly polarized matter.
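As a toy illustration of the variational Monte Carlo idea behind such calculations, the sketch below estimates the ground-state energy of a single particle in a 1D harmonic trap from a Gaussian trial wavefunction. The problem and parameters are assumptions for illustration, not the polarized-matter calculations themselves:

```python
# Minimal sketch (assumptions: variational Monte Carlo for one particle in a
# 1D harmonic trap with Gaussian trial wavefunction psi_a(x) = exp(-a x^2);
# units hbar = m = omega = 1).
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, a):
    """E_L(x) = -psi''/(2 psi) + x^2/2 = a + (1/2 - 2 a^2) x^2."""
    return a + (0.5 - 2.0 * a**2) * x**2

def vmc_energy(a, n_steps=100_000, step=1.0):
    """Metropolis sampling of |psi_a|^2, averaging the local energy."""
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Accept with probability |psi(x_new)|^2 / |psi(x)|^2.
        if rng.random() < np.exp(-2.0 * a * (x_new**2 - x**2)):
            x = x_new
        total += local_energy(x, a)
    return total / n_steps

for a in (0.3, 0.5, 0.7):  # a = 0.5 is the exact ground state, E = 0.5
    print(f"a = {a}: E = {vmc_energy(a):.4f}")
```

The variational principle guarantees each estimate lies at or above the true ground-state energy, so minimizing over the trial parameter bounds the answer from above.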
My immediate research objectives center on density functional theory
(DFT) and related techniques. The DFT framework provides a computationally
accessible tool for studying the static and dynamic properties of macroscopic
systems. It provides a bridge between systems of finite size and the
macroscopic thermodynamic limit, allowing calculable properties, such as
ab initio QMC results for few-body systems, to be combined with measurable
ones to quantitatively model more complex systems. The result is a framework
for computing static and dynamic properties of many-body systems far beyond
the reach of ab initio techniques.
After having established the existence of partially polarized phases in the
unitary Fermi gas (arXiv:cond-mat/0606043), we developed an asymmetric DFT to
quantitatively describe polarized systems (arXiv:0808.1436), and predicted the
existence of a fermionic supersolid LOFF (Larkin-Ovchinnikov-Fulde-Ferrell)
phase (arXiv:0802.3830). Present experiments do not allow this phase enough
physical space to develop, but we are working with experimentalists to
optimize trap designs and identify signatures that will maximize the
likelihood of observing this intriguing phase.
In addition to modeling and predicting new properties of these partially
polarized phases, the cold-atom approach allows us to investigate the
strengths and deficiencies of the DFT. We found that the current DFT --
dubbed the superfluid local density approximation (SLDA) -- unifies all
ab initio results for homogeneous matter (arXiv:1205.4815); therefore,
using trapped systems, I am able to study the DFT gradient terms and
predict the universal constants describing static low-energy behaviour.
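To illustrate how such a functional is used in practice, here is a minimal Thomas-Fermi (local-density) sketch for a harmonically trapped unitary gas. The Bertsch parameter ξ ≈ 0.37 reflects the homogeneous ab initio results discussed above, while the trap, grid, and atom number are arbitrary assumptions; this is not the SLDA code itself:

```python
# Minimal sketch (assumptions: Thomas-Fermi / local-density treatment of the
# unitary Fermi gas in an isotropic harmonic trap; Bertsch parameter
# xi ~ 0.37; units hbar = m = omega = 1; N and the grid are illustrative).
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

xi, N = 0.37, 1000.0
r = np.linspace(0.0, 30.0, 4000)

def density(mu):
    """Local-density profile from xi * (3 pi^2 n)^(2/3) / 2 = mu - r^2/2."""
    mu_loc = np.maximum(mu - 0.5 * r**2, 0.0)
    return (2.0 * mu_loc / xi) ** 1.5 / (3.0 * np.pi**2)

def excess(mu):
    """Difference between the trapped particle number and the target N."""
    return trapezoid(4.0 * np.pi * r**2 * density(mu), r) - N

mu = brentq(excess, 1e-6, 1e3)  # tune mu so the cloud holds N atoms
print(f"mu = {mu:.4f}; analytic TF value sqrt(xi)*(3N)^(1/3) = "
      f"{np.sqrt(xi) * (3 * N) ** (1 / 3):.4f}")
```

The printed comparison checks the numerics against the standard Thomas-Fermi result that the chemical potential at unitarity is simply the ideal-gas value scaled by √ξ.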
Fermionic Dynamics
A related research objective is to investigate the dynamic properties of
strongly interacting fermionic matter using DFT. This provides one of the
few avenues for quantitatively answering questions about the properties of
nuclear matter in the crust of neutron stars. In particular, in collaboration
with Rishi Sharma, we have found that a simplified DFT (known as an extended
Thomas-Fermi (ETF) model) can quantitatively model low-energy superfluid
dynamics, reproducing dynamical fermionic time-dependent DFT (TDDFT)
simulations at a fraction of the computational cost. Modern supercomputing
resources thus open a door to solving the problem of neutron star glitches,
which has remained outstanding for more than fifty years since Anderson and
Itoh proposed vortex pinning on nuclei in the crust. To date, even the sign
of the vortex-nucleus interaction is unknown.
DFT will allow us to tackle this problem: first, we will use a fermionic TDDFT
for nuclear matter to calculate the sign and magnitude of the vortex-nucleus
interaction. Then we will use this to tune an ETF model and scale
up the calculations, simulating the collective behaviour of multiple
vortices to see if pinning can indeed explain observed glitching events.
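As a rough illustration of how an ETF-like model is evolved in time, the sketch below propagates a 1D nonlinear Schrödinger stand-in with a split-step Fourier method. The equation, grid, coupling, and initial state are simplified assumptions for illustration, not our production solver:

```python
# Minimal sketch (assumptions: a 1D stand-in for the ETF equation,
# i dpsi/dt = [-(1/2) d^2/dx^2 + V + g n^(2/3)] psi with n = |psi|^2, where
# the n^(2/3) bulk term mimics the unitary-gas equation of state; hbar = m = 1
# and g, the grid, and the initial state are arbitrary).
import numpy as np

L, n_pts, dt, g = 40.0, 512, 0.001, 1.0
dx = L / n_pts
x = np.arange(n_pts) * dx - L / 2
k = 2 * np.pi * np.fft.fftfreq(n_pts, d=dx)
V = 0.5 * x**2                            # harmonic trap
psi = np.exp(-x**2 / 2).astype(complex)   # initial Gaussian cloud

half_kin = np.exp(-0.25j * dt * k**2)     # half-step kinetic propagator
for _ in range(5000):                     # evolve to t = 5
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))
    psi *= np.exp(-1j * dt * (V + g * np.abs(psi) ** (4 / 3)))
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))

print("norm:", (np.abs(psi) ** 2).sum() * dx)  # conserved by unitary steps
```

Because each split step is unitary, the particle number is conserved to machine precision; it is exactly this cheap, hydrodynamic-level evolution that lets ETF simulations scale to volumes containing many vortices.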
Long-term goals involve developing robust quantitative tools for modeling
static and dynamic properties of Fermi systems, including atomic clouds,
heavy nuclei, and neutron star matter. Outstanding challenges include:
identifying dynamical signatures of new phases (such as the LOFF state)
through realistic simulation of experiments (requiring large simulation
volumes); describing thermalization and entropy production in a
computationally tractable manner (TDDFT conserves entropy -- stochastic
extensions are needed to describe entanglement and thermalization associated
with level crossings); and modelling nuclear reactions.
Dark Matter
Another of my research objectives is to use observational data to confirm or
rule out our proposal that much of the diffuse radiation from the core of our
Galaxy results from dark matter in the form of quark antimatter nuggets
(arXiv:astro-ph/0611506
and arXiv:0804.3364). If
confirmed, this would provide a unified explanation for both baryogenesis
and dark matter, and give a means of directly observing dark matter in our
Galaxy. In addition, these nuggets would forge a new connection
between QCD
and astrophysics, as they would be an additional realization of high
density QCD
matter.
The basic idea stems from the observation that there is only five times more
dark matter than baryonic matter: this suggests that they may both originate
from the
same QCD
scale. (If they originated from very different scales, then one has an
additional “naturalness” problem of explaining why the observed scales differ
only by a factor of a few rather than by many orders of magnitude.) We propose
that the CP violation required for baryogenesis arises from strong CP
violation, and that
the physics invoked to solve the corresponding strong CP problem (currently
understood to be an axion field) aids in the formation of the nuggets. The CP
violation drives a slight excess of antimatter nuggets, resulting in an
excess of free baryons (the matter in our universe) and a collection of
tightly bound matter and antimatter nuggets comprising the dark matter.
The novel observational consequences of this “dark antimatter” stem from the
annihilation of electrons and protons on the antimatter nuggets. This will
produce observable radiation from the core of the galaxy, providing a means
of testing and/or refuting this idea. (There are many other cosmological
constraints on our proposal, but these are naturally satisfied and do not
provide as tight a verification as do the galactic emissions.) Presently, the
model provides a natural explanation of three previously unrelated sources of
diffuse galactic emissions spanning over 12 orders of magnitude in energy:
the 511 keV annihilation line, the ~10 keV X-ray emissions, and the so-called
microwave WMAP haze. Without
a single extra parameter,
the WMAP haze is correctly explained
at a scale some 10 orders of magnitude smaller than the scale at which we fix
the parameters of the model.
To quantitatively formulate this proposal, input
about QCD
at high densities, high temperatures, and in a background of strong CP
violation will be needed. This will require advances in the
lattice QCD
program, and could provide new observational input into the properties
of QCD in
extreme conditions.
While researching the consequences of these nuggets, I have realized that
there remain several outstanding issues regarding diffuse electromagnetic
emissions from the core of our galaxy. If our proposal is refuted, these
puzzles remain, and could provide important clues about the nature of dark
matter or other physics beyond the standard model. A medium-term goal of mine
is to try to understand the origins of some of these diffuse emissions to
see if they can shed light on new physics.
Astrophysical observations such as these provide a complementary approach to
high-energy experiments like the LHC for discovering new physics. Whether or
not our specific proposal is true, the fact that it cannot be easily ruled out
demonstrates two points: 1) there is room within the current set of
cosmological data for “conventional” physics to explain some outstanding
puzzles; and 2) a better understanding of conventional astrophysical
processes would greatly constrain the properties of any unconventional models
that attempt to explain or evade the observations.
Computational Physics
One of my future goals, not yet fully explored, is to develop large-scale
computational approaches for solving physics problems. I have a very strong
interest in computational models, but chose to steer somewhat clear of them
during my training in order to develop a better overall sense of the relevant
physical questions. While many problems still yield to analytical techniques,
applying theory to complicated physical systems such as nuclear structure and
astrophysics will require various forms of numerical modeling.
One side goal is to develop computational techniques for solving
non-perturbative physics problems. In the near term, I am developing
techniques for efficiently solving the fixed-point equations that arise in
self-consistent calculations using, for example, modifications of the Broyden
algorithm and DVR basis expansions
(arXiv:0805.4446). These stem
directly from developing DFT techniques for modeling Fermi gases, and
are used in collaboration with the UNEDF
project. In the medium term, this may include developing improved Monte Carlo
techniques for studying strong-coupling physics, and using DFT techniques to
bridge the gap between the small systems accessible to Monte Carlo techniques
and the large systems accessible to experiment and thermodynamic
approximations.
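As a concrete (if drastically simplified) illustration, the sketch below solves a toy self-consistency condition with Broyden's method via SciPy; the toy equation stands in for the density/potential loop of a DFT solver, and the actual calculations use modified variants of the algorithm:

```python
# Minimal sketch (assumptions: a toy self-consistency condition m = tanh(m/T),
# standing in for the self-consistent loop of a DFT solver; Broyden's method
# is applied through SciPy rather than the modified variants mentioned above).
import numpy as np
from scipy.optimize import broyden1

T = 0.5  # below the mean-field critical temperature T_c = 1

def residual(m):
    """F(m) = tanh(m/T) - m vanishes at a self-consistent solution."""
    return np.tanh(m / T) - m

# Broyden iterations build an approximate Jacobian from successive residuals,
# converging far faster than naive fixed-point iteration m <- tanh(m/T).
m_star = broyden1(residual, xin=0.8, f_tol=1e-10)
print(m_star)  # ~0.9575 for T = 0.5
```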
A long-term side goal is to help make advances in computer science accessible
to physicists, in particular advances in computer languages and development
techniques. As the trend toward multi-core chips and parallel computing
clusters continues, it will become ever more important for physicists to be
able to easily develop high-quality code that can take advantage of highly
parallel systems. Presently this requires physicists to laboriously
restructure their code using message-passing techniques. I believe that much
of this burden can be lifted if proper languages and development tools are
employed, allowing physicists to concentrate on the relevant physics rather
than on the details of how their codes interact with increasingly complex
computing devices.
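As a small illustration of the kind of abstraction I have in mind, the following sketch distributes a toy parameter sweep across cores without any explicit message passing. The sweep and the stand-in calculation are assumptions for illustration only:

```python
# Minimal sketch (assumptions: a toy parameter sweep; the point is that a
# high-level tool such as Python's concurrent.futures hides the explicit
# message passing that an MPI version of the same sweep would require).
from concurrent.futures import ProcessPoolExecutor
import math

def simulate(coupling: float) -> float:
    """Stand-in for an expensive physics run at a single coupling."""
    return sum(math.sin(coupling * n) / n for n in range(1, 200_000))

if __name__ == "__main__":
    couplings = [0.1 * i for i in range(1, 9)]
    with ProcessPoolExecutor() as pool:  # one worker process per core
        for g, result in zip(couplings, pool.map(simulate, couplings)):
            print(f"g = {g:.1f}: {result:+.6f}")
```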
More about my research
My academic family tree