I spent the last week of March visiting the Institute of Seismology and Volcanology at Hokkaido University in Sapporo, finishing off a paper that was submitted to a special issue of the Bulletin of the Seismological Society of America (BSSA) on the Tohoku-oki tsunami of 11 March 2011. It is now posted here.
This work started last summer when Bre MacInnes was still at UW, working with me on an NSF RAPID grant to perform simulations of this tsunami using our GeoClaw code. She is now a postdoc in Sapporo, and together with Aditya Gusman (another postdoc there) and Y. Tanioka (director of the Institute), we compared 10 proposed sources for the tsunami by simulating the resulting wave with GeoClaw and comparing against observations. Each source is a model of the seafloor deformation that generated the tsunami. Dozens of similar (but far from identical) source models have been developed by different research groups, each performing some form of inversion based on some combination of seismic, tsunami, and GPS data. Some sources model the slip along the fault plane, tens of kilometers below the seafloor, which we converted to seafloor deformation using the Okada model (based on a Green’s function solution to the equations of elastic deformation in a half-space). Other groups modeled the seafloor deformation directly, based primarily on tsunami data.
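Because elasticity is linear, the conversion from a slip model to seafloor deformation amounts to summing the surface contribution of each subfault. Here is a minimal Python sketch of that superposition structure only; the Gaussian "kernel" below is a toy stand-in for the actual Okada half-space formulas (which depend on strike, dip, rake, and depth and are much longer), and all names and numbers are hypothetical:

```python
import numpy as np

def toy_deformation(x, y, subfault):
    """Toy stand-in for the Okada formulas: a Gaussian bump of vertical
    seafloor displacement centered on the subfault.  The real Okada model
    evaluates closed-form elastic half-space solutions instead."""
    r2 = (x - subfault["x"])**2 + (y - subfault["y"])**2
    amp = 0.1 * subfault["slip"]          # crude slip-to-uplift scaling
    return amp * np.exp(-r2 / subfault["width"]**2)

def seafloor_deformation(X, Y, subfaults):
    """Superpose the contribution of every subfault; this summation is
    legitimate because the elastic equations are linear."""
    dz = np.zeros_like(X)
    for sf in subfaults:
        dz += toy_deformation(X, Y, sf)
    return dz

# Two hypothetical subfaults on a coarse grid (km and m units, say).
x = np.linspace(0.0, 100.0, 101)
X, Y = np.meshgrid(x, x)
subfaults = [
    {"x": 40.0, "y": 50.0, "slip": 20.0, "width": 15.0},
    {"x": 60.0, "y": 50.0, "slip": 10.0, "width": 15.0},
]
dz = seafloor_deformation(X, Y, subfaults)
print(dz.max())   # peak uplift, dominated by the larger-slip subfault
```

In a real workflow the deformation field produced this way becomes the initial sea-surface displacement (or a time-dependent bottom motion) handed to the tsunami code.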
For each earthquake source, we compared results at four DART buoys (pressure gauges on the ocean floor) near Japan and at four locations along the coast, including the Sendai Plain and three communities farther north on the Sanriku Coast. We found that the sources developed using tsunami data typically performed best for tsunami modeling, perhaps not surprisingly. Many of the simulated results agreed quite well with observations. However, none of the sources gave good agreement everywhere, and it remains to sort out which source is best for tsunami modeling. Of course the numerical model has its own limitations, in particular that it solves the shallow water equations, a good approximation in many cases but by no means perfect. It would be interesting to study some of the differences between observation and simulation in more detail.
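One practical detail in comparisons like these: the simulated and observed gauge records rarely share time points, so one series has to be interpolated onto the other's times before any misfit can be computed. A minimal sketch of that step, with synthetic data; this is illustrative only, not the metric used in the paper:

```python
import numpy as np

def gauge_misfit(t_obs, eta_obs, t_sim, eta_sim):
    """Interpolate simulated surface elevation onto the observation
    times, then return the root-mean-square misfit and the ratio of
    maximum amplitudes (simulated over observed)."""
    eta_interp = np.interp(t_obs, t_sim, eta_sim)
    rms = np.sqrt(np.mean((eta_interp - eta_obs)**2))
    amp_ratio = eta_sim.max() / eta_obs.max()
    return rms, amp_ratio

# Synthetic example: a simulated wave 10% too large, sampled more finely.
t_obs = np.linspace(0.0, 3600.0, 61)            # observations each minute
eta_obs = 2.0 * np.sin(2*np.pi*t_obs/1800.0)    # 30-minute wave period
t_sim = np.linspace(0.0, 3600.0, 721)           # simulation output every 5 s
eta_sim = 2.2 * np.sin(2*np.pi*t_sim/1800.0)

rms, amp_ratio = gauge_misfit(t_obs, eta_obs, t_sim, eta_sim)
print(rms, amp_ratio)
```

The amplitude ratio comes out slightly above 1.1 here because the coarse observation times happen to miss the true crest, a small reminder that sampling alone can bias such comparisons.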
For some information on how the GeoClaw code has been validated on a number of test problems, and other papers on tsunami modeling and the software more generally, see the GeoClaw webpage.
In addition to the figures in the paper, we also created an electronic supplement of additional figures, along with most of the raw data used in the simulations and comparisons. Until a few years ago I had never worked with data from the real world (like most mathematicians), and it has been an interesting experience learning to work with large data sets and thinking about the best way to archive and curate this data. Reproducibility in computational science research is one of my big interests these days, and this paper was a good case study. The journal BSSA encourages the submission of electronic supplements, and I found that the webpage format they provided was easy to work with. We’re also collecting all the simulation and data-analysis codes in a GitHub repository.
The 2012 [HPC]^3 workshop at KAUST just ended. It’s been a busy week with lots of exciting developments — this was a workshop with an emphasis on work. Each morning there were a few talks, but the afternoons were devoted to working in groups developing new software capabilities, much of it related to Clawpack and in particular the PyClaw suite of software. This was the second such workshop at KAUST and some of the projects were continuations of things started at the 2011 workshop.
The title [HPC]^3 refers to three different interpretations of the acronym HPC. The full title is High Performance Computing and Hybrid Programming Concepts for Hyperbolic PDE Codes.
The workshop program is on the web and eventually videos and slides from the talks will also appear, so I won’t say too much about these here. Instead I’ll mention a few of the highlights and accomplishments of the working groups. Eventually a final report from each group will be posted. For now you can find out more about what went on in some groups on the workshop wiki.
- The AMR group (Donna Calhoun, Tobias Weinzierl, Carsten Burstedde, Kristof Unterweger, Amal Alghamdi, Qi Tang, Kyle Mandli, and Marsha Berger via Skype) made progress on two different approaches to incorporating tree-based adaptive mesh refinement into PyClaw, one based on p4est and the other on Peano.
- The Manycore group (George Turkiyyah, Nathan Bell, Andy Terrel, Garune Ohannessian, Rio Yokota) made it possible to call CUDAClaw from Python and started re-engineering it to simplify other manycore implementations, including OpenCL, TBB, and ISPC.
- The implicit group (Jed Brown, Matteo Parsani, Lulu Liu) did some interesting work on using preconditioners for hyperbolic systems that equalize the wave speeds and on an implementation of downwind WENO for Runge-Kutta stages with negative coefficients.
- The Discontinuous Galerkin group (James Rossmanith and Scott Moe) wrapped DoGPack in Python and got it working with VisClaw. They started planning how to more fully incorporate DG into a DoGClaw code.
- Matt Emmett started working on applying WENO on mapped grids, and has a working code in 1d.
- The geosciences group (Chris Kees, Marc Hesse, Robert Weiss) worked on Clawpack implementations of two problems: two-phase flow in porous media with an interesting nonconvex and multimodal flux function, and sediment transport with an eye towards tsunami deposits. The final presentation also had a nice demo of the new IPython Notebook.
- The visualization and steering group (Madhu Srinivasan, Chris Knox, Atanas Atanasov, Bruce D’Amora) made progress on several fronts: improvements to the HDF5 PyClaw output routines and conversion of this output to XDMF format, initial work on adding VisIt and ParaView capabilities to VisClaw using this format, and coupling on-the-fly visualization into PyClaw so that the solution can be plotted during computation rather than as postprocessing. The ultimate goal is to use this to help steer the computation.
- A documentation sprint one evening got us started working on using Sphinx to better document each example and produce a better set of galleries of sample results. Yiannis Hadjimichael and Amal Alghamdi fixed many documentation pages to reflect recent changes to the code, and got the doctests working.
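Since WENO came up in several of the projects above, here is a minimal 1d example of the machinery involved, a standard fifth-order (Jiang–Shu style) reconstruction, not code from the workshop: given point values in five adjacent cells, it reconstructs the value at the right edge of the center cell by blending three candidate stencils with weights that shun stencils crossing a discontinuity.

```python
import numpy as np

def weno5_right(f, eps=1e-6):
    """Classic fifth-order WENO reconstruction of the value at the right
    edge of cell 2, from values f[0..4] in five adjacent cells."""
    f = np.asarray(f, dtype=float)
    # Smoothness indicators for the three 3-cell candidate stencils.
    b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 1/4*(f[0]-4*f[1]+3*f[2])**2
    b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 1/4*(f[1]-f[3])**2
    b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 1/4*(3*f[2]-4*f[3]+f[4])**2
    # Third-order candidate reconstructions at the interface.
    p0 = ( 2*f[0] - 7*f[1] + 11*f[2]) / 6
    p1 = (  -f[1] + 5*f[2] +  2*f[3]) / 6
    p2 = ( 2*f[2] + 5*f[3] -    f[4]) / 6
    # Nonlinear weights built from the ideal weights (1/10, 6/10, 3/10):
    # a stencil containing a jump gets a large b_k and a tiny weight.
    a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# Smooth (linear) data f(x)=x at cell centers spaced h=0.1 is
# reconstructed exactly: the right edge of the center cell is at 0.25.
print(weno5_right([0.0, 0.1, 0.2, 0.3, 0.4]))
```

Extending exactly this kind of reconstruction to mapped (non-uniform) grids, and handling Runge–Kutta stages with negative coefficients via a downwind variant, were among the topics the groups above were wrestling with.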
Thanks to everyone who participated and worked so hard, often late into the night. Thanks to my co-organizers and the other plenary speakers who moved between groups and helped out with many issues. And a special thanks to [David K]^2 (Ketcheson and Keyes) for helping create the wonderful environment at KAUST for computational science and multi-cultural interaction (including elliptic/hyperbolic and FEM/FV!), and for the financial support of the workshop and participants.