This video gives a brief overview of the process and procedures I have been using to analyze soil samples from two community gardens, along with student-collected samples from the MSL program. The process of slide creation and the use of a UV-Vis analyzer are both shown. The resulting data will be compared to standard derivative graphs to determine whether iron oxides are present. This is important to us because iron oxides are an indicator of pollution.
A PDF of the full poster is available.
For the past couple of years, I’ve been looking into ways to get my students to think about responsible conduct in science. I’ve been looking for short readings, but haven’t come up with much (though I’d appreciate any you may want to share in the comments!). But today, in catching up on old episodes of one of my favorite podcasts, Warm Regards, I heard a discussion that might just do the trick. In the episode “The Dangers of Doing Science in the Field”, regular host Jacquelyn Gill, visiting host Sarah Myhre, and guest Jane Willenbring have a wide-ranging discussion touching on field safety, sexual harassment, macho culture, and who gets to do science. All three climate scientists are women who had harrowing experiences in the field. Their first-hand stories are at once raw and personal, and at the same time point to deeper cultural problems in science and academia.
In yesterday’s lab meeting, students asked how I find out about new papers. This is the first installment of a series of posts with some ideas. These were inspired in part by Latif Nasser’s Radiolab episode and Transom article about how he gets ideas for his stories (check those out for more inspiration!).
There are a few ways you can sign up for emails or other notifications when new papers that fit a particular search criterion come out. You could use these to search for papers by a particular author, papers that use a particular keyword in the title or abstract, or (using some tools) papers that cite a particular reference.
- Google Scholar Alerts is one of the simpler ones: it uses the Google Scholar search syntax (e.g. author:p-a-selkin to search for my papers), and sends you emails when new material comes up. Some suggested uses are on the Google Scholar Alerts help page.
- Web of Science (available through the UW Libraries website) can send you alerts, too. You’ll have to sign into Web of Science’s account system (in addition to signing in through UW’s library) by clicking the “Sign In” link at the top right of the Web of Science pages.
- Once you’ve run a search, a “create alert” button will appear on the left side of the search results page, as in the image below. Click it to get email updates when new papers are published that fit your search criteria. There are “secret” tricks in Web of Science (see also this) that let you combine terms to find publications that are particularly relevant; examples include “detrital zircon” AND himalaya, and Archean NEAR (paleomagnet* OR paleointensity). Once you learn these tricks, this type of alert can be particularly useful for letting you know about researchers whose work you weren’t familiar with. You can also search by author if you find a researcher in your field whose work you want to keep up with (see Networking, below).
- If you want to be alerted whenever a particular reference is cited, search for that reference and click the title: the “create alert” button appears on the right side of the page for that reference. This type of alert is useful if you find a fundamental reference in your field that everyone seems to cite, since new papers will often cite classic work in their field.
- Journal Alerts: individual journals send out alerts when they come out with new articles. These usually include the whole table of contents, which can be quite extensive. Sign up through journal web pages, which you can access by searching for the journal’s name through the library catalogue. Some good options are Science and Nature, which are very general but have articles that often generate a lot of “buzz”; for general geo-related papers, try Geology and GSA Bulletin, Geochemistry, Geophysics, Geosystems, and Earth and Planetary Science Letters. For sedimentology/stratigraphy/paleo/climate-related papers, try Palaeogeography, Palaeoclimatology, Palaeoecology, Journal of Sedimentary Research, Sedimentology, and Marine Geology. For geophysics and paleomagnetism, try Earth Planets Space, Geophysical Journal International, and Journal of Geophysical Research. For mineralogy and petrology, try American Mineralogist, Canadian Mineralogist, and Journal of Petrology. You will find others as you read more.
I noticed this article in EOS recently (thanks to Jon Mound and Nick Swanson-Hysell on Twitter for the heads up), and thought I’d comment. Although I’m framing these as caveats, please don’t take the comments to be an attack on anyone, either the article’s author or the authors of the study it describes. I’m just trying to outline my thought process and the kinds of questions a paleomagnetist like me might have when I look at coverage of something in the popular press (which EOS is, sort of…).
Caveat 1: Be careful of the source. My first impulse when I see an article on paleomagnetism in EOS or another responsible publication is to look for the original study. As someone who works in this field, I want to know the details: how was the data analysis done? What dataset was used, exactly? How does it fit into the context of work that’s come before? (I can usually figure this out by myself, but maybe I’ve missed something.) Were there any checks to make sure the result is plausible versus being an illusion of how the data were processed or a bias in the dataset? The problem here is that the results reported in EOS were described in a talk, not a peer-reviewed paper. The talk was by Kirschner and co-authors at the European Geosciences Union (EGU) conference a few months ago. EGU is kind of analogous to the American Geophysical Union conference here. The abstract from the talk, which is all I have to go on, is here. There’s actually a lot in it that didn’t make it into the EOS article, but not the details I’m looking for. (Also, there were some other neat talks in that session at EGU that I wish I’d heard!)
This isn’t to say that science journalists should never write about talks – of course they should. Conferences are where we share current work in progress. But as a reader, when you see that a story is based on a talk, know that there are some questions about the work that might not be answered – or answerable – yet.
Caveat 2: The Data. Estimates of Earth’s magnetic field intensity are much harder to deal with than standard paleomagnetic data. This is in part because the intensity of a rock’s magnetization has a complicated relationship with the intensity of the magnetic field in which the rock was magnetized. You can, for example, collect samples from two basalts that cooled at the same time, and so were magnetized in the same field. The magnetizations that you measure from your two basalt samples might be vastly different for a number of reasons. For one thing, one basalt might have more magnetite in it than the other. (Titanium-bearing) magnetite is the mineral that is mostly responsible for the magnetic record in basalt. Alternatively, differently sized or shaped or aligned titano-magnetite particles may have led one of the basalts to record the magnetic field more efficiently than the other. Alternatively, one of the basalts may have had its magnetic record wiped clean (by being reheated, for example, or chemically changed), or may have been remagnetized by a lightning strike, or may just have lost part of its magnetic record by sitting around in changing magnetic fields for a long time (we call that “viscous decay”). Over the years, various techniques have been developed to screen for these effects, and in some cases to adjust for them. But techniques do matter, and sometimes applying the wrong techniques or not applying the proper adjustments may bias estimates of ancient magnetic fields.
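To make the proportionality problem concrete: Thellier-type experiments, one widely used family of paleointensity techniques, rest on the assumption that thermoremanent magnetization is proportional to the field that produced it, so comparing the natural magnetization lost to the laboratory magnetization gained at each heating step yields the ancient field. Here is a minimal sketch of that logic; all of the numbers below are made up for illustration, not data from any real study:

```python
import numpy as np

# Thellier logic: M_ancient / M_lab = B_ancient / B_lab for ideal samples.
b_lab = 40.0  # laboratory field in microtesla (known)

# Idealized, noise-free heating steps: NRM remaining vs. pTRM gained
# (these values are invented for the sketch)
nrm_remaining = np.array([1.00, 0.80, 0.58, 0.41, 0.20, 0.02])
ptrm_gained   = np.array([0.00, 0.16, 0.34, 0.47, 0.64, 0.78])

# Slope of the "Arai plot" (NRM vs. pTRM); for ideal behavior it equals
# -B_ancient / B_lab
slope = np.polyfit(ptrm_gained, nrm_remaining, 1)[0]
b_ancient = -slope * b_lab
print(f"Estimated ancient field: {b_ancient:.1f} microtesla")
```

All of the screening and adjustment techniques mentioned above exist precisely because real samples violate this idealized proportionality in the ways the paragraph describes.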
This is relevant because in the EOS article, the study is described as using “all available data”. The PINT database contains published results from hundreds of studies that attempt to estimate Earth’s ancient magnetic field intensity. This includes some from studies in the 1950s that don’t check for many of the problems that we know exist, some results that use different kinds of screening techniques, different ways to estimate amounts of magnetic material or the efficiency of magnetization, and different corrections for the weird effects I described. So it’s sometimes difficult to compare data from one study – one data point in the PINT database – to another. The usual approach to comparing paleointensity estimates through time is to come up with a set of criteria (based on the type of intensity estimate, or on the number of checks for a sample’s “ideal” behavior, or on agreement between different types of estimates) and look at data that meet those criteria. In fact, that appears to be what the authors of the study in question did in addition to the analysis of the whole dataset (From the abstract: “Spectral analysis of all palaeointensity data and a quality-filtered dataset obtained from the palaeointensity database…”).
Now, even if you decided to compare only the same kind of estimates of magnetic field intensity through time, there would still be some issues with the data. For example, different rock types may require different checks or adjustments, or may respond differently to the same estimation technique… and those rocks may be more or less common through different parts of the geological time scale. Intensity estimates from one particularly time-constrained rock type, basaltic glass, are really only available for the past 180 million years. Basaltic glass records magnetic fields differently from basalt (mainly because the magnetic particles are much smaller, and glass cooled more rapidly than basalt), and different adjustments for cooling rate (if you agree with them, which not everyone does) may need to be applied. Geology matters!
Caveat 3: The analysis. OK, so even if the data are filtered so that they are all reliable and comparable in terms of technique, estimates of past magnetic field intensity are unevenly spaced in time. Not all rocks are appropriate to use in intensity experiments, and the older the rocks are, the fewer usable ones there are that have withstood the ravages of weathering, metamorphism, reheating, lightning strikes, and tectonism. This makes for a dataset that unevenly samples the magnetic field through time. One of the assumptions of digital signal processing – which is what the authors of the study in question do – is that the data are (at least close to) evenly spaced in time. There are ways to get around this requirement, say by fitting a smooth curve to the data and re-sampling it – but those require some assumptions about the data and the underlying process. Because this study was reported in a talk, I’m not sure what those assumptions were. Nonetheless, any smoothing, fitting, or filtering done to the data could strongly influence the cyclic behavior that the authors describe. I’m not trying to imply that the study’s authors are signal processing newbies – they may very well know more than I do about it. I’m just highlighting a question that I’d ask if I were trying to evaluate the study.
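For what it’s worth, one standard way to look for cycles in unevenly spaced data without interpolating is a Lomb-Scargle periodogram. I have no idea whether the study’s authors used anything like this; the sketch below just shows the idea on a synthetic dataset (all numbers invented):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)

# Unevenly spaced "ages" over 500 Myr with a synthetic 100-Myr cycle
t = np.sort(rng.uniform(0, 500, 120))
period = 100.0
y = np.sin(2 * np.pi * t / period) + 0.3 * rng.normal(size=t.size)

# Lomb-Scargle works directly on unevenly sampled data -- no smoothing,
# interpolation, or re-sampling assumptions, unlike an FFT.
periods = np.linspace(20, 300, 2000)
ang_freqs = 2 * np.pi / periods
power = lombscargle(t, y - y.mean(), ang_freqs, normalize=True)

best = periods[np.argmax(power)]
print(f"Recovered period: {best:.1f} Myr")
```

Even so, the choice of frequency grid and any pre-filtering still involve judgment calls, which is exactly the kind of detail a conference abstract usually can’t tell you.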
By the way, more than the signal processing aspects, I’m interested in the authors’ work on the change in the distribution of intensity estimates through time. This isn’t mentioned in the EOS article, but is in the abstract – I’m not sure why. Here’s my summary: When we talk about a “distribution”, we’re imagining that the intensity estimates are like students’ grades in a class: a random bunch of numbers to an outsider. If you take all of those grades together, they fall within some range of a middle value, with some grades more frequent than others. If you taught the class again the same way, but with a different set of students, the pattern of grades would probably look somewhat similar even though the specifics would be different. In statistical terms, the underlying pattern of numbers (grades or intensity estimates) is a probability distribution. The authors, in their abstract, note that they see a change in the probability distribution of intensity estimates around 1.3 billion years ago, “coincident with the time that geologic and palaeogeographic evidence suggests the onset of quasiperiodic assembly and fragmentation of supercontinents.” It’s also within the time frame that recent work (if you believe it) has suggested that the inner core began to grow. So is the cause of the change external (tectonic-related), internal (inner-core-driven), or neither? I’m not sure, but it adds another intriguing possibility into the mix! (I guess that’s Caveat 4: The Interpretation.) The approach that the study’s authors take to look at changes in probability distributions isn’t described in the abstract, but it might work better than the signal processing approach with data that are unevenly scattered through time.
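The abstract doesn’t say how the authors tested for a change in distribution, but a common first-pass check for whether two samples plausibly come from the same underlying distribution is a two-sample Kolmogorov-Smirnov test. Here’s a toy sketch with made-up “intensity” values standing in for estimates before and after some transition:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical paleointensity estimates (arbitrary units) drawn from two
# different underlying distributions -- "older" vs. "younger" populations.
older = rng.lognormal(mean=3.0, sigma=0.5, size=80)
younger = rng.lognormal(mean=3.5, sigma=0.3, size=80)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two
# samples are unlikely to come from the same distribution.
stat, p = ks_2samp(older, younger)
print(f"KS statistic = {stat:.2f}, p-value = {p:.2g}")
```

One nice property of this kind of test is that it doesn’t care whether the samples are evenly spaced in time, only how the values themselves are distributed.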
Caveat 5: Is it a New Result? The idea that tectonics influence Earth’s magnetic field has been popular at least since the late 1990s, when geodynamo simulations suggested that different patterns of heat flux at the core-mantle boundary – due to patterns of cold, subducting lithospheric slabs – could influence magnetic reversals (see Glatzmaier et al., Nature, 1999). Since then, there have been a number of studies looking for tectonic-related cycles in intensity data (see here and here for example; both studies’ authors are quoted in the EOS article). So the research problem isn’t new, but I’m not sure that anyone has succeeded at the signal processing approach.
I’m sure other people have other ideas or concerns about the study or the way it was reported in EOS. The things I’d like to know about the Kirschner et al. study represent my own perspective as someone who used to do this stuff. Maybe you have a different set of questions? Please comment below!
I’m moving my blog content to my faculty website for a few reasons. First of all, Science 304 is no longer just my lab. I now share it with Dan Shugar, of the WaterSHED Lab. Second, it will be easier for me to manage the WordPress software if I’m just taking care of one site instead of two. I’m hoping that this will light a fire under me to write those posts I’ve been talking about for months…
Lest you think all we do in my lab is mess around with magnets, I’m posting a few tweets with photos of today’s lab barbecue! Bonnie and I have an annual summer party for students, alums, and associates in our labs. Unfortunately, my camera is broken, so I have to rely on photos taken by Bonnie and her student Megan, at the links below. Geoduck! Grilled oysters with bacon! Chocolate Olympia oysters, ammonites, and trilobites! A good time was had by all.
— Bonnie J. Becker (@pisastero) July 10, 2016
— Megan Hintz (@BivalveFanatic) July 10, 2016
I’m in the middle of updating this site with a new theme and more photos. Please bear with me as I re-attach photos to blog posts.
Last quarter, I introduced some programming (in Glowscript) into my intro physics course. At the end of the quarter, I got evaluations from a number of students who said something like, “I’m not a computer science major. Why do I need to learn to program?” Besides being a marketable job skill, learning to program gives you a completely new way to solve problems in the physical sciences. Computer models are also a standard tool in every scientific discipline I’ve encountered, so if you are going to major in the sciences, you need to learn how they work.
Getting started in programming is easier now than it ever has been in my memory. There are a ton of resources online. You don’t even need to install software on your computer to get started (though I’d recommend you do). What follows are some recommendations for students in my lab, but they go from general to specific, so if you are not a paleomagnetist/geophysicist, you can read until you feel like stopping.
Before you start, consider what programming language to learn. I recommend that my research students learn Python for the following reasons:
- It’s available for free.
- Even if you want nice add-ins and an easy-to-use editor, it’s still free for academic users.
- It’s what ESRI uses for a scripting language in ArcGIS, so if you do our GIS certificate program, you’ll use it.
- Programming in Python is relatively straightforward and forgiving.
- There are tons of add-in packages for all sorts of scientific purposes: if you don’t know how to write a complex piece of code, chances are someone has already done it for you.
- There are ways to interface with all sorts of nice graphics packages like plotly and other software packages like R.
- You can document what you do in notebooks, which you can share with the lab.
- For paleomagnetists: Lisa Tauxe’s PMAGPY software is written in Python.
There are a few other languages you’ll want to learn for specialized purposes:
- R, for statistics. We use it in our environmental stats course. It’s also free, has lots of add-ins, and is in wide use in academia and industry.
- Mathematica, for specialized tasks (mainly IRM acquisition modeling). I don’t know this one all that well!
- Matlab, for linear algebra. I don’t use Matlab so much anymore, since I can do most of the same things with Python or R… which are free. However, Matlab is widely used in geophysics.
What do you need to get started in Python? Although Mac computers come with Python installed already, I’d recommend installing Enthought’s Canopy software under an academic license. That gives you not only Python and the Matplotlib graphics add-on, but a nice way to keep track of and edit your programs or notebooks. It’s free for students and faculty, though you do need to register with Enthought. Otherwise, it’s a bit of a headache to try to install (if you are using a PC) and/or update Python, Matplotlib, and all of the other required stuff individually.
There are lots of resources available to help you learn Python. A list of the major ones is here. For the basics – if you are still just testing it out and haven’t installed anything yet – I like http://www.learnpython.org/ because it allows you to try things out in your browser window. However, learnpython.org does not teach you to use the Canopy software. A Canopy academic license allows you to use the Enthought Training on Demand tutorials. The Intro to Python tutorial looks really good. Has anyone tried it? Let me know how it is! Also, Lisa Tauxe’s PMAGPY Cookbook has some notes on using Python.
As a scientist, you will also want to get familiar with the NumPy, SciPy, and Matplotlib/Pylab packages (a package is the Python term for an add-in). Tutorials for these are available at python-guide, through Enthought, and in the PMAGPY Cookbook. There’s also a cool gallery of examples for Matplotlib.
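To give you a taste of why these packages matter, here’s a tiny self-contained example: NumPy lets you operate on whole arrays at once (no loops), and Matplotlib turns the result into a figure. It saves the plot to a file so it runs anywhere, even without a display:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display window needed
import matplotlib.pyplot as plt

# NumPy: build and transform a whole dataset in two lines
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)

# Matplotlib: plot it and save to a PNG file
fig, ax = plt.subplots()
ax.plot(x, y, label="sin(x)")
ax.set_xlabel("x (radians)")
ax.set_ylabel("y")
ax.legend()
fig.savefig("sine.png")

print(f"max of y: {y.max():.3f}")
```

The Matplotlib gallery linked above is full of examples like this that you can copy and adapt.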
The classic Numerical Recipes by Press et al. has lots of explanations of how to do common statistical and mathematical tasks in computer code. I don’t know if there’s a Python version out now (I have an old edition for the C language), but it’s a useful place to start for scientific programming. I’ve also found the book Programming Pearls by Jon Bentley useful for some things.
Now, for the specialized paleomagnetics stuff: download and install Lisa Tauxe’s PMAGPY package. This provides you with a set of programs that you can use to plot, manipulate, analyze, and model paleomagnetic data. Most have graphical user interfaces, and Tauxe has a good set of tutorials in the PMAGPY Cookbook. But it also provides a set of functions (pieces of code for performing specific tasks) that you can use in your own programs for common plotting and data analysis tasks.
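To see what one of those data analysis tasks looks like under the hood, here is the computation at the heart of Fisher statistics – the mean of a set of paleomagnetic directions – written in plain NumPy. This is my own sketch of the standard calculation, not PMAGPY’s API, and the specimen directions are invented for illustration:

```python
import numpy as np

def fisher_mean(decs, incs):
    """Mean of (declination, inclination) pairs, in degrees, by summing
    unit vectors -- the core of Fisher statistics for directional data."""
    d = np.radians(np.asarray(decs))
    i = np.radians(np.asarray(incs))
    # Convert each direction to a Cartesian unit vector
    x = np.cos(i) * np.cos(d)
    y = np.cos(i) * np.sin(d)
    z = np.sin(i)
    # Length of the resultant vector: close to n when directions cluster tightly
    xs, ys, zs = x.sum(), y.sum(), z.sum()
    R = np.sqrt(xs**2 + ys**2 + zs**2)
    mean_dec = np.degrees(np.arctan2(ys, xs)) % 360
    mean_inc = np.degrees(np.arcsin(zs / R))
    # Fisher precision parameter: large kappa = tightly clustered directions
    n = len(d)
    kappa = (n - 1) / (n - R)
    return mean_dec, mean_inc, kappa

# Hypothetical specimen directions clustered around dec ~ 350, inc ~ 60
decs = [348.0, 352.0, 351.0, 349.5, 350.5]
incs = [58.0, 61.0, 60.5, 59.0, 61.5]
print(fisher_mean(decs, incs))
```

PMAGPY provides this and much more, with proper confidence estimates, so in practice you’d call its functions rather than roll your own, but it helps to know what the black box is doing.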
When you are coring the seafloor, the first piece of the core that the scientists get their hands on is from the core catcher. The core catcher is the little bit at the end of the tube full of sediment that lets stuff in, but not back out. The paleontologists usually get it first, so they can use the microfossils and nannofossils to tell us how old the sediment is. Here are two of our paleontologists with a bowl of mud to start their day off right.