
Slow Down and Think

Since returning from sabbatical in 2017, I’ve taught pretty much just physics (with one or two geo courses here and there), so my physics teaching game has been on my mind a lot. The main breakthrough I’ve made in my teaching over the past few years has been to adopt standards-based grading, which has allowed me to communicate my expectations to my students more clearly. I still struggle, though, with developing good standards for my particular course and my students – item 2 from Brian Frank’s list.

A major issue has been that the standards are basically a mix of small, disconnected tasks that I expect the students to do – these are the easy-to-assess, easy-to-communicate ones – and big, “squishy”, higher-order skills I want students to develop. An example of the former is:

I can differentiate between isolated and non-isolated systems both conceptually and based on data about those systems.

…or…

I can calculate kinetic energy for individual objects and systems.

On the other hand, my list from last year had standards like:

I can reason about the motion of an object undergoing constant acceleration.

I know, “reason about” is a bad clause if you are going by Bloom’s taxonomy, but I find it hard to express the bigger picture of using what the Modeling Instruction folks call the constant acceleration (kinematical) model. Can you use the model to make predictions? There’s a lot you have to be able to do in order to get there, and a lot of gradations inherent in the word “use”: you could be using the ideas well in a qualitative sense, but not have the skills developed well enough to quantify your predictions. You could rely exclusively on memorized formulas without really knowing what they imply or where they come from, but use them effectively to make quantitative predictions. So I kept “reason about”.

I’ve also found that students may be able to succeed at enough of the standards to do well in the course, but still not be able to use the skills they’ve developed in an independent way. Basically, the course doesn’t do enough to challenge the traditionally successful students, and doesn’t allow the less traditional students enough say in pursuing problems that don’t fit well on tests. Students need a way to distinguish themselves that’s true to them, not just convenient for assessment.

So I’m trying to start the year off right by re-evaluating my standards in light of what I think is the most important idea I hope students get from the course: to slow down and reflect on their ideas in a methodical, systematic way. I’ve divided the course up into topics, each corresponding to a different skill or way of thinking: measurement, descriptive kinematics, momentum, forces, and energy; rotation and gravity, which we treat at the end, are outliers – more applications of ideas treated elsewhere in the class than new ideas on their own. I want to use an approach similar to Modeling Instruction (I’ve read quite a bit about MI, but I don’t feel like I understand it well enough to adapt it for a calculus-based university course), but focusing on exploring the following aspects of each topic:

  • Making sense of experimental data
  • Describing information using multiple representations
  • Building a model and using it to reason about situations
  • Applying mathematical, logical, and communication skills
  • Reflecting on learning

I’m looking at these as if they are “folders” in which I can put my existing standards. Some of them take the place of the squishy, big-picture standards I used to have.

The advantage of this arrangement is that I can also then have a standard in each aspect/folder that asks students to do something distinctive – something that is theirs – that I can point to as a success beyond just quiz and homework questions:

  • Making sense of experimental data – I can develop my own comparisons between data and predictions from a model or simulation
  • Describing information using multiple representations – I can choose and translate fluently among the most appropriate representations of a situation
  • Building a model and using it to reason about situations – I can propose and solve significant problems using reasoning based on the unit’s main idea (or: problems that incorporate more than one unit’s ideas)
  • Applying mathematical, logical, and communication skills – I can independently identify situations in which significant mathematical reasoning or skill is needed, and use those skills competently or I can express complex physics ideas effectively in written or graphical communication
  • Reflecting on learning – I can test or otherwise identify the limits or assumptions of models or I can thoughtfully express changes in my own thinking about physics

I’m planning to grade the “small” standards on a 1-3 basis (1: misses the point; 2: getting there; 3: meets standard), and these “big” standards on a 1-4 basis (4: distinction). A student’s grade in each topic will be the highest grade across that topic’s aspects. So, for example, if a student has a 4 in model building for forces, they get a 4 for the force section of the course. I’d love to require students to try for distinction in more than one aspect of a topic, but I’m not sure how to communicate that to students on Canvas (i.e. grade it) – and if I can’t communicate something, then the standards-based approach loses its luster.
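To make the aggregation concrete, here is a hypothetical Python sketch of the scheme (the aspect names and scores are invented, and this is not how Canvas computes grades – it’s just the “highest score across aspects” rule spelled out):

```python
# Sketch of the proposed grading scheme: "small" standards are scored
# 1-3, "big" (distinction) standards 1-4, and a student's grade for a
# topic is the highest score across that topic's aspects.
# Aspect names and scores below are made up for illustration.

def topic_grade(aspect_scores):
    """Return the topic grade: the highest score across all aspects."""
    return max(aspect_scores.values())

forces_scores = {
    "making sense of data": 3,
    "multiple representations": 2,
    "model building": 4,   # distinction on a "big" standard
    "math/communication": 3,
    "reflecting on learning": 2,
}

print(topic_grade(forces_scores))  # → 4
```

The single `max` is the whole point of the rule: one distinction in any aspect carries the topic grade, which is exactly what makes “try for distinction in more than one aspect” hard to express in a gradebook.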

So, some questions for you:

  1. How can I use standards to signal to students that I want them to step back and think about what they’re doing in a methodical way? (To confront their expectations, biases, preconceptions…)
  2. What makes a “significant” standard in your experience?
  3. Have you had any experience with standards-based grading in an intro, calculus-based course at the university level?
  4. Do you have any ideas about how to improve the system I’m proposing?

Magnetic Susceptibility

My students and I are preparing to go up to Bellingham on Thursday to do some work in the paleomagnetic lab up there, so we spent today’s lab meeting getting everyone acquainted with the data they are going to collect. I started explaining something in the lab meeting that I thought could use a demonstration. So here it is.

Rocks might have lots of different magnetic particles in them. They might contain magnetite, an iron oxide that forms in igneous and metamorphic rocks as well as in soils; they might contain titanomagnetite, a common constituent of oceanic basalt; they might contain maghemite that formed as magnetite was oxidized — rusted — by weathering, or they might contain maghemite that formed in soils; they might contain hematite or goethite, indicating soil formation in drier or wetter environments… there are even rock-forming minerals like pyroxenes and micas that are magnetic to a certain extent. In addition to forming in different environmental conditions, all of these minerals have particular quirks in their record of Earth’s magnetic field. So we need to be able to tell the kinds of magnetic minerals apart.

One way we differentiate between magnetic minerals is by their response to weak magnetic fields. So I tried a little experiment. I put a bunch of different materials inside a wire coil. I could send a current through the coil, producing a (weak) magnetic field at the coil’s center. I also set up a magnetometer to measure the magnetic field just outside the coil.

Demo setup, with yellow 72-wrap air-core coil connected to BK Precision 1730A power supply. Vernier current and magnetic field sensors are used to monitor experimental parameters.

Why might the field inside the coil be different from what I measure with the magnetometer? This is a secret that is not addressed in the physics textbook we use in Physics II: there are two different ways of describing magnetic fields. We call the magnetic field that the magnetometer measures \vec{B}, and we measure it in tesla. But there are two kinds of things that produce that magnetic field. One is the current through the coil, and the other is whatever is inside the coil (or outside it but close by). So we say that there is an applied magnetic field (applied by the coil to whatever is inside it) as well as a little bit of extra magnetic field due to whatever we put inside the coil. In physics terms, we call the applied magnetic field \mu_0\vec{H}, and the bit of extra field from whatever is in the coil \mu_0\vec{M}. Both \vec{H} and \vec{M} are measured in A/m, so \mu_0\vec{H} and \mu_0\vec{M} come out in tesla. We say that:

\vec{B} = \mu_0(\vec{H}+\vec{M})
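To get a feel for the sizes involved, here is a rough back-of-the-envelope sketch in Python. The coil length and current are invented numbers, and the coil is treated as an ideal solenoid (the real air-core coil’s geometry would need a more careful factor):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

# Hypothetical numbers: treat the coil as an ideal solenoid with
# n turns per meter carrying current I. The 72 wraps are from the
# demo; the 10 cm length and 0.5 A current are assumptions.
n = 72 / 0.10   # turns per meter
I = 0.5         # current in amperes

H = n * I           # applied field H inside the solenoid, in A/m
M = 0.0             # empty coil: nothing inside to magnetize
B = MU0 * (H + M)   # total field in tesla

print(B)  # a few hundred microtesla for these assumed numbers
```

Putting a sample in the coil changes `M` (and hence `B`) while `H` stays set by the current – that separation is exactly what the equation above encodes.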

There are two ways you can get a little bit of extra field from putting stuff inside the coil. A large number of Earth materials become magnetic when you put them in a magnetic field, but then revert to what most people would call “non-magnetic” when the magnetic field is turned off. For example, if you put an iron-bearing garnet crystal inside an area with zero magnetic field, it wouldn’t attract a compass needle. But as soon as you turn the magnetic field on, the compass needle begins to deflect – ever so slightly – toward the garnet. We call that garnet paramagnetic. Other minerals, like quartz, are diamagnetic: put them in a magnetic field, and the compass needle deflects away from the mineral. For both paramagnetic and diamagnetic materials, the effect on the compass disappears when you shut off the magnetic field you’ve applied. We call this an induced magnetization.

Some materials also have a remanent magnetization – a magnetization that remains after the \mu_0\vec{H} field is switched off. Magnetite is a good example of this. Besides behaving like a permanent magnet, magnetite also has induced magnetic behavior.

So: I took pieces of a bunch of different materials – steel, teflon, hematite, and various other minerals – and put them in the middle of the coil to see what would happen to the magnetic field as I increased and decreased the current. I tried the mica in two different directions (with the edge pointed toward the magnetometer, and with the flat face 45° from the magnetometer) to see if there was an effect.

Materials tested in the air-core coil. From left to right: Nails, Teflon, optical calcite, tremolite, phlogopite (Mg-rich biotite), hematite, iron-rich dolomite.

Here is a plot illustrating the response of these materials to the magnetic fields produced in the coil:

The first thing you might notice is that all materials, more or less, make a linear trend on this plot. So the total magnetic field is proportional to the applied field. The biggest effect is in the bar magnets: they are ferromagnetic (the line of points does not intersect the origin, meaning that there is some magnetic field that remains when you turn off the \mu_0\vec{H} field). The rest of the materials have a different slope, varying between low (Teflon) and high-ish (empty sample holder, keys, nail).

The ratio of a material’s (induced) magnetization to the applied magnetic field is called magnetic susceptibility. It is given the symbol k, κ, or χ. If you were to measure magnetic susceptibility carefully, you could identify differences between these minerals – perhaps even between the different orientations of the mica. To do that, you need to have a good idea of what the response of your magnetometer would be if your coil were empty. That’s your model for how your measurement device works. It’s just a linear equation here: y = m x (using the variables we have here, B = \chi_0\mu_0H). You can then subtract your prediction based on the empty coil model from all of your \vec{B} magnetic field measurements to see whether the stuff you put in the coil is adding to \vec{B} (ferromagnetic, paramagnetic) or decreasing it (diamagnetic).
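The empty-coil correction can be sketched in a few lines of Python (the actual analysis for this post was done in R; the numbers below are invented for illustration):

```python
# Sketch of the empty-coil correction: the model for the empty coil
# is a line through the origin, y = m*x (B versus mu0*H). Fit m from
# empty-coil data, then subtract m*x from a sample's measurements.
# All numbers below are invented for illustration.

def fit_slope_through_origin(x, y):
    """Least-squares slope for y = m*x (no intercept term)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

applied = [0.0, 1.0, 2.0, 3.0, 4.0]       # mu0*H, arbitrary units
empty_coil_B = [0.0, 2.0, 4.0, 6.0, 8.0]  # measured with empty holder
sample_B = [0.0, 2.1, 4.2, 6.3, 8.4]      # sample that adds to B

m = fit_slope_through_origin(applied, empty_coil_B)
residual = [b - m * x for x, b in zip(applied, sample_B)]
print(residual)  # positive, growing residuals: paramagnet-like behavior
```

After subtraction, whatever trend remains is due to the sample rather than the coil itself.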

Here is what you get when you subtract out the empty sample holder’s response:

On these graphs, a positive slope indicates a material that behaves as a paramagnet; a negative slope indicates a diamagnet. Most of these materials behave like a combination of the two – not a particularly steep positive slope (except for the bar magnets) and a variety of negative slopes. The Teflon rods have the steepest negative slope because they contain the most diamagnetic material. Because ferromagnetic materials retain a magnetization \vec{M} when the applied magnetic field is reduced to zero, their behavior shows up as a vertical offset of the whole graph, as seen in the bar magnets and in the hematite below.
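Reading slope and offset off a fit can likewise be sketched in Python (again with invented data and an arbitrary tolerance, not the values from the experiment):

```python
# Sketch of interpreting the residual plots: after subtracting the
# empty-holder response, a positive slope suggests a paramagnet, a
# negative slope a diamagnet, and a nonzero vertical offset at zero
# applied field suggests a remanent (ferromagnetic) contribution.
# Data and tolerance are invented for illustration.

def fit_line(x, y):
    """Ordinary least-squares fit y = m*x + b; returns (m, b)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    return m, ybar - m * xbar

def classify(x, y, tol=1e-3):
    m, b = fit_line(x, y)
    remanent = abs(b) > tol  # offset at zero applied field
    if m > tol:
        kind = "paramagnetic"
    elif m < -tol:
        kind = "diamagnetic"
    else:
        kind = "no induced response"
    return kind, remanent

x = [0.0, 1.0, 2.0, 3.0]
print(classify(x, [0.5, 0.6, 0.7, 0.8]))    # positive slope + offset, hematite-like
print(classify(x, [0.0, -0.1, -0.2, -0.3])) # negative slope, Teflon-like
```

In practice the tolerance would come from the magnetometer’s noise floor rather than being picked by hand.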

Here is the R code for the graphs:

…and the data file.


TESC 419: Environmental Geophysics

Have you ever been curious about how geoscientists know what’s under their feet? In Environmental Field Geophysics (TESC 419), you will learn to use seismic, magnetic, and gravity surveys to investigate the shallow subsurface environment. The course is a project-based introduction to practical geophysical tools and data analysis, along with some of the physics that makes those tools work. The 7-credit field course, scheduled for this coming Autumn quarter (2015), meets Fridays and has local field trips. It counts as a field course for the geoscience degree option. Physical geology with lab and one quarter of introductory physics are prerequisites. Please contact Peter Selkin (paselkin@uw.edu) soon for an add code.