Student Evaluations

My evals came back from last quarter. I won’t tell you “how I did”, but I will tell you that neither I nor anyone I know takes the numerical evaluations very seriously, at least not without a lot of context (how students usually feel about the class, what else is going on in the instructor’s academic life, etc.). Students, please realize that this doesn’t mean we don’t care about evaluations. We genuinely appreciate thoughtful, constructive comments (even negative ones), which can really help improve the class for the next time it’s taught. Desperate, thoughtless comments (e.g. “un-confuse me!”), however, are useless.

All of this got me thinking about the recent discussion on the PHYSLRNR list about the “Faculty Pay ‘by Applause Meter’” story out of Texas A&M. Basically, A&M offered faculty the chance to earn bonus pay linked to student evaluations, and very few faculty signed up for the program. Most of the PHYSLRNR discussion has focused on the unreliability of student evaluations as a tool for determining whether students are learning. There has also been some criticism of the particular evaluation instrument used at Texas A&M. What I found interesting, and what I see in my own evals, is that courses that are more collaborative and student-centered often receive poor evaluations (according to those on the list). I think this is due in large part to students’ perception that these courses demand more effort from them than lecture courses do. Whatever the cause, evaluations certainly do not track student learning.

But still, as someone who once led a group that pushed for student input into teaching at Scripps, I have some sympathy for the students here. Certainly there are instructors who teach without regard for whether their students are learning (is their children learning?), and that needs to be fixed. I even think that basing merit pay raises on teaching is totally fine. But basing bonuses purely on evaluations is not. What about other evidence of student learning? What about evidence that the instructor is paying attention to student learning… but that something else got in the way? What about faculty peer evaluations? I’ve come to think that students are better served when the instructor is willing to try new (and maybe painful) things to get students to take their education seriously.

Miles Clowers, my faculty mentor at San Diego City College, once told me not to be afraid to fail when trying something new with a class. That advice seems to underlie many of the responses on the PHYSLRNR list. Education research suggests that there are lots of creative ways to improve and deepen students’ understanding of course material (education researchers would probably take issue with the idea that we’re supposed to focus on “material” at all… read the term in a broad sense if you wish). Of course, we should listen to our students when they have honest, constructive things to say; that is the mark of a responsive, responsible instructor. But we do our students a disservice when we listen only to their ratings. Codifying those ratings into the faculty pay structure will lead to narrow-minded teaching, not student-centered improvement.