This video gives a brief overview of the process and procedures I have been using to analyze soil samples from two community gardens and student-collected samples from the MSL program. The process of slide creation and the use of a UV-vis analyzer are both shown. The resulting data will be compared to standard derivative graphs to determine whether iron oxides are present. This is important to us because it is an indicator of pollution.
In the spirit of the weblogs of old, I’m going to start posting a few useful things I’ve read. No theme, no big insights – just a log of what I’ve read.
I teach our introduction to research course in Environmental Science (a.k.a. “Environmental Research Seminar” or “Junior Seminar”), and usually incorporate a few modules about responsible research conduct. I just found out about what’s often called “helicopter research” or “parachute research”, which is the opposite of responsible. I may include one or more of these readings to introduce research ethics and stakeholder involvement:
- Gatta, M. (n.d.). Scientists are producing data without sharing it with people who actually need it. Retrieved May 27, 2020, from https://massivesci.com/articles/papers-conservation-science-communication-community-outreach/
- van Groenigen, J. W., & Stoof, C. R. (2020). Helicopter research in soil science: A discussion. Geoderma, 114418. https://doi.org/10.1016/j.geoderma.2020.114418
- Rochmyaningsih, D. (2018). Did a study of Indonesian people who spend most of their days under water violate ethical rules? Science. https://doi.org/10.1126/science.aau8972
- Helicopter research and data collection in the global South by AfriCan Geopardy [podcast]. (n.d.). Retrieved May 27, 2020, from https://anchor.fm/african-geopardy/episodes/Helicopter-research-and-data-collection-in-the-global-South-e56klh
I’m gearing up to teach a research experience course in the fall, and am considering teaching a pre-class unit on equity and fieldwork. I’d love students to read these two articles with what appear at first glance to be contrasting conclusions:
- Giles, S., Jackson, C., & Stephen, N. (2020). Barriers to fieldwork in undergraduate geoscience degrees. Nature Reviews Earth & Environment, 1(2), 77–78. https://doi.org/10.1038/s43017-020-0022-5
- McNulty, J. (n.d.). Field courses boost STEM diversity, study reveals. Retrieved May 27, 2020, from https://news.ucsc.edu/2020/05/beltran-diversity.html [Note: this is a press release, not the original study, which doesn’t seem to be available from the journal yet.]
A couple of threads on Twitter with lessons learned from this remote/online term:
- What is full participation? “Robust asynchronous learning requires that students can engage (as full participants) no matter how and when they are available. If we want to provide access to students who can’t be present synchronously, making them flies on the wall after the fact isn’t enough.” @Jessifer
- What are we learning so far (about remote learning)? @WardHydro
- Faculty should listen to URM student perspectives (and ask students what they think!) @Napaaqtuk
From my own students: I asked them to think about what was working in their classes. Flexibility was number one – faculty help most when they are most flexible and responsive. Also, checking in, allowing students to plan ahead, incorporating discussion boards in an authentic way, and facilitating group work.
Also: did I post this already? The Teaching Practices of Award-Winning Online Faculty (paywall) echoes a number of these sentiments.
- Assign roles in lab groups to improve equity: Quinn, K. N., Kelley, M. M., McGill, K. L., Smith, E. M., Whipps, Z., & Holmes, N. G. (2020). Group roles in unstructured labs show inequitable gender divide. ArXiv:2005.07670 [Physics]. Retrieved from http://arxiv.org/abs/2005.07670
- The idea of “fixing the student” is a problem: Asai, D. J. (2020). Race Matters. Cell, 181(4), 754–757. https://doi.org/10.1016/j.cell.2020.03.044
- A whole issue of The Physics Teacher on equity (especially gender equity) in the physics classroom.
- Along similar lines, thanks to Beck Strauss (@BeckEStrauss) for turning me on to this paper about LGBT+ inclusion in physics: Ackerman, N., Atherton, T., Avalani, A. R., Berven, C. A., Laskar, T., Neunzert, A., et al. (2018). LGBT+ Inclusivity in Physics and Astronomy: A Best Practices Guide. ArXiv:1804.08406 [Astro-Ph, Physics:Physics]. Retrieved from http://arxiv.org/abs/1804.08406
- Has anyone else used this online text? I’m considering it for the fall. van der Pluijm, B. A., & Marshak, S. (2016). Processes in Structural Geology and Tectonics. Unpublished. https://doi.org/10.13140/RG.2.1.2845.9126
I’ll confess: I’m pretty bad at the whole asynchronous online teaching thing. Although my classes are nominally asynchronous – I don’t grade anything that we do during or as a result of synchronous online discussions – I have a lot of trouble figuring out how to make learning happen without a discussion, and I have even more trouble getting that discussion to happen in the usual environments that online classes use. So I’ve been doing a bit of reading about online discussions, and some thinking about the problem. Here are some links along with short descriptions, as well as some of my thoughts.
A large chunk of the “how-to” material on discussion boards tends to focus on tricks to get students to interact. For example, “Five Twists for Online Discussions” has some good ideas along those lines. Tips like these presume that you are setting up discussion boards for your course, and that you are asking questions and monitoring responses: the instructor is in the driver’s seat. This is the model I’ve been using, and it seems to be a pretty common one.
A second (and perhaps larger) fraction of the posts on online discussion boards focus on the management of the boards – how and whether to monitor them, how often to post, etc. Of these, I’ve found “The ABCs Of High Quality Online Discussions” and “Discussion Boards: Valuable? Overused? Discuss.” particularly helpful. Both articles recommend that the instructor not take too active a role in the discussion, but rather work behind the scenes (email, etc.) to encourage all students to participate. The academic literature I’ve read on classroom discussions (most of it about face-to-face) notes the value of student-to-student interactions, and that’s something I’ve had problems getting students to do. Along those lines, “Student-Centered Remote Teaching: Lessons Learned from Online Education” breaks down interactions in a way that’s been helpful for me. Keep in mind that student experiences can vary, and it’s useful to have a plan for how to predict them – I found the papers “Caution, Student Experience May Vary: Social Identities Impact a Student’s Experience in Peer Discussions” by Eddy et al. and “Creating effective student engagement in online courses: What do students find engaging?” by Dixson useful in that regard.
Finally, I’ve encountered a few posts and papers specifically about the role of discussions in online courses. In “Bringing out Students’ Best Assets in Remote Teaching: Questioning Reconsidered”, Funmi Amobi considers the instructor-centeredness of traditional discussion forums, and proposes some ways to give students more control of the discussion. I suspect this might encourage students to value the discussions more (a key point in Eddy’s analysis) and thus to participate in a deeper and more significant way. A recent (paywalled) article in The Teaching Professor, “Solutions to Online Discussion Problems”, also focuses on how interactions are integrated into courses. The latter in particular considers the design of questions (Who and what role are the questions serving? Who makes them up?) and how to grade discussions. The idea of group grades for discussions (discussed more deeply in “A Better Way to Assess Discussions”) might be handy.
The design of tools for online discussion also affects how those discussions happen, which isn’t quite the theme of “Student Centered Social Interaction Online” (also paywalled), but is a subtext of it. The article likens LMS discussion boards to “internet forums from the 1990s,” which I think is supposed to be a dig at discussion boards. As someone whose internet experience is rooted in the 90s, I take a bit of offense at this, but the point isn’t lost on me. The discussion boards on Canvas, for example, are severely lacking: although there’s an option for threaded replies (seeing a post and its responses together, and being able to collapse those replies into the post), it’s not selected by default, and it’s clunky to use. Posts are not organized by subject heading (there is no subject heading), and the search/collapse interfaces become unwieldy after a discussion reaches about 60 posts. You can’t access grading tools and the entire thread of a post at the same time, so it’s impossible to grade posts in context (as one might want to do if one assigns grades based on “moving a discussion forward”, as one of the previous links suggests). Tools for organizing posts are present on nearly all modern discussion platforms – Stack Exchange, Reddit, Quora, and even support sites such as the Canvas Community all offer similar threading capabilities. Piazza has the kinds of threading and organization features that might make for good discussions, so I’m planning on trying it this summer. CourseNetworking offers some of the post-response organization tools that might be familiar to students from social media, but it may be too much new stuff for me to dump on students.
Along very different lines, I’ve used Perusall in the past, but have struggled to incorporate text annotations into my teaching, and I don’t think they facilitate the kind of discussion I want to encourage, though they may be a way to encourage student-content interactions.
In a big-picture sense, what I’d like to try when I teach next online (this summer) is a discussion model based on online support sites. Instead of the instructor-centered model (“Class Discussion Boards” below), I’d like to try something in which students start and respond to question threads, perhaps with a group focused on generating the questions each week/half week/day. In the table below, () indicate events that happen sometimes, or maybe don’t happen at all. I’m using my experience on the Canvas support boards as an example of a “real” board.
|Class Discussion Board|“Real” Online Discussion|Hybrid Idea|
|---|---|---|
|Instructor Posts Question|User Has a Problem|Instructor Poses Task|
|Student Responds|User Searches Board for Answers|Student Groups Discuss Task Separately|
|(Students Clarify or Ask Additional Qs)|User Posts Question|Group Members Determine Sticking Points|
|Instructor Asks Follow-Up|Other Users Respond|Group Members Post Questions|
|(Students Respond)|(User Clarifies)|Other Students Respond|
|(Students Agree or Disagree)|Other Users Respond|(Group Members Clarify)|
|(Discussion Informs Student Responses Elsewhere)|(Tech Support Clarifies)|Other Students Respond|
|Answers Graded|(Answers Get “Likes”)|(Instructor Clarifies)|
| |Answers Used in Real Life|Answers Return to Group, Used for Task|
So: what do you do to encourage student-student interactions in asynchronous online discussions? What issues have you encountered? What have you read/written about it?
Dixson, M. D. (2010). Creating effective student engagement in online courses: What do students find engaging? Journal of the Scholarship of Teaching & Learning, 10(2), 1–13.
Eddy, S. L., Brownell, S. E., Thummaphan, P., Lan, M.-C., & Wenderoth, M. P. (2015). Caution, Student Experience May Vary: Social Identities Impact a Student’s Experience in Peer Discussions. CBE—Life Sciences Education, 14(4), ar45. https://doi.org/10.1187/cbe.15-05-0108
I was just listening to this excellent podcast about creating screencasts (from the Cult of Pedagogy) for online or remote learning, and thought I should share it with my readers. The wonderful thing here is that it’s mostly not about the tech itself: it’s about the (minimal) use of tech in the context of research-based teaching techniques. Takeaway notes: screencast videos are good, make them short, make them personal rather than perfect, show stuff on the screen but keep it aligned with what you say, and embed checks for understanding if possible. Note that Jennifer Gonzalez and Kareem Farah, who are on the podcast, are mainly talking about K-12 education rather than college, but a lot of the same ideas apply.
Along with that, I thought I’d share my own setup for screencasting:
For my main computer, I use an old-ish Lenovo T450 laptop (13″ screen; i5 processor) with extra memory. It’s pretty bomb-proof, it rarely crashes, and I love it, but there’s no touch screen, so recording any written stuff is tough… and I do a lot of writing in my teaching. So as an input device I use my ancient Samsung Galaxy Note 10, which I have plugged into my computer. It has a stylus, so I can write on it. I wish I had a paper-like screen surface so I could write and draw more accurately, but I can’t find one for sale anymore for such an old tablet.
My main recording tool is Panopto. UW has a license, it’s nicely integrated with Canvas, students know how to use it, it’s fairly flexible, it does auto-captioning and simple editing, and it even allows in-video quizzes (which I have not used yet). On Panopto, I record my PC screen and my voice – I don’t usually show my own face, except in introductory videos. I mirror my tablet screen on my PC using scrcpy, which is open source. It’s a bit kludge-y but it works.
I have typically planned my flipped classes by looking at a set of content standards I identify at the beginning of the course. These are things like “be able to solve quantitative problems in which Gauss’s Law reasoning is necessary” or “be able to calculate the radius and period of circular motion of a charged particle in a magnetic field”. I then do the detailed planning for each class in two stages. First, I identify a set of questions related to the day’s standard(s) that I’d like students to be able to do during class time; I split these into conceptual or semi-quantitative questions (which I used to do in Poll Everywhere, but now do as Zoom polls) and quantitative questions (which I have students do together). Then, I put together a video explaining the concepts and demonstrating the skills (including simple examples) that students need to complete the in-class questions. I struggle with the length of these videos: they are generally less than 45 minutes, but the research says to make them shorter. In the future, I expect to divide them into pieces to “chunk” them for shorter attention spans. Students do generally appreciate them, though.
Before my senior year of high school, I spent a summer in a college program at UC Santa Barbara. I took a physics class through their College of Creative Studies (CCS). The most memorable part of physics, aside from the 10 or so friends I made in the class and the professor’s low-key teaching style, was the final exam. We each had to solve a problem at the chalkboard, with some help and prompting from our fellow students and the professor. I can still remember my problem: how long does it take to fall through the Earth? I think I summoned up all my physics knowledge to solve it and was both excited and terrified when I did. The sense of accomplishment and belonging from all of us in the class afterwards was palpable. I look back on that experience a lot as a pole star in my teaching journey. I’d love to give my students something equally rewarding.
Most of my teaching is in the introductory physics sequence at UW Tacoma. My classes are generally smallish, around 20-40 students, considerably more than my 10 CCS physics classmates. Board problems are tough with that many students. In addition, my students have issues that would shift the balance on board problems from excitement toward terror. Some are English language learners. Many have jobs and families and so aren’t able to devote the kind of attention to physics that we could do in a residential college. Some deal with the additional weights of racism and gender stereotypes that make public displays of problem solving additionally difficult. And, of course, underneath all of this is the pandemic: we are all staying at home and adapting to learning remotely. Nonetheless, I wanted to give students a way to feel the same sense of accomplishment that I felt on completion of their main projects in physics.
A few years ago, while adapting my course to use standards-based grading (which I no longer do; more on that in a future post), I came across a solution. For the past two quarters, I have used an approach based on the work of Andy Rundquist, physics professor at Hamline University in Minnesota: instead of a midterm and a final (and in addition to weekly content quizzes), students submit 2- to 5-minute videos in which they show and discuss solutions to physics problems of their choice. Rundquist’s work describes the benefit of including students’ voices in the problem solutions that they do for assessment purposes. Including student voice allows the instructor to evaluate not only whether the student can come up with a reasonable solution to a problem (which they often do), but to hear the student’s thought process associated with that solution. Overall, I have been impressed by the solutions that students – even students who are struggling in other aspects of the course – submit. Even if the students are getting help from other sources, I see their ability to explain their work on video as a demonstration of their knowledge.
The video problems require substantial scaffolding for both technical (e.g. posting videos) and pedagogical (e.g. choosing problems) reasons, but that scaffolding is scalable. As far as technical scaffolding goes, I have learned that students need some constraints on how videos are submitted. This is both to ensure consistency in how I can access the videos, and to maximize support when there are problems. I’ve therefore given students a clear set of requirements for the videos and required that students submit a video introducing themselves to the class as a way to iron out wrinkles in recording and posting. Students are also required to post the first half of their videos as drafts around mid-quarter so that I can give feedback early. In my experience, more of the students’ issues are pedagogical than technical. For pedagogical scaffolding, I use written problem sets as a way to help students understand what a complete solution looks like, and discussion boards to emphasize explaining one’s reasoning. Students also have ample opportunity to get feedback through an (ungraded) online practice platform as well as through (graded) weekly quizzes.
One key difference between my approach and that of my CCS Physics professor is that I require students to choose their own problems within certain limits. This gives students some additional agency in their problem solutions. The students choose problems from their textbook on their own, subject to some constraints: the problems need to be suitably complex (involving multiple steps of reasoning, not just application of a single concept or an equation) and need to fulfill one of a set of standards I identify at the beginning of the course (e.g. being able to reason about electric fields and charge using Gauss’s Law; being able to predict the current and voltage through elements in a complex circuit). Students are fairly good at identifying problems that satisfy the first constraint, but sometimes need guidance on the second. I have begun giving students a list of textbook sections that cover material associated with each standard. This not only helps guide students to relevant problems, but it helps students understand how I communicate my expectations in the content standards. Some students modify textbook problems or make up their own, but I require students to run such problems by me before submitting solutions: it is easy to come up with an ill-posed problem.
Grading is the most time-consuming part of video problems. For that reason, I limit video length to 5 minutes, I cap the number of videos at 10 per student, I have students submit the videos in two sets (5 at midterm time and the rest at finals time), and I watch the videos at 1.5x speed as I grade them. I use a holistic rubric to grade solutions, which also speeds the process. I grade video solutions on a scale of 1-4; I think of 1 being equivalent to a “revise and resubmit” editorial decision from a journal, and 4 demonstrating both a correct solution and fluency with relevant concepts (as evidenced by the student’s explanation). Because a growth mindset is important in my course, I do allow revisions up to a certain date.
I have been trying to notice any issues with equity that this assignment brings up. There are a few that I plan to work on. First, some students have issues with the personal nature of the videos. One student, an English language learner, let me know that they were uncomfortable recording their voice. We came up with an alternative assessment for that student, but I hope to get some expert input to better support students in that position in the future. Additionally, I spoke to one other student who was reticent to record a video because they thought that the video needed to have their own image in it. I clarified that having an image of the problem solution was more helpful, which seemed to put the student at ease. Second, I imagine that some students have trouble recording because of issues with access to technology. So far, more students have been able to record videos with cell phones than have participated consistently in other online aspects of the course (e.g. Zoom meetings, online practice problems, discussion board posts). Nonetheless, I welcome feedback on how to overcome this concern. Based on my observations from last quarter, I think that the asynchronous, self-directed, scaffolded nature of the assignment and the ability to give feedback and revise solutions has allowed students from a broad range of backgrounds to be successful.
In the end, student videos have been a joy to watch. I am able to see and hear students’ voices as they think aloud through challenges, and I am proud. I hope that they get as much out of the challenges as I did from my board problem.
With the rapid transition to remote teaching that came with Washington’s “Safer at Home” measures, I’ve been working on both tools (pedagogical and technological) and modifications to my teaching to help my students transition to online learning. I thought I’d start sharing those tools here.
If you’ve taught online at all, you might have noticed that it’s really hard to get students to interact, either with each other or with you. This is particularly true when trying to do synchronous instruction. First of all, let me put this out here: I’m really only doing synchronous meetings to give students a chance to interact and hash out their ideas (suuuper important in physics), not as a way to teach content or to do any assessment. Synchronous meetings aren’t mandatory in my teaching, but I do feel that they are helpful as a way to maintain connections. So: what are some good ways to do that? I’m thinking particularly about getting students to interact on Zoom, and to share out answers.
One of my colleagues, Jenny Quinn, recently noted on social media that she put together a class list that she’d randomize and post before each Zoom meeting. This accomplishes two things: it gives students a heads up as far as what order they’ll be called on in a particular meeting (it’s always best when students can plan ahead when online), and it helps her call on students in an equitable way. It’s a good way to get students to interact at least with the instructor; student-student interactions are more difficult. More on those later.
I decided to try Dr. Quinn’s approach this week, so I put together a Google Sheet with my class list. I loaded it with the class list, a pointer to the name of the student currently being called, and a set of macros that do the following:
- Advance the pointer to the next student’s name
- Go back to the previous student if there’s a mistake
- Re-order the list randomly
- Clear the pointer to the first student’s name on the list, if there’s a big mistake
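Behind the buttons, the macro logic is just pointer arithmetic over a shuffled list. Here’s a plain-Python sketch of the four macros – the real tool is a Google Sheet with Apps Script behind it, so the class and method names below are purely illustrative:

```python
import random

class RandomCallList:
    """Illustrative sketch of the four spreadsheet macros:
    next, previous, randomize, and clear (reset)."""

    def __init__(self, names):
        self.names = list(names)
        random.shuffle(self.names)
        self.pointer = 0  # index of the student currently being called

    def next_student(self):
        """Advance the pointer to the next student's name (stops at the end)."""
        if self.pointer < len(self.names) - 1:
            self.pointer += 1
        return self.names[self.pointer]

    def previous_student(self):
        """Go back to the previous student if there's a mistake."""
        if self.pointer > 0:
            self.pointer -= 1
        return self.names[self.pointer]

    def randomize(self):
        """Re-order the list randomly (pointer position is kept)."""
        random.shuffle(self.names)
        return self.names[self.pointer]

    def reset(self):
        """Clear the pointer back to the first student on the list."""
        self.pointer = 0
        return self.names[self.pointer]
```

In the sheet, each of these actions is wired to a button (and, in principle, a keyboard shortcut); the sketch just makes the bookkeeping explicit.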
The back end: There’s one page where you can paste your list of students’ preferred names/nicknames. I’ve got space for up to 500 – I don’t see this method as viable for more than that (I generally teach 20-40 person classes)! The student names are then auto-counted and a number of slots are generated according to the number of students in the class. Each name is given a random number. There is a counter at the top of the random number list that shows the number of the student you are calling on. On the right are four buttons that you can press to issue the four macros listed above: next (down arrow), previous (up arrow), randomize (u-turn arrow), and clear (cloud). These macros can also apparently be issued by pressing CTRL+ALT+SHIFT+7 through CTRL+ALT+SHIFT+0, though I don’t seem to be able to get that to work on my browser.
The front end: There’s also another page to which you should give students a link. That page just contains the list of slots, student names, and the indicator of who’s been called and who’s next. Be careful of embedding this page on your class website/LMS: if you use Google Sheets’ Publish functionality, the sheet won’t update until 5 minutes after you change it, so the call status indicator won’t be relevant. There is apparently a workaround, but I haven’t been able to get it to work correctly for me. Specifically, students would need to refresh the page manually each time the call status indicator changes – not something I want students to have to do. If you have a way to make this work, let me know.
How to use this: Open a tab to the Prototype Random Call List sheet. Choose File > Make a Copy and call the copy whatever you’d like. You may need to edit the permissions on the “back end” sheet so that you and not your students can edit the class list. If there is not a lock icon on the tab for the “back end” sheet, you can change permissions by choosing Data > Protected Sheets and Ranges and selecting “Add a sheet or range” and choosing the options that allow only you to edit the sheet. Then paste your class list into the “Name” column (starting in row 2). Then click the “Share” button at the top of the sheet, and click “Who Has Access?” Click the “Link Sharing” button to change the settings so your students can see the sheet. Post the link to your website or LMS.
Let me know if this is useful, or if you have feature requests.
Next week: the wonderful world of assignment sheets (or: how what I learned in 6th grade comes back to haunt me).
A PDF of the full poster is available.
Earlier this week, while looking at Vanderbilt U’s excellent Bloom’s Taxonomy page, I came across this highly useful link: a PDF of Tools for Teaching by Barbara Gross Davis (1993). It’s a bit lecture-heavy for me, but I really like the “Asking Questions” section.
Since returning from sabbatical in 2017, I’ve taught pretty much just physics (with one or two geo courses here and there), so my physics teaching game has been on my mind a lot. The main breakthrough I’ve made in my teaching over the past few years has been to adopt standards-based grading, which has allowed me to communicate my expectations to my students more clearly. I still struggle, though, with developing good standards for my particular course and my students – item 2 from Brian Frank’s list:
Easier said than done, because it’s super hard to actually have the three things it requires: (1) having an explicit idea about what it is you want students to do, (2) having that idea be a good idea, and (3) having a way of making that transparent and accessible to students. — Brian Frank (@brianwfrank) September 9, 2019
A major issue has been that the standards are basically a mix of small, disconnected tasks that I expect the students to do – these are the easy-to-assess, easy-to-communicate ones – and big, “squishy”, higher-order skills I want students to develop. Examples of the former are:
I can differentiate between isolated and non-isolated systems both conceptually and based on data about those systems.
I can calculate kinetic energy for individual objects and systems.
On the other hand, my list from last year had standards like:
I can reason about the motion of an object undergoing constant acceleration.
I know, “reason about” is a bad clause if you are going by Bloom’s taxonomy, but I find it hard to express the bigger picture of using what the Modeling Instruction folks call the constant acceleration (kinematical) model. Can you use the model to make predictions? There’s a lot you have to be able to do in order to get there, and a lot of gradations inherent in the word “use”: you could be using the ideas well in a qualitative sense, but not have the skills developed well enough to quantify your predictions. You could rely exclusively on memorized formulas without really knowing what they imply or where they come from, but use them effectively to make quantitative predictions. So I kept “reason about”.
I’ve also found that students may be able to succeed at enough of the standards to do well in the course, but still not be able to use the skills they’ve developed in an independent way. Basically, the course doesn’t do enough to challenge the traditionally successful students, and doesn’t allow the less traditional students enough say in pursuing problems that don’t fit well on tests. Students need a way to distinguish themselves that’s true to them, not just convenient for assessment.
So I’m trying to start the year off right by re-evaluating my standards in light of what I think is the most important idea I hope students get from the course: to slow down and reflect on their ideas in a methodical, systematic way. I’ve divided the course up into topics, each corresponding to a different skill or way of thinking: measurement, descriptive kinematics, momentum, forces, and energy; rotation and gravity, which we treat at the end, are outliers – more applications of ideas treated elsewhere in the class than new ideas on their own. I want to use an approach similar to Modeling Instruction (I’ve read quite a bit about MI, but I don’t feel like I understand it well enough to adapt it for a calculus-based university course), but focusing on exploring the following aspects of each topic:
- Making sense of experimental data
- Describing information using multiple representations
- Building a model and using it to reason about situations
- Applying mathematical, logical, and communication skills
- Reflecting on learning
I’m looking at these as if they are “folders” in which I can put my existing standards. Some of them take the place of the squishy, big-picture standards I used to have.
The advantage of this arrangement is that I can also then have a standard in each aspect/folder that asks students to do something distinctive – something that is theirs – that I can point to as a success beyond just quiz and homework questions:
- Making sense of experimental data – I can develop my own comparisons between data and predictions from a model or simulation
- Describing information using multiple representations – I can choose and translate fluently among the most appropriate representations of a situation
- Building a model and using it to reason about situations – I can propose and solve significant problems using reasoning based on the unit’s main idea (or: problems that incorporate more than one unit’s ideas)
- Applying mathematical, logical, and communication skills – I can independently identify situations in which significant mathematical reasoning or skill is needed, and use those skills competently or I can express complex physics ideas effectively in written or graphical communication
- Reflecting on learning – I can test or otherwise identify the limits or assumptions of models or I can thoughtfully express changes in my own thinking about physics
I’m planning to grade the “small” standards on a 1-3 basis (1: misses the point; 2: getting there; 3: meets standard), and these “big” standards on a 1-4 basis (4: distinction). A student’s grade in each topic will be the highest grade in each of the aspects. So, for example, if a student has a 4 in model building for forces, they get a 4 for the force section of the course. I’d love to require students to try for distinction in more than one aspect of a topic, but I’m not sure how to communicate that to students on Canvas (i.e. grade it) – and if I can’t communicate something, then the standards-based approach loses its luster.
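To make the roll-up concrete, here’s a minimal Python sketch of “topic grade = highest score across that topic’s aspects”; the aspect names and scores below are made up for illustration:

```python
def topic_grade(aspect_scores):
    """A topic's grade is the highest score earned across its aspects,
    so a single 4 ("distinction") in any aspect earns a 4 for the topic."""
    return max(aspect_scores.values())

# Hypothetical scores for the "forces" topic, keyed by aspect:
forces = {
    "making sense of experimental data": 3,
    "multiple representations": 2,
    "model building": 4,  # distinction earned here
    "math/logic/communication": 3,
    "reflecting on learning": 2,
}
```

So a student with the scores above would get a 4 for the force section of the course, matching the example in the text.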
So, some questions for you:
- How can I use standards to signal to students that I want them to step back and think about what they’re doing in a methodical way? (To confront their expectations, biases, preconceptions…)
- What makes a “significant” standard in your experience?
- Have you had any experience with standards-based grading in an intro, calculus-based course at the university level?
- Do you have any ideas about how to improve the system I’m proposing?
For the past couple of years, I’ve been looking into ways to get my students to think about responsible conduct in science. I’ve been looking for short readings, but haven’t come up with much (though I’d appreciate any you may want to share in the comments!). But today, while catching up on old episodes of one of my favorite podcasts, Warm Regards, I heard a discussion that might just do the trick. In the episode “The Dangers of Doing Science in the Field”, regular host Jacquelyn Gill, visiting host Sarah Myhre, and guest Jane Willenbring have a wide-ranging discussion touching on field safety, sexual harassment, macho culture, and who gets to do science. All three climate scientists are women who have had harrowing experiences in the field. Their first-hand stories are as raw and personal as they are indicative of deeper cultural problems in science and academia.