Last week I wrote about using COPUS observations to document what happens in a classroom. This week, I wanted to talk about some of the survey measures we’re using to document change, and how.
It’s tough to ask people about how they teach. This was pointed out to me recently in my evaluation of the Workshop for New Physics and Astronomy Faculty. I asked the same question of participating faculty, pre- and post-workshop: “In the lecture portion of your course, please estimate the percentage of class time spent on active learning strategies.” I’ve been suspicious of this question from the start, since it’s so subjective. When we compared responses pre- and post-workshop, we found the following:
So, yeah, not so consistent, just one month later. I would argue that their practices probably haven’t changed that much — and that being in the workshop probably changed their perception of their practice, rather than the practice itself. Maybe some people reported more active learning because they’d just been in a workshop about the importance of active learning, and so they were biased toward reporting that they follow the workshop recommendations. Perhaps others realized that what they called “active learning” isn’t quite what education folks consider active learning, so they reported less than before. Of course, most reports are within 20 percentage points of each other, so that’s not too bad if you wave your hands at it — but some changed their reports by quite a bit.
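To make that kind of pre/post comparison concrete, here’s a minimal sketch of pairing each instructor’s two self-reports and summarizing the shifts. The instructor IDs and percentages are entirely made up for illustration — they are not the actual workshop responses.

```python
# Hypothetical pre/post self-reports of "% of class time on active learning",
# keyed by an anonymized instructor ID. These numbers are invented.
responses = {
    "A": {"pre": 30, "post": 50},
    "B": {"pre": 60, "post": 45},
    "C": {"pre": 20, "post": 25},
    "D": {"pre": 50, "post": 30},
}

# Per-instructor change in the self-report (post minus pre).
changes = {who: r["post"] - r["pre"] for who, r in responses.items()}

# Average change across instructors.
mean_change = sum(changes.values()) / len(changes)

# Who moved by 20 or more percentage points, in either direction?
big_shifts = [who for who, d in changes.items() if abs(d) >= 20]

print(changes)      # {'A': 20, 'B': -15, 'C': 5, 'D': -20}
print(mean_change)  # -2.5
print(big_shifts)   # ['A', 'D']
```

Note that the mean change can be near zero even when individual instructors shifted a lot — which is exactly why looking only at the group average can hide the inconsistency in self-reports.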
Other work has found that it’s hard even to ask people which teaching methods they use, such as “Peer Instruction.” Henderson and Dancy have found that people will say they use a technique like Peer Instruction, yet also report that they skip core pieces of it, like having students talk to each other. So, is that really Peer Instruction? Often instructors don’t recognize that these instructional techniques include certain key features; they figure that if they have students work with their peers at all, then they’re doing Peer Instruction.
There are a few surveys of teaching practices that try to get at the specifics of practice in a more systematic way. Williams et al. recently published a really wonderful paper reviewing the survey instruments for measuring postsecondary teaching practice. I recommend it highly for anyone interested in this area, as it discusses instruments that I wasn’t even aware of.
One instrument in pretty broad use is the Teaching Practices Inventory, developed by Wieman and Gilbert. It’s intended to be a quick (10 minute) survey of a variety of research-based practices. At the link above, you can take the TPI and get anonymous results that compare you to national TPI results, and you can download a Qualtrics version of it to administer yourself — handy! A lot of people are using the TPI, so it seems to fit a particular niche in STEM education evaluation. It seems to stack up well in the Williams et al. article, but I need to read that in more detail.
But we often need to identify not just how a class is taught, but how it has changed. Here are some ways we’ve used the teaching practices survey (and the COPUS observation tool) to document change:
- Administer to an instructor teaching a course during a “baseline” semester, before a course is modified, and then give to an instructor teaching the transformed course.
- Identify a similar “comparison” course, with similar student population, structure, and place in the curriculum — and give the survey to the instructor teaching that course as a comparison.
- Give to instructors in a department as a whole at the beginning of a large educational project, and then to the department as a whole later in the project. You can see large shifts in the average scores.
- Give to an individual instructor, along with comparative data from other instructors, to generate a discussion on reflective teaching practice.
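For the department-wide use in particular, the comparison can be as simple as averaging each instructor’s practice-inventory score at baseline and again later in the project. Here’s a minimal sketch with invented scores — the real TPI is scored by category, which this simplification ignores:

```python
# Hypothetical overall practice-inventory scores for a department,
# at baseline and later in the project. All numbers are invented.
baseline = {"Inst1": 22, "Inst2": 30, "Inst3": 18, "Inst4": 25}
later    = {"Inst1": 31, "Inst2": 33, "Inst3": 29, "Inst4": 24}

def dept_mean(scores):
    """Department-level average of individual instructors' scores."""
    return sum(scores.values()) / len(scores)

# Shift in the department average from baseline to later.
shift = dept_mean(later) - dept_mean(baseline)

print(dept_mean(baseline))  # 23.75
print(dept_mean(later))     # 29.25
print(shift)                # 5.5
```

The same per-instructor pairing used for the pre/post workshop comparison applies here too, and it’s worth reporting both: the department average can rise even while a few individual instructors move the other way.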
I’d love to hear others’ experiences with some of these instruments, and other uses of them. One thing that many people argue we should NOT do with these instruments is use them to evaluate teaching practices. But why the heck not? Isn’t it a better measure than student teaching evaluations? I can understand not using measures of student learning, since learning isn’t always within the instructor’s direct control — but their teaching practices are. We have a culture of choice in higher education, in which we don’t dictate how teachers teach, and there is pushback at anything that looks like an attempt to control teaching methods. I do feel there should be choice, but there should also be more accountability than the current system provides.