A study that I heard about last year was just released in the latest issue of Science. In A Better Way to Teach?, my colleagues in the Science Education Initiative at the University of British Columbia detail their rather elegant study. Picture this: a young upstart postdoc approaches a veteran teacher who is confident in his skill as a lecturer, and proposes a competition. I'll teach one section of your course interactively for a week, and you do your very best to lecture to those students on the same topics. Then we'll see who wins on students' post-test scores.
This is a very unusual comparison study: it allows one to compare students with similar backgrounds, who have been prepared in the same way up to that point, and who then receive different instruction on the same topics, with the same learning objectives as the end goal. What's delightful, too, is that the traditional instructor agreed to the competition, a testament to the charm and good nature of the lead author, Louis.
The results? Those in the transformed section did twice as well on the post-test. Ka-ching! Below is the original graphic (which the authors helpfully provided as a download for teachers to use; a great tool to show your students why you’re doing something different from the norm):
However, a NY Times article on the study was more cautionary. In particular, it notes that 150 students were absent from the test, and it's tough to say how that would have affected the results. In my experience, though, more of the weaker students typically come to class in a transformed section (these are the students who blow off class in a traditional lecture), and such an effect would tend to skew the results of the transformed class downward, not upward as was observed.
The Times also points out that the study authors themselves taught the transformed class, which could skew the results, rather like the Hawthorne effect: students know they're part of something special and respond positively to the innovation, especially when they know it will make the researchers happy. And since the researchers developed the assessments, it's tough for them not to keep the test items in mind as they teach (though, to be fair, the other faculty member also saw the test items in advance, and reportedly liked the questions and thought his students would do well on them).
So while the study has some threats to its validity, it's hard to run a true experiment in a classroom setting, and this one comes pretty close.
What do you think?
The original article is here.