New meta-analysis: Active learning improves student performance

by Stephanie Chasteen on March 27, 2015

It’s not quite so new anymore, but still exciting!

While we have more and more data showing that active learning techniques improve student learning, the field has sorely needed a systematic review of the evidence. Recently, a crackerjack team of education researchers stepped up to the plate with just what I've been looking for: a lovely meta-analysis that highlights the impact of active learning across disciplines, classrooms, and study designs (Freeman, Eddy, McDonough, Smith, Okoroafor, Jordt, and Wenderoth, PNAS, 111(23), 2014).

A meta-analysis is a study in which other studies (rather than people) are themselves the object of research. For this meta-analysis, 225 studies were chosen
to determine whether active learning improves exam scores and/or lowers failure rates. Active learning was defined (after extensive analysis) as follows:

“Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It also emphasizes higher-order thinking and often involves group work.”

All the studies contrasted traditional lecturing with some kind of active learning intervention, occurring within class or recitation time, in any STEM field. The studies measured effects on student performance on exams, concept inventories, and other types of outcomes. The power of a meta-analysis is its ability to coalesce such diverse outcome data into a single measure: the average effect size. In this case, the effect size is a weighted average of the differences between traditional and active-learning classrooms, adjusted to a common scale.
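To make that concrete, here's a minimal sketch (in Python, with invented numbers rather than data from the paper) of the usual inverse-variance approach to pooling standardized effect sizes:

```python
# Minimal sketch of pooling standardized mean differences (Cohen's d)
# across studies. The numbers are invented for illustration; they are
# not the studies from Freeman et al.

studies = [
    # (effect size d, variance of d) for each hypothetical study
    (0.52, 0.04),
    (0.31, 0.02),
    (0.60, 0.09),
]

# Each study is weighted by the inverse of its variance, so more precise
# studies (larger samples, smaller variance) count for more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

print(f"Pooled effect size: {pooled:.2f} standard deviations")
```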

Here are the core findings of the study:

  1. Student performance on exams increases by 0.47 standard deviations in active-learning classrooms, roughly equivalent to a 0.3-point increase in final grade.
  2. Students in traditional classrooms are 1.5 times more likely to fail than students in courses with active learning (failure rates were 33.8% and 21.8%, respectively; see the quick check after this list).
  3. These results held true across different STEM disciplines, in courses for majors or non-majors, and in lower- and upper-division courses.
  4. Effect sizes were greater for concept inventories than for instructor-written exams.
  5. Active learning had the greatest impact in courses of 50 students or fewer.
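The 1.5× figure in point #2 falls right out of the two failure rates; here's the quick arithmetic:

```python
# Quick check of the relative risk of failing, using the failure
# rates reported in the meta-analysis.
traditional_fail = 0.338  # 33.8% failure rate under traditional lecturing
active_fail = 0.218       # 21.8% failure rate with active learning

print(f"Relative risk: {traditional_fail / active_fail:.2f}x")  # -> 1.55x
```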

Point #4 above is interesting: effect sizes were greater for concept inventories than for traditional exams. Concept inventories are research-developed tests that assess student learning of difficult concepts. They do not contribute to a student's grade, and they allow comparison across courses, instructors, and institutions. The authors note that concept inventories often test higher-order cognitive skills (like critical thinking, rather than memorization). Active learning has been shown to have a greater impact on these higher-order skills, which may explain the higher gains on concept inventories. Additionally, instructor-written exams are likely to have variable levels of reliability and validity; in short, they may or may not measure the student learning that the instructor hopes they will.

But what about the fact that we tend to publish studies that show an effect, and not publish null results? The authors took that into consideration and found that an additional 114 studies showing no effect would be needed to erase the result on student performance, and an additional 438 studies to erase the result on failure rate. Also, the effect size is likely an underestimate, because active-learning classrooms retain more students (including underperforming students) than traditional ones do. Lastly, as the authors point out, the effect size they calculated nearly matches that of two other meta-analyses.
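If you're curious how a "how many null studies would it take" number can be derived, here's a minimal sketch of Orwin's fail-safe N, one common approach to the file-drawer problem (the authors' exact procedure may differ, and the study count below is invented for illustration):

```python
# Orwin's fail-safe N: how many unpublished null-result studies would be
# needed to dilute a pooled effect below a "trivial" threshold. This is
# one common approach to the file-drawer problem, not necessarily the
# procedure used in the paper.

k = 150            # number of published studies (invented for illustration)
d_observed = 0.47  # pooled effect on exam performance (from the paper)
d_trivial = 0.10   # threshold below which the effect is negligible (assumed)

n_failsafe = k * (d_observed - d_trivial) / d_trivial
print(f"Null-result studies needed: {n_failsafe:.0f}")  # -> 555
```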

These results are very compelling, but the authors do note that it's hard to know whether all faculty would achieve these results if they were required to use active-learning techniques. The volunteer faculty in these studies may be more motivated, or more skilled in such techniques, than average. Also, most of the studies were not randomized trials, which reduces the robustness of the results.

Some highlights of the authors’ discussion of the results:

  • If this were a randomized medical trial, an effect this large on failure rates would force the trial to be halted for ethical reasons
  • The failure rates cited in the study translate to over US $3,500,000 in lost tuition, money that could be saved through the use of active-learning techniques
  • Retaining students in STEM disciplines would help address our national “pipeline” problem, which is often worsened by low passing rates and low grades in STEM courses

So, gosh, it’s hard to justify not using active-learning techniques in class!

This is a repost of my article on the i>clicker blog. 
