Reacting to their votes: Instructor agility

by Stephanie Chasteen on April 10, 2015

You don’t know how your students will vote on a clicker question, but you can anticipate and prepare yourself for the likely outcomes. It’s really important to use a clicker system that gives you a sneak preview of student responses – as i>clicker does, shown below. This lets you “hold back” the histogram from students until you’ve decided where to go with the question. Remember: giving the answer stops student thinking!

[Image: the i>clicker instructor base station, which previews student responses before you display the histogram]

I’m assuming you’re using Peer Instruction – where students vote on a question individually first, and then talk to their neighbors and re-vote after that peer discussion. I think it’s important to give students the chance to vote on their own first, because it gives everyone a chance to process the question (even those who are underperforming, or are English language learners).

So, your main decision point comes after you’ve gotten student responses to the first vote.

Scenario #1
What might you do if you get the following vote distribution (where C is the correct response)?

[Figure: first-vote histogram – a clear majority on the correct answer C, with the rest of the class scattered across the other choices]

A lot of you might say that you’d just discuss the question and move on, without having students talk to their neighbors. That’s fine, but with two caveats:

  1. A lot of students didn’t get the right answer – if you add up the numbers on the unpopular choices, it’s a sizeable fraction of the class. So I think it’s really important to always talk about why the right answer is right AND why the wrong answers are wrong.
  2. What is your cutoff, above which you no longer have students turn and talk to their neighbors? For us it’s 80% – if fewer than 80% got it right, we want students to talk it out, because the question is challenging enough to be worth discussing. Decide on your own cutoff in advance, so you’re not hemming and hawing in front of the class.

Scenario #2
What if you got the above distribution, but C was NOT the correct response? What might you do then?

Well, perhaps you expected them all to get it wrong – maybe C is a really tempting distractor. If so, you might have them turn to their neighbors, but throw them a hint (like asking them to consider a certain situation, or think about a certain idea). If you’re surprised that they’re all getting it wrong, you might ask a few students to explain their answers, so you can see if there is a problem with the interpretation that you need to clear up.


Scenario #3

What if you get this distribution after the first vote?

[Figure: first-vote histogram – votes spread roughly evenly across all of the choices]

You might have good luck having students turn and talk to their neighbor on this – but I’d say that a distribution like this suggests that the students are just guessing. Is the question confusing? Are they not awake? Are they missing some key piece? You might fish around to see if you need to give them some more information.

Scenario #4

Lastly, what about this distribution, after the first vote?

[Figure: first-vote histogram – votes split mainly between two of the choices]

This is a beautiful distribution – students are strongly drawn to two of your choices, and they’ll have good, productive arguments. Have them turn and talk to their neighbors: this is a picture-perfect opportunity for peer instruction.
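
To pull the four scenarios together, here’s a minimal sketch of the decision logic in Python. It’s purely illustrative – the function, the 80% cutoff, and the 50% and 35% thresholds for “converged” and “spread out” votes are my own inferences from the scenarios above, not anything from real clicker software:

```python
def next_step(votes, correct=None, cutoff=0.80):
    """Suggest what to do after the FIRST vote on a clicker question.

    votes   -- dict mapping each answer choice to its fraction of the vote
    correct -- the intended correct choice (None for survey questions)
    cutoff  -- fraction correct above which we skip peer discussion (ours: 80%)
    """
    top = max(votes, key=votes.get)

    # Scenario 1: at or above the cutoff, discuss the question and move on --
    # but still explain why the right answer is right AND why the wrong
    # answers are wrong.
    if correct is not None and votes[correct] >= cutoff:
        return "discuss and move on"

    # Scenario 2: students converged on a tempting distractor -- run the peer
    # discussion but throw them a hint, or ask a few students to explain.
    if correct is not None and top != correct and votes[top] >= 0.50:
        return "peer discussion, with a hint"

    # Scenario 3: votes spread roughly evenly suggest guessing -- probe
    # before discussing (is the question confusing? missing a key piece?).
    if votes[top] < 0.35:
        return "probe for confusion; consider giving more information"

    # Scenario 4 (and anything else below the cutoff): turn and talk to
    # your neighbors -- the picture-perfect peer instruction case.
    return "turn and talk to your neighbors"


# Example: the Scenario 4 distribution, split mainly between A and C.
print(next_step({"A": 0.45, "B": 0.05, "C": 0.40, "D": 0.10}, correct="C"))
# -> turn and talk to your neighbors
```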

Note that I don’t show students the distribution before I have them talk to their neighbors – I want the mystery to be maintained to spark that discussion. The only exceptions are (a) when the distribution is pretty near 50/50, so students realize there is really split opinion in the room, or (b) when I’m using a survey/discussion question, where I feel that seeing the diversity of opinion in the room will spark student discussion.

Images courtesy of Peter Newbury and Cynthia Heiner.

This is reposted from my article on the i>clicker blog.



New meta-analysis: Active learning improves student performance

by Stephanie Chasteen on March 27, 2015

It’s not quite so new anymore, but still exciting!

While we have more and more data showing that active learning techniques improve student learning, the field has been sorely lacking a systematic review of that evidence. Recently, a crackerjack team of education researchers stepped up to the plate with just what I’ve been looking for: a lovely meta-analysis that highlights the impact of active learning across disciplines, classrooms, and study designs (Freeman, Eddy, McDonough, Smith, Okoroafor, Jordt, and Wenderoth, PNAS 111(23), 2014).

A meta-analysis is a study in which other studies (rather than people) are themselves the object of research. For this meta-analysis, 225 studies were chosen to determine whether active learning improves exam scores and/or lowers failure rates. Active learning was defined (after extensive analysis) as follows:

“Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It also emphasizes higher-order thinking and often involves group work.”

All the studies contrasted traditional lecturing with some kind of active-learning intervention during class or recitation time, in any STEM field, and measured the effects on student learning using exams, concept inventories, and other types of outcomes. The power of a meta-analysis is its ability to coalesce such diverse outcome data into a single measure – the average effect size. In this case, the effect size is a weighted average of the differences between traditional and active-learning classrooms, adjusted to a common scale.
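
To make “adjusted to a common scale” concrete, here is a schematic sketch – with made-up numbers, not the authors’ data or code – of how a standardized effect size is computed for each study and then combined into a weighted average:

```python
def effect_size(mean_active, mean_trad, sd_pooled):
    """Standardized mean difference (Cohen's d): the score gap between the
    two course formats, in units of the pooled standard deviation."""
    return (mean_active - mean_trad) / sd_pooled

# Three hypothetical studies, each on a different exam, so the raw score
# gaps only become comparable after standardization.
studies = [
    {"d": effect_size(74.0, 68.0, 12.0), "n": 120},  # d = 0.50
    {"d": effect_size(81.0, 77.5, 10.0), "n": 300},  # d = 0.35
    {"d": effect_size(62.0, 55.0, 14.0), "n": 60},   # d = 0.50
]

# Coalesce the studies into one average effect size, giving bigger studies
# more weight. (Weighting by sample size here for simplicity; real
# meta-analyses typically use inverse-variance weights.)
avg = sum(s["d"] * s["n"] for s in studies) / sum(s["n"] for s in studies)
print(f"average effect size: {avg:.2f} standard deviations")  # 0.41
```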

Here are the core findings of the study:

  1. Student performance on exams increases by 0.47 standard deviations in active-learning classrooms – roughly equivalent to a 0.3-point increase in final grade.
  2. Students in traditional classrooms are 1.5 times more likely to fail than students in courses with active learning (failure rates were 33.8% and 21.8%, respectively) – see the quick arithmetic check after this list.
  3. These results held true across different STEM disciplines, in courses for majors or non-majors, and in lower- and upper-division courses.
  4. Effect sizes were greater for concept inventories than for instructor-written exams.
  5. Active learning had the greatest impact in courses of 50 students or fewer.
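
Both headline numbers above can be sanity-checked with a couple of lines of arithmetic. This is my own quick check, not a computation from the paper – in particular, the implied grade standard deviation is a back-calculation:

```python
# Point #2: relative risk of failing in a traditional vs. active classroom.
fail_traditional = 0.338
fail_active = 0.218
print(f"risk ratio: {fail_traditional / fail_active:.2f}")  # 1.55, i.e. ~1.5x

# Point #1: a 0.47-standard-deviation exam gain reported as a ~0.3-point
# grade increase implies a final-grade SD of roughly 0.3 / 0.47 grade
# points -- an inference from those two numbers, not a figure in the paper.
print(f"implied final-grade SD: {0.3 / 0.47:.2f} grade points")  # 0.64
```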

Point #4 above is interesting: the effect sizes were greater for concept inventories than for traditional exams. Concept inventories are tests developed by researchers to assess student learning of difficult concepts. They do not contribute to a student’s grade, which allows comparison across courses, instructors, and institutions. The authors note that concept inventories often test higher-order cognitive skills (like critical thinking, rather than memorization), and active learning has been shown to have a greater impact on these higher-order skills – which may explain the higher gains on concept inventories. Additionally, instructor-written exams are likely to have variable levels of reliability and validity; in short, they may or may not measure the student learning that the instructor hopes they will.

But what about the fact that we tend to publish studies that show an effect, and not publish null results? The authors took that into consideration: they found that an additional 114 unpublished studies showing no effect would be needed to erase the results on student performance, and an additional 438 to erase the results on failure rate. Also, the effect size is likely an underestimate, because active-learning classrooms retain more students (including underperforming students) than traditional ones do. Lastly, as the authors point out, the effect size they calculated nearly matches that of two other meta-analyses.

These results are very compelling, but the authors do note that it’s hard to know whether all faculty would achieve them if they were required to use active-learning techniques. The volunteer faculty in these studies may be more motivated, or more skilled in such techniques. Also, most of the studies were not randomized trials, which reduces the robustness of the results.

Some highlights of the authors’ discussion of the results:

  • If education studies were randomized medical trials, effects this large on failure rate would mean halting the trial for ethical reasons
  • The failure rates cited in the study convert to over US $3,500,000 in lost tuition dollars, which could be saved through the use of active learning techniques
  • Retaining students in STEM disciplines would help address our national “pipeline” problem, since attrition is often driven by low passing rates and low grades in STEM courses

So, gosh, it’s hard to justify not using active-learning techniques in class!

Resources:

This is a repost of my article on the i>clicker blog. 



Tutorials in Introductory Physics at CU

March 24, 2015

I just finished a short video on the use of Tutorials in Introductory Physics at the University of Colorado Boulder, and wanted to share it with you all.  It gives a good overview of Tutorials and why you would want to use them. You can find out more about Tutorials here. Here is a link […]

Read the full article →

Using clickers in small classes

March 14, 2015

As more instructors are trying clickers and peer instruction in their courses, I get more questions about how to use them in small classes. I’d like to share a few things I’ve learned through talking with faculty who teach courses of various sizes. The first question I ask is, “what do you mean by small?” […]

Read the full article →

Student motivation to engage with clicker questions

February 27, 2015

I’ve been doing a lot of reading in the educational psychology literature lately, to better understand what the learning sciences have to tell us about student motivation – and how that might relate to what we should do as instructors to motivate students to engage with clicker questions. I wanted to share what I’ve found […]

Read the full article →

Learn the latest advances in physics education… from your living room

January 22, 2015

I’m excited to announce that the New Faculty Workshop videos are online! https://www.physport.org/nfw This is a project that I helped with, doing the filming and editing of the presentations.  For those of you who aren’t familiar with them, the Workshop for New Faculty in Physics and Astronomy is a 3-day workshop for new faculty in physics and […]

Read the full article →

Videos on scientific teaching

January 9, 2015

I wanted to make a pitch for a very nice set of videos on research-based teaching methods:  the  iBiology Scientific Teaching Series.  This is a series of videos about Active Learning in undergraduate biology education, but is applicable across STEM.  They are looking to publicize their videos, and get feedback! From the producers:   The videos include […]

Read the full article →

Feedback codes: Giving student feedback while maintaining sanity

January 5, 2015

One of the most important things in learning is timely, targeted feedback.  What exactly does that mean?  It means that in order to learn to do something well, we need someone to tell us, specifically, what we can do to improve – soon after we’ve completed the task. Unfortunately, most feedback that students receive is too general […]

Read the full article →

Learning, and assessing, collaboratively: Group Exams

December 29, 2014

I am one of many who are convinced that people learn better in collaboration with others.  However, there’s always this somewhat disturbing schizophrenia when it comes to assessment — we spend all this time emphasizing group work and collaboration, but come exam time — it’s everyone for him or herself. So I was very excited […]

Read the full article →

Free webinar, December 11th: ClickerStarter

December 5, 2014

I’m giving another free webinar for i>clicker this coming Thursday, December 11th, at 10 am ET (7 am PT).  This is called “ClickerStarter for College Faculty” and is intended as a quick primer on the effective use of clickers for those who want an overview of the benefits and uses of clickers. Have you heard […]

Read the full article →

Clicker Q&A

December 4, 2014

As some teachers are just getting things rolling with clickers and peer instruction for the Spring, I thought I would share some questions that faculty have asked me about clickers and peer instruction. This is something I’ve added recently to my workshops, and am really liking it – I ask participants to share their questions in advance, […]

Read the full article →

Why NOT to grade clicker questions for correctness

November 15, 2014

One thing that faculty really struggle with is whether or not, and how much, to give students credit for their clicker question answers. You want to give students some incentive to participate, but grading opens a whole can of worms. One of my faculty workshop participants explained the dilemma very astutely: “If I do not […]

Read the full article →

Measuring and improving students’ engagement

November 2, 2014

I’ve been working over the last year or so to better understand how to promote student buy-in to interactive techniques such as clickers and group work.  That work resulted in a set of resources on how to “frame” students’ roles in the class, especially in the first week. Now I’ve been delving deeper into this […]

Read the full article →

What is effective feedback? And how do clickers provide it?

October 2, 2014

Another re-post from my work on the iclicker blog. Last time I wrote about how clicker questions fit into a theoretical framework of assessment, and some considerations for aligning your clicker questions with your goals for your course. This week I want to review some of the literature on what features and kinds of feedback are most […]

Read the full article →

Backwards design: Where clicker questions fit into a framework of assessment

September 14, 2014

This is a repost of my work on the iclicker blog.   Lately, I’ve been thinking about the purpose and approach that we take in various forms of assessment. Today I’d like to step back into a little bit of theory-land, and consider a broader framework of assessment, and the ways that clickers fit into […]

Read the full article →

Using clickers in social sciences and humanities: No-one-right answer questions

September 4, 2014

This is a re-post from my work on the iclicker blog. There are lots of different types of clicker questions you can draw from (see last post for some examples), but there’s a clear distinction between two types: questions that have a right answer vs. questions that don’t have a right answer. Questions that […]

Read the full article →

Opening your eyes to new types of clicker questions

August 25, 2014

This is a re-post from material that I’ve shared on the iClicker Blog. One of the best things that I think you can do to get fresh ideas for clicker questions is, simply, to look at lots of different types of questions. One of the things that I have enjoyed the most about giving workshops […]

Read the full article →

FTEP workshops on learning goals and clickers

August 12, 2014

I am giving a set of three workshops on learning goals and clickers at the University of Colorado; here are the slides and handouts for participants to download.  Please let me know if you have any problems with these or are looking for something that’s not here.  (I gave a similar set of workshops at […]

Read the full article →

Spreading reform – beyond development and dissemination (Raina Khatri, #aaptsm14)

August 11, 2014

I’m catching up on some blog posts from the AAPT meeting.   I have to say, it’s nice to blog again, and I hope to make some time for it in the future! Writing a grant?   One effort that I wanted to make sure that more people know about is the Increase the Impact project […]

Read the full article →

Lessons learned from 8 years of institutional transformation (#aaptsm14)

August 7, 2014

I was so busy blogging about everybody else’s presentations that I haven’t had a chance to write about my own talk at AAPT!  I’ve been working madly for the past few months to pull together a monstrosity of data on the outcomes and lessons learned from our work in the Science Education Initiative at the University […]

Read the full article →