Understanding clicker discussions (#clicker series)

by Stephanie Chasteen on April 5, 2013

To follow up on the last post on the benefits of anonymity in answering clicker questions and Peer Instruction, today I'd like to report on some of the newest research coming out of CU. Jenny Knight has been a co-author on two very nice papers in recent years, reporting that peer discussion does enhance student understanding, and that it's important to combine peer discussion with instructor explanation on clicker questions.

Along with my colleague Sarah Wise, Jenny has lately been combing through a ton of transcription data from conversations during clicker questions. There isn't much data so far on the actual quality of these conversations, in part because it's hard to gather and hard to analyze. James and Willoughby (2011) reported that student conversations often get off-topic or become simplistic when clicker questions are worth a lot of points ("high stakes").

What are students talking about?  Are their discussions productive?

And how do we define "productive"? To answer these questions, Jenny and Sarah recorded four tables of students in Jenny's junior/senior biology class. Jenny uses Peer Instruction: students vote on their own, then discuss with their neighbors if fewer than 70% of the class got the question correct, and then revote. They coded the conversations on the following dimensions:

  • Argumentation.  (Making a claim, offering reasoning for an idea, making sense of an idea by restating in their own words).  Further coded as:
    • Claim.   (Statement of preference for an answer)
    • Reasoning.  (Explanation for choosing an answer)
    • Question.  (Asking peer for explanation, asking about definitions, etc.)
    • Background (Clarifying what the question is asking, etc.)
    • N/A
  • Social.  (How students challenge or support each others’ ideas, how they ask questions)
  • Participation.  (Turns of talk, dominance)
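The Peer Instruction flow described above can be sketched as a short function. This is a minimal sketch: the 70% threshold comes from the post, but the function name, data shapes, and example votes are my own illustration.

```python
# Sketch of the Peer Instruction voting flow: students vote individually,
# and if fewer than 70% are correct, they discuss with neighbors and revote.
# The 70% threshold is from the post; everything else is illustrative.

def peer_instruction_round(votes, correct_answer, threshold=0.70):
    """Decide whether a clicker question needs peer discussion and a revote."""
    fraction_correct = sum(v == correct_answer for v in votes) / len(votes)
    if fraction_correct < threshold:
        return "discuss_and_revote", fraction_correct
    return "move_on", fraction_correct

# Example: 6 of 10 students correct, which is below 70%
decision, frac = peer_instruction_round(
    ["A", "B", "A", "A", "C", "A", "A", "B", "A", "D"], correct_answer="A"
)
print(decision, frac)  # discuss_and_revote 0.6
```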

They found that, to my surprise, "Reasoning" accounted for the largest percentage of discussion (39%), followed by "Claim" (31%). This is great news, and I wonder how it would differ in classes where the instructor uses different facilitation strategies. An instructor sets certain classroom norms and culture by the way that he/she frames the use of clickers, and I know that Jenny has a particular focus on hearing student arguments and reasoning, and encouraging discussion.

Does higher quality reasoning lead to a correct answer?

OK, so here’s the even more interesting piece.  They also coded the Quality of Reasoning:

  • 0 = no reasoning
  • 1 = one student gave an explanation for their answer
  • 2 = two or more students exchanged reasoning, but did not support with evidence
  • 3 = two or more students exchanged reasoning, and supported with evidence
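The rubric above maps cleanly onto a small coding function. The rank definitions are from the post; the parameter names and inputs are my own illustration of how a transcript coder might apply them.

```python
# Sketch of the quality-of-reasoning rubric (ranks 0-3) described above.
# The rank definitions come from the post; parameter names are illustrative.

def reasoning_rank(num_students_explaining, supported_with_evidence):
    """Code one clicker discussion on the 0-3 quality-of-reasoning scale."""
    if num_students_explaining == 0:
        return 0  # no reasoning offered
    if num_students_explaining == 1:
        return 1  # a single student explained their answer
    if supported_with_evidence:
        return 3  # two or more exchanged reasoning, with evidence
    return 2      # two or more exchanged reasoning, without evidence

print(reasoning_rank(2, supported_with_evidence=True))  # 3
```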

And what they found is really fascinating: DISCUSSIONS WITH HIGHER QUALITY REASONING DID NOT NECESSARILY RESULT IN MORE CORRECT ANSWERS. When they examined whether a group's revote was more likely to be correct if the discussion had higher-quality reasoning, there was a slight trend: discussions ranked "3" were more likely to result in correct revotes, but the difference wasn't significant. Of course, this is quite a small sample size (45 discussions at rank "3" and 20 at rank "2"), but still, not what we would have hoped for.

Can instructor behaviors impact discussion?

I love these guys; they set out to answer the very questions I'm interested in. Jenny and Sarah also had the instructor give different instructions to students, resulting in an "answer-centered" or "reasoning-centered" class, as outlined below:

[Table: instructor cues used to create the answer-centered vs. reasoning-centered conditions]

They found that, indeed, the instructor cue did have an effect on student reasoning: the reasoning-centered class resulted in a higher average quality-of-reasoning rank (2.5 versus 2.0), which was statistically significant. Note that it didn't affect a variety of other variables (turns of talk, percent correct on the revote, percent of talk devoted to claims or reasoning), though there were several non-significant trends: the reasoning-centered classroom resulted in more turns of talk and a higher percentage of talk devoted to reasoning.

Conclusions

Their conclusions are as follows:

  • Different kinds of interactions during discussion can be successful (i.e., several formats lead to correct answers)
  • More exchanges of reasoning do not necessarily lead to the correct answer
  • High quality reasoning does not necessarily lead to improved performance.  Sometimes students don’t link their ideas to evidence, yet still get the correct answer
  • Instructor cues to focus on reasoning do increase the quality of student reasoning

This research isn’t yet published, but is in the works, so stay tuned!
