What is effective feedback? And how do clickers provide it?

by Stephanie Chasteen on October 2, 2014

Another re-post from my work on the iclicker blog.

Last time I wrote about how clicker questions fit into a theoretical framework of assessment, and some considerations for aligning your clicker questions with your goals for your course. This week I want to review some of the literature on what features and kinds of feedback are most effective in helping students learn – and how you might use clickers to take best advantage of those features of effective feedback.

The features of feedback that I discuss in this post can be found in this nice two-page summary from the University of Colorado – Assessments that Support Student Learning. That article summarizes some points from another review article, “Conditions Under Which Assessment Supports Student Learning,” by Gibbs and Simpson.

Here is a list of the types of assessments that help students learn – and how they can be integrated with your use of clickers.

1. Assessments must be focused on the key aspects of the course

In other words, don’t ask clicker questions (or any questions) that are superfluous to your core learning goals, and don’t ask questions that only test basic recall. After all, you’re spending class time on this activity, and you also don’t want to give students the wrong impression about your expectations. For more on this, see last month’s post on backwards design. (Note: Some basic recall questions are fine, to build student understanding and confidence, but your questions should include a mix of levels.)

2. Assessments should be given frequently

Too often, students don’t find out where their weaknesses are until the exam, which covers a wide swath of material. Frequent assessments, instead, let students (and instructors) continually gauge their progress and weaknesses. Clicker questions are a great way to give frequent feedback to students. I advocate asking several clicker questions each lecture, so that students are continually given opportunities to test themselves and apply the new knowledge.

3. Feedback should be frequent and timely

Feedback is different from assessment – in the example of clicker questions, the “assessment” is the clicker question itself, and the “feedback” is the discussion and display of the answer after students have voted. In other words, how did they do on that assessment? Too often, students complete a weekly homework assignment and don’t receive their scores until several days later. That feedback isn’t very frequent, and it’s so late that it no longer matters as much to the student. Again, clickers are a great way to give students this frequent, timely feedback – students continually learn how their thinking and performance relates to the rest of the class or to an expert understanding. This is why it’s really important to discuss the wrong answers as well as the right answer when you debrief a clicker question – students need feedback on where their thinking went astray, as well as confirmation of the correct answer.

4. Feedback should focus on student learning (rather than characteristics of the student)

When feedback focuses on characteristics of the student (e.g., “you’re not very good at doing integrals”), students tend to internalize this information and feel somewhat helpless to change. On the other hand, when students receive feedback that focuses on what they can do differently (e.g., “it’s important to identify what’s changing before you write your integral expression”), it is more empowering. For more information about this idea, read about Fixed vs. Growth Mindset. Clickers, again, are useful in this way – since they are anonymous, all feedback to students focuses on the rationale for the correct answer, rather than being directed at them personally.

5. Feedback should be specific to the student

Generalized feedback isn’t that helpful to any of us; what helps most is feedback that addresses our particular difficulties or misunderstandings. When using clickers, encouraging students to discuss with their peers helps ensure that each student gets feedback that speaks to their particular thought process, at a level that is meaningful to them. You can also encourage students to reflect on their own difficulties, and give themselves feedback, through the way that you facilitate the discussion at the end of the question.

6. Feedback should address small chunks of material

Again, the typical cycle of doing homework on a week’s worth of material, or an exam on a month’s worth of material, makes it very difficult for students to self-correct and improve. There is just too much material being assessed at once. Clicker questions are a great way to “chunk” your lecture into manageable portions. For example, you might lecture for 10 minutes at a time, and then ask a clicker question to help students wrestle with those ideas.

7. Feedback should provide guidance for future efforts, and allow the student to act on the feedback

This is a piece that is too often missing in modern education. As professionals, we operate in a cycle of feedback, iteration, and improvement – I write a paper, I show it to colleagues, they give me feedback on how to make it better, I revise it, and I show it to them again. I get feedback that is directed to my needs, and a clear opportunity to incorporate that feedback. This cycle is often missing for students, who turn in a homework assignment and then get feedback that they are supposed to incorporate next time (but rarely do). Instead, you need to directly support students in acting on feedback. For example, you might ask a clicker question on an idea you know is particularly challenging, and follow it with a second, similar question, urging students to use what they learned the first time to improve their performance the second time. Or you might simply remind students of the lessons they’ve learned from each clicker question and how those lessons might apply to future work, indicating that this is an important part of their classroom experience.

So, in many ways, clickers are made to give students immediate, useful feedback – but there are some practices that you can take advantage of to make sure that this feedback is even more supportive of student learning.

Image courtesy of the PhET Interactive Simulations at the University of Colorado Boulder.

This is a repost of my work on the iclicker blog.

 

Lately, I’ve been thinking about the purpose and approach that we take in various forms of assessment. Today I’d like to step back into a little bit of theory-land, and consider a broader framework of assessment, and the ways that clickers fit into that broader framework.

Point 1: There is a disconnect between how we see a course, and how our students see the course.

As instructors, we have strongly-held values about what our course is about, and why it’s important. But we don’t always teach for what we value. Sometimes we cram in content because it’s interesting, even though it doesn’t serve our broader purpose. Sometimes we think that the ultimate point of our instruction is clear through our activities and assignments – but that is not always the case.

Students operate in a different reality from us. Because of this, and the fact that our course goals are not necessarily as transparent as they could be, it is critical to be explicit about our expectations for students, and the connections between our instruction and the course goals.

What does this have to do with clickers? Hang on, I’ll get there.

Point 2: Designing a course around core objectives provides clarity – for instructors and students

One way to address the mismatch I describe above is to explicitly design our course around learning goals that are identified in advance. This isn’t how we usually design a course – more commonly, we decide what goes into our course by finding activities that we think will be fun and interesting for students, and building the course around those. Instead, consider starting with your goals, in an approach called “Backwards Design”. Backwards Design isn’t a prescribed set of steps, but rather a philosophy of education. First, define explicit learning outcomes, such as “Students will be able to recognize equilibrium points on a graph.” You can read more about learning goals here. Then, determine how you will assess that outcome, such as having students use graphs to predict the behavior of objects. Then, LAST, identify the instructional approaches that will help students be successful on those assessments – perhaps working in groups on a tutorial on equilibrium, and getting more practice on their homework.
Backward Design: Start at the End
Backwards Design & Alignment: An example

Point 3: Frequent assessment gives powerful feedback as to whether students are achieving your goals

So, now that you have your clearly defined objectives, how do you know whether students are making adequate progress? How do your students know whether they’re on track? Assessment is a critical piece of instruction – it’s not just about finding out whether students got the message at the end of the day; assessment is about continually evaluating and giving feedback to students on where they stand. In fact…

Rapid, targeted feedback is perhaps
THE most important element of learning.

Without feedback, we can’t improve. This kind of feedback is achieved through the use of “formative assessment” – or “Assessments that provide information to students and teachers that is used to improve teaching and learning” (NRC, 2001). Compared to exams and other end-of-instruction assessments (termed “summative assessment”), formative assessments are low-stakes, aimed at helping students improve, and at helping teachers appropriately target instruction, before that exam. Another way of looking at it is that formative assessment is “When the cook tastes the soup,” and summative assessment is “When the customer tastes the soup.”

Point 4: Clickers are an excellent form of formative assessment, to student and teacher

So, here’s where we get to clickers. Clickers and peer instruction fit perfectly into the formative assessment model – clicker questions are low-stakes, rapid, ongoing opportunities for students to find out how they’re doing, for instructors to see how the class is going, and for both students and instructors to redirect their efforts based on those results.

Point 5: Align your clicker questions with your goals for students

This is important! Too often, we write clicker questions that simply test students’ ability to recall basic information. Those are, after all, the easiest questions to write. But if our goals are that our students be able to evaluate information, or analyze pieces of an equation – we had better be asking those kinds of questions through our use of clickers, for two reasons: (1) to give students practice in achieving our goals, and (2) to appropriately communicate our expectations to students. Otherwise, students may not find out what’s important to you until the exam, when it’s too late. Clickers define for students, continually, what it means to you to “understand” a topic. See my previous post on “Bloomifying Up” your questions for more tips on this.

Backwards Design: Align your goals, instruction, and assessments

So, in summary:

  • Write learning goals for your courses
  • Use clickers to assess student achievement of those learning goals
  • Make sure your clicker questions are aligned with your actual expectations for students.

In the next post I’ll write more about how clickers tie into the research on effective forms of feedback.


Using clickers in social sciences and humanities: No-one-right answer questions

September 4, 2014

This is a re-post from my work on the iclicker blog. There are lots of different types of clicker questions you can draw from (see last post for some examples), but there’s a clear distinction between two types of questions: Questions that have a right answer vs. Questions that don’t have a right answer. Questions that […]


Opening your eyes to new types of clicker questions

August 25, 2014

This is a re-post from material that I’ve shared on the iClicker Blog. One of the best things that I think you can do to get fresh ideas for clicker questions is, simply, to look at lots of different types of questions. One of the things that I have enjoyed the most about giving workshops […]


FTEP workshops on learning goals and clickers

August 12, 2014

I am giving a set of three workshops on learning goals and clickers at the University of Colorado; here are the slides and handouts for participants to download.  Please let me know if you have any problems with these or are looking for something that’s not here.  (I gave a similar set of workshops at […]


Spreading reform – beyond development and dissemination (Raina Katri, #aaptsm14)

August 11, 2014

I’m catching up on some blog posts from the AAPT meeting.   I have to say, it’s nice to blog again, and I hope to make some time for it in the future! Writing a grant?   One effort that I wanted to make sure that more people know about is the Increase the Impact project […]


Lessons learned from 8 years of institutional transformation (#aaptsm14)

August 7, 2014

I was so busy blogging about everybody else’s presentations that I haven’t had a chance to write about my own talk at AAPT!  I’ve been working madly for the past few months to pull together a monstrosity of data on the outcomes and lessons learned from our work in the Science Education Initiative at the University […]


Plenary at #perc2014: Carl Wieman and the future of PER

August 1, 2014

My mentor Carl Wieman was called upon to synthesize some of the main themes of the physics education research conference (PERC) this year.  Here are some of the things he discussed.  Note, he had a hard job, to try to draw some meaning from a lively conference with a short preparation time! Talking to some of […]


Apples vs Oranges: MOOCs vs Brick-and-Mortar course (Mike Dubson #aaptsm14 #perc2014)

July 30, 2014

The PERC bridging session was kicked off by my colleague Mike Dubson, regarding an experiment we ran at Colorado with a MOOC vs traditional university courses. MOOCs have been hailed as revolutionary educational technology. What other revolutionary technologies have affected education? The printing press, the gasoline engine (allowing us to eliminate one-room schoolhouses). But there […]


Peer Instruction and Student Preparation (#AAPTsm14)

July 29, 2014

I write a lot about the effective use of clickers and peer instruction, so I was excited at AAPT to see a talk with some interesting results on this educational technique.  Judy Vondruska (South Dakota State University) spoke about the “Influence of previous subject experience on interactions during peer instruction.”  She was using clickers like […]


The gap between knowledge and practice (#AAPTsm14)

July 28, 2014

I’m at the American Association of Physics Teachers conference this week, and will be liveblogging from a few sessions. One of my main interests is how to support successful uptake of innovative educational techniques.  My talk on Wednesday will focus on some of the outcomes from the Science Education Initiative at Colorado, and lessons learned […]


“Because the research tells me so”: Best practices in facilitating peer instruction

June 14, 2014

This is another repost from an article I wrote on the great i>clicker blog. — As a follow-up to last month’s post on research showing that peer discussion helps students learn, I’d like to share a variety of the messages that are coming out of the research on clickers and peer instruction – with particularly pertinent implications […]


Do students learn by talking to each other?

May 30, 2014

Here is another re-post from an article I wrote on the i>clicker blog. —- This month I’d like to highlight a study which I think is crucially important in cementing the value behind peer instruction. It’s not new work anymore, but it so elegantly answers a key question – “do students learn by talking to […]


New videos on undergraduate biology instruction

May 20, 2014

I’m happy to share the news about a new set of Creative Commons videos on undergraduate instruction – the Scientific Teaching series from iBiology:  http://www.ibiology.org/scientific-teaching.html.  These videos are all Creative Commons licensed so you can use them in your workshops, etc.  They have a newsletter you can sign up for to find out about new releases, […]


How can you make a “good” clicker question GREAT?

May 16, 2014

This is another re-post of a blog post at the i>clicker blog. —- Sometimes we can be lucky enough to have access to a great set of clicker questions (see, for example, the list at STEMclickers.colorado.edu). But often a good set of questions for our course doesn’t exist, or another instructor’s questions don’t quite fit. Or, […]


Getting students on-board with clickers and peer discussion

May 2, 2014

I have been blogging recently for the i>clicker blog (which has a lot of great articles on clicker use).  With their permission, I am reposting some of my articles here. —- I work a lot with faculty who are considering using clickers and peer instruction. Many faculty confide in me that they are concerned that students […]


Free #clicker webinar: Facilitating Peer Instruction Effectively

January 25, 2014

I’m giving two free webinars this coming Wednesday on the use of clickers in the classroom to promote student discussion.  I’ve given a lot of these, and they’ve always been very well received. Come join us; it should be a good time!  Each is one hour long. 11 am PT / 2pm ET:  Recording  (I […]


George Washington U. clicker workshop – Dec 10th

December 10, 2013

I am giving a workshop at George Washington University on the effective use of clickers, along with my wonderful colleague Stefanie Mollborn from Sociology.  This is a four-part half-day workshop, including information on facilitation, question writing, and tips for success. Do you want to learn how to use clickers – or any student voting technique […]


Why I donated to PhET for #GivingTuesday

December 3, 2013

When I first came to CU from the Exploratorium — the premier hands-on, “tinkering” science museum in the world — I was pretty disdainful about the idea of spending a lot of resources creating interactive simulations.  These aren’t hands-on, I thought, they’re fake, they’re missing the point.  Then I got to know the PhET simulations (http://phet.colorado.edu). […]


PhET is looking for a K12 specialist!

October 30, 2013

I work part-time with the PhET Interactive Simulation project (http://phet.colorado.edu), which many readers are familiar with.  They have a rare position open, focusing on simulation design and use at the K12 level, and I wanted to share with you all!  Please share this announcement with others who might be interested. The online posting can be found […]
