Why NOT to grade clicker questions for correctness

by Stephanie Chasteen on November 15, 2014

One thing that faculty really struggle with is whether or not, and how much, to give students credit for their clicker question answers. You want to give students some incentive to participate, but grading opens a whole can of worms. One of my faculty workshop participants explained the dilemma very astutely:

“If I do not assign “grades” or “scores” to clicker activities, many students will not participate in the clicker activities. Only the motivated students participate and the “feedback” I get from the responses are skewed towards high-performing students. Consequently, my goal of reaching poorly performing students becomes thwarted.

If I assign “grades” or “scores” to clicker activities then I worry about students operating multiple clickers so their friends will get good marks without having to attend class. I think this would be frustrating to students who “play by the rules” and are still not doing well in the class.

If I back off a bit from “grades” and assign “points for participation” to clicker activities, in addition to the multiple clicker concern many students will click random responses just to “prove” to me they are participating in the discussions. In this case the real-time feedback is useless to me. Occasionally, it turns into a game and a few students intentionally utilize their clickers in a way that brings about the biggest reaction (laughter) from the class. I think this would be frustrating to students who really want to learn.”

That comment outlines three different ways to think about grading: No credit, credit for correctness, and credit for participation.

Points affect student conversations

So, how do we think about the best way to grade clicker questions? First, I want to share the results of a very pertinent study from 2006, “The effect of grading incentive on student discourse in Peer Instruction” (James, Am. J. Phys., 74(8), 2006). This researcher did something that is really challenging to do, both in terms of time and logistics – he actually listened in on a wide variety of clicker discussions in introductory astronomy courses at his institution. He categorized what was happening in each discussion: whether students were stating their answer, posing a question or idea, disagreeing with their partner’s idea, and so on.

In one class, the instructor’s grading scheme emphasized the right answer; the clicker score was 12.5% of the student grade, and incorrect responses got only 1/3 credit. Note that this scheme is similar to what I see a lot of instructors use. We’ll call this the “correct-answer grading” scheme.

In the other instructor’s class, the clicker score counted for 20% of the course grade, and incorrect and correct responses counted for the same amount. We’ll call this the “participation-credit” scheme. (Note that James calls these the high-stakes and low-stakes classes, but I disagree with that framing; both count fairly heavily toward the student’s grade, but only one emphasizes the right answer.)
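
To make the incentive difference concrete, here is a minimal sketch (mine, not from the paper) of how much of the available clicker credit a student keeps under each scheme. The course weights and the 1/3 credit for wrong answers come from the descriptions above; the number of questions and the number of wrong answers are purely illustrative.

```python
# Hypothetical comparison of the two grading schemes described above.
# Weights (12.5% with 1/3 credit for wrong answers vs. 20% with full
# credit for any response) are from the post; question counts are made up.

def clicker_credit(course_weight, wrong_answer_credit, n_wrong, n_total):
    """Fraction of the overall course grade earned from clicker questions."""
    n_right = n_total - n_wrong
    earned_fraction = (n_right + wrong_answer_credit * n_wrong) / n_total
    return course_weight * earned_fraction

# A student who answers 30 of 100 clicker questions incorrectly:
correct_answer_scheme = clicker_credit(0.125, 1 / 3, n_wrong=30, n_total=100)
participation_scheme = clicker_credit(0.20, 1.0, n_wrong=30, n_total=100)

print(f"Correct-answer grading: {correct_answer_scheme:.3f} of course grade")  # ~0.100
print(f"Participation credit:   {participation_scheme:.3f} of course grade")   # 0.200
```

Under the correct-answer scheme every wrong vote costs points, so students have a direct point incentive to converge on whatever answer looks safest; under the participation scheme the credit is the same no matter which answer they choose.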

Dr. James found out that in the “correct-answer grading” class, student conversations were much more likely to be dominated by one member, usually the one who was more knowledgeable, and most of the conversation focused on that student’s answer choice. In the participation-credit classroom, however, the conversations were more balanced.

Additionally, in the correct-answer grading class, students were more likely to give the same vote as their clicker partner. That suggests that the instructor in that class was getting misleading results about how well the class actually understood each question; students are more likely to vote for whatever they think might earn them points, rather than give the answer that reflects their actual thinking.

I’ll note that this is a small study, and the comparison is between two different instructors and two different sized classes, but similar results have been demonstrated in other types of courses in a second paper by the same author. That paper additionally found that high-stakes classrooms led to more passive conversational styles.

So, if we give a lot of points for correctness, it’s likely to shut down student conversation, and mislead us as to the level of student understanding.

Points affect student motivation

Another thing to consider is that extrinsic rewards (i.e., those that come from outside yourself) are less likely to lead to intrinsic motivation. I’ve been reading some educational psychology, and that literature warns strongly against giving points to get students to engage in something, because students then become less likely to personally value the activity. However, it does recommend giving points or other rewards if the teacher thinks that students are not likely to engage in the activity otherwise. For some of our students that may be the case, but I’d argue it’s hard to know for sure without trying it.

So, how do we encourage students to participate? I personally feel that a combination of intrinsic and extrinsic motivation might work best. Extrinsic motivation is the sense that you are doing something for some external reward (e.g., points). Intrinsic motivation is the internal sense that this is something that is important for your own learning. See my previous post on Getting students on-board with clickers for some ideas on creating buy-in. I think this can address some of the concerns about students playing a game with their clickers, and not really engaging with the questions. If the questions are interesting, and they understand that the discussions matter for their learning, and there is a culture of participation in the class, then most students should engage.

A possible compromise

What we do at Colorado is to offer, for example, 2 points for participation, and 1 point for correctness, so that there is a little sense of accomplishment for getting the right answer. But then the clicker questions only count as extra credit, offsetting a poor homework or exam score. Some instructors choose to make it more like 10% of their course grade, but I’m a bit nervous about this approach for the reasons discussed above.
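
Here is a minimal sketch of that kind of mixed scheme, reading the split as 2 points for participating plus 1 bonus point for a correct answer, and treating the total only as a small amount of extra credit. The 2% cap and the example numbers are my own illustrative assumptions, not the actual Colorado policy.

```python
# Illustrative sketch of a mixed participation/correctness scheme that
# counts only as extra credit. The 2-point/1-point split follows the post;
# the 2% cap and the example numbers are assumptions for illustration.

def clicker_points(participated, correct):
    """Points for a single clicker question."""
    points = 0
    if participated:
        points += 2       # credit just for voting
        if correct:
            points += 1   # small bonus for the right answer
    return points

def extra_credit(responses, max_boost=0.02):
    """responses: list of (participated, correct) pairs over the term.

    Returns the extra credit added to the course grade, capped at max_boost.
    """
    earned = sum(clicker_points(p, c) for p, c in responses)
    possible = 3 * len(responses)
    return max_boost * earned / possible if possible else 0.0

# A student who votes on every one of 100 questions and gets 60 right:
responses = [(True, i < 60) for i in range(100)]
print(f"Extra credit: {extra_credit(responses):.2%} added to the course grade")
```

Because the total can only help the course grade, a wrong answer never costs the student anything relative to not clicking at all, which preserves the incentive to vote honestly.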

In the end, you probably want to choose a grading scheme that works well for your particular pedagogical approach. For example, in our physics courses, it’s very important to us that students deeply engage in conversation about these difficult physics concepts, so we have this mixed correctness/participation scoring system which works well for that. I have another physics colleague, however, who strongly believes that students should only be intrinsically motivated, and he has developed a pedagogical approach that repeatedly asks students to examine their own learning. He’s a master of this, and I think most of us would aspire to that approach, but it might not work for all of us, or not right at first. Really, this is all about student motivation; I highly recommend this short white paper summarizing the research on student motivation.

But one thing that is clear to me is that it would likely be quite detrimental to only provide points for correctness.

Please weigh in in the comments: what types of grading schemes have you seen work?

(This is a repost from an article I wrote for the iClicker blog).


Measuring and improving students’ engagement

by Stephanie Chasteen on November 2, 2014

I’ve been working over the last year or so to better understand how to promote student buy-in to interactive techniques such as clickers and group work.  That work resulted in a set of resources on how to “frame” students’ roles in the class, especially in the first week.

Now I’ve been delving deeper into this project (having gotten a little bit of money from our Chancellor for this research), and I’m finding a lot of informative ideas in the educational psychology and learning sciences literature. I’m reframing my topic as “productive engagement” rather than “buy-in”: I don’t just want students to not be resistant; I also want them to actively engage with the techniques and understand their value.

I’m eventually planning to pull all this together into:

  • A survey so instructors can assess the level of student engagement in their course (and if there are problems, where those problems are)
  • A set of recommendations for instructors to consider if they do have particular problems with student engagement, based on the literature and on actual activities that I’ve collected from master teachers

I recently wrote up some of my thoughts on how this topic relates to existing literature, so I thought I’d share it here.  Comments and recommendations welcome!  If you want to see a copy of my pilot survey, please let me know.

——


Defining productive engagement

Productive engagement describes a positive attitude and behavior in a classroom such that students

  1. participate in activities,
  2.  value these activities, and
  3. are emotionally engaged in these activities.

Productive engagement is related to, but not equivalent to, (a) motivation (an antecedent to engagement), (b) satisfaction with the class (a broader idea than engagement), and (c) comfort and safety (some active learning techniques create frustration or discomfort, even among students who are actively engaged).

How did I arrive at these three dimensions? “School engagement” has been defined in the K-12 school literature as having behavioral, emotional, and cognitive components.[1] While “school engagement” is broader than the “productive engagement” construct (it includes aspects such as whether students follow the rules of the classroom or engage in extracurricular activities), it is still a very informative area with a broad research base. A review article by Fredricks [1] describes the current state of the engagement literature; I draw heavily on that work here, and it strongly influenced my construct map.

Behavioral engagement is equivalent to participation – students’ conduct, adherence to norms, absence of disruptive behaviors, effort, persistence, concentration, and contributions to class discussion. Behavioral engagement is typically measured through a scale of conduct, persistence, and/or participation.

Emotional engagement refers to students’ emotional reactions to school and the teacher: interest, boredom, happiness, sadness, anxiety, feelings of belonging, value, and success. Fredricks points out that the literature has not focused on what the source of the emotional engagement might be – is it situational or personal interest? Fredricks indicates that ideas of value – interest, identity, importance of the activity, and cost of engaging – are typically included within emotional engagement. Emotional engagement is typically measured by identifying students’ emotions, work orientation, and interest.

Lastly, cognitive engagement focuses on students’ investment in learning, and has strong overlap with various constructs in the motivational literature, such as intrinsic/extrinsic motivation, and self-regulation. Cognitive engagement is typically measured by studying students’ ability to use flexible problem-solving techniques, their preference for hard work or for independent work styles, and their ability to manage their effort.

Another area that has informed my thinking is that of Productive Disciplinary Engagement (PDE) [2]. Productive disciplinary engagement is described as students participating in an activity, responding to one another, and using goal-oriented activities that help them demonstrate the skills and understanding of a discipline. Admittedly, my exposure to PDE came late in my definition of my construct, and further exploration of this literature will be part of the continuation of my project.

The antecedents of productive engagement

One particular challenge in creating this construct map was to clearly separate the components of productive engagement from its antecedents. There is much advice in the educational psychology literature on how to create motivating, engaging classroom environments [2,3]. This literature on motivation overlaps so strongly with the literature on engagement that it has sometimes been difficult to identify whether I am measuring something different from motivation. Motivation is defined as the “psychological processes that direct and sustain students’ behavior toward learning” (Moreno, pp. 328-329). Given that definition, I would argue that motivation is an antecedent of engagement, rather than engagement itself.

However, I have found the motivation literature to be critical in helping me to define engagement, as motivation defines the inputs that result in the outcomes that I am interested in. Thus, I review this literature briefly here, as it has informed my construct map.

Students feel motivated when they feel capable, when they understand what actions will lead to success, when they understand the purpose of the learning activity, when they see the activity as having value and interest, when they have positive emotions about the activity, and when they deal effectively with obstacles (see Boekaerts [3] for a useful review). Textbook treatments of motivation in school (e.g., Moreno [4]) break motivation into behaviorist, cognitive, and sociocultural components. Behaviorist theories recognize that we seek rewards and avoid punishments. Cognitive theories focus on our thoughts, beliefs, expectations, and attitudes; these include interest, goal, and self-determination theories. Sociocognitive theories combine the cognitive and behaviorist approaches: students’ thoughts and attitudes combine with the learning environment to give rise to their motivational stance; these include expectancy-value theory, attribution theory, and self-efficacy. Additionally, students are more motivated to engage when they experience positive emotions towards learning activities (and, conversely, are less likely to attend to learning when they experience negative emotions) [2]. Positive emotions are often described as fulfilling the psychological needs of competence, autonomy, and relatedness.

The Productive Disciplinary Engagement literature has also focused on the antecedents of PDE in the classroom, and the seminal work in this area [2] postulates four main principles for fostering PDE: problematizing subject matter (encouraging students’ intellectual contributions), giving students authority to address such problems, holding students accountable to others and to shared disciplinary norms, and providing students with relevant resources. These suggestions have clear connections to the sociocognitive motivation literature, such as self-efficacy and creating expectancies among students.

The final construct

Thus, I would like to justify the three dimensions of my construct, with this literature in mind.

  1. Participation is equivalent to behavioral engagement in the school engagement literature, and is an observable outcome of both PDE and motivation. The two remaining components of my construct both fall within the area of “emotional engagement” as described in the school engagement literature [1].
  2. Value is an aspect of engagement identified within the engagement literature, and is generated through thoughts, beliefs, and expectations as described by cognitive theories of motivation. Thus, it is an emotional aspect of engagement with largely cognitive antecedents.
  3. Emotional engagement comprises the other aspects of engagement that relate to the student’s experience with the task and with the teacher. Thus, it is related to the non-value aspects of emotional engagement defined in the school engagement literature [1], and to whether the psychological needs of the students are being met. Emotional engagement is largely impacted by factors identified in the sociocognitive theories of motivation. I note that this third category is perhaps the least well-defined, and may mix antecedents of engagement with engagement itself. Results of the pilot study will help discern the utility of this measure.

I did not include measures of cognitive engagement (as defined by Fredricks) – intrinsic/extrinsic motivation and self-regulation – as these are person-side characteristics that are less likely to be affected by a single classroom experience. The survey also does not draw on the sociocultural or sociocognitive theories of learning in great depth, mostly in an attempt to restrict the scope of the project.

Construct Map

Here I present the final construct map.

PARTICIPATION

Participation is a measure of behavioral engagement, which is impacted by various motivational factors. It ranges from the most enthusiastic, participatory students who attend deeply to the tasks, to students who are more passive and give up easily, to those who are actively resistant and disruptive.

High engagement. Respondents:
  • Actively participate, or attempt to actively participate
  • Are animated and engaged, attentive
  • Participate readily
  • Are persistent

Middle of the continuum. Respondents:
  • Are passive during activities
  • May require prompting to participate
  • Are not persistent

High resistance. Respondents:
  • Avoid participation (e.g., engage in other activities, such as texting)
  • May be disruptive to peers
  • Do not respond to prompting
  • Don’t even try


VALUE

Value is a measure of the degree to which the student understands and accepts the rationale for the activity, and feels that the activities are useful to his or her learning. This is mainly impacted by cognitive factors, and is what people usually mean by “buy-in” (the original intent of this study). The construct ranges from those who can fully articulate the value of the activities and think that the activities help their learning, to those who are either conflicted or neutral (“whatever”) about the value of the activities, down to those who are fairly resistant to doing such activities because they do not see them as valuable. This area of engagement is likely to be strongly impacted by the quality of the task itself, as well as the effectiveness of the instructor’s facilitation.

High value. Respondents:
  • Recognize the value of the activities
  • Feel the time used for activities is beneficial

Conflicted or neutral. Respondents:
  • Are not sure that they recognize the value of the activity
  • Are unsure whether the time on the activities is beneficial at all

Resistant. Respondents:
  • Disagree with the rationale for the activity
  • Feel that the time spent on the activities could be better spent doing other things (e.g., lecture)


EMOTIONAL ENGAGEMENT

Emotional engagement is the degree to which the student feels positively about the task, instructor, and classroom environment. Is the student experiencing positive emotions, feeling connected to classmates, building their confidence, and feeling in control of their learning? Students may be relatively neutral on such measures, or more negative and resistant.

Positive. Respondents:
  • Show positive affect (autonomy, competence, relatedness)
  • Feel positively towards the instructor and classroom environment

Neutral. Respondents:
  • Show neutral affect
  • Feel neutral towards the instructor and classroom environment

Negative or resistant. Respondents:
  • Show negative affect during activities
  • Feel negatively towards the instructor and classroom environment
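
To show how these three dimensions could eventually turn into the planned instructor survey, here is a purely hypothetical sketch of scoring Likert-style items keyed to the construct map. The item wording, the 1-5 scale, and the simple averaging are my own illustrative assumptions; this is not the pilot survey itself.

```python
# Hypothetical scoring sketch: map survey items to the three dimensions of
# the construct map and report a mean rating per dimension. Item wording
# and the 1-5 Likert scale are illustrative assumptions only.

from statistics import mean

ITEMS = {
    "I respond to most clicker questions":                    "participation",
    "I keep working on in-class activities even when stuck":  "participation",
    "The in-class activities help me learn the material":     "value",
    "Class time spent on activities is time well spent":      "value",
    "I feel comfortable sharing ideas with my group":          "emotional engagement",
    "I feel positively about how this class is taught":        "emotional engagement",
}

def subscale_scores(responses):
    """responses: {item text: rating on a 1-5 scale} -> mean score per dimension."""
    by_dimension = {}
    for item, rating in responses.items():
        by_dimension.setdefault(ITEMS[item], []).append(rating)
    return {dim: mean(ratings) for dim, ratings in by_dimension.items()}

# Example: a fairly engaged student who rates every item a 4.
example = {item: 4 for item in ITEMS}
print(subscale_scores(example))  # mean rating for each of the three dimensions
```

In a scheme like this, low averages on a particular subscale would point the instructor toward the corresponding set of recommendations described earlier.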


References

[1] J. A. Fredricks, P. C. Blumenfeld, and A. H. Paris, “School Engagement: Potential of the Concept, State of the Evidence,” Review of Educational Research, 74(1), 59-109 (2004).

[2] R. A. Engle and F. R. Conant, “Guiding Principles for Fostering Productive Disciplinary Engagement: Explaining an Emergent Argument in a Community of Learners Classroom,” Cognition and Instruction, 20(4), 399-483 (2002).

[3] M. Boekaerts, “The Crucial Role of Motivation and Emotion in Classroom Learning,” in H. Dumont, D. Istance, and F. Benavides (eds.), “The Nature of Learning: Using Research to Inspire Practice,” Organisation for Economic Co-operation and Development (2010).

[4] R. Moreno, “Educational Psychology,” J. Wiley & Sons, Inc.: New Jersey (2010).


What is effective feedback? And how do clickers provide it?

October 2, 2014

Another re-post from my work on the iclicker blog. Last time I wrote about how clicker questions fit into a theoretical framework of assessment, and some considerations for aligning your clicker questions with your goals for your course. This week I want to review some of the literature on what features and kinds of feedback are most […]

Read the full article →

Backwards design: Where clicker questions fit into a framework of assessment

September 14, 2014

This is a repost of my work on the iclicker blog.   Lately, I’ve been thinking about the purpose and approach that we take in various forms of assessment. Today I’d like to step back into a little bit of theory-land, and consider a broader framework of assessment, and the ways that clickers fit into […]

Read the full article →

Using clickers in social sciences and humanities: No-one-right answer questions

September 4, 2014

This is a re-post from my work on the iclicker blog. There are lots of different types of clicker questions you can draw from (see last post for some examples), but there’s a clear distinction between two types of questions: questions that have a right answer vs. questions that don’t have a right answer. Questions that […]

Read the full article →

Opening your eyes to new types of clicker questions

August 25, 2014

This is a re-post from material that I’ve shared on the iClicker Blog. One of the best things that I think you can do to get fresh ideas for clicker questions is, simply, to look at lots of different types of questions. One of the things that I have enjoyed the most about giving workshops […]

Read the full article →

FTEP workshops on learning goals and clickers

August 12, 2014

I am giving a set of three workshops on learning goals and clickers at the University of Colorado; here are the slides and handouts for participants to download.  Please let me know if you have any problems with these or are looking for something that’s not here.  (I gave a similar set of workshops at […]

Read the full article →

Spreading reform – beyond development and dissemination (Raina Khatri, #aaptsm14)

August 11, 2014

I’m catching up on some blog posts from the AAPT meeting.   I have to say, it’s nice to blog again, and I hope to make some time for it in the future! Writing a grant?   One effort that I wanted to make sure that more people know about is the Increase the Impact project […]

Read the full article →

Lessons learned from 8 years of institutional transformation (#aaptsm14)

August 7, 2014

I was so busy blogging about everybody else’s presentations that I haven’t had a chance to write about my own talk at AAPT!  I’ve been working madly for the past few months to pull together a monstrosity of data on the outcomes and lessons learned from our work in the Science Education Initiative at the University […]

Read the full article →

Plenary at #perc2014: Carl Wieman and the future of PER

August 1, 2014

My mentor Carl Wieman was called upon to synthesize some of the main themes of the physics education research conference (PERC) this year.  Here are some of the things he discussed.  Note, he had a hard job, to try to draw some meaning from a lively conference with a short preparation time! Talking to some of […]

Read the full article →

Apples vs Oranges: MOOCs vs Brick-and-Mortar course (Mike Dubson #aaptsm14 #perc2014)

July 30, 2014

The PERC bridging session was kicked off by my colleague Mike Dubson, regarding an experiment we ran at Colorado with a MOOC vs traditional university courses. MOOCs have been hailed as revolutionary educational technology. What other revolutionary technologies have affected education? The printing press, the gasoline engine (allowing us to eliminate one-room schoolhouses). But there […]

Read the full article →

Peer Instruction and Student Preparation (#AAPTsm14)

July 29, 2014

I write a lot about the effective use of clickers and peer instruction, so I was excited at AAPT to see a talk with some interesting results on this educational technique.  Judy Vondruska (South Dakota State University) spoke about the “Influence of previous subject experience on interactions during peer instruction.”  She was using clickers like […]

Read the full article →

The gap between knowledge and practice (#AAPTsm14)

July 28, 2014

I’m at the American Association of Physics Teachers conference this week, and will be liveblogging from a few sessions. One of my main interests is in how to support successful uptake of innovative educational techniques.  My talk on Wednesday will focus on some of the outcomes from the Science Education Initiative at Colorado, and lessons learned […]

Read the full article →

“Because the research tells me so”: Best practices in facilitating peer instruction

June 14, 2014

This is another repost from an article I wrote on the great i>clicker blog. — As a follow-up to last month’s post on research showing that peer discussion helps students learn, I’d like to share a variety of the messages that are coming out of the research on clickers and peer instruction – with particularly pertinent implications […]

Read the full article →

Do students learn by talking to each other?

May 30, 2014

Here is another re-post from an article I wrote on the i>clicker blog. —- This month I’d like to highlight a study which I think is crucially important in cementing the value behind peer instruction. It’s not new work anymore, but it so elegantly answers a key question – “do students learn by talking to […]

Read the full article →

New videos on undergraduate biology instruction

May 20, 2014

I’m happy to share the news about a new set of Creative Commons videos on undergraduate instruction — the Scientific Teaching series from iBiology:  http://www.ibiology.org/scientific-teaching.html.  These videos are all Creative Commons licensed so you can use them in your workshops, etc.  They have a newsletter you can sign up on to find out about new releases, […]

Read the full article →

How can you make a “good” clicker question GREAT?

May 16, 2014

This is another re-post of a blog post at the i>clicker blog. —- Sometimes we can be lucky enough to have access to a great set of clicker questions (see, for example, the list at STEMclickers.colorado.edu). But often a good set of questions for our course doesn’t exist, or another instructor’s questions don’t quite fit. Or, […]

Read the full article →

Getting students on-board with clickers and peer discussion

May 2, 2014

I have been blogging recently for the i>clicker blog (which has a lot of great articles on clicker use).  With their permission, I am reposting some of my articles here. —- I work a lot with faculty who are considering using clickers and peer instruction. Many faculty confide in me that they are concerned that students […]

Read the full article →

Free #clicker webinar: Facilitating Peer Instruction Effectively

January 25, 2014

I’m giving two free webinars this coming Wednesday on the use of clickers in the classroom to promote student discussion.  I’ve given a lot of these and they’ve always been very well received. Come join us; it should be a good time!  Each is one hour long. 11 am PT / 2pm ET:  Recording  (I […]

Read the full article →

George Washington U. clicker workshop – Dec 10th

December 10, 2013

I am giving a workshop at George Washington University on the effective use of clickers, along with my wonderful colleague Stefanie Mollborn from Sociology.  This is a four-part half-day workshop, including information on facilitation, question writing, and tips for success. Do you want to learn how to use clickers – or any student voting technique […]

Read the full article →