A lot of previous work by Charles Henderson and Melissa Dancy has shown that the “develop and disseminate” model doesn’t work. This is business-as-usual for educational innovators: We develop innovations, share them at conferences and in papers, explain the new method, show data that it works, and disseminate the materials.
This doesn’t work, in large part because:
- The standard approach doesn’t help faculty develop deep knowledge,
- Faculty’s personal experiences are more salient to them than data,
- Situational factors are important in faculty’s decision-making, and
- Many faculty see student deficiencies as a barrier to using innovations (i.e., “Students lack X, so I can’t do innovation Y.”).
Chandra reported on data coming out of a large interview study of 72 physics faculty, conducted with Henderson and Dancy. She reported on 35 who indicated that they were familiar with Peer Instruction. Interestingly, she had to throw out about half of those respondents because, even though the researchers had clearly specified “Peer Instruction as developed by Eric Mazur,” 25% of those faculty used “peer instruction” to mean anything where students worked together one-on-one, and 20% said they knew it referred to a particular strategy but couldn’t name any aspect of that strategy. So they ended up with 14 knowledgeable non-users, 6 former users, and 15 current users.
The researchers came up with a set of aspects or features of Peer Instruction (which they also ran by Eric Mazur), such as “adapts instruction to the results,” “answers are not graded,” “students commit to an answer,” “questions are multiple choice,” “questions interspersed through lecture,” etc.
What they found is that “using Peer Instruction” is not a binary yes/no practice. Faculty who indicated that they don’t use PI nonetheless used some aspects of it, as did former users, while some self-identified users used very few features of PI.
So the researchers binned faculty into “non-users” who use 0 PI features, “mixed users” who use 1-6 features, and “high users” who use 7-9 features. They then looked for themes in what these different users said were the affordances and constraints of using Peer Instruction.
Their main findings are:
- The high-use group perceived many affordances,
- The low-use group perceived many constraints,
- For the low-use group, the biggest affordance was evidence of effectiveness, and the biggest constraints were time and student deficiencies, and
- For all groups, evidence of effectiveness from personal experience was cited as important; evidence of effectiveness from data was cited only in post-hoc justifications of instructional decisions.
So what this means for faculty developers is that we should focus on providing powerful experiences with educational innovation that allow faculty to see success for themselves.
Chandra also argued that if we could help instructors shift their mindset from “students can’t” to “students can,” that might help the low-use faculty become high-use. As researchers, we might want to highlight student successes rather than student difficulties to enable this shift in mindset.
She also highlighted that many faculty care deeply about how well their instruction is working, but do not have methods to determine whether or not it is. Perhaps if we pitched instructional innovation as “ways to find out if your instruction is working,” we could have a larger impact?