Fidelity of Implementation: Measuring how instructional innovations are used

by Stephanie Chasteen on October 25, 2017

I recently came across an illuminating article by the (ever-diverse) Marilyne Stains and her colleague Trisha Vickrey discussing a particularly sticky issue in education research – how do we know if research-tested techniques and curricula are as effective in practice as promised by the original study?

Of course, we don’t – if Professor A at University X gets great results with her students, who knows how Professor B at University Y will fare.  The thorniness comes from the incredible diversity of educational contexts, plus the difficulty of measuring just how any instructional practice is actually used.  I’ve tried to measure this, asking instructors what they did and didn’t use from a course approach, and it’s hard to do with any real accuracy.  This challenge was first brought home to me by Sam McKagan over five years ago as she put together PhysPort, a website showcasing educational innovations in physics.  Sam found it really difficult to communicate to instructors just what was critical about any given innovation, since most research reported only the innovation’s effectiveness and said little about secondary implementations or adaptations.

In their article, “Fidelity of Implementation: An Overlooked Yet Critical Construct to Establish Effectiveness of Evidence-Based Instructional Practices” (CBE-Life Sciences Education, 16, Spring 2017), Stains and Vickrey helpfully point out that we are not the only discipline that has struggled with this question, and that we can turn to other (better funded!) fields, like medicine, for a framework for addressing fidelity of implementation.

First, some useful terminology:  Efficacy studies are the kind we usually publish in discipline-based education research, showing that a particular strategy (like Peer Instruction) is effective in helping students learn, typically under relatively controlled conditions.  Effectiveness studies, where the strategy is used in the real world (in vivo, as it were), are what our field typically lacks.  In particular, while we know that many of our instructional techniques work, we don’t know WHY.  Many instructional developers haven’t identified the critical components of their innovation – the elements that are necessary for its effects, which an instructor should be sure not to drop.  That, of course, makes it very difficult to tell instructors how they can best adapt such strategies to their own instructional context.  As Stains and Vickrey argue:

It is therefore essential to measure features of Evidence-Based Instructional Practices’ (EBIPs) implementations within the control and treatment instructional environments to derive valid claims about the effectiveness of the EBIPs. These empirical investigations would enable the identification of the features of EBIPs that are critical to positive student outcomes and would inform instructors on appropriate adaptations.

Thus, measuring fidelity of implementation (FOI) is really important – particularly measuring the critical components of a program (see Century et al., 2010, for discussion) – and indeed, the NSF is now calling for such studies.  What structural components and processes does an intervention require in order to be successful?  Measuring these critical components strengthens the validity of a study, helping ensure that the observed results (such as student learning) are truly attributable to the intervention (e.g., the educational innovation).

As Stains and Vickrey define it:

Fidelity of implementation represents the extent to which the critical components of an intended educational program, curriculum, or instructional practice are present when that program, curriculum, or practice is enacted.

Figure 1 (Stains and Vickrey, 2017) below gives a useful organizational chart of critical components.

The rest of their paper identifies methods to characterize fidelity of implementation – conducting effectiveness studies, interviewing the developers, observing classes, and interviewing instructors and students – across diverse instances of implementation.  This research can identify different adaptations of each critical component and support hypotheses about whether those adaptations are likely productive, unproductive, or neutral.  One can measure the level of fidelity of implementation by calculating the proportion of critical components used, but that isn’t very satisfactory (and I have done it), since it obscures the importance of each individual component and how the components combine.  Stains and Vickrey instead recommend characterizing the “implementation type,” where different combinations of critical components are described with respect to student learning.  They then give examples of how this worked in their analysis of Peer Instruction.
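To make that contrast concrete, here is a minimal sketch in Python of the difference between a proportion-based fidelity score and an implementation-type classification. The component names and type labels are entirely hypothetical illustrations of my own, not taken from Stains and Vickrey’s analysis.

# Hedged sketch: a proportion-based FOI score vs. an "implementation type."
# All component names and type labels below are hypothetical examples.

CRITICAL_COMPONENTS = [
    "conceptual_questions",     # hypothetical critical components of a
    "peer_discussion",          # Peer Instruction-style intervention
    "instructor_feedback",
    "revote_after_discussion",
]

def fidelity_proportion(observed: set[str]) -> float:
    """Fraction of critical components present: simple, but it hides
    which components were dropped and how they combine."""
    present = [c for c in CRITICAL_COMPONENTS if c in observed]
    return len(present) / len(CRITICAL_COMPONENTS)

def implementation_type(observed: set[str]) -> str:
    """Classify the combination of components instead of counting them.
    The type labels are made up for illustration only."""
    if "peer_discussion" not in observed:
        return "lecture-with-clickers"     # discussion dropped entirely
    if "revote_after_discussion" in observed:
        return "full-peer-instruction"     # discussion followed by a revote
    return "single-vote-discussion"        # discussion without a revote

observed = {"conceptual_questions", "peer_discussion"}
print(fidelity_proportion(observed))    # 0.5 -- hides *which* half was kept
print(implementation_type(observed))    # "single-vote-discussion"

Two implementations with the same 0.5 score could behave very differently for students, which is exactly why describing the combination (the implementation type) is more informative than reporting a single proportion.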

This is a very useful, practical paper based on sound research, and should be valuable for the DBER community as we further refine our methods of studying educational implementation.

 

Resources:

Stains M, Vickrey T (2017). Fidelity of implementation: an overlooked yet critical construct to establish effectiveness of evidence-based instructional practices. CBE Life Sci Educ 16 (Spring).

Century J, Rudnick M, Freeman C (2010). A framework for measuring fidelity of implementation: a foundation for shared language and accumulation of knowledge. Am J Eval 31, 199–218.

 
