What gets in the way of useful evaluation?

by Stephanie Chasteen on November 29, 2017

I have been thinking a lot lately about how to make my work as an evaluator more *useful*.  For those of you unfamiliar, external evaluation is a broadly defined role, intended to provide an independent review of a project’s progress and merit.  Evaluation can be a super important part of a project, helping it really reach for the stars. Or it can be a series of bland exercises done just so a project can check the box, “We did an external evaluation.”

I’m an education researcher turned evaluator; this is my main job now.  It’s important to me that my work be valuable.  I have chosen to spend my time on this Earth contributing to science education, which I see as key to the betterment of the human condition and our world.  I have worked in course transformation, supported faculty undertaking educational change, and now I have shifted to providing evaluation for programs that are pushing such change.  I still see myself as an agent of change, but I am trying to support all the other good ideas and good work out there, rather than needing to have the power and prestige of running my own programs.  But I still want my time to go to good use.  So it’s incredibly frustrating when I spend a lot of time and intellectual energy considering a project’s strengths and areas for improvement, create a detailed report, and then watch it sit on a shelf.  Rather than blaming my clients, I am trying to see where I can do better.

So, I am trying intensely to become a better evaluator, to use methods and processes which make it easy to use my work and recommendations — things like good data visualizations, participatory techniques, and strong frameworks for evaluation.

(Side note — what’s the difference between evaluation and research?  The rule of thumb is that evaluation is intended to improve the program, whereas research is intended to contribute to generalizable knowledge — though of course the line is sometimes blurred.)

As an evaluator, you’re in a bit of a funny spot.  You’re hired to do an independent review.  But you’re hired and supervised by the people you’re supposed to be reviewing.  There aren’t a lot of guidelines to help you navigate this relationship.  So you’re never really fully external, and you do have some skin in the game — not just because you’re being paid from the project, but because you usually have some interest in the project at hand (or else you wouldn’t have agreed to take it on).  So it’s not usually that helpful to think of yourself as the great Accountability Maven, holding stakeholders’ feet to the fire.  How can you be a useful voice to the team?  For myself, I feel that after about seven years as an evaluator, I have naturally developed a sort of evaluative thinking lens.  What I offer a project is often the ability to look critically at its goals, figure out what success on those goals would look like, and offer some assessment of progress toward them.  I’m not a great methodologist (don’t send me your network analysis), but I am probably a good systems and assessment thinker.  I’m the one coming in saying, “What are your goals?  How would you know if you were successful?  Does this data suggest success?”

I recently took a great workshop from Kylie Hutchinson about making evaluation useful, and many of her messages resonated with me.  What gets in the way of an evaluation being useful?  There is a lack of time to incorporate recommendations and data, people suffer information overload when sifting through all your data, and there is often an agenda of compliance (rather than change) in an organization.  I’m lucky to work in education, where “compliance” isn’t such an issue.  But such challenges can be mitigated by having a long-term relationship with an organization, engaging your stakeholders and the PI in the evaluation, and having a local champion for evaluation.  I’d say that last one has been particularly critical for me: the projects where a collaborator on the team really cares about the evaluation have been the best.

Some ideas on making your evaluation more useful:

  1. Establish the role of the evaluation from the start.  What are the PI’s needs?  Give them one or two questions to reflect on before meeting with you (such as “What’s one question you want answered?”).  Don’t assume the conversation is over after you’ve had it once.  Ask the PI, “How do you anticipate using these results?  What kinds of decisions will you make from this evaluation?”
  2. Create a steering committee for the evaluation.  I love this idea!  Have a small group of primary stakeholders to give input and process evaluation results.  This steering committee can also be responsible for creating an action plan on how they will use the evaluation results.
  3. Use data parties.  Use handouts and visuals (e.g., data placemats, a gallery walk) to share your preliminary data with stakeholders, and figure out together what the data mean and what they imply.  This will help you craft good recommendations and reduce information overload.  See my previous post about participatory techniques for other ideas.
  4. Strip the data down.  While there’s a place for the long report, provide “layers” of reporting: detailed data analysis, but also one- or two-page graphical summaries that highlight the “pearls” of the evaluation (a minimal sketch of one such stripped-down chart follows this list).  See my previous post on visualization techniques and infographics.
  5. Make actionable recommendations.  Write specific recommendations, categorized in a helpful way, with enough detail that someone new could pick one up and run with it.  We workshopped our recommendations; compare the two versions below.  And it’s hard to be this specific unless you do a data party first!
    1. My first try:  Support participants’ autonomy by allowing them to create a lecture or syllabus in the workshop.
    2. The revision:  To support participants’ autonomy and improve sustained use of instructional techniques, include a session on syllabus planning run by an expert in course design, with a 15-minute application period each day, for the June 2018 conference.
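
To make that fourth point concrete, here’s a minimal sketch of the kind of stripped-down chart I have in mind for a one-page summary: one finding, stated in a declarative title, with the relevant bar highlighted.  (This is just an illustration in Python with matplotlib, which I’m assuming as the plotting tool; the technique names and numbers are hypothetical, not from a real evaluation.)

```python
# A minimal sketch of a stripped-down chart for a one-page evaluation
# summary.  Hypothetical data; matplotlib assumed as the charting tool.
import matplotlib.pyplot as plt

# Hypothetical survey results: fraction of workshop participants
# reporting sustained use of each technique one year later.
techniques = ["Peer instruction", "Clicker questions", "Group exams", "Learning goals"]
sustained_use = [0.72, 0.64, 0.31, 0.58]

fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.barh(techniques, sustained_use, color="lightgray")
bars[2].set_color("firebrick")  # highlight the "pearl": group exams lag

# A declarative title states the finding, not just the topic
ax.set_title("Group exams are the least-sustained technique a year out",
             loc="left", fontweight="bold")
ax.set_xlim(0, 1)
ax.set_xlabel("Fraction of participants reporting sustained use")

# Strip the chart junk so the one takeaway stands out
for spine in ("top", "right"):
    ax.spines[spine].set_visible(False)

fig.tight_layout()
fig.savefig("summary_chart.png", dpi=200)
```

The point isn’t the tool; it’s that one page, one finding, and one highlighted bar are much easier for a busy PI to act on than a long appendix of tables.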

And offer those recommendations when they are needed!  One last cartoon on how the typical annual evaluation cycle isn’t really that useful to programs.

Cartoons from FreshSpectrum.com, which offers great cartoons and data viz consulting.
