What gets in the way of useful evaluation?

by Stephanie Chasteen on November 29, 2017

I have been thinking a lot lately about how to make my work as an evaluator more *useful*.  For those of you unfamiliar, external evaluation is a broadly defined role, intended to give some sort of independent review of a project’s progress and merit.  Evaluation can be a super important part of a project, helping it to really reach for the stars.  Or it can be a series of bland exercises intended only to check the box: “We did an external evaluation.”


I’m an education researcher turned evaluator.  This is my main job now.  It’s important to me that my work be valuable.  I have chosen to spend my time on this Earth contributing to science education, which I see as key to the betterment of the human condition and our world.  I have worked in course transformation, supported faculty undertaking educational change, and now I have shifted to providing evaluation for programs which are pushing such change.  I still see myself as an agent of change, but am trying to support all the other good ideas and good work out there, rather than needing to have the power and prestige of running my own programs.  But I still want my time to go to good use.  So it’s incredibly frustrating when I spend a lot of time and intellectual energy considering a project’s strengths and areas for improvement, create a detailed report, and it sits on a shelf.  Rather than blaming my clients, I am trying to see where I can do better.

So, I am trying intensely to become a better evaluator, to use methods and processes which make it easy to use my work and recommendations — things like good data visualizations, participatory techniques, and strong frameworks for evaluation.

(Side note — what’s the difference between evaluation and research?  The rule of thumb is that evaluation is intended to improve the program, whereas research is intended to contribute to generalized knowledge — though of course the line is sometimes blurred.)

As an evaluator, you’re in a bit of a funny spot.  You’re hired to do an independent review.  But you’re hired and supervised by the people you’re supposed to be reviewing.  There aren’t a lot of guidelines to help you navigate this relationship.  So you’re never really fully external, and you do have some skin in the game — not just because you’re being paid from the project, but because you usually have some interest in the project at hand (or else you wouldn’t have agreed to take it on).  So it’s not usually that helpful to think of yourself as the great Accountability Maven, holding stakeholders’ feet to the fire.  How can you be a useful voice to the team?  For myself, I feel that after about 7 years as an evaluator, I have naturally developed a sort of evaluative thinking lens.  What I offer a project is often the ability to look critically at its goals, figure out what success on those goals would look like, and offer some assessment of progress toward them.  I’m not a great methodologist (don’t send me your network analysis), but I am probably a good systems and assessment thinker.  I’m the one coming in saying, “What are your goals?  How would you know if you were successful?  Does this data suggest success?”

I recently took a great workshop from Kylie Hutchinson about making evaluation useful, and many of her messages resonated with me.  What gets in the way of an evaluation being useful?  A lack of time to incorporate recommendations and data, information overload when people sift through all your data, and an organizational agenda of compliance (rather than change).  I’m lucky to work in education, where “compliance” isn’t such an issue.  But such challenges can be mitigated by having a long-term relationship with an organization, engaging your stakeholders and the PI in the evaluation, and having a local champion for evaluation.  That last one has been particularly critical for me — when I have a collaborator on the team who really cares about the evaluation, that’s been the best.

Some ideas on making your evaluation more useful:

  1.  Establish the role of the evaluation from the start.  What are the PI’s needs?  Give them one or two questions to reflect on before meeting with you (such as “What’s one question you want answered?”).  Don’t assume this conversation is done after you’ve had it once.  Ask the PI, “How do you anticipate using these results?  What kinds of decisions are you going to make from this evaluation?”
  2. Create a steering committee for the evaluation.  I love this idea!  Have a small group of primary stakeholders to give input and process evaluation results.  This steering committee can also be responsible for creating an action plan on how they will use the evaluation results.
  3. Use data parties.  Use some handouts and visuals (e.g., data placemats, gallery walk) to share your preliminary data with stakeholders, and figure out together what the data means and what its implications are.  This will help you create good recommendations, and reduce information overload.  See my previous post about participatory techniques for other ideas.
  4. Strip the data down.  While there’s a place for the long report, provide “layers” of reporting, including detailed data analysis but also some one or two page graphical summaries which highlight the “pearls” of the evaluation.  See my previous post on visualization techniques and infographics.
  5. Make actionable recommendations.  Use specific recommendations, categorized in a helpful way, with enough detail that someone new could pick them up and run with them.  It’s hard to be this specific unless you do a data party first!  Here’s how we workshopped one of our recommendations:
    1. My first try:  Support participants’ autonomy by allowing them to create a lecture or syllabus in the workshop.
    2. The revision:  To support participants’ autonomy and improve sustained use of instructional techniques, include a session on syllabus planning run by an expert in course design, with a 15-minute application period each day, for the June 2018 conference.

And offer those recommendations when they’re needed!  One last cartoon shows how the typical annual evaluation cycle isn’t really that useful to programs.

Cartoons from FreshSpectrum.com, which offers great cartoons and data viz consulting.


Participatory techniques in measurement from #eval17

by Stephanie Chasteen on November 10, 2017

Another big theme at the conference is how to engage your stakeholders in evaluation.  These techniques are also relevant to those in educational reform and institutional change, as they would be great ways to include departments, faculty, students, etc., in using data to inform change.

Why engage?  To increase use of your evaluation

Why doesn’t data get used in making decisions?  Often, because the person doing the measurement (e.g., the evaluator) isn’t the same person who needs to make the decisions.  Stakeholders need to be given an opportunity to engage in planning the evaluation, making meaning of the data, and mapping out their systems.  People are more likely to use what they helped create.  Obvious, yes — I had thought of participatory structures as a way to help people learn from the evaluation, but ultimately that learning is really in the service of increasing the use of the evaluation.

Data parties & data galleries

Data parties are a common way to engage stakeholders in messing around with the data – BEFORE you write the report.  I will definitely do this next time – I wrote my report, but had trouble writing specific, actionable recommendations based on the data because I was guessing at the best solution.  Testing out solutions with your stakeholders in a data party is a great way to do this.  For some resources on data parties, see the Community Solutions website (http://communitysolutions.ca/web/resources-public/) and the great “intentional learning” guide at FSG (https://www.fsg.org/tools-and-resources) — both of which talk about using data placemats, rotating flip charts, data galleries, and other ways to engage people in the data.  My challenge is that many of my activities take place virtually.  One idea that I got at the conference:  I could create Google Docs with the data that I want people to review, put them into breakout groups, and then have the breakout groups rotate through several different Google Docs and make comments.  This would be a virtual data gallery.

System mapping

System mapping is making a map of the system at hand.  Who are the actors?  What is the environment?  What are the barriers and enablers to change?  What are the feedback loops?  ServiceDesignTools offers an example.

Why do I list this as a participatory technique?  Because this is something that is most useful to create with the clients or people undertaking change.  The final product is not as important as the process.  This was highlighted by an experienced evaluator (Kylie Hutchinson), who shared a system map she had worked hard on for a client — it was utterly confusing to the client, and anyone who hadn’t created it would have found it hard to use.  I could imagine using this technique with physics departments, professional organizations, or any group trying to work with a system.

A variant of system mapping is a technique pioneered by a great design guy here at the conference (Cameron from Cense) – called attractor mapping.  In attractor mapping, you make a map of the system, but note or overlay information about where most of the action is happening.  This could be geographic, social networks, system models, etc.  Where is energy and action being focused?  Is it in the right place?

Rubrics

Making a rubric can also be a participatory activity.  What would success look like for an institution?  What is unsatisfactory, satisfactory, and exemplary performance for… an executive board?  A site visit?  A rubric gives a useful structure for a collaborative conversation about what success looks like, being clear about criteria, and working backwards from there.  For more, see The Social Profit Handbook, which discusses the use of “success rubrics.”

Journey Mapping

I got to do a fun journey mapping activity with Cameron from Cense.  Journey mapping is a design-thinking activity to map out the path of a particular person, or type of person, through your project.  For example, we chose to map the hypothetical path of a person from deciding to attend Evaluation 2017 (this conference) through choosing to take action as a result of what they learned.  This highlighted several key evaluable questions: What types of people are attracted to the conference?  How does word get out?  What kinds of communication set expectations about the conference?  How do attendees choose relevant sessions?  How are they supported to take action?  I could imagine using this with a lot of my stakeholders:

  • What is the pathway of a person who engages in the PhysTEC project, taking on new identities around teacher preparation?
  • What is the pathway of a physics major from learning about teaching to licensure at a particular site?
  • How does a department chair choose to enact change in their department and use APS materials to enact that change?

Other ideas

For more great design thinking ideas, see the Design Kit, and Cameron’s blog Censemaking.


Data viz resources from #eval17 (update)

November 10, 2017

I’m enjoying my first time at the American Evaluation Association (AEA) conference here in DC, and finally getting around to writing about a few things that I’m learning. Today’s post is about some of the great data visualization and representations that I’ve been picking up. This is all really relevant to my education research friends. […]


Some helpful tips in project management

November 1, 2017

I’m a member of the American Evaluation Association (eval.org), which is honestly one of the most productive professional society memberships I’ve ever encountered.  They offer many webinars and amazing resources, plus a daily blog, which are exactly what I’ve needed as an evaluator.   I recently blogged about the wonderful (paid) webinar that I took with […]


Fidelity of Implementation: Measuring how instructional innovations are used

October 25, 2017

I recently came across an illuminating article by the (ever-diverse) Marilyne Stains and her colleague Trisha Vickrey discussing a particularly sticky issue in education research – how do we know if research-tested techniques and curricula are as effective in practice as promised by the original study? Of course, we don’t – If Professor A at […]


My friend Paul

September 27, 2017

I have both a sad and joyous post today — one that I have been meaning to write for some time, but understandably struggled to do so.  On August 18th, I lost one of my dearest friends and most loving mentors, Paul Doherty. I have thought of Paul every day since finding out he was […]


Condensing the visual display of comparisons: Data Dashboards

September 13, 2017

I’ve been learning more about effective data visualization lately, and recently was in a wonderful webinar on Data Dashboards (with Ann Emery — whose blog has great posts about data viz, such as using color, and telling stories with data).  It was a wonderfully information-packed session, and I’d recommend it to anybody!  I have a […]


Want to consult? Here are some resources for education consultants.

August 9, 2017

I’m pleased to announce the launch of our new Physics Consultants Directory on PhysPort.org.   Here you can list yourself as a consultant, or find consultants to help with a variety of projects.  We are trying to populate the directory intensively by August 17th, so please try to list yourself by then (though the site […]


My learning goal and clicker workshops all online

August 3, 2017

Giving workshops on the use of clickers / peer instruction, or learning goals?  I wanted to let you all know that my workshop materials for both topics are all compiled and archived online on our SEI Workshop Page.  There are also videos of several of my workshops (though a few years old, they still show […]


Defining excellence in physics teacher preparation programs: The PTEPA (#AAPTSm17)

July 26, 2017

A big challenge in physics is preparing adequate numbers of well-prepared future physics teachers.  There is a huge dearth of qualified physics teachers at the high school level, and some physics departments have taken it upon themselves to try to address this gap.  Some are very successful.  How do they do it? I’ve been working […]


Phys21: Preparing students for diverse careers (#AAPTSM17)

July 26, 2017

I just gave an invited talk at AAPT about my work for the Phys21 Report:  Preparing Students for 21st Century Careers.  I was commissioned by the JTUPP committee to create case studies of how institutions achieved success for diverse students.  This was my favorite project last year; it was completely inspiring to talk about what […]


Improving the bottom quartile with a metacognitive exercise (#AAPTSM17)

July 25, 2017

I’m in an inspiring session by Charles Atwood (University of Utah) about how they improved the performance of at-risk students in introductory chemistry at the University of Utah. Abstract: To improve success rates in large general chemistry sections at the University of Utah, we realized we must improve the bottom two student quartiles’ performance. We […]


Mutual mentoring (liveblogging from #AAPTsm17)

July 25, 2017

I’m now attending a session on Mutual Mentoring for physics faculty, presented by Anne Cox (Eckerd College). Abstract: We were part of an NSF ADVANCE grant mutual mentoring project for senior women faculty in chemistry and physics that began in 2007. We have continued our bi-monthly mentoring meetings for the past 10 years (well beyond the […]


Promoting innovation and entrepreneurship in physics (liveblogging from #AAPTSm17)

July 24, 2017

I’m at the American Association of Physics Teachers meeting this week, and will blog about a few sessions while I’m here. In a talk by Crystal Bailey (American Physical Society), she argued that we need to more explicitly teach Physics Innovation and Entrepreneurship (PIE) to our students.  I find this a really valuable message; having […]


Educational change: How systemic thinking helps to push social progress

July 5, 2017

In today’s post I want to share some ponderous thoughts about how educational reforms happen, and how systemic thinking helps to support those reforms.  I am fortunate to be a working group leader in the Accelerating Systemic Change Network (ASCN; ascnhighered.org), and one of the working groups focuses on how theories and models of change can […]


Changing how universities teach science: The SEI Model

June 21, 2017

We know a lot about how to improve STEM teaching and learning at the college level, and yet these improvements have yet to take hold in a widespread manner.  This is the perennial problem which many of us in STEM education are wrestling with.  The study of institutional change is expanding ever more, including lessons […]


Data visualization tips

June 14, 2017

Are you trying to tell a story with your data?  This is a big part of my job (as an external evaluator), and I recently attended an excellent webinar on data visualization.  Now, I hate webinars that are trying to sell me a book, but this one was so packed full of great ideas that […]


You can now embed PhET into Powerpoint!

June 2, 2017

If you’re a PhET user, you’ll be interested in this one.  PhET has a new application that allows you to directly embed the simulations into your Powerpoint.  No more switching back and forth between Powerpoint and the simulation, or awkward pauses while you drag the simulation to your projection screen. Just install their free PhET […]


A great new book! Teaching and Learning STEM by Felder and Brent

June 2, 2017

I was pleased to be invited to write a review for Physics Today on a new book, Teaching and Learning STEM by Felder and Brent.  I loved this book!  I found it utterly charming, useful, kind, and knowledgable.  I highly recommend it, and am going to be purchasing several copies to be able to give to […]


Outdoor activities for kids: Big Book of Nature Activities

January 15, 2017

If you’re feeling a little stuck indoors with your kids, here is a resource to get your kids (or students) learning from the outdoors even through the colder months.  Last year I picked up a copy of The Big Book of Nature Activities: A Year-Round Guide to Outdoor Learning.  I found it to be a great […]
