Increasing evaluation adoption

2012 EEN Forum

Led by Madeleine Bottrill and Matt Keene

Case Study: The conservation planning field, and NGOs in general, have unique planning styles. Can we think about the environmental sector as a system with specific points where we can interject our energy to help increase evaluation adoption and use? People need to want evaluations for them to work. Are there theories (systems theory, etc.) that can help explain or reduce barriers to evaluation uptake? We are talking about evaluation in its many forms. How much investment, and what rigor, is required to call something an evaluation and to make it successful?

Barriers and Leverage: Is this an issue? Evaluation takes a lot of time and feels hard and overwhelming to those not already doing it. Having a shepherd who knows how to do it is important; education is important. Pooling resources would help ("leverage pools"). Keep it simple!

In the future there will be information about what other people have done that novices can access. It is easier to use this data for benchmarks, and comparisons between agencies become more meaningful. A division that does an in-depth evaluation may be punished for negative results, while divisions that do no evaluation at all escape scrutiny. This disincentive can be eliminated by requiring everyone to do evaluations.

Comparability and standards matter: they enable comparisons, cost-benefit analyses, and so on. You need a common currency to compare programs, so standardize methodologies to improve comparability. Funders have trouble with long time frames, so you must have both short- and long-term goals and measurements.

Context, culture, and communication are huge barriers, as is prioritizing methods and rigor over use: evaluations must be timely and pertinent or they are useless. The environmental sector has capacity problems for evaluation. Evaluations are not just applied research.
Three distinct world views: conservationists (the human system is a threat to the natural system, so they ignore it), conservation users (the human and natural spheres are studied together), and resource users. Institutional culture, the perception of users, and the presentation of evaluators all matter. Critical thinking is intertwined with evaluation but is not always institutionalized.

WHAT IS EVALUATION? There are many types.

Evaluation appreciation and apprehension. Apprehensions include: I lose control; it's expensive; it takes too long; it's done "to me"; I have to use the results.

There is a lot of intuitive management instead of systematic management. Organizations need to be built on metrics; that may help with the institutional shift. Education! Demonstrate the value. Who are the users? Talk to everyone who will consume the information about the evaluation.

Organizing framework/principles: Prioritize, because you cannot deal with everything. Categories: conceptual (definitions, what evaluations mean), motivational, methodological, logistical, and contextual (institutional culture, science culture).

We need to find places to leverage change. Leverage points, from weakest to strongest: numbers (the weakest place to intervene, yet where we spend the most time and money); buffers (a lake vs. a river: something with a bigger stock); stock-and-flow structures (you can't drive where there is no road; hard to change). Higher up: delays in the system (delays in adoption); balancing feedback loops; information flows (transparency, increased access to information); rules; self-organization (EEN!, evolution); goals; and paradigms (the things we know without having to say them). (These were taken from a Prezi presentation.)

A professor at Stanford studies societal networks and incentives across a system, including boundary objects and boundary actors. Bill Clark at Harvard asks why certain science reports are used, a question similar to ours here; his answer: salience, legitimacy, and credibility. Don't disregard use.

Packard Foundation science program website: Kai Lee.

Sign-up sheet for continued participation in the discussion. Leverage points vs. barriers are listed on the board. What can we do about it?
