2007 EEN Forum
U.S. Environmental Protection Agency, Washington, D.C., June 14–15, 2007. Over 120 individuals attended the 2007 forum, representing diverse organizations such as the National Academy of Public Administration, the White House Office of Management and Budget, and the University of Wales, Bangor.
The 2007 theme was “Crossing the Threshold: Addressing Methodological, Institutional and Cultural Challenges in Environmental Evaluation in an Era of Performance Management.”
Featured Evaluation Speakers:
William Trochim, Cornell University & President-Elect of the American Evaluation Association
Kathryn Newcomer, George Washington University
Presentation strands included:
1. Moving towards evidence-based decision-making
2. Closing the gaps for the field of environmental evaluation
Bios of the 2007 forum presenters and facilitators.
This is the detailed agenda for the EEN 2007 Forum, held June 14–15, 2007, in Washington, D.C.
Memo on Measurement for Environmental Managers: Recommendation and Reference Manual
06/15/2007 – Communicating the Evidence of Stewardship Activities
Presenter: Annelise Carleton-Hug
Title: Evaluating Environmental Education: Challenges and Opportunities. Annelise addressed the challenges facing environmental program evaluators, including the complexities involved in measuring affective, cognitive and behavioral change. Current practice in environmental education evaluation involves developing a theory of change with stakeholders and utilizing a wide variety of evidence to inform decision making.
Presenter: Per Mickwitz
Description: Evaluations for Evidence-Based Environmental Programs and Policies – from a European (and Finnish) perspective. In Europe, evidence-based policies have spread from the health sector, through social services and education, to other policy areas. The perception of evidence has largely been based on the medical “hierarchy of evidence,” which is very different from the evidence concepts of, e.g., law, the natural sciences, or the humanities. A fixed “hierarchy of knowledge” approach to evidence is too narrow; instead, the relevance of different types of evidence depends on the practical situations in which it is intended to be used.
06/15/2007 – Evidence and Attribution
Presenter: Andy Rowe
Description: Andy discussed evidence and attribution and his recent review of evaluation methodologies. In most of the evaluations he has reviewed, experimental designs beyond good comparison groups were not feasible for technical, political and ethical reasons. He advocated the importance of standards of evidence for evaluation, which can be found on the American Evaluation Association website. Andy concluded that “evidence is not simple” and emphasized the importance of focusing on evidence standards and requirements, not methods.
06/15/2007 – Introduction To The Work Of The CEBC
Presenter: Andrew Pullin
Description: Andrew’s presentation focused on the work of the Centre for Evidence-Based Conservation (CEBC). CEBC’s goal is to support decision making in conservation and environmental management through the production and dissemination of systematic reviews on the effectiveness of management and policy interventions.
Presenter: Michael Mason
Description: The purpose of Michael’s presentation was to provide an overview of some of the challenges EPA’s Office of Water has faced in gathering and submitting evidence for 21 PART reviews over the past 5 years. Michael highlighted some of the key opportunities offered by the PART reviews in improving the documentation of implementation activities for future evaluation efforts.
06/15/2007 – PART Evidence: An OMB Perspective
Presenter: Brian Kleinman
Description: Brian described the ideal PART evidence from an OMB Examiner’s perspective.
06/15/2007 – Planning for Third Party Evaluation
Presenters: Laura Pyzik & Neal Feeken
Description: Planning for Third Party Evaluation
Presenter: Guy Robertson
Description: Guy used a real-world example of a high profile and contentious forest planning exercise to explore the relationship between ex ante deterministic estimation techniques and actual results. He concluded with a discussion of the challenges facing adaptive management as an alternative model.
06/15/2007 – RARE: Inspiring Conservation
Presenter: Brett Jenks
Description: Brett shared his experience as CEO of Rare developing, rolling out, and constantly tweaking systems for designing and evaluating efforts to build global constituencies for conservation.
06/15/2007 – What is Evidence?
Presenter: Paul J. Ferraro
Description: Paul focused on the question “What is Evidence?”, and defined evidence as something that changes the probability that a proposition is true. He discussed what good evidence is and whether anyone in environmental policy has the incentive to generate high quality evidence. Paul also spurred the audience to consider the difference between a conceptual and a practical gold standard of evaluation.
Description: A brief summary of participant registration survey findings and a revisiting of opinions voiced by last year’s participants. Key themes that emerged from analysis of the survey results were: 1) a diverse group of participants, 2) variations in participants’ connection to evaluation, 3) common issues of concern, and 4) common priorities for the future of environmental evaluation.
06/14/2007 – Bridging the Gaps Among Stakeholder Perspectives
Presenter: Jennifer Nash
Description: The premise of Jennifer’s talk was that stakeholders face different goals, needs, and challenges with respect to environmental program evaluation, which lead to different implications for practice. Using EPA’s flagship voluntary program (the National Environmental Performance Track) as an example, she explained that stakeholders share some perspectives on evaluation and suggested ways to bridge gaps in understanding between stakeholders.
06/14/2007 – Some Thoughts on Capacity Building
Presenter: V. Neimanis, Evaluation Manager, Environment Canada
Description: Some Thoughts on Capacity Building. Neimanis introduced a draft logic model of the Environmental Evaluators Network and identified opportunities for the Network to support capacity development in the field of environmental evaluation.
06/14/2007 – The Dodge Assessment Initiative
Presenter: Michelle Knapik
Description: In her talk, Michelle introduced the central principles and concepts of the Dodge Foundation Assessment Initiative, especially the concept that Dodge is committed to developing a culture of assessment aimed at improving performance, not merely auditing it. Michelle discussed the importance of measuring what matters, planning backwards, the use of rubrics in program and project design, and the evolution of a culture of assessment.
Presenter: Gail Achterman
Description: The Role of the University in Building Capacity for Environmental Evaluation: Needs and Opportunities. Gail addressed the need for environmental evaluation capacity from a policy maker’s perspective and reflected on how universities can build the capacity to meet that need, using examples from watershed restoration programs.
Presenter: Kathryn E. Newcomer, Ph.D.
Description: Kathryn opened with a discussion of the current environment for program evaluation and performance measurement in government and in the nonprofit sector and identified some unintended consequences of programmatic evaluation and measurement. She also discussed current drivers of evaluation, how the political environment affects those drivers, the substantial resources required for outcome or impact evaluations, and the problem of accountability demands trumping programmatic learning.
Presenter: John Seidensticker
Description: Point-Counterpoint: Practitioner Adaptive Management vs. External Scientific Assessment – A No-Holds Barred Debate
06/14/2007 – SEEER Fact Sheet
This is EPA’s most current fact sheet on the SEEER effort: Systematic Evaluation of Environmental and Economic Results. SEEER’s goal is to quantify the results of using environmental conflict resolution (ECR). The SEEER project is the first known systematic effort to compare the environmental and economic results of ECR to its alternatives. The findings of SEEER may assist public decision makers and other stakeholders in determining how to address important environmental and natural resource issues and whether ECR may be appropriate in a given situation.
06/14/2007 – Monitoring for Conservation Planning and Management
Presenter: Elizabeth Kennedy
Description: Elizabeth described key information needs for decisions in conservation planning and management. She addressed the questions “What are the barriers to generating these data?” and “What has been Conservation International’s recent strategy to address these constraints?”
Presenter: William Hall
Description: Multi-Agency Environmental Conflict Resolution Evaluation Study. The U.S. Institute for Environmental Conflict Resolution has been working with other federal agencies and state ECR programs to develop a conceptual model of ECR processes and common survey instruments to evaluate ECR processes. A dataset of 52 cases and 525 respondents has been assembled to test the conceptual model and to better understand the key ingredients for ECR success.
06/14/2007 – 2007 Environmental Evaluators Forum Keynote
Presenter: Steve Williams
Description: Steve Williams spoke about some of the issues that he believes need attention to improve the use of evaluation science in an effort to improve the practice of conservation. He gave real-world examples of the “good, the bad, and the ugly” of evaluation and recommended that evaluation science consider both the science and art of conservation. In his concluding comments, he encouraged organizations to fully embrace evaluation in their strategy and operations and look to evaluation as a way to help policy drive budgets.
Presenter: Michael Jacobson
Description: As Performance Management Director in the Executive’s Office of King County, WA, Michael provided a unique perspective at the Forum. He compared and contrasted local and federal capacity for evaluation and then discussed in detail King County’s approach to integrating evaluation into their performance management system. He gave examples of completed outcome evaluations of King County programs and reviewed cultural, methodological, and bureaucratic challenges that must be addressed in order to improve evaluations.
This section will serve as a clearing-house of documents and information related to the network.