Working through tasks in collaboration with others can often lighten the workload, improve outcomes, and increase enjoyment in the work. This is true of evaluation as well!
Evaluation can be improved by doing it in community: planning and implementing it with others who share one or more of a variety of components. For example, organizations or evaluators might share aspects of their mission statements, leading to shared goals or outcomes that they wish to achieve. They may also share geographies, audiences, networks, funders, or other aspects of their work.
However, many of the existing resources depicting how to do evaluation imply that the process is typically done by one or two people in a single organization. To address this lack of information on evaluating collaboratively, this page describes the advantages of, and potential pathways toward, doing evaluation in community with other people and organizations - an approach called networked evaluation.
Even better, when the processes, design parameters, lessons learned, and other components are shared beyond the network with others in the field who are interested, a concept termed collective evaluation is created. Both networked and collective evaluation are great tools to improve environmental education outcomes!
Networked evaluation is defined as a process where multiple organizations collaborate on evaluation processes, thereby forming a network with shared goals or outcomes where the same or similar tools, approaches, and/or research design are used to evaluate those outcomes. Collective evaluation extends networked evaluation, adding that the networked organizations share those components beyond the network for learning and improvement in the field at large.
Many benefits can result from both forms of evaluation. First, evaluation designed, completed, and shared with others provides support and learning for all who participate (meaning within the network). Second, if components are shared beyond the network, others also benefit and learn, leading to improved evaluation beyond the initial networked group. Third, the stories and lessons learned can be aggregated in order to tell a broader story about impacts, including impacts on programs, on the organizations themselves, and on the field of environmental education.
Below, we profile some organizations in EE who have taken and are taking a variety of paths toward networked and collective evaluation, and we also provide examples and suggestions for how to take networked evaluation activities and extend them to collective evaluation.
Pathway 1: Starting by developing the network
Sometimes the most straightforward route toward networked evaluation is to start a conversation with a network of interested organizations.
Example: The Alliance for Watershed Education in the Delaware River Watershed
[text to come!]
Example: Colorado Collective Outcomes
[text to come!]
Pathway 2: Starting by using the same tool or approach
Sometimes a different approach is effective: first recruit organizations that share the same goals and outcomes and that are using, or are interested in using, the same evaluation tool(s) or approach(es).
Example: Improving EE distance learning
In the midst of the pandemic in 2020, faculty at Virginia Tech and Clemson recruited 43 youth-serving EE organizations, all unexpectedly pivoting from in-person to online programming, to create a networked community with a culture of continuous improvement and learning. The organizations had varying levels of prior collaboration with each other, from none to a significant amount. All shared similar desired program outcomes and agreed to use a consistent outcome measure (a survey that assesses a broad range of EE outcomes: Environmental Education for the 21st Century, or EE21) to evaluate their programs. This allowed the organizations to draw conclusions about their programs, identify potential areas for improvement, and share ideas about what works best both within and across organizations.
Learning cycles: The program included two learning cycles, in which all 43 organizations interacted. In the first cycle, educators were coached and guided through evaluation and data collection processes. After collecting data on their programs, they received an evaluation report that summarized their results. The educators were then asked to reflect on their programs and identify areas for improvement based on the evaluation results. The second cycle was an opportunity to evaluate programs after these adjustments were implemented and to reflect as a community on what was learned about the adjustments, fostering collective learning. This model supports educator autonomy and decision-making, and takes advantage of the broader community of peers to develop new insights and innovations.
Other activities: The facilitators used a number of techniques to foster evidence-based learning across the network.
- In network meetings, participants were given the option to choose breakout rooms in order to network with specific organizations or people with whom they shared commonalities.
- Facilitators shared examples of, and led discussions about, programs that attained high outcomes.
- Specific topics for spring meetings were chosen based on member requests and interests (e.g., attracting participation in new online programs), and facilitators led presentations and small-group discussions on those topics.
- One-on-one consultations with each organization were added to interpret data for spring reports.
Extending to collective evaluation:
The EE21 team has a number of initiatives that extend their evaluative learning to the field of environmental education.
- Results from these networks were widely shared to inform the field about promising practices.
- Results about the network, including what worked and what could be improved, will be shared widely so that other organizations and evaluation specialists can replicate the model.
- The survey instrument (EE21) is provided here in its entirety.
Pathway 3: Hybrid paths
And of course, networked evaluation is never quite as simple as the first two paths indicate; combinations of paths are common! For example, some collaboration might pre-exist across a network, and subsequent use of a common tool simultaneously accomplishes networked evaluation and increased connection and collaborative activity across the network.
Example: Oregon Outdoor Schools
Oregon Outdoor School (ODS) is a statewide formal outdoor education program in the United States, primarily for 5th and 6th grade students. Outdoor School is a hands-on week of experiential science education in the field, with a more than 60-year history. Originally, the reach of the program was not consistent throughout the state; nonetheless, the originally active sites have a long history of familiarity and collaboration, so the network and sense of community already existed to a large degree. At that time, some networked evaluation existed in the program, including a formal summative evaluation, other styles of evaluation, and an overall comprehensive analysis of evaluation activities over a four-year period. These activities appear to have been conducted primarily to help programs clarify their goals, and seldom to have resulted in evidence-based programming updates.
New funding from the state legislature allowed expansion statewide in the 2017-2018 academic year, and by the end of the 2018-2019 school year, 84% of eligible 5th and 6th grade students had attended. With this expansion, an evaluation needed to be established that would use a common measurement system and enable programs to track their outcomes: change over time within individual programs, comparisons among programs, and overall statewide trends. Evaluation started with a small, strategically funded project that adapted the EE21 outcome measure (a survey that assesses a broad range of EE outcomes: Environmental Education for the 21st Century). Upon completion of the pilot study, the evaluative process and system were reconsidered, with efforts to center cultural responsiveness and increase stakeholder participation. Methods and thinking were updated to be more culturally responsive, which included modifying survey instruments and administration scripts, refocusing data analysis, and expanding training. The evaluation was implemented statewide at 39 outdoor schools in 2020 (Braun 2020).
All outdoor school programs receive private and anonymized final reports with data for their specific program in relation to the statewide sample. Subsequently, outdoor school providers are encouraged to attend an evidence-based workshop where they are supported in understanding evaluation results for their specific program, for all ODS programs statewide, and in the context of the field of EE. Each program then translates these learnings into discrete action steps to improve their respective programs, typically related to culturally responsive instruction and pedagogy.
One result of the evaluation activities was that analysis of findings at the statewide level revealed assets and inequities related to ability, gender, and racial identity. These findings strongly informed the nature of the professional development opportunities that stakeholders engaged with or provided. Equity-specific analyses of findings from the outcome-based evaluation led to expansion of evaluation in two particular areas: process-level evaluation tools and trainings were developed, and targeted inquiry and trainings were created, both of which centered on equity.
Extending to collective evaluation
Oregon Outdoor School has a number of initiatives that extend its evaluative learning to the field of environmental education.
- Suite of self-evaluation tools and learning resources available free online
- Instructional Resources Self-evaluation Tool (Backe, Braun and O’Neil, 2021) and learning resources
- Culturally Responsive Self-Evaluation Tool (Brooks, Braun, Backe and Jones, 2020) and learning resources
- Special Education and Self-Evaluation Tool (Brooks, Braun, Backe, Arbuckle and Jones, 2021) and learning resources
- Two faculty at the University of Oregon created a free online class on Critical Orientations: Indigenous Perspectives in Outdoor Education