Evaluation Process

The eeVAL processes outlined below apply the six core eeVALues to drive excellence in EE program design, build just relationships, contribute to meaningful experiences, and yield equitable outcomes.

There is no “right” starting point. Revisit any of these processes during an eeVAL as you apply the values of deep curiosity and critical reflection. An evaluation can focus on the specific processes that meet its goals; it is often not realistic to do it all. Many factors influence how you apply the processes. You might begin by asking:

Why are you interested in evaluation at this time?

What evaluation are you doing right now?

Where are you in understanding the program and its impact?

What is your vision for the program?

What resources do you need?

What do you need to know to get there?

Resources, timelines, and partnerships will shape your study. The process can work at any scale, ranging from individual projects to multisite collaborations. Create, try, learn, and revise your unique eeVAL processes.

A. Build relationships
  1. Evaluation is supported by relationships. Learn about the context, the program, and the people it serves. For example, you might speak with community members and read about the history of the program, the community, and the people the program wants to serve. This is a process that takes time and draws on many sources, such as historical records and conversations with stakeholders.
    • Many tools are available to help you build relationships and understanding. For example, this Toolkit for Engaging People in Conservation provides insight into defining your vision and identifying your audiences.
  2. Be thoughtful about who leads the evaluation. Ideally, program participants (persons impacted by the program directly and indirectly) are involved in evaluations as team members, designers, decision-makers, and implementers of the evaluation from the very beginning. Someone from the organization might be accountable for the project, yet not the sole decision-maker.
  3. Take stock of available resources, both human and capital, such as time staff can devote, project partners, funding, technology, materials/supplies, community-based office space, civic capacity, in-kind donations, and other capital.
  4. Determine how to allocate resources to compensate individuals who co-create and participate in the evaluation process. Compensation shows you value their time and contributions and is especially important if you are engaging individuals from marginalized identities and communities in co-creation. Compensation can include monetary and other benefits. Ask individuals what they would value or prefer in exchange for their time.


B. Identify purpose, goals, and design
  1. Identify and understand the purpose for conducting an evaluation from various perspectives, including program participants and partners.
    • Theories of Change and Logic Models are tools you might use to communicate and build consensus for evaluation. A theory of change and logic model explicitly expose the assumptions you have for why what you do will lead to the outcomes you want.
  2. Identify the evaluation questions of importance to program participants and partners.
    • Program participants might include persons impacted by the program directly and indirectly (e.g., program providers, program participants, non-participants). Ideally, program participants are involved in evaluations as team members, designers, decision-makers and implementers of the evaluation (rather than as data sources only).
    • Project partners might include program staff, community members at-large, business owners, funders, and policymakers. Partners may have different questions about an EE program. Working closely with project partners at every stage of your process can help meet their needs.
    • Through the process of engaging diverse voices, you will likely generate many potential evaluation questions, often more than you have resources to pursue. This means you might need to prioritize your evaluation questions.
  3. Design an evaluation that is appropriate for the program and context. Think about how the evaluation will balance technical, political, and ethical considerations.
    • Technical aspects could include how prior knowledge and experience are assessed or whether the size of your sample is appropriate.
    • Political aspects might include how to negotiate differences in evaluator and program partner perspectives. Fluid logic models are a great tool.
    • Ethics includes how to gather consent to participate, who owns and will profit from the data that is collected, and who gets credit for the knowledge generated by the study. It is important not only to reflect on these questions, but to explicitly state how communities will have ownership over the data and numbers that impact them.
    • When you design your evaluation plan, remember to include program partners in the process. This builds trust and buy-in, while attending to everyone's needs. In addition, this process can help ensure the quality of your data.
  4. Explicitly ask program participants and partners about the barriers they may face to participation. Barriers vary by community, so do not assume they are limited to time, transportation, cost, or language.

Are you wondering how to select evaluation participants?

Initially, you may be tempted to get feedback from everyone who participates in your program. This may be an appropriate strategy if the program serves a very small audience. With a large group of participants, however, this approach strains limited resources, such as time and money. Instead of collecting information from everyone in your population (i.e., all of your participants), you can learn just as much about your program by collecting information from a sample of participants.

Sampling can apply to more than individual participants! Sampling also applies to other types of data such as documents (teacher logs, student exams, etc.), artifacts, photographs, and even environmental quality measurements. You can also draw samples from different units of analysis such as classrooms, schools, or different program sites.
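As a rough sketch of how a simple random sample can be drawn (the participant roster, sample size, and names below are purely illustrative, not from any eeVAL tool), a few lines of Python are enough:

```python
import random

def draw_sample(population, sample_size, seed=None):
    """Draw a simple random sample: every member of the population
    has an equal chance of being selected."""
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible
    return rng.sample(population, sample_size)

# Hypothetical roster of 200 program participants
participants = [f"participant_{i}" for i in range(1, 201)]

# Survey 30 sampled participants instead of all 200
sample = draw_sample(participants, 30, seed=7)
print(len(sample))  # 30
```

The same idea extends to other units of analysis: list the documents, photographs, or program sites, then draw the sample from that list.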


C. Collect and analyze data
  1. Consider (re)purposing data that is already generated (e.g., registrations, applications, attendance records, participant projects) and using embedded assessments (assessments that are directly built into your programming).
    • Do not forget the easy data! As a part of your evaluation work plan, remember some of the "simple" indicators, such as counts of participants at events or bags of litter collected on river clean-up day. It can be easy to overlook this information when you focus on the more intensive tasks of developing and administering evaluation instruments. Because simple indicators may provide critical evidence of success and may be impossible to collect later on, do not forget about them!
  2. Choose tools and methods, or design data collection tools, that are appropriate to cultures, histories, and traditions within the local context, as well as the format and setting of the programming (in-person, virtual, outdoors, indoors). Methods might include storytelling, interviews, chronicles, memoirs, observations, and focus groups; participatory visual methods such as photovoice, filmmaking, digital storytelling, drawings, painting, fabric art, and fashion shows; and tools such as surveys, questionnaires, and polls. Because there are advantages and disadvantages to both quantitative and qualitative data, many evaluations rely on a mix of the two (a mixed-methods evaluation approach).

  3. Engage program participants as partners in designing the instrument and data collection procedures. Before you do this, consider the site and location where data collection will take place, and review the process of administering the evaluation instrument thoughtfully, with attention to project partner needs.
  4. Be thoughtful about who collects the data. For example, in a youth program, does it make sense for youth to help collect the data? If you’re conducting a focus group, does it make sense for a BIPOC person to lead that focus group? Does it make sense to have an environmental educator other than the one who taught the session collect feedback on the educational experience? It matters who does the asking. 
  5. If you’re interested in understanding differences across subgroups, you have to find a way to respectfully ask participants how they identify along those dimensions of difference. For example, organizations that want to broaden the cultures and communities with whom they engage may want to know about the demographics of those who engage with their activities, and of those in their communities who do not. In these cases, you will have to think about how to ask culturally relevant demographic questions in an ethical way. You can get help on this topic here.
  6. Analyze the data (review this document for brief tips for ensuring that data are ready to be analyzed) in a way that includes cultural and contextual factors to increase the accuracy and relevance of interpretations. For example, examine outcome patterns across dimensions of difference (e.g., race, education, family composition, ability status). You can’t clearly see what you don’t name or measure. Analyzing data in partnership with program partners and participants increases the accuracy and relevance of interpretations.
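As a minimal sketch of examining outcome patterns across a dimension of difference (the subgroups, scores, and field names here are hypothetical), you might average an outcome score within each self-identified group:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: each pairs a participant's self-identified
# subgroup with an outcome score from the evaluation instrument.
records = [
    {"group": "first-time", "score": 3.0},
    {"group": "first-time", "score": 3.5},
    {"group": "returning", "score": 4.0},
    {"group": "returning", "score": 4.5},
]

def outcome_by_group(rows):
    """Average the outcome score within each subgroup so that patterns
    across dimensions of difference become visible."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row["group"]].append(row["score"])
    return {group: mean(scores) for group, scores in buckets.items()}

print(outcome_by_group(records))  # {'first-time': 3.25, 'returning': 4.25}
```

A gap between subgroup averages is a prompt for discussion with program partners, not a conclusion on its own.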

    Quantitative data can be analyzed in a variety of ways, including descriptive and inferential statistics. Fortunately, there are many software programs to choose from, including inexpensive or free ones, such as Statpages.org and Open Stat.
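For instance, basic descriptive statistics for a batch of survey scores can be computed with Python's standard library alone (the scores below are made up for illustration):

```python
from statistics import mean, median, stdev

# Hypothetical post-program survey scores on a 1-5 scale
scores = [3, 4, 4, 5, 2, 4, 5, 3]

print("mean:", mean(scores))               # typical score: 3.75
print("median:", median(scores))           # middle score: 4.0
print("spread:", round(stdev(scores), 2))  # how much scores vary: 1.04
```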

    Qualitative data can be analyzed using a variety of lenses or traditions. Many of these traditions rely on methods of reviewing data, coding it by theme, reviewing and revising the coding, and presenting the themes and the support for them. This technique can be done by hand. A sophisticated software program may be more than you actually need or may simply not be cost-effective, although free programs include AnSWR and CDC EZ-Text. Analysis by hand or with a widely available program is most accessible when working with program partners.
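As a minimal illustration of the coding step (the transcripts and theme names are invented), a simple tally shows how often each theme appears once excerpts have been coded:

```python
from collections import Counter

# Hypothetical theme codes assigned while reviewing interview transcripts
coded_excerpts = [
    ("interview_1", "sense of belonging"),
    ("interview_1", "outdoor confidence"),
    ("interview_2", "sense of belonging"),
    ("interview_3", "stewardship"),
    ("interview_3", "sense of belonging"),
]

# Count how often each theme appears across all excerpts
theme_counts = Counter(theme for _, theme in coded_excerpts)
print(theme_counts.most_common(1))  # [('sense of belonging', 3)]
```

Counts like these point you toward the dominant themes; the interpretive work of presenting each theme with supporting quotes still happens with your program partners.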


D. Responsibly communicate and share findings
  1. Communicate findings in presentations, written reports, and other forms in partnership with program partners and participants. Use reporting formats that honor cultures, histories, and traditions within the local context (see C2).

    Also, don't forget to plan ahead - reporting may take longer than you think! Allow time to collect feedback from program partners and for partner-informed revision.

  2. Share results with a wide range of program and community partners, including program participants. Internal and external program partners can help act on the findings and provide insights for use and continuous program improvement.

    Remember, different audiences may need different types of reports. The needs of the community partner are likely different from the needs of a funder. Carefully consider the interests, backgrounds, and expectations of your program partners and participants.


E. Use and continually improve
  1. Use the evaluation results, and the strengthened partnerships that come as a result of the evaluation, to enhance and implement relevant programming.
    • Verify that evaluation results are being used as originally intended and stated, to honor the commitments made and build trust among evaluation participants.
    • Find ways to make evaluation an ongoing and regular part of your programming.
    • Incorporate evaluation processes and findings into strategic plans to stay responsive and relevant. Contexts and communities are always changing.
  2. Create opportunities to celebrate, learn about, and learn from culturally responsive, equitable evaluation.
    • Creating opportunities to celebrate and learn from evaluation might include events that bring together program or project teams to learn about and reflect on evaluation findings and processes.
    • Opportunities to learn about culturally responsive, equitable processes might include attending workshops that explicitly center equitable evaluation principles, as well as identifying or creating evaluation learning communities with partner organizations and individual colleagues.
    • Create opportunities to learn from your evaluation by sharing your process and outcomes within and beyond your organization to contribute to collective learning about evaluation.


This work acknowledges:

  • Contributor Acknowledgement: Charissa Jones, Luciana Ranelli, Spirit Brooks, Jean Kayira, Karyl Askew, Libby McCann, Charlotte Clark, Liz Demattia, Noelle Wyman Roth
  • Critical Review and Comment: Steve Braun, Rachel Szczytko, and Katie Navin 
  • Framework (Hood et al., 2015; Askew et al., 2012)
  • My Environmental Education Evaluation Resource Assistant (MEERA) (Zint, n.d.)

Learn more about the eeVAL values and resources.

Do you want to contact us? Use this form.