Citation
Szczytko, R., Stevenson, K., Peterson, M. N., Nietfeld, J., & Strnad, R. L. (2019). Development and validation of the environmental literacy instrument for adolescents. Environmental Education Research, 25(2), 193–210. https://doi.org/10.1080/13504622.2018.1487035
Background
The Environmental Literacy Instrument for Adolescents (ELI-A) was developed to efficiently measure four components of environmental literacy among adolescents in a format that practitioners can easily use. It was created by researchers Rachel Szczytko, Kathryn Stevenson, M. Nils Peterson, John Nietfeld, and Renee Strnad of North Carolina State University. The ELI-A was initially tested with a small group of high school students and then further tested with students in semester-long agricultural courses in North Carolina.
Format
There are four sections, one for each component of environmental literacy:
- Part one measures ecological knowledge with 10 questions.
- Part two measures environmental hope with 11 scaled-response questions.
- Part three measures cognitive skills with a short reading passage and five questions about it.
- Part four measures behavior with five scaled-response questions.
Audience
Adolescents, 13-18 years old
When and how to use the tool
To measure change in environmental literacy due to programming, this survey should be administered on the first day and on the last day of the program. It was originally tested with a semester-long program, and the authors recommend using it with longer-term, higher-intensity programs (e.g., a daily class that meets over a semester, or a weekly class over a year).
To use this survey, the program you are implementing must include activities that are intended to impact environmental literacy. The activities (or the program as a whole) should impact environmental knowledge, attitudes, behaviors, and cognitive skills such as decision making and critical thinking, but these outcomes do not need to be covered explicitly in order to be impacted.
How to analyze
To analyze this survey, first determine each participant’s Environmental Literacy Score. Each of the four parts of the survey has its own score, and the sum of the four section scores is the participant’s Environmental Literacy Score. This process will take some time at first, though a spreadsheet with formulas can speed it up. The “Direction for Use and Scoring Guide” PDF linked on this page provides directions for calculating each Environmental Literacy Score.
After determining individual Environmental Literacy Scores, an average Environmental Literacy Score can be determined for participant groups. To do so, sum up all individual scores and divide by the total number of participants for an average group score.
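If you prefer to automate this arithmetic, here is a minimal sketch in Python using pandas. It assumes each participant’s four section scores have already been calculated following the “Direction for Use and Scoring Guide” and exported to a spreadsheet saved as CSV; the file and column names below are illustrative, not part of the instrument.

```python
import pandas as pd

# Hypothetical CSV: one row per participant, one column per section score,
# each section scored per the "Direction for Use and Scoring Guide".
scores = pd.read_csv("eli_a_post_scores.csv")
section_columns = ["knowledge", "hope", "cognitive_skills", "behavior"]

# Individual Environmental Literacy Score = sum of the four section scores.
scores["environmental_literacy"] = scores[section_columns].sum(axis=1)

# Group score = sum of individual scores divided by the number of participants.
group_average = scores["environmental_literacy"].mean()
print(f"Average Environmental Literacy Score: {group_average:.1f}")
```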
When you administer both pre-experience and post-experience surveys, you can conduct higher-level statistics on your data to understand whether participants showed significant changes in the outcome areas after their participation in the program. Ideally, analysis of these data would use a paired t-test, so that outliers are accounted for and the significance of the program’s impact can be confirmed before making program changes. Other, more descriptive techniques, such as a simple bar chart, can be useful for discussion, but should not be the basis for major program changes if the significance of the change has not been calculated.
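Below is a similarly hedged sketch of a paired t-test on matched pre- and post-scores, using scipy. It assumes every participant completed both surveys and can be matched by an identifier; again, file and column names are illustrative.

```python
import pandas as pd
from scipy import stats

pre = pd.read_csv("eli_a_pre_scores.csv")    # columns: participant_id, environmental_literacy
post = pd.read_csv("eli_a_post_scores.csv")  # same columns, same participants

# Pair each participant's pre- and post-scores by matching on participant_id.
merged = pre.merge(post, on="participant_id", suffixes=("_pre", "_post"))

# Paired (dependent-samples) t-test on the matched scores.
result = stats.ttest_rel(merged["environmental_literacy_pre"],
                         merged["environmental_literacy_post"])
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the pre/post difference is unlikely to be due to chance alone.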
What to do next
Once you’ve administered your survey and analyzed the data, consider the following suggestions about what to do next:
- What do the data tell you? Is there a significant difference between the pre- and post-scores? If scores improve significantly, you can say your program is enhancing students’ environmental literacy.
- You could compare populations to determine whether some groups have different levels of environmental literacy than others. This could also provide justification for program development, marketing, or funding proposals.
- If your state has an Environmental Literacy Plan, consider how these data can be used to support the Plan.
- Invite program staff or other partners to look over the data. Together you might also consider:
  - What do these results tell us about our programming? Why do we think we got these results?
  - What did we think we would see with respect to ecological knowledge/hope/behavior/cognitive skills? Did these data support our goals?
  - If our results did not support our goals, can we brainstorm areas within the programming or delivery that could better influence ecological knowledge/hope/behavior/cognitive skills? What changes should be made to programming, or how should new programs be designed?
  - Who in our community should we reach out to for a collaborative discussion of program design?
  - Who or what organizations can we share our learning with?
How to see if this tool would work with your program
To assess whether the tool is appropriate for your audience, please review the items carefully and pilot test the tool with a small group that represents your population. It may be more helpful to pilot test one section of the tool at a time (e.g., ecological knowledge, hope). To pilot test, ask a small group of willing participants who are part of your target audience to talk to you as they complete the tool. What are they thinking when they read each item? What experiences come to mind when they respond? As long as this is what you expect and you will gain relevant information from your evaluation, you are on the right track! If answers differ widely from person to person, particularly within a given section, when they should be more similar given participants’ experiences, you may need to look at other tools. Note that the behavior section is likely to vary widely; this variance should not deter you from using the tool.
Tool Tips
- You may edit the questions in this tool to make them relevant and place-based for students where appropriate. View the “Direction for Use and Scoring Guide” linked PDF on this page for more guidance.
- Allow ample program time to complete this instrument. It should take about 5 to 20 minutes to complete. Younger audiences may take longer on the instrument.