By Kavita Mittapalli, Ph.D., MN Associates, Inc. Each year, millions of dollars are spent funding initiatives with the sole purpose of making a difference in the lives of people: children, adults, and families. Program staff spend numerous hours planning the delivery of an initiative to ensure that it accomplishes what it set out to do: meet its goals and fulfill its objectives. Meanwhile, thoughts often lurk somewhere in the back of their minds: Do we REALLY need to evaluate this program? What purpose(s) will the evaluation findings serve? How are we going to do that? What are we evaluating, and how? Who will benefit from the evaluation? And the big question: what will it cost us?
All are valid questions, and here are some valid responses. But first, let’s set a definition of evaluation.
Defining Evaluation. To evaluate means "to judge the value or condition of (someone or something) in a careful and thoughtful way" (Merriam-Webster). As a client once commented: "Evaluation helps us be honest about our program and keep our promises to our stakeholders and funders."
It is important that program staff understand the process of evaluation, including formative (in-progress) and summative (at-the-end) evaluation, and how each aligns with the overall program goals and objectives. This step also ensures that program staff know when and how data requests will be made by the evaluator, what is being measured and how, and what they can ultimately expect from the evaluation. Evaluation, after all, is a collaborative process.
The Breakdown. Evaluation comprises the WHAT, WHEN, HOW, and SO WHAT of measuring a program. The what is the component that needs to be measured (e.g., technology implementation, professional development); the when is the timing of the measurement (formative, while the program is in progress, or summative, at the end); the how is the means of measurement (e.g., tools such as surveys and focus groups); and the so what is the evidence of the activities' impact on the end users: students, teachers, or families, as the case may be.
Your Turn. Think of a program or an initiative that may need to be evaluated. Now try to come up with the final results (a.k.a. outcomes) that the program intends to accomplish; then devise the performance indicators (a.k.a. outputs) that will be generated before the final results. Finally, come up with the inputs (activities and resources) that will take place during the project.
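If it helps to see the exercise above laid out concretely, here is a minimal sketch of a logic model as a simple data structure. Everything in it is a hypothetical example: the program, the entries, and the `describe` helper are illustrations, not part of any real evaluation plan.

```python
# Hypothetical logic model: devised backward (outcomes -> outputs -> inputs),
# as the "Your Turn" exercise suggests.
logic_model = {
    "outcomes": [  # final results the program intends to accomplish
        "Increased use of technology in the classroom",
    ],
    "outputs": [   # performance indicators generated before the final results
        "Lower teacher apprehension about technology",
    ],
    "inputs": [    # activities and resources that take place during the project
        "Summer PD sessions",
        "Course materials",
    ],
}

def describe(model):
    """Print the plan in the order it unfolds: inputs happen first,
    outputs follow, and outcomes come last."""
    for stage in ("inputs", "outputs", "outcomes"):
        print(f"{stage.upper()}: " + "; ".join(model[stage]))

describe(logic_model)
```

Writing the model down this way, even informally, makes it easy to check that every outcome can be traced back to an output and an input, which is exactly the alignment an evaluator will look for.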
Timing. You may be wondering: when should the evaluation plan really be drawn up? Ideally, at the time of program planning within your school, or when your school is considering applying for a grant. However, it is never too late to get strategic about program evaluation. Regardless of how far you have traveled down the road toward implementation, it is essential for evaluation purposes to identify and document the program's outcomes, outputs, and activities.
Time is of the essence, and, as a principal, you have a million and ten things on your plate! So you may want your school improvement team or data team to determine the program goals and objectives, the activities, and the outputs and outcomes. Evaluators often chime in to help define or refine the outputs, outcomes, and respective measures for each goal and objective, whether they are brought on board from the get-go or hired after a contract is awarded. As evaluators, one of our functions is also to make sure that goals and objectives are measurable!
Personal Example. In 2009-11, I evaluated a classroom technology professional development (PD) program, on how to use Web 2.0 tools to do project-based learning (PBL) with elementary school students, in 10 elementary schools in a Maryland school district.
Inputs. Three weeks of summer PD sessions for each teacher, course materials, on-site technology assistance, iPads, demonstrations of Web 2.0 tools by technology staff, and an end-of-session mini-project by each teacher.
Outputs. Lower apprehension about using technology, curiosity about Web 2.0 tools, and interest in using technology in classroom projects.
Outcomes. Increased use of technology in the classroom, an increased number of student projects using Web 2.0 tools, an increase in 21st-century learning skills for students, and more PBL in the classroom with student-driven projects.
Measuring Outcomes. We sent online pre- and post-surveys to all teachers, measuring their prior technology proficiency, their knowledge of technological tools (Web 2.0 tools in particular), and their expectations of the PD. The post-survey posed similar questions, in addition to asking about lessons learned and future applications of Web 2.0 tools in the classroom.
In addition, we surveyed the technology trainers to understand their perspectives on the program and how well they were able to train the teachers. Statistical analyses measuring 'change' in habits, perceptions, usage, and proficiency in technology and Web 2.0 tools were conducted and reported.
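To give a flavor of what that kind of 'change' analysis looks like, here is a minimal sketch comparing paired pre- and post-survey scores. The numbers, the 1-5 proficiency scale, and the seven-teacher sample are invented for illustration; the actual study's data and statistical methods are not reproduced here.

```python
import statistics

# Hypothetical self-rated technology proficiency (1-5 scale) for the
# same seven teachers, before and after the PD. Invented data.
pre = [2, 3, 2, 4, 3, 2, 3]
post = [4, 4, 3, 5, 4, 3, 4]

# Paired differences, one per teacher: post minus pre.
diffs = [b - a for a, b in zip(pre, post)]

mean_change = statistics.mean(diffs)  # average gain per teacher
sd_change = statistics.stdev(diffs)   # spread of the gains

print(f"Mean change: {mean_change:.2f} (SD {sd_change:.2f})")
```

Because each teacher serves as their own baseline, a paired comparison like this controls for where each teacher started; a real analysis would typically follow the mean difference with a paired significance test.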
I hope that this blog has provided some insights for you. Here’s to a successful program evaluation at your school!
Kavita Mittapalli, Ph.D. is the CEO of MN Associates, Inc. MNA is an education research and program evaluation company based in Fairfax, VA. We conduct evaluations for state and local education agencies, for-profit and not-for-profit organizations, and higher education institutions in the tri-state area and beyond.