Do-It-Yourself Program Evaluation

At the recent National Association of Student Councils (NASC) annual conference, I was fortunate to present a workshop on how educators can produce their own data to show that student activities lead to positive student outcomes.  I have done a great deal of work documenting the overwhelming consensus in the scholarly literature that participation in high school and middle school student activities has a positive independent effect on almost every desirable outcome, from standardized test scores to the likelihood of graduating from high school to future career earnings.  However, some principals, superintendents, and other community leaders will not be convinced until they see evidence that their own students are improving.

As I reflected on my talk, I realized that its main points might be helpful to a broad range of educators.  How many times have teachers been forced to abandon cherished programs because administrators could not see any evidence of effectiveness?  Fortunately, while program evaluation as a whole is complicated, two of its basic concepts are easy to understand and implement, and they allow teachers to generate their own evidence that a method works.

First, before you do an activity, define your desired outcomes.  What do you want the kids to get out of it?  What's the lesson they should learn?  Asking these questions is both good educational practice--before the kids can know the goals, you have to know them--and a necessary step in measuring whether the kids did, in fact, learn.  Second, administer a pre-test and a post-test that are exactly the same.  If we see changes in how the kids answer, we know that differences in question wording are not responsible, so we can reasonably attribute the change to the lesson itself.

For example, let's consider a fourth-grade teacher named Mrs. Jackson who wants to convince her principal that she should be allowed to use a book on self-esteem in her classroom.  To show the book has educational value, she first must define what she hopes the kids will learn.  Mrs. Jackson might hypothesize that children who read the book will feel better about themselves.  To test that hypothesis, she could ask each child a multiple-choice question, "How do you feel about yourself?", before the students begin to read the book.  Then, the students read the book, and she teaches her lesson.  As a final step, she asks the same question with the same multiple-choice answers.  The logic of experimentation suggests that, with question wording held constant, we should consider a change in the students' answers decent evidence that the lesson accomplished its goals.
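
For readers who like to see the numbers worked out, here is a minimal sketch in Python of how a pre/post comparison like Mrs. Jackson's might be tallied.  The scores and the 1-5 answer scale are hypothetical, invented purely for illustration:

```python
# Minimal sketch of a pre/post comparison (hypothetical data).
# Assume each student answered the same multiple-choice question,
# "How do you feel about yourself?", on a 1-5 scale (1 = very bad,
# 5 = very good) both before and after the self-esteem lesson.

pre_scores = [2, 3, 3, 2, 4, 3, 2, 3]   # answers before reading the book
post_scores = [3, 4, 3, 3, 4, 4, 3, 4]  # the same students, after the lesson

def mean(scores):
    """Average response on the 1-5 scale."""
    return sum(scores) / len(scores)

change = mean(post_scores) - mean(pre_scores)
print(f"Average before the lesson: {mean(pre_scores):.2f}")
print(f"Average after the lesson:  {mean(post_scores):.2f}")
print(f"Change: {change:+.2f} points on a 5-point scale")
```

Because the question and answer choices are identical both times, any movement in the average is not an artifact of wording--which is exactly the point of the identical pre/post design.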

Obviously, this example simplifies program evaluation greatly.  To be really sure that Mrs. Jackson's lesson caused a change in self-esteem, we would need to account for numerous other factors--for instance, the students might have grown more confident simply because the school year went on, with or without the book.  More rigorous methods are available, but schools cannot produce ideally scientific data for the millions of tiny decisions they must make each day.  However, a few concepts from the social sciences, like defining one's research question in advance and using identical pre/post tests, can help school leaders improve the quality of the evidence available to them.
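
For the curious, here is one small example of what a slightly more rigorous check could look like: a paired t-test, which asks whether the average change across the same students is bigger than chance alone would be likely to produce.  Again, the scores are hypothetical, and this is a sketch of one possible check, not a full evaluation design:

```python
# Sketch: a paired t-test on the same hypothetical pre/post scores.
# It checks whether the average per-student change is unlikely to be
# chance noise.  It still cannot rule out outside factors like
# ordinary maturation, so it is a stronger hint, not definitive proof.
from scipy.stats import ttest_rel

pre_scores = [2, 3, 3, 2, 4, 3, 2, 3]
post_scores = [3, 4, 3, 3, 4, 4, 3, 4]

t_stat, p_value = ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (conventionally below 0.05) suggests the change is
# probably not a fluke of this particular group of answers.
```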

If you want to learn more, please take a look at my slides from the NASC presentation or contact AEM.

Elizabeth Sobka