Research & the ESSA Pt1: Are states ready?

Ever since President Obama signed the Every Student Succeeds Act (ESSA) into law last month, the education community has been trying to figure out what it all means. The ESSA is 391 pages long and touches on every conceivable aspect of K-12 public education, so the comprehension process is likely to take months, if not years.
Of particular interest to AEM are the new federal standards for research quality. The ESSA calls for states to adopt only programs and interventions that are “evidence-based.” To determine what counts as evidence, the federal government offers guidelines with four tiers. “Strong” evidence comes from “a well-designed and -implemented experimental study, meaning a randomized controlled trial.” “Moderate” evidence can come from quasi-experimental studies that use student data and large-N quantitative methods. “Promising” evidence includes correlational studies that control for differences between students who do and do not participate in the program/intervention. The fourth, lowest tier applies when “a state or provider can show that a program’s rationale is based on high-quality research.”
Longtime observers of federal education policy will note that the new standards differ from those in place since No Child Left Behind (NCLB) in at least two key ways. First, NCLB required that programs be supported by “scientifically-based research,” and relatively few programs and interventions ever offered proof of their effectiveness at NCLB’s high threshold. The ESSA allows states more leeway to accept less rigorous evidence in the hopes that they can consider a greater variety of solutions. Second, as noted above, the ESSA gives states the primary role of determining which research meets those standards.
Can State DOEs handle it?
Whether states are prepared to assume responsibility for judging research quality is an open question. As EdWeek points out, many state Departments of Education (DOEs) have greatly increased their capacity to understand and even perform high-quality research since NCLB’s ratification, but that capacity still varies greatly from state to state. To adequately fill their new role, lagging states will need to make a serious push to hire qualified personnel.
Even DOEs with strong research capacity may fail to meet their new responsibilities. Currently, my research colleagues in DOEs spend most of their time on the crisis of the moment. High-ranking officials want fresh results for a public presentation at a moment’s notice. Ten schools scattered across the state cannot access their electronic standardized tests on test day. A superintendent does not know how to access or interpret her teacher evaluation ratings. In these and a million other scenarios, understaffed DOEs have made their researchers the first responders. If DOEs hope to meet ESSA’s expectation of them, they need to shift many of these responsibilities away from the research staff.
Good program evaluation takes time and serious effort. First, qualified researchers must help officials define a research question and design a study that can provide adequate answers. They must construct an adequate measuring instrument and oversee its administration to an entire state’s worth of children. They require weeks, at a minimum, to properly analyze results and prepare summaries for different audiences. Properly evaluating the efficacy of even one large-scale state-level intervention could easily occupy most DOE research departments, at least at their current size, for an entire school year.
The Other Alternative
The authors of the ESSA may not intend for state DOEs to conduct all efficacy research themselves. Instead, they may intend DOEs to either outsource to third parties (ahem) or rely on research that vendors provide on their own products. Later this week, I will discuss why the combination of private research efforts and the ESSA’s four-tiered system of evaluating rigor may lead to trouble.

Elizabeth Sobka