Protected: Charter Schools: A-F Data after 12 Years

March 6, 2015



  • cladwig says:

    From Eric Schansberg, Ph.D.:

    Tim Ehrgott has done a fine job with a limited form of statistical analysis. Let me comment on his work, extend it a bit further, and then explain how difficult it is to measure these things well.

    Ehrgott starts by discussing the simplest comparisons between non-charter public (NCP) schools and charter public (CP) schools, using Indiana’s A-F “school grading” system. With these broad comparisons, CP schools fare poorly.

    But populations at CP schools are not nearly the same as those at NCP schools, so one is left wondering whether we’re comparing apples and oranges– or apples and rocks. (This is similar to other popular but facile comparisons– e.g., between the average income of men and women, looking at all men and women while failing to account for differences in other variables, such as level and type of education, number of hours worked, experience, etc.) Such analysis is not only simple, but obviously simplistic.

    Ehrgott improves on this by looking at correlations between school grades and one key variable at a time– “Free and Reduced Lunch” (SES) or “Ethnicity”– for CP and NCP schools. If schools have a similar population (as defined by SES or Ethnicity), how do they perform? Then, he looks at the correlation between school grades and two key variables: SES and Ethnicity. Finally, he analyzes Marion County separately, restricting the data set– or in a sense, he looks at two more pairs of variables (location along with SES or Ethnicity). In all of these cases, using more rigorous analysis, NCP schools still seem to outperform CP schools. But we know that there are other variables at play.

    Another concern is the arbitrary reclassification of “continuous variables” into categories. Explaining this in English: in turning all B’s into a single category called “B” (whether a low B or a high B), we’re treating all B’s as equivalent. Likewise, when you treat all members of the 5th quintile of SES the same, you’re implicitly saying that an SES of 81% is equivalent to an SES of 100%. In short, when you reduce an entire grading scale to five grading categories– and a full range of 0-100% to five quintiles– you necessarily suppress quite a bit of the information in the data.
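    As a minimal illustration (with invented numbers, not the Indiana data), the sketch below builds a perfectly linear relationship between a continuous SES measure and a school score, then collapses SES into quintiles. Even in this best case, the binned version captures less of the relationship:

```python
import numpy as np

# Hypothetical, perfectly linear data: 100 schools, SES running 1..100%,
# and a toy school score that tracks SES exactly.
ses = np.arange(1.0, 101.0)     # continuous "Free and Reduced Lunch" %
score = ses                     # outcome, linear in SES by construction

# Collapse SES into five quintile labels (1..5), as in the simpler analysis:
# 1-20% -> 1, 21-40% -> 2, and so on.
quintile = (ses - 1) // 20 + 1

r_continuous = np.corrcoef(ses, score)[0, 1]
r_binned = np.corrcoef(quintile, score)[0, 1]

print(round(r_continuous, 3))   # 1.0 by construction
print(round(r_binned, 3))       # about 0.98: binning threw away variation
```

    With real, noisy data the loss is typically larger; this is one reason to prefer methods that use the continuous variable directly.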

    Fortunately, there are more sophisticated methods to deal with these limitations. Multiple regression models allow one to assess the quantitative impact of multiple variables and to take advantage of continuous data. When I saw Ehrgott’s paper, I was excited about the opportunity to bring my skills to the project and see if the results would differ.
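    For concreteness, here is a toy version of such a model (all names, coefficients, and data are invented for illustration– this is not the actual Indiana data or result): a school grade regressed on SES, ethnicity, and a 0/1 charter indicator by ordinary least squares.

```python
import numpy as np

# Synthetic data: 200 schools with made-up characteristics.
rng = np.random.default_rng(0)
n = 200
ses = rng.uniform(0, 100, n)        # % Free and Reduced Lunch
minority = rng.uniform(0, 100, n)   # % minority enrollment
charter = rng.integers(0, 2, n)     # 1 = CP school, 0 = NCP school

# Assumed "true" relationship for the simulation: higher SES measure lowers
# the grade, charter status has a small negative effect, plus noise.
grade = 90 - 0.3 * ses - 0.1 * minority - 3.0 * charter + rng.normal(0, 5, n)

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n), ses, minority, charter])
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)
print(beta)  # [intercept, SES coef, minority coef, charter coef]
```

    The advantage over one-variable-at-a-time correlations is that each coefficient estimates a variable’s effect while holding the others constant, and the continuous variables enter at full resolution rather than as bins.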

    The good news for Ehrgott’s analysis is that his results hold with the more sophisticated analysis. SES and Ethnicity are more impressive variables, statistically– but being a CP school in Indiana turns out to be “statistically significant” and negatively correlated with school grades. The bad news is that my more sophisticated analysis still does not inspire much confidence.

    Let me offer a number of caveats– to my analysis and Ehrgott’s:
    1.) We’re assuming that the State’s grading scale is reasonably accurate– and at least, unbiased. If CP schools are routinely graded low– because they are charters– then the results are being influenced by a huge missing variable.
    2.) We’re assuming that the State’s grading scale is a reasonable measure of the “quality” of a school. Beyond that, it would be a mistake to ignore other considerations. First, CP schools provide choice to parents and children– which is valuable in itself. (If parents are choosing CP schools, they must perceive that it’s a good decision for their children– on some metric, probably something we’re not measuring.) Second, CP schools receive far less funding. They may well be more efficient than NCP schools. And they might perform better with more equitable funding. (Opponents of CP schools often claim that funding is a crucial factor, so I’m confident that they are sympathetic on this point!)
    3.) In these results, the identity of the authorizer does not seem to matter. The larger authorizers have similar results, and the smaller authorizers do not provide enough data to analyze separately.
    4.) These are only general results. So, any given CP school– or any given authorizer– could be relatively effective. Perhaps Indiana’s charter legislation is relatively ineffective. Perhaps CP schools in Indiana have chosen an ineffective approach for some reason. And so on.

    A far larger concern: The multiple regression model has a “low R-squared”. In English: the variables in the model do not explain much of the variation in school grades. This shouldn’t be all that surprising. Surely, many other variables matter– beyond ethnicity, SES, whether one is a CP or a NCP school, and which authorizer is used. (In fact, opponents of CP schools are fond of telling us this when we try to measure their effectiveness!)
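    A small synthetic example (invented numbers again) shows how this plays out: even when the model recovers the SES coefficient accurately, R-squared stays low because unmodeled factors dominate the variation in grades.

```python
import numpy as np

# Synthetic data where unmodeled factors dominate: the noise term is
# large relative to the (correctly specified) SES effect.
rng = np.random.default_rng(2)
n = 300
ses = rng.uniform(0, 100, n)
grade = 85 - 0.1 * ses + rng.normal(0, 15, n)

# Fit grade on an intercept and SES by least squares.
X = np.column_stack([np.ones(n), ses])
beta, *_ = np.linalg.lstsq(X, grade, rcond=None)
resid = grade - X @ beta
r_squared = 1 - resid.var() / grade.var()
print(round(r_squared, 2))  # low: the model explains little of the variation
```

    A low R-squared here is not a sign that the included coefficients are wrong; it is a sign that most of what drives the outcome lies outside the model.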

    For example, are CP schools with “Education Management Organizations” (EMOs) more or less effective? Are CP schools more effective with K-5 than middle school or high school? Are CP schools concentrated in areas with high levels of family instability or low levels of parents’ education– important variables not included in our data? If we don’t (or can’t) identify and measure those variables, then the model will be (far) less impressive. This is a limitation of the available data and the nature of a question that is difficult to quantify.

    It is likely that these other variables would (and should) carry the “weight” this analysis ascribes to being a CP or a NCP school. If a missing variable correlates with being a CP school, then the “real” explanation could be the missing variable, rather than whether the school is a CP or a NCP school. As another example, CP schools may provide more competition for NCP schools, encouraging improvement in NCP schools in ways that we would not be measuring here.
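    This is the classic omitted-variable problem, which a short simulation (again with invented numbers) makes concrete: here the true charter effect is zero, but an unmeasured factor that is both correlated with charter status and bad for grades makes the misspecified model’s charter coefficient look negative.

```python
import numpy as np

# Synthetic data: charter status has NO true effect on grades, but an
# unmeasured factor (say, family instability) is more common where
# charters locate and lowers grades.
rng = np.random.default_rng(1)
n = 500
charter = rng.integers(0, 2, n)
instability = 0.5 * charter + rng.normal(0, 0.5, n)  # correlated with charter
grade = 80 - 10 * instability + rng.normal(0, 5, n)  # charter itself: no effect

def ols(cols, y):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols([charter], grade)              # instability omitted
b_full = ols([charter, instability], grade)  # instability included
print(round(b_short[1], 2))  # spuriously negative charter coefficient
print(round(b_full[1], 2))   # near zero once the missing variable is in
```

    The charter coefficient in the short model is absorbing the effect of the variable we left out– which is exactly the worry about attributing the negative coefficient above to charter status itself.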

    Finally, as I’ve indicated above, school grades are not a particularly impressive way to measure quality. Far better than what we’ve done: using data at the individual student level, over time, based on more objective and specific forms of evaluation (e.g., standardized test scores). The best research on educational success will look at individual students, holding all of these variables constant, measuring each student’s year-to-year improvement in standardized test scores at CP and NCP schools.

    As you might imagine, these data are much more costly to collect– and thus, relatively rare. It follows that the best research– and really, the only research worth getting excited about– is also rare. Academic research of this quality has shown slight improvements in some categories for CP schools– providing greater contributions than Head Start, but not nearly what CP school proponents would hope. (CREDO– the Center for Research on Educational Outcomes at Stanford University– is committed to doing excellent and comprehensive work in this arena. For example, see: “National Charter School Report, 2013”.) If CP or NCP schools in Indiana have the relevant data sets and want to make them available, let me know. It would be awesome to analyze them and add something of far greater value to the literature.

  • […] really says something when the conservative Indiana Policy Review publishes a lengthy article that essentially declares Indiana’s charter school experiment a […]
