Thursday, October 3, 2013

A First Look at 2013 Results

In its second year, Kentucky's new accountability system shows the state as a whole moving up about two points when elementary, middle, and high school are combined, but the three levels shown separately indicate quite different paces of improvement.

In the graph above, the numbers are Overall Scores on a 0-100 scale. Notice that the statewide elementary results moved a tiny 0.3 points, while high schools moved up 3.7 points. That difference is worth a deeper look at the components that make up the Overall Score.

For elementary schools, the Overall Score has three components, shown below along with the Overall result that combines all three:
The best growth here is in Gap results, reflecting results in six tested subjects for students with low incomes, disabilities, limited English proficiency, or Hispanic or African-American backgrounds.  Because that group moved up 1.6 points, it's getting closer to the Achievement result that reflects all students.  That gap-closing element is a bright spot in an otherwise worrisome pattern.

For middle schools, the Overall Score includes the three elementary components plus a Readiness score based on the Explore test created by ACT, Inc., and results break out this way:
Again, the Gap group moved more quickly than Achievement for all students, but Achievement also showed a visible step up at this level.  Readiness moved even faster.

Finally, in high school, the state adds in Graduation results, for this set of developments:
Yet again, the Gap group moved faster than Achievement for all students, and seeing that three times makes me think the policy decision to count Gap separately may truly have encouraged some added attention to those students.  Readiness shows impressive growth.

Graduation also looks very good when shown this way, but there's a big caveat: most of that improvement comes from a change in how we measure that rate.  I'll post more on that point next.

What about Growth?  I deliberately left that until last in this analysis, because I don't think those numbers reflect real change. Growth is based on whether each kid's scores this year are in the top 60 percent of kids who had similar scores last year; Kentucky defines being in that upper 60 percent as meeting expected growth. So by definition, the statewide Growth score is going to be close to 60 every year. It usually won't be exactly 60, because the numbers of kids will rarely work out to a precisely round percentage, but those little variations aren't likely to show anything at all about whether there was more or less growth statewide. For individual schools, Growth above 60 is possible and signals better progress than the state as a whole, and Growth well below 60 is also possible and a sign of less improvement. But for the whole state, that indicator tells us very little.
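To make the "close to 60 by definition" point concrete, here is a minimal sketch with simulated scores and a simplified banding of students by last year's results. It is not the state's actual calculation (Kentucky uses student growth percentiles, which are more involved); it only shows that if "expected growth" means landing in the top 60 percent of students with similar prior scores, then roughly 60 percent of students statewide will meet it no matter how much learning actually occurred.

import random

random.seed(1)

# Hypothetical students: (last_year_score, this_year_score) -- simulated data,
# not actual Kentucky results.
students = [(random.gauss(60, 15), random.gauss(62, 15)) for _ in range(50000)]

# Group students into bands of similar prior-year scores
# (5-point bands, an illustrative simplification).
def prior_band(prior):
    return round(prior / 5)

bands = {}
for prior, current in students:
    bands.setdefault(prior_band(prior), []).append(current)

# A student "meets expected growth" if this year's score is at or above the
# 40th percentile of students who had similar scores last year -- that is,
# in the top 60 percent of that comparison group.
def meets_expected_growth(prior, current):
    peers = sorted(bands[prior_band(prior)])
    cutoff = peers[int(0.40 * len(peers))]  # rough 40th-percentile cutoff
    return current >= cutoff

share = sum(meets_expected_growth(p, c) for p, c in students) / len(students)
print(f"Statewide share meeting expected growth: {share:.1%}")  # lands near 60%

Whatever numbers you feed in, the statewide share lands near 60 percent, because the cutoff is defined relative to the students themselves. Only individual schools can move meaningfully above or below that mark.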

Here are some questions I'm puzzling about, shared for anyone who has insight or other ways to think about the issues:
  • Why did elementary schools show so little change in Achievement for all students?
  • What did all levels do that helped the Gap group show quicker movement?
  • What part of the high school Readiness gain is due to changed ACT results, and what part to growing participation in the other tests that also count toward identifying kids as ready for college, career, or both?

