Monday, October 11, 2010

Testing troubles (in New York)

The Times has a long report up about the assessment uproar in the Empire State.

The basics seem to be that the state's education department raised the bar this year, so that students who would likely have been counted as proficient on recent past tests are not counted as proficient now. Where New York City in particular had been celebrating rapid improvement in proficiency levels, the shift came as a shock.

Behind the basics, it sounds like this year's change was driven by weakness in the earlier test design.  The state aimed for a very short and narrow assessment, where everyone knew which content would be included and students could finish quickly. The state also aimed for transparency, releasing each year's test soon after it was given.

As a result, the quick route to higher scores was to drill students on the released items and other very similar work--and that only equipped students to succeed on that narrow kind of test. When the state switched assessments this year, much of the apparent gain did not carry over.

The most important Kentucky lesson may be about raising standards.   New York in 2010 is working through a shift like the one we have planned for 2012: we're going to test our students against higher standards and need to expect the reports to show lower levels of success.

New Yorkers appear to have been widely shocked by what they've learned from the process. In Kentucky, we should aim for our colleagues and neighbors to be well prepared for the change: expecting the new results to be a frank statement of the challenges we need to meet in the coming years, but entirely prepared to face those "brutal facts" and get to work changing them.

1 comment:

  1. The NY issue also demonstrates the arbitrariness of criterion-referenced cut points and "attainment"-based ways of looking at student data.

    These widely used methods of saying what % of students are "proficient" have been used to unfairly vilify schools that serve the neediest students. This borders on the immoral (in my opinion).

    These same schools that appear to be failing under an attainment-based approach may actually be (and frequently are) achieving tremendous success with kids when a growth or value-added metric is applied. Not that these methods are perfect, but at the school level they are far superior to simple attainment-based approaches.

    We should have no problem with raising academic standards and encouraging instruction to meet those high standards. We should have a problem with arbitrary cut points and unsophisticated data analysis.

    Thanks much for this piece - I enjoy your blog.

    Jason Glass
    Columbus, OH

