In 2007-08, the Department organized a pilot version of the review process for arts and humanities. From every account I've heard, it was a serious process, requiring a meaningful investment of school and district time and pushing toward an important upgrade in the quality of arts teaching and learning. Here's a description (taken from "Preliminary Information--Part 1" here):
The program evaluation/review process involved a year-long self-evaluation conducted by a team representing all stakeholders at the school. The school team followed a protocol provided by the Kentucky Department of Education that included guidelines for establishing the school program review team, training the team, how to create the mandatory evidence files, and the definition of acceptable evidence along with the process for collecting it.

Nothing in that sounds like a soft or easy process. Here's hoping the official versions we implement in the coming years match it in rigor and attention to staff capacity.
Teams used a program evaluation/review tool to measure the level of program implementation against standards and indicators of quality. During the school year the team was assigned to collect hard evidence demonstrating how the school’s instructional program compared to the quality descriptions provided in the program review tool. Ratings of performance describe implementation at four levels: little or no implementation, limited or partial implementation, full implementation, and exemplary implementation.
The evaluation team met regularly to compare evidence and work toward consensus on a rating for their school program. Once the team decided on a level, the corresponding number for that rating was automatically added to the raw score. In this manner the evaluation tool and review process generated a raw score that could be converted into the school accountability score, or accountability index.
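To make the mechanics concrete, here is a minimal sketch of the kind of conversion described above. The four level names come from the description, but the numeric values (0–3), the summing of indicators, and the 100-point conversion are all assumptions for illustration; the quoted text does not specify the Department's actual formula.

```python
# Hypothetical rating-to-score conversion. Level names are from the quoted
# description; the 0-3 point values, the simple sum, and the 100-point
# conversion below are illustrative assumptions, not KDE's actual formula.

LEVEL_SCORES = {
    "little or no implementation": 0,
    "limited or partial implementation": 1,
    "full implementation": 2,
    "exemplary implementation": 3,
}

def raw_score(consensus_ratings):
    """Map the team's consensus level for each indicator to a number
    and combine them into a raw score (assumed aggregation: a sum)."""
    return sum(LEVEL_SCORES[rating] for rating in consensus_ratings)

def accountability_index(raw, max_raw, scale=100):
    """Assumed conversion: express the raw score as a share of the
    maximum possible score on a 100-point accountability scale."""
    return scale * raw / max_raw

# Example: a team's consensus ratings on three hypothetical indicators.
ratings = [
    "full implementation",
    "limited or partial implementation",
    "exemplary implementation",
]
raw = raw_score(ratings)                                  # 2 + 1 + 3 = 6
index = accountability_index(raw, max_raw=3 * len(ratings))
```

The point of the sketch is only that the consensus judgments, not the auditors, generate the number: the team's qualitative decisions are converted mechanically once a level is chosen.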
The program review process enabled schools to identify their strengths and weaknesses. Immediate feedback gave each school direction for creating improvement plans, since the school learned exactly what was needed to improve. The process included support for job-embedded professional development while also providing a road map for improvement of the instructional program.
A system of audits was used to support the validity of the review process. Audit teams had access to the school evidence file and program review results, and made comparisons between the school’s determined score and the hard evidence collected to support those decisions. Audit teams followed a prescribed process to determine whether or not the school accountability score was accurate.