Tuesday, December 21, 2010

Urban districts that deliver (from Broad)

Of 100 large urban districts, which ones deliver achievement above state average for historically disadvantaged students?

The Broad Foundation did the math and shared its findings. Here's who delivered a higher percent of students meeting state standards than their own state's average for other schools and districts:

African-American, Hispanic, and Low Income Students Above State Average
Elk Grove, CA
Gwinnett County, GA (Winner of the 2010 Broad Prize)
Long Beach, CA
Northside, TX

African-American and Low-Income Students Above State Average
Cobb County, GA
Fairfax County, VA
Garland, TX
Mobile, AL
Montgomery County, MD
San Diego, CA

African-American and Hispanic Students above State Average
Broward County, FL
Charlotte-Mecklenburg, NC
Killeen, TX
Riverside, CA

Hispanic and Low-Income Students Above State Average
Garden Grove, CA
Mesquite, TX
Socorro, TX
Ysleta, TX

African-American Above State Average
Corpus Christi, TX
El Paso, TX
Guilford County, NC
Wake County, NC

Low-Income Above State Average
Alief, TX
Mesa, AZ
San Francisco, CA

Hispanic Above State Average
District of Columbia (compared to "any other schools," including charters)
Miami-Dade County, FL
Saint Paul, MN
Seattle, WA

New York City, Brownsville, TX, and Aldine, TX, were excluded from the analysis because they were the three most recent winners of the Broad Prize and ineligible for the 2010 award.

Monday, December 20, 2010

It happens in teams (Weingarten on football)

In Newsweek's new joint interview of Bill Gates and the AFT's Randi Weingarten, I love this paragraph:
Football teams do this all the time. They look at the tape after every game. Sometimes they do it during the game. They’re constantly deconstructing what is working and what isn’t working. And they’re jettisoning what isn’t working and building up on what is working, and doing it in a teamlike approach. We never do that investment in public schooling. What’s happening in Finland is they do that investment in the graduate schools of education before people become teachers. They recruit a very select group of people who become teachers. Now it is also true that Finland has a 5 percent poverty rate and the United States has a 20 percent poverty rate. But there’s this notion of really figuring out what the best teachers do and trying to scale that up.
That's from Ms. Weingarten, not Mr. Gates.  It's a great interview, with both voices focused on stronger education for America's students.

He's got a sound point about making annual formal evaluations more substantive, reflecting the Gates Foundation's deep investment in measuring effective teaching--an investment that even includes new video tools that produce something very like the game tapes used in football.

She's got a sound point, too.  Her focus is on year-round cycles of continuous improvement organized primarily by professional colleagues. At the school level, teams that work together to build skills and raise results are often called professional learning communities, and that sort of environment is often measured through surveys on working conditions.  She's also arguing for similar loops on a larger scale, both for preparing teachers and for defining standards, curriculum, and testing methods.

While both takes are important and the two strategies certainly should be deployed, Ms. Weingarten's emphasis on the continuous, team-based feedback process looks to me like the more important part of the total change we need to create.

Sunday, December 19, 2010

Slow blogging ahead

As the year ends, I'm watching the schools starting their breaks, the news slowing down, and my own house filling up with family, friends, and Christmas lights.  PrichBlog will be back with full strength on January 2, but until then posting may be pretty light.

Thursday, December 16, 2010

Kentucky Surges in National Board Certification

The Education Professional Standards Board shares the good news by press release:
Kentucky Ranks 6th Nationally in NBPTS Certification Results
Frankfort, KY (December 15, 2010) - Teacher quality reaches a new milestone in Kentucky with today’s announcement by the National Board for Professional Teaching Standards (NBPTS) that 316 Kentucky teachers were awarded the prestigious National Board Certification® in the class of 2010. This achievement recognizes these educators as among the top in the nation and promises to improve student learning in classrooms statewide. Kentucky now totals 2,156 teachers who have attained National Board Certification and is ranked 12th nationally for the total number of NBCTs.
Kentucky consistently ranks among the top in the number of new NBCTs. “Kentucky’s National Board Program is considered a model by many other states,” said Dr. Phillip Rogers, Executive Director of the Education Professional Standards Board. “I commend these teachers for their extraordinary commitment to teaching and for achieving the highest credential in the teaching profession.” Upon successful completion of the National Board process, Kentucky teachers currently holding a Rank II certificate are eligible to apply for Rank I and may serve as mentors for new Kentucky NBPTS candidates.
The number of National Board Certified Teachers nationally has increased more than 90 percent over the past five years. Nationwide, 8,600 teachers achieved National Board Certification this year, bringing the total certified up to 91,000. The top ten states with the highest number of teachers achieving National Board Certification this year are North Carolina (2,227), Washington (1,272), Illinois (771), South Carolina (498), California (342), Kentucky (316), Maryland (302), Arkansas (290), Florida (273) and Oklahoma (225).
National Board Certified Teachers are changing the culture of learning in the classrooms, schools and districts. National Board Certification is voluntary and open to all educators who have a baccalaureate degree and three years of classroom experience in either a public or private school. National Board Certification, for veteran teachers, is a rigorous 10-part performance assessment that includes video portfolios, analysis of classroom practice and evaluation of content knowledge. Take One!, for educators at all levels, is professional development organized around the National Board’s professional teaching standards and core propositions. Teachers who have participated in National Board Certification have overwhelmingly stated it is the most powerful professional development experience of their careers.

Funding, curriculum, teaching: Darling-Hammond on high-achieving nations

In the search to understand countries with stronger education results than our own, Linda Darling-Hammond argues that “high flyers all have equitable funding, shared curriculum, and quality teaching.”

In the new issue of American Educator, Dr. Darling-Hammond writes:
These more equitable investments made by high-achieving nations are also steadier and more focused on key elements of the system: the quality of teachers and teaching, the development of curriculum and assessments that encourage ambitious learning by both students and teachers, and the design of schools as learning organizations that support continuous reflection and improvement. With the exception of a few states with enlightened long-term leadership, the United States, by contrast, has failed to maintain focused investments in any of these essential elements.
The result is that the United States is standing still while more focused and steadfast nations move rapidly ahead.
With ample footnotes, she goes on to argue that Finland, Singapore, and South Korea all:

  • “Fund schools adequately and equitably, and add incentives for teaching in high-needs schools.”
  • “Organize teaching around national standards and a core curriculum that focus on higher-order thinking, inquiry, and problem solving through rigorous academic content.”
  • “Eliminated examination systems that had once tracked students into different middle schools and restricted access to high school.”
  • “Use assessments that require in-depth knowledge of content and higher-order skills.”
  • “Invest in strong teacher education programs that recruit top students, completely subsidize their extensive training programs, and pay them a stipend while they learn to teach.”
  • “Pay salaries that are equitable across schools and competitive with other careers, generally comparable to those of engineers.”
  • “Support ongoing teacher learning by ensuring mentoring for beginning teachers and providing 15 to 25 hours a week for all teachers to plan collaboratively and engage in analyses of student learning, lesson study, action research, and observations of one another’s classrooms, which help them continually improve their practice.”

Her article provides rich details for each strategy and important options for future state and national education strategies.  I recommend the full document highly as a starting point for thinking about 2011 and beyond.

Tuesday, December 14, 2010

Measuring effective teachers: the student component

Student perceptions are a serious part of the giant Measuring Effective Teachers project, already discussed in recent posts here, here, and here.  For that part of the analysis, the study used the Tripod survey, an instrument developed by researcher Ron Ferguson to look at "the extent to which students experience the classroom environment as engaging, demanding, and supportive of their intellectual growth."  The initial MET report confirms the predictive value of that student data:
When a teacher teaches multiple classes, student perceptions of his or her practice are remarkably consistent across different groups of students. Moreover, student perceptions in one class or one academic year predict large differences in student achievement gains in other classes taught by the same teacher, especially in math. In other words, when students report positive classroom experiences, those classrooms tend to achieve greater learning gains, and other classrooms taught by the same teacher appear to do so as well.
Student feedback need not be a popularity contest. We asked detailed questions about various aspects of students’ experience in a given teacher’s classroom. Some questions had a stronger relationship to a teacher’s value-added than others. The most predictive aspects of student perceptions are related to a teacher’s ability to control a classroom and to challenge students with rigorous work.
Students’ perceptions have two other welcome characteristics: They provide a potentially important measure that can be used in nontested grades and subjects. In addition, the information received by the teacher is more specific and actionable than value-added scores or test results alone.
Those results also illustrate two other important aspects of the MET work in progress.  First, the study is looking at how well various indicators predict teachers' ability to raise student results, aiming to find combinations of multiple elements that do even better than the strongest individual components.  Second, the study is looking for useful feedback that gives a teacher concrete ideas about how to change strategies to get higher future results, going beyond simply reporting on whether past results were high enough or added the desired level of value to students' past achievements.

Monday, December 13, 2010

Measuring effective teachers: more on who's doing the work

The Measuring Effective Teachers Project involves teachers from Charlotte-Mecklenburg, Dallas, Denver, Hillsborough County (which includes Tampa), Memphis, and New York City, and the work is directed by Thomas Kane and Steven Cantrell of the Bill & Melinda Gates Foundation.

The MET report issued last Friday identifies a wide array of other experts contributing to the effort.  I'm not familiar with every name, but I do see very nearly all the organizations and individuals I've heard Kentucky educators mention as insightful scholars and effective organizations working in this field.

As "lead research partners," the report lists:
  • Mark Atkinson, Teachscape
  • Nancy Caldwell, Westat
  • Ron Ferguson, Harvard University
  • Drew Gitomer, Educational Testing Service
  • Eric Hirsch, New Teacher Center
  • Dan McCaffrey, RAND
  • Roy Pea, Stanford University
  • Geoffrey Phelps, Educational Testing Service
  • Rob Ramsdell, Cambridge Education
  • Doug Staiger, Dartmouth College
As "key contributors," the report identifies:
  • Joan Auchter, National Board for Professional Teaching Standards
  • Charlotte Danielson, The Danielson Group
  • Dan Goldhaber, University of Washington
  • Pam Grossman, Stanford University
  • Bridget Hamre, University of Virginia
  • Heather Hill, Harvard University
  • Sabrina Laine, American Institutes for Research
  • Catherine McClellan, Educational Testing Service
  • Denis Newman, Empirical Education
  • Raymond Pecheone, Stanford University
  • Robert Pianta, University of Virginia
  • Morgan Polikoff, University of Southern California
  • Steve Raudenbush, University of Chicago
  • John Winn, National Math and Science Initiative

Sunday, December 12, 2010

Averaging 2006, 2007, and 2008 results, Towson University in Maryland graduated:
  • 67 percent of its white students
  • 67 percent of its African-American students
  • 70 percent of its Hispanic students
That's gold-standard work, well above the national average of 55 percent and with no gap between white and black results.  Today's Washington Post explores how Towson makes it happen:
In 10 years, according to school data, Towson has raised black graduation rates by 30 points and closed a 14-point gap between blacks and whites. University leaders credit a few simple strategies: admitting students with good grades from strong public high schools, then tracking each student's progress with a network of mentors, counselors and welcome-to-college classes.
"Regardless of your background, there's people here for you who understand what you're going through," said Kenan Herbert, 23, an African American Towson senior from Brooklyn, N.Y.
Towson's president sounds like my favorite Kentucky superintendents when he takes institutional responsibility for institutional outcomes:
"The goal has been, if you take them in, you should graduate them," said Robert Caret, Towson president since 2003.

Measuring effective teachers: first findings

The Measuring Effective Teachers Project published some initial results Friday.  This giant project involves almost 3,000 teachers in seven urban school districts who volunteered to test themselves, test their students, survey their students, and be observed by video and scored against an array of different evaluation rubrics.  From the first year of work, the project reports four major findings:
First, in every grade and subject we studied, a teacher’s past success in raising student achievement on state tests (that is, his or her value-added) is one of the strongest predictors of his or her ability to do so again.
*     *     *
Second, the teachers with the highest value-added scores on state tests also tend to help students understand math concepts or demonstrate reading comprehension through writing.
*     *     *
Third, the average student knows effective teaching when he or she experiences it.
*     *     *
Fourth, valid feedback need not be limited to test scores alone. By combining different sources of data, it is possible to provide diagnostic, targeted feedback to teachers who are eager to improve.
It's a little disappointing that the first report shares nothing from the video observation part of the project, especially after the method (as opposed to its results) so recently drew national coverage.  Instead, we'll need to wait until:

  • Late spring 2011 for findings from the classroom observations.
  • Late summer 2011 for a proposal on how to weight the various factors in stronger overall evaluations.
  • Early 2012 for reports on how participating teachers changed student achievement when working with students assigned to them by random sample. (This first report used data from students assigned to each teacher by their own school’s regular approaches, both during the study and—where possible—during the previous school year.)

Friday, December 10, 2010

Kentucky's higher education puzzle?

How can our spending per student be in the top quarter of states, and our graduation rate in the bottom quarter?  That's what I see in the maps below, created by www.higheredinfo.org with data and details here and here.  There may be good reasons for these results--and there's certainly good reason to encourage public discussion of how both families and taxpayers can get a better return on their higher education investments.


Thursday, December 9, 2010

The beginning of stable standards?

Today's Messenger-Inquirer editorial expresses concern about Kentucky's repeated decisions to replace standards and assessment:
For years, educators repeated that 2014 date over and over again as the time when Kentucky would really be able to see the rewards of its education reform efforts. Only something happened along the way.
Whether it was political maneuvering, the fact it became clear too many schools would not reach that goal, or more likely a little of both, Kentucky decided to scrap the CATS test -- and essentially say the 2014 "finish line" was merely an oasis, and the state needed a fresh start on holding schools accountable.
The legislature voted to devise a new test and a new accountability system, and it's this system that Holliday now wants to use as a replacement for federal adequate yearly progress requirements.
But how long until these "new" standards are thrown out as well? What happens when the political winds shift again, or if the results don't paint the type of picture education officials are expecting?
Parents, educators, business leaders and anyone else who cares about the quality of education needs to have an accountability system that they can trust to actually have some meaning. That doesn't mean that tests and standards can't be tweaked to reflect changes in core content or student demographics.
But it's time to determine what the standards will be, set goals with firm deadlines, and then continue on that path long enough to determine which schools are succeeding, which are broken, and what needs to be done to fix them. Otherwise, we're just left with a bunch of test scores, but no real accountability.
The concern about moving the goal posts is well taken--and a key reason why the Prichard Committee, the Council for Better Education, and the Kentucky Association of School Councils continue to offer Transition Index data that comes as close as possible to sustaining our old accountability system until the new one kicks in.

Still, the future may be better than the past on this issue, because our next system will have two new built-in reasons to stay focused and avoid frequent changes in direction.  First, there is broad public support for the fact that our new literacy and mathematics standards will be consistent with those used in more than forty other states, making it hard to explain going back to a one-state approach.  Second, by 2015, we expect to share a testing system with many of those other jurisdictions, giving us results we can compare nationwide and lots of cost savings into the bargain, and that too will be hard to give up.

Both factors will promote a stable system of goals and consequences, allowing the main debate to be about the best ways to ensure that all schools move steadily toward delivering for all students.

An end to AYP?

Once Kentucky adopts a new state accountability system, Commissioner Holliday plans to ask federal permission to use that system in place of the No Child Left Behind rules requiring Adequate Yearly Progress, or AYP, for each student demographic group served in each school.  Brad Hughes of the Kentucky School Boards Association reported earlier this week on how Dr. Holliday shared the idea with the KSBA Board:
“We want a better measure than AYP,” the commissioner said. “We’re pushing hard to get a comprehensive accountability model that measures not only the proficiency rates, but also closing achievement gaps, tracking every student’s growth, recognizing teachers whose students grow one or two grade levels when they were three grade levels behind to start with. We’ve been calling those folks failures. We need to praise them and tell them what a great job they’re doing in helping grow the children.”
In an interview following his remarks to the KSBA board, Holliday said he’s very optimistic that the AYP waiver can be obtained.
“We think (the Obama administration) is very open to replacing AYP,” he said. “We think Kentucky will be the first state to take that waiver request forward, but we think there will be quite a few others. It’s all based on the new common core standards and growth models that could replace AYP.”
Notice how this idea relates to our new, more demanding content standards.  Higher standards almost certainly mean that fewer students and fewer schools will measure up in the first few years of our new system.  In seeking the waiver, Kentucky can argue that our new goals will be both tougher and fairer than the NCLB expectations.

Tuesday, December 7, 2010

ACT hints at scale of Common Core challenge

When we start assessing students against the Common Core Standards, what kind of results will we see?  ACT, Inc., has just offered a set of blunt estimates.  Using results from the states where all students participate in the ACT (including Kentucky), the report projects that in the first literacy testing of 11th graders:

  • 38 percent will meet the new standards in reading.
  • 51 percent will meet the new standards in writing.
  • 53 percent will meet the new standards for language.
Since the Common Core calls for added focus on informational text and on literacy skills that work for specific fields of study, the report also estimated 11th grade results in those areas, projecting that:
  • 24 percent will meet the standards for literacy in science.
  • 41 percent will meet the standards for literacy in social studies.
  • 38 percent will meet the standards for informational text.
  • 37 percent will meet the standards for literature.
Mathematics is exactly as grim, with 11th grade projections that:
  • 34 percent will meet the mathematics standards for number and quantity.*
  • 42 percent will meet the mathematics standards for functions.
  • 37 percent will meet the mathematics standards for statistics and probability.

The rationale for the Common Core has always been that American schools need to aim higher and American students need to achieve at higher levels.  This preliminary study provides a first glimpse of how much work we have ahead.

* The ACT mathematics categories come with short explanations of what's in each subdomain.  Number and quantity includes the real number system, quantities, the complex number system, and vector and matrix quantities.  Functions includes interpreting functions; linear, quadratic, and exponential models; and trigonometric functions.  Statistics and Probability includes interpreting categorical and quantitative data; making inferences and justifying conclusions; conditional probability and the rules of probability; and using probability to make decisions.

Sunday, December 5, 2010

NYTimes spots Gates video effort, omits research purpose

Friday's New York Times describes the video technology being used for the Measuring Effective Teachers initiative funded by the Bill & Melinda Gates Foundation.  It's an interesting piece about the efficient use of 360-degree cameras to get a panoramic view of how everyone in the room is affected by a given day's lesson, and the reporting shared several expert opinions on how the technology might move into direct use in teacher evaluations.

That said, I see the article as focusing more on the video than on the bigger project the video will support.  Months back, I blogged about the Foundation's own description of the project, which describes the video cameras as a way to let multiple observers apply several different respected rubrics for classroom observation.  Live observations on the needed scale would be unduly disruptive of ordinary classroom work, which is why the camera strategy was created.

Along with the varied ratings of the videos, the study is gathering data from student surveys about classroom processes, teacher surveys about working conditions, and teacher tests of content knowledge and content teaching methods.

Next year, each of those indicators will be correlated with several kinds of data on student growth compared to this year's achievement.

With that huge collection of data, the project will be able to offer answers on a really big question:  How well does each of the gathered indicators (teacher knowledge, surveys, and the different observation protocols) relate to actually delivering student growth?


The NYT coverage could leave the impression that the main point is finding a way to videotape teachers.  Far from it. The video is mainly a tool for developing richer and more important insight into which measurement tools best identify teaching that changes student performance.

Note: The Prichard Committee and I personally have not been involved in the implementation of this Effective Teachers part of the Gates Foundation's education investments.  However, we are working on several of the Gates Foundation's College-Ready Work initiatives (see information here, here, and here), and I have been included in several briefings on the progress of this measurement effort.

Has the C-J spoken incorrectly on a second factual issue (JCPS novice results)?

In an earlier post today, I explained why I believe the Courier-Journal's editorial this morning incorrectly described Jefferson County's proficiency trends. In this post, I turn to another claim in the same editorial, this time the one that says "students rated novice have dropped sharply."

For that statement as well, I respectfully submit that the editorial has not accurately described the facts.

Far from dropping sharply, the percent of Jefferson students scoring at the novice level increased from 2007 to 2010 at every level in reading, mathematics, science, and social studies and at the elementary level in writing. The only novice results that have a net three-year decline are middle and high school writing, and while the 14.5 percent decline for high schools is a large one, the 0.24 percent shift for middle schools is not a drop to which the modifier "sharp" can reasonably be applied.

The table below shows the full results, highlighting individual years when the percent novice went up in each subject as well as the 2007 to 2010 net increase in novice performance in nearly all subjects:


As I wrote in the earlier post, I will, of course, gladly consider any data the Courier-Journal may have relied on and will update this report if I have overlooked a way to analyze the results which would justify the claim that novices "dropped sharply."  Unless and until I see such an analysis, I respectfully submit that Jefferson County has seen a net increase, not a sharp drop, in students scoring at the novice level in nearly every subject at nearly every level.

Source note: The data reported above came from the Kentucky Department of Education's 2009-10 Interim Performance Report: Jefferson County Public Schools, run date 11/2/2010, available here, with my arithmetic combining the proficient and distinguished percentages shown here.

Has the Courier-Journal spoken incorrectly on a point of fact?

I believe the Courier-Journal included an untrue statement in this morning's "Stop and Think" editorial.  Addressing the Jefferson County School Board's decision not to renew Superintendent Berman's contract, the editorial asserts that "percentages of students testing proficient in basic academic skills have risen steadily."

I respectfully submit that since 2007, proficiency levels in Jefferson County Public Schools have risen steadily only in high school writing.  In every other tested subject at every level, proficiency declined in one or more of the last three years.  Looking at the three-year change, proficiency is flat for elementary reading, and down for mathematics in high school, writing in elementary and middle schools, and science and social studies at all three levels.  The district saw net three-year proficiency gains only in elementary and middle school mathematics and in high school writing. I see no reasonable way to characterize Jefferson County proficiency results as having "risen steadily" in recent years.

Here are the district's elementary level results for reading and mathematics for the last four years:
Reading results went down in 2008 and 2009, with a 2010 return to exactly the 2007 level.  Mathematics results went up in 2008 and down in 2009 and 2010, with a net increase of 1.34 percent compared to 2007. Neither trend could be called rising steadily.

Here are the middle school results, showing reading results going down, down, and up, with a net decline of 1.23 percent from 2007.  Mathematics results went up, up, and down, with a net gain of 1.41 percent compared to 2007. 
At the high school level, reading went up, down, and up, with a net result 0.53 percent lower than 2007.  Mathematics went down, up, and down, with a net result 3.26 percent lower than 2007.

Overall results, combining all three levels, show reading results down, down, up, with 2010 results 0.54 percent lower than 2007.  In mathematics, the trend was up, up, and then down, with the 2010 results 0.92 percent higher than 2007.


For science and social studies, 2010 results were lower at every level than in 2007.  For writing, 2010 results were lower at the elementary and middle levels but higher for high school, as shown in the table below.

I will, of course, be delighted to consider any data the Courier-Journal may have relied on and to update this report if I have overlooked a way to analyze the results which would justify the editorial's factual assertion.  Pending seeing such an analysis, I respectfully submit that Jefferson County "percentages of students testing proficient in basic academic skills" have been mostly stagnant or in decline in recent years.

Source note: Nearly all of the data reported above came from the Kentucky Department of Education's 2009-10 Interim Performance Report: Jefferson County Public Schools, run date 11/2/2010, available here, with my arithmetic combining the proficient and distinguished percentages shown here.  The one exception is that the graph combining all three levels uses data from the Kentucky Department of Education's No Child Left Behind Adequate Yearly Progress Reports for 2007, 2008, 2009, and 2010, available here.

Saturday, December 4, 2010

Graduation progress (counted the federal way)

Using the federal government's Averaged Freshman Graduation Rate (or AFGR) method, Kentucky moved much closer to the national average between 2003 and 2008.  Over the same years, only nine states showed greater graduation improvement using that approach to measuring graduations. The rates themselves look like this:

The AFGR method uses an estimate of the number of students who enter ninth grade in one year and then graduate four years later.  Because many state data systems have been unable to separate first-time ninth-graders from repeaters, the method estimates the first-time count by averaging a year's ninth grade with eighth grade from the year before and tenth grade from the year after.  The AFGR reflects the number of graduates in a given year divided by that averaged figure from three years earlier. Thus, the newer rates shown in the graph above come from this set of numbers:

Having shared that data, I'll note that the whole discussion of graduation rates is a troubled and confusing one and will stay that way until our student data system can indeed track a cohort from start of grade 9 to graduation.  Using the Infinite Campus data system, we expect that to be possible for the class of 2013.  

Until then, the AFGR approximation shows Kentucky much closer to the national average two years ago than we were seven years back.
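For readers who want to see the AFGR arithmetic worked out, here's a minimal Python sketch of the averaging described above. The enrollment and graduate counts are made up for illustration; they are not Kentucky's actual figures.

```python
# A sketch of the Averaged Freshman Graduation Rate (AFGR) calculation,
# using hypothetical enrollment and graduate counts.

def afgr(grade8_prior, grade9, grade10_next, graduates_4yrs_later):
    """Averaged Freshman Graduation Rate.

    The first-time ninth-grade count is estimated by averaging grade 8
    enrollment from the prior year, grade 9 enrollment, and grade 10
    enrollment from the following year.  Graduates four years later are
    divided by that estimate.
    """
    estimated_freshmen = (grade8_prior + grade9 + grade10_next) / 3
    return graduates_4yrs_later / estimated_freshmen

# Hypothetical cohort: 50,000 eighth graders in 2003-04, 54,000 ninth
# graders in 2004-05 (repeaters inflate this count), 46,000 tenth graders
# in 2005-06, and 37,500 graduates in 2007-08.
rate = afgr(50_000, 54_000, 46_000, 37_500)
print(f"AFGR: {rate:.1%}")  # 37,500 / 50,000 = 75.0%
```

Note how the averaging smooths out the inflated ninth-grade count: the estimate of first-time freshmen (50,000) is well below the raw ninth-grade enrollment (54,000).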

Source note: Data for this post comes from the "Building a Grad Nation" report issued recently by America's Promise, with backup details from National Center for Education Statistics reports on "The Averaged Freshman Graduation Rate for Public High Schools From the Common Core of Data: School Years 2002–03 and 2003-04" and "Public School Graduates and Dropouts From the Common Core of Data: School Year 2007–08 First Look."

Next Accountability?

I've tried at length to develop a summary of the Department of Education's new accountability proposal.  So far, I haven't met my own standards for being both brief and complete, because the model has a lot of moving parts to it.

With a Kentucky Board of Education discussion slated for Tuesday, December 7, I should alert readers that at this link, they can download and do their own analysis with:

  • A white paper offering "Goals and Guiding Principles"
  • A narrative summary of the model for accountability for student results, which includes achievement, gap reductions, growth, college and career readiness, and graduation data.
  • A draft regulation to enact that model.

Kentucky's Next Giant Steps (a fresh edition)

We've had several editions of the PrichBlog one-page summary of Kentucky's big push to raise student standards and results, and it's time for an update that leaves out the earlier discussion of Race to the Top and adds in basics of our learning and testing strategies.  Here's the new version (with an option to download for easy printing here):

FOUR GIANT STEPS FOR KENTUCKY EDUCATION
DECEMBER 2010 OVERVIEW OF KEY DEVELOPMENTS

SENATE BILL 1
Senate Bill 1, passed in 2009, requires Kentucky to upgrade its standards for what students will learn. Our new law says the standards must be shorter, clearer, and better focused on students being ready for college, work, and global competition. To match the new standards, Kentucky will use new tests starting in the spring of 2012. Current teachers will receive specialized training on how to teach the new standards well, and teacher preparation programs will equip future teachers with the same skills.

COMMON CORE STANDARDS SHARED BY MANY STATES
For language arts and mathematics, Kentucky has adopted the new Common Core State Standards. The Common Core offers a grade-by-grade statement of what students will need to be on track for college-and-career readiness when they finish high school. Because more than forty other states have adopted the Common Core, our expectations will be consistent with goals being used across most of the country, and as strong as learning standards used by the most competitive countries elsewhere in the world. Kentucky is also working with other states on shared science standards that should be available in late 2011, and on social studies standards that may take longer to complete.

ASSESSMENT FOR LEARNING STRATEGIES TO MEET THE STANDARDS
To meet those new college-and-career ready standards, teachers will need increasingly effective approaches to classroom work. One key strategy, called “assessment for learning,” uses classroom activities designed to identify next steps for each student to keep climbing toward the overall goal. When it is done well, assessment for learning makes classroom work more focused and effective, with students seeing each success as a reason to try even harder on the next set of work. Kentucky teachers from each school district are now studying those approaches in regional networks, and collaborating with local administrators to plan ways to share the methods with all their local schools. Teacher preparation programs are putting new emphasis on the same strategies. Research shows that the assessment for learning approach can have a big impact on overall achievement, with the most positive effect on the students who would otherwise be likely to fall behind.

NEW TESTING TO CONFIRM STUDENT SUCCESS ON THE STANDARDS
Kentucky will also use statewide testing to confirm that students are indeed on track to reach the new standards. The states that are using the Common Core Standards are also developing new methods to test and report student progress to parents, teachers, officials, and the general public. Those shared tests are being developed with large new federal grants and will begin in 2014 or 2015. For 2012, 2013, and maybe 2014, Kentucky will use a temporary test that matches the new standards but will not have all the strengths of the longer-term, multi-state testing methods.

Our new standards, classroom strategies, and statewide testing are all part of our Senate Bill 1 effort to deliver stronger results for all Kentucky students and build a stronger future for our entire state.

Back to blogging

I've just finished an amazing week of learning and building, working about 12 hours a day on exciting education projects.  Only, I didn't realize how intense it was going to be in advance, so I apologize for PrichBlog's unannounced silence.  I'll spend today blogging on some important developments.

Saturday, November 27, 2010

A compelling vision grows clearer

In 2000, Bob Sexton authored a great piece on "Engaging Parents and Citizens in School Reform," including this clear point on Prichard Committee strategy:
After the [1990] reform passed, reminding people about the original problem became the challenge. The strategy was to remind people about Kentucky’s historically low educational level and inject into the public bloodstream a compelling new vision —that of all children learning at high levels.
For 2010, I think that strategy still fits well, with three helpful changes in the terrain in which we need to apply it:
  • With the new Common Core standards, "learning at high levels" can be defined much more specifically as "becoming college and career ready."
  • The confidence that students can learn at those high levels can now be based on more complete statements of how that can happen, grounded in the "sunlit vision" of assessment for learning practices that move students toward those standards.
  • The confidence that teachers can deliver that high level of learning can now be based on more concrete statements about how, working in teams organized as professional learning communities, educators can help all teachers grow increasingly effective in their chosen craft.
Of course, achieving that vision will require that Kentucky implement the standards, apply the assessment for learning practices, and cultivate those professional learning communities: that's the work immediately ahead of us all.   Still, it's good to see the strategy as still sound and the vision as actually growing stronger after another decade of work.

Source note: Bob's essay appeared in All Children Can Learn, edited by Roger Pankratz and Joseph Petrosko and published by Jossey-Bass.  Paperback and Kindle versions are available here, and most (though not quite all) of Bob's chapter is available on-line here.

Friday, November 26, 2010

Paying for Senate Bill 1 (the hard way)

Some Senate Bill 1 costs will be paid this year by cutting what Kentucky school districts receive for professional development.

SB 1 is the 2009 legislation that called for Kentucky to adopt new, college-and-career-ready standards, assessments to support those standards, and new efforts to prepare teachers to deliver on the standards for all students.  It set an important and positive overall direction for Kentucky education, but it came without added dollars for the added work that will be required.  For a little while, the federal Race to the Top competition also seemed like a way Kentucky might find the resources to implement the bill's mandates, but Kentucky did not win a grant in either the first or the second round of the competition.

According to a "Fast Five on Friday" message sent to superintendents last week, the Department reallocated state-level funds to meet most of the SB 1 costs for this fiscal year, but still found itself "short of approximately $2.6 million necessary to implement all required components."  Holding back the professional development dollars will fund those other components.

It's good to see SB 1 implementation moving forward, but this is a painful way to make it happen!

[Hat tip: KSBA]

Thinking about cost per pupil

Stretching the School Dollar is a 2010 volume that offers a set of recent proposals on "how schools and districts can save money while serving students best." One intriguing idea, from Marguerite Roza, offers a method rather than an answer: she suggests analyzing districts, schools, programs, courses, and even individual sports on a per-unit or per-pupil basis. She gives many examples of how that approach helps everyone make comparisons and think more clearly about choices.

It's a convincing argument, and I'm going to try to apply it in blogging.  As a student count, I'll generally use the average daily attendance figure the Department uses for SEEK funding--though I'm open to reader arguments for using a different figure.

State K-12 funding for 2010-11, as budgeted in the spring 2010 special session, works out to per-pupil funding like this:
  • $3,763 for SEEK funding districts can use for overall schooling costs.
  • $2,011 for health insurance, life insurance, and retirement benefits for teachers and other school and district staff.
  • $371 for transportation.
  • $353 for new and renovated facilities.
  • $186 for categorical programs to provide targeted services to K-12 students: textbooks, gifted services, extended school services, family resource centers, and the like.
  • $89 for specialized schools, including the School for the Blind, the School for the Deaf, and vocational schools.
  • $37 for work that supports teaching quality, including district professional development, centers, and other programs and grants.
  • $27 for state assessments and interventions in schools with the weakest results.
  • $13 for the Education Professional Standards Board.
  • $8 for student support programs targeted for use in specific districts and regions.
  • $13 for other programs not listed above.
There's another $103 per pupil in the budget section for the Department of Education that I haven't fully figured out.  There are line items for KDE and for technology, and then an amount that isn't itemized, which I'm sure includes costs of both KDE operations and technology and network expenses.
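For readers who want to check the arithmetic, here's a quick Python sketch that totals the per-pupil amounts listed above (the category labels are my shorthand for the budget lines, and the $103 Department figure is the approximation noted above):

```python
# Totaling the per-pupil state funding categories listed in the post
# and showing each category's share of the total.

per_pupil = {
    "SEEK base": 3763,
    "insurance and retirement benefits": 2011,
    "transportation": 371,
    "facilities": 353,
    "categorical programs": 186,
    "specialized schools": 89,
    "teaching quality": 37,
    "assessment and intervention": 27,
    "Education Professional Standards Board": 13,
    "targeted student support": 8,
    "other programs": 13,
    "KDE operations and technology (approx.)": 103,
}

total = sum(per_pupil.values())
print(f"Total state funding per pupil: ${total:,}")
for name, amount in sorted(per_pupil.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: ${amount:,} ({amount / total:.1%})")
```

Run that way, the categories sum to $6,974 per pupil, with SEEK and employee benefits together taking roughly five of every six state dollars.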


As noted, that's a down payment, but it does already give me a sense I didn't have before of how facilities and transportation now dwarf the funding for categorical programs that provide other services.

Feel free to download my state funding calculations and send me questions and improvement ideas, and stay tuned as I work some more on figuring out what can be learned from using this approach regularly.

Monday, November 22, 2010

Feedback for students, feedback for teachers

Yes, American public education can deliver student achievement at higher levels, with smaller achievement gaps, in the years ahead--and it won't depend on a strange visitor from another planet who comes to Earth with powers and abilities far beyond those of mortal men.

For students, what matters most is classrooms where they get frequent, usable feedback on their progress.  They need a solid understanding of the main standards they're trying to reach, a clear picture of the next rung they need to climb on the way toward the standard, and effective tasks that let them build the skills that will get them to that next rung.  In the literature, that goes by many names, from "differentiated instruction" to "assessment for learning."  The most helpful feedback is not a grade or a score, but a concrete description of improvement seen in recent work and manageable improvement targets for the work the student will do next.  That sort of feedback produces a virtuous spiral in which success in each effort builds confidence for the next effort.  That kind of instruction has a powerful record of raising achievement for all students, but it has the biggest impact on students who are the most likely to fall behind in other settings.

For educators working to create that kind of classroom, what matters most is something very similar:  a team of colleagues who provide each other with frequent, usable feedback.  That kind of collaboration creates a "professional learning community" or a "PLC."  In those settings, teachers can analyze students' work, talk through which teaching approaches have produced improved results, and figure out next steps to raise the results even higher.  The most helpful feedback comes steadily, over many weeks and months, as part of the ongoing collaboration of a strong team: formal evaluations matter, but they alone cannot provide the steady, persistent, reliable conditions for all teachers to grow in their craft.

Notably, both the student version (assessment for learning) and the teacher version (PLCs) depend on sustained, local effort.  Whether in neighborhood schools, regional schools, magnet schools, charter schools, large schools, small schools, and even virtual schools, live people have to do the work in small teams.

Crucially,  these approaches cannot work because a legislature, a superintendent, or a principal orders them to happen.  They work only when teachers understand and commit enough to get first results, and then develop further understanding and commitment because the results keep coming.

That is why, like bloggers around the country, I'm not waiting for Superman.

Our children's academic futures will depend on dedicated, able teachers pulling together in teams that provide the feedback they need to find ways to give students the feedback that they, too, need to achieve at the high levels required for adult success.  In that equation, I'm confident that an overwhelming majority of American teachers have both the commitment and the capacity needed to succeed: if they can also receive consistent collegial support, I'm confident that both they and their students can achieve truly great things and deliver results more exciting than anything D.C. Comics can dream up.

Sunday, November 21, 2010

Hypothetical: Could keeping students in school mask progress?

Since 1992, we know that Kentucky high schools have managed to keep more students in school, and we know they've shown sluggish progress on raising proficiency, especially compared to the lower grades.  Recently, it occurred to me to wonder if the two patterns might interact.  Here's an illustration of something that could happen:
Version 1 and Version 2 both show 100 students entering ninth grade.  

In Version 1, 40 of the students are gone by eleventh grade, either as official dropouts or as students the school only knows are no longer there. In Version 2, only 20 are gone.  

Now look at proficiency on eleventh grade testing.  15 of the remaining 60 students in Version 1 get there, or 25 percent.  20 of the remaining 80 students get there in Version 2, also 25 percent.

And yet, I hope it's obvious that Version 2 is noticeably better.  More students are still in school, and more students are proficient or distinguished.  

I've added a wrinkle by showing that of the students who are not proficient, many more are in the apprentice category in Version 2: that's another way results can be better while percent proficient and above is unchanged.

The main idea I want to share, though, is that if a school simultaneously increases the number of students reaching proficiency and the number of students staying in school, the percent proficient could stay exactly the same.  The report could be "no progress," when the reality was better results for an important number of students.  

And the main thing I wish I could figure out is: what approach to the data would allow us to see if that, indeed, is a factor in the sluggish improvement of high school achievement defined in percentage terms?
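The illustration's arithmetic can be worked out in a few lines of Python, using the hypothetical counts from the two versions above:

```python
# The two hypothetical versions from the post, worked out in code.
# The counts are the illustration's, not real school data.

def proficiency_rate(still_enrolled, proficient):
    """Percent proficient among students still enrolled at grade 11."""
    return proficient / still_enrolled

# Version 1: 100 enter grade 9, 60 remain at grade 11, 15 proficient.
v1 = proficiency_rate(60, 15)
# Version 2: 100 enter grade 9, 80 remain at grade 11, 20 proficient.
v2 = proficiency_rate(80, 20)

print(f"Version 1: {v1:.0%} proficient")  # 25%
print(f"Version 2: {v2:.0%} proficient")  # 25%
# Same reported rate, even though Version 2 kept 20 more students in
# school and moved 5 more of them to proficiency.
```

Because the denominator (students still enrolled) grows along with the numerator (students proficient), the reported percentage can sit flat while real progress is happening on both fronts.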

Thursday, November 18, 2010

The ESS program: One-sixth of original size

Extended school services provides added instructional time for students who need added support to reach state standards.  Created by KERA, the program was originally budgeted at $53 million for fiscal year 1992.   That money was meant to support not only after-school tutoring but also strong summer school options.  The pullback began almost immediately: faced with recession in 1992, the General Assembly slashed the program to $29 million.  That lost funding never came back, and neither did the robust intentions of the original program.

If we'd sustained that original buying power by keeping up with inflation, the program would have received $81 million in FY 2009, the most recent year for which we have a final spending figure.  Instead, ESS received just $14 million that year, a tiny fraction of the original plan for the program.



Source note: ESS funding numbers come from the annual supplemental information to the state's financial reports.  1992 to 2005 amounts were published in "A Glass Half-Empty or Half-Full? An Overview of the State School Funding Landscape in Kentucky, 1990-2008," a white paper Stephen Clements and I prepared for the Prichard Committee in 2008. 2006 to 2009 amounts were taken from online reports available here. The buying power of the original $53 million was worked out by adjusting upward by the Consumer Price Index each year.
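For readers who want to replicate the adjustment, here's a minimal Python sketch of the compounding method the source note describes. The flat 2.5 percent annual rate below is an illustrative placeholder, not the actual year-by-year CPI series used for the post, though it happens to land near the $81 million figure:

```python
# Compounding a base-year dollar amount by a series of annual CPI
# changes, as described in the source note.  The rates here are a flat
# placeholder, not the actual CPI series.

def adjust_for_inflation(base_amount, annual_inflation_rates):
    """Compound a base-year amount by each year's inflation rate in turn."""
    amount = base_amount
    for rate in annual_inflation_rates:
        amount *= 1 + rate
    return amount

# $53 million in FY 1992, compounded over the 17 years through FY 2009
# at a hypothetical flat 2.5 percent per year:
adjusted = adjust_for_inflation(53_000_000, [0.025] * 17)
print(f"Inflation-adjusted value: ${adjusted / 1e6:.0f} million")  # $81 million
```

Swapping in the real CPI figures for each year is just a matter of replacing the placeholder list with the actual series.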

Monday, November 15, 2010

Achievement gaps and formative assessment

Formative assessment is a core gap-reduction strategy.  Fairly often, my shorthand summary of the method is that it's "the kind of teaching that raises achievement and shrinks achievement gaps."

The formative assessment approach organizes classroom work around teachers and students (and often parents) understanding how current work compares to important learning standards and planning next learning steps based on that information.  The research behind that approach shows that it raises achievement overall while providing the largest gains for the students who struggle most.

In what I've called the "sunlit vision" of that kind of classroom, that strong, shared approach allows a virtuous cycle of rising results, in which students, teachers, and parents see results that build confidence, develop confidence that promotes further results, and generate impressively higher levels of achievement.

In situations where the ugliest gaps persist and deepen, I suspect an alternative, far less healthy cycle is at work.  In that version, teachers doubt that students can succeed, students and parents doubt that teachers intend to help students succeed, everyone can smell everyone else's despair, and every round of student work becomes further evidence that there's little point in hoping and aiming any higher.  

I've heard too many educators and citizens say too easily that "some kids" or "our kids" or "you know, those kids" won't be able to meet any higher standards than their current grim level of achievement.  Often, the kids in question are from minority backgrounds,  but the same phrases are applied to children from low-income homes, children with disabilities, and even children who live in "urban" settings.  Often, the people saying those things would be terribly upset to hear their words described as prejudiced, hurtful, and ignorant--but they really believe there is no basis to believe anything better is possible, and they really are mistaken in that belief.

Formative assessment, understood as a rich classroom process, is the practice I think has the best chance of breaking that cycle, stopping that talk, and sustaining the work needed to deliver on each and every child's birthright to learn and grow into adult success.

Sunday, November 14, 2010

Formative assessment's basis in research

As "formative assessment" becomes an increasingly central concept in Kentucky education, we should be asking where to find the research on the strategy.  Here are some of the main sources that could help anyone looking for either basic evidence that the approach is effective or for a nuanced understanding of which approach delivers those important results.

Margaret Heritage's new report for the Council of Chief State School Officers provides a potent summary of the research on formative assessment, beginning with a research synthesis by Paul Black and Dylan Wiliam:
From their review, Black and Wiliam (1998b) proposed that effective formative assessment involves
• teachers making adjustments to teaching and learning in response to assessment evidence;
• students receiving feedback about their learning with advice on what they can do to improve; and
• students' participation in the process through self-assessment.
They concluded that the student learning gains triggered by formative assessment were amongst the largest ever reported for educational interventions with the largest gains being realized by low achievers (1998b). This was, and remains, a powerful argument for formative assessment.
Later in the paper, Heritage summarizes research from before and after that Black and Wiliam piece on the central role of usable feedback in accelerating student learning, including John Hattie and Helen Timperley's 2007 review of the research literature on feedback in students' learning process.  That body of work gives further support to her argument that the classroom process is what allows formative assessment to make a difference in student achievement.

Rick Stiggins' Balanced Assessment Manifesto draws from the same body of research:
When assessment for learning practices like these play out as a matter of routine in classrooms, as mentioned previously, evidence gathered from dozens of studies conducted around the world consistently reveals a half to a full standard deviation gain in student achievement attributable to the careful management of the classroom assessment process, with the largest gains accruing for struggling learners. (Black and Wiliam, 1998; Hattie and Timperley, 2007).
For those who want the short version, there are two main takeaway points. First, there is indeed serious research behind the formative approach.  Second, that research supports formative assessment rightly understood as "a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning."  (Thanks to Gene Wilhoit's "Foreword" to the Heritage paper for that especially succinct definition.)

For those who want to go a step deeper, into the original articles, the relevant citations are:
  • Black, P. J., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5, 7-73.
  • Black, P. J., & Wiliam, D. (1998). Inside the Black Box: Raising standards through classroom assessment. Phi Delta Kappan, 80, 139-48.
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81-112.

Lexile and Quantile data coming our way

Kentucky parents, teachers, and students will soon receive new and helpful insight into students' reading and math skills.  According to the Department of Education press release, the new data will feature:
  • Lexile measures for students' reading levels, which can be matched to Lexile measures of the difficulty of different texts, allowing both home and school reading activities focused on books that keep a child moving from current skill to the next level.
  • Quantile measures for math skills that use "a scientific approach to measurement that locates a student’s ability to think mathematically and solve problems in an orderly classification of math skills, concepts and applications." Like the Lexile information, the Quantile data will make it easier to provide tasks that keep each student moving upward in math work.
Lexile measurements will also help with our push toward college and career readiness.  One of the key ideas behind the Common Core Standards is that high school reading assignments, even in grade 12, have used much easier texts than students will face in college and in skilled jobs.  MetaMetrics  has revised its recommended reading range for each grade to ensure that students finish high school prepared for the real demand they'll face later on:
Having Lexile data available for each child will make it much easier to check progress against these goals and plan follow-up steps.  (MetaMetrics is the company that provides Lexile data, and the chart above is from Appendix A to the Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects. Thanks to Robyn Oatley of ReadyKentucky for alerting me to the chart.)

Saturday, November 13, 2010

A KIPP charter school struggling to deliver

KIPP (the Knowledge Is Power Program) is one of the most successful charter models nationally, making reports of trouble at its Indianapolis outpost worth attention.

The Indianapolis Business Journal account describes a program that may have embraced the KIPP vision but missed other key elements of school operations.  The school has struggled with financial requirements, from misusing Title I funds to poor record keeping and an imbalanced budget.  Staffing stability has also been a major problem, with five school leaders in seven years and 55 percent staff turnover last year.  In past years, student performance also fell short of expectations.

The Indianapolis school may have turned a corner in the last year or two, with yet another leadership change, new financial guidance, and test scores that rose impressively last year. This year, the city's mayor will decide whether that's enough to justify another seven-year charter.

The key truth under this story could be that creating a great charter school is a huge undertaking, requiring skill at recruiting staff, supporting staff, recruiting students, leading instruction, managing budgets, tracking paperwork and many other aspects of a complex endeavor.  Even the KIPP network can't guarantee that every branch it opens can combine all those skills well.

Actually, I think the key truth is that creating any great school is a huge undertaking.  

Charter legislation may make the effort slightly easier, but the main hard work of creating excellence is the same no matter who organizes, authorizes, and owns the school in question.

Thursday, November 11, 2010

Getting formative assessment right

Yesterday's post shared Margaret Heritage's report, Formative Assessment and Next-Generation Assessment Systems: Are We Losing an Opportunity? She argues that the formative assessment that produces greater student learning is a process embedded in ongoing instruction, and that the testing instruments being proposed by the new multi-state assessment consortia will yield much weaker results.

Today, I want to link that argument to three running PrichBlog themes.

THE SUNLIT VISION OF CHANGED CLASSROOMS
In Rick Stiggins' Balanced Assessment Manifesto, there's a powerful concept of students, teachers, and parents all seeing a clear path the student can climb to success on important learning standards.  That understanding creates a virtuous spiral in which students master one step, gain confidence, reach for another step, and cycle upward with growing knowledge, skill, and certainty that they can succeed.

When Margaret Heritage argues for a formative assessment process, I think she's arguing for the approach that can create that positive growth.

THE LEVERAGE OF PROFESSIONAL LEARNING COMMUNITIES
The professional learning community approach focuses on teachers working together to analyze student work in relation to standards and to figure out ways to keep each student moving forward.  The PLC environment is central to what works in raising teaching quality and developing consistent strong instruction for all students.

Formative assessment as a process also looks to me like the approach that will most help PLCs develop, flourish, and change student results.

THE LITERACY AND MATHEMATICS STRATEGIES 
NOW LAUNCHING IN KENTUCKY
Over the last year, Kentucky teachers have been developing capacity to use new mathematics resources, and another group of Kentucky educators are now exploring an innovative approach to equipping students to handle the complex texts they'll need for college and careers.  Both efforts are being supported by the Bill & Melinda Gates Foundation through grants to the Prichard Committee, and I'm honored to have a role in coordinating and supporting that work.

The math effort uses formative assessment lessons designed to help reorient class work around deeper, richer, more engaged understanding of mathematical concepts and practices.  The literacy work models teaching tasks and instructional strategies for use across science, history, literature, and other classes, always focused on ensuring that students climb steadily toward higher levels of skill and confidence.

Both versions are explicitly rooted in the research Heritage cites, and both offer strong examples of the kind of formative assessment process she advocates.

Seeing those three connections, I think it's going to be important to figure out whether the consortia really are working toward the effective version of formative assessment. If not, it's going to be important to discuss whether that can be changed, and especially important to build the formative approaches that really work into the ways we teach and learn here in Kentucky.

Wednesday, November 10, 2010

Are the testing consortia getting formative assessment wrong?

Effective formative assessment gives students and teachers feedback they can use immediately to steer further learning.  The key is providing descriptive evidence right in the middle of the classroom learning process and putting it right to work.

Are the new multi-state consortia working on Common Core State Standards Assessments building that kind of formative capacity? Maybe not.

Margaret Heritage argues that both consortia are instead proposing much more conventional testing that will yield much less important achievement results. Dr. Heritage is Assistant Director for Professional Development at the National Center for Research on Evaluation, Standards and Student Testing at UCLA.  In a new report to the Council of Chief State School Officers she argues that:
despite the pioneering efforts of CCSSO and other organizations in the U.S., we already risk losing the promise that formative assessment holds for teaching and learning. The core problem lies in the false, but nonetheless widespread, assumption that formative assessment is a particular kind of measurement instrument, rather than a process that is fundamental and indigenous to the practice of teaching and learning. This distinction is critical, not only for understanding how formative assessment functions, but also for realizing its promise for our students and our society.
The report comes with detailed research citations explaining why and how the formative assessment process can significantly raise student results, and raises an alarm about whether the two groups now working on multi-state assessments are focusing on testing instruments that cannot deliver that kind of impact.

Check out EdWeek's Curriculum Matters for a further summary or read the full argument by downloading Formative Assessment and Next-Generation Assessment Systems: Are We Losing an Opportunity?

For background on the assessment consortia, start with these earlier PrichBlog posts on Smarter/Balanced and PARCC.

Tuesday, November 9, 2010

Correcting a very bad headline

Earlier today, I posted the list of districts that have missed AYP for eight years running. I posted it too quickly, and so gave it a wrong headline that read "weakest districts."

Here's why I was wrong.  In the table below, the first two columns are the chart I shared in the earlier post, but I've added how the districts rank when sorted by that "combined percent proficient or distinguished in reading and math."

Covington and Jefferson are weak overall, in the bottom 30 of Kentucky's 174 districts when sorted by the combined reading and math statistic.

Campbell and Fayette, though, are in the top 30 overall, and thus strong in a pretty important sense, and Grayson, Bourbon, Simpson, and Bullitt are in the top half by the same statistic.

The sense in which all these districts are weak is that they've missed at least one NCLB goal in each of the last eight years. It's possible, as the numbers above show, to fall short for one group while delivering at relatively high levels for most students.

I've added an update note to that post, and I'll try to do more careful work in the future.

Districts slated for NCLB intervention [UPDATED]

UPDATE: I initially posted this information under the heading "weakest districts," but after a closer look, that was a bad choice.  My reasons for changing the headline are explained in my post here.

Thirteen Kentucky districts have fallen short of the adequate yearly progress required by the federal No Child Left Behind legislation for eight or more years.  Here's that list, sorted by each district's combined 2010 proficiency level in reading and mathematics:
According to this morning's press release from the Kentucky Department of Education, the weakest five (highlighted in pale orange above) will "receive district-level leadership assessments and targeted assistance from KDE and will work in partnership with Educational Recovery Directors and other KDE staff to develop and implement corrective action plans."

For the other eight, KDE will provide technical assistance as they "develop their corrective action plans and deferred programmatic funds budgets" and "submit quarterly progress reports to KDE."

Monday, November 8, 2010

2010 persistently-low achieving schools

Ten additional Kentucky schools will undergo a leadership review and then implement a major school change process to raise scores.

  • East Carter County High School
  • Christian County High School
  • Greenup County High School
  • Martin County's Sheldon Clark High School
  • And six Jefferson County High Schools: Doss, Fairdale, Iroquois, Waggener, Southern, and Seneca
That news is from a Department of Education press release issued this morning.  

Schools identified as persistently low achieving have fallen short of adequate yearly progress for multiple years and then have the weakest scores this year.  

(More exactly, the list includes the five weakest schools that receive Title I money based on high enrollments of low-income students, and the five weakest schools that do not receive that funding.  On the list above, Seneca High and the four schools outside Jefferson County are in the non-Title I group, while the other five Jefferson schools compose the Title I set.)

Friday, November 5, 2010

More college students meet readiness goals


Kentucky higher education is seeing a clear decline in the percent of entering students whose ACT scores could place them in developmental, rather than credit-bearing, courses. The chart above reflects recent Kentucky high school graduates (from public and private schools alike) entering Kentucky colleges and universities (both public and non-profit) over the past several years. The Council on Postsecondary Education included this news in a great set of high school feedback reports released today.

As an added plus, that trend happened in a period when the number of students going on to college was growing steadily: the students who met the readiness standards were a growing share of a growing group.

As an important caution, the 2010 entering class will face higher readiness standards at our public institutions. The chart above reflects students who fell short of an 18 score on the English, mathematics, and reading portions of the ACT. The new standards require an 18 in English, a 19 in mathematics, and a 20 in reading.
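To make the change in cutoffs concrete, here is a minimal sketch of the old versus new thresholds described above. The function name and the sample student's scores are my own invention for illustration; only the cutoff values come from the post.

```python
# Old rule: 18 on each ACT section; new rule raises math and reading.
# Scores below a cutoff route a student to developmental (non-credit)
# coursework in that subject.
OLD_CUTOFFS = {"english": 18, "mathematics": 18, "reading": 18}
NEW_CUTOFFS = {"english": 18, "mathematics": 19, "reading": 20}

def needs_developmental(scores, cutoffs):
    """Return the subjects where a student falls below the cutoff."""
    return [subject for subject, cut in cutoffs.items() if scores[subject] < cut]

# A hypothetical student who clears every old cutoff exactly or better...
student = {"english": 18, "mathematics": 18, "reading": 19}
print(needs_developmental(student, OLD_CUTOFFS))  # no developmental courses
print(needs_developmental(student, NEW_CUTOFFS))  # short in math and reading
```

The point of the sketch: a student can be "college ready" under the old rule and not under the new one, which is why the trend line above can't be extended straight into the 2010 entering class.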

The same CPE website page now offers high school feedback reports for each school, district, and region of the state, and readers should definitely check these fresh results.

Fall Perspectives

The Prichard Committee's newest Perspectives newsletter is available now, featuring these headlines:

  • KY Moving Toward Top 20 in Many Areas
  • 'ReadyKentucky' Promotes Standards
  • Words & Influence: Robert F. Sexton, 1942-2010
  • Groups Publish Transition Index Data
  • Gates Foundation Hears Prichard Lessons
  • CIPL Institutes Enlist 57 to Start 13th Year
  • Institutes In Other States Building Traditions
  • Districts Launch Gates-Backed Literacy Work
  • Two Prichard Members Named to State Board
  • Heine Named Prichard's Interim Director
  • TEK Task Force Gathers Input at Forums
  • Business Leaders Joining Council Focused on Pre-K
  • How to Donate to the Robert F. Sexton Legacy Fund

Thursday, November 4, 2010

Quick guesses about election impacts

Here are my rapid thoughts on how the election results will affect Kentucky education.

The overall federal push to raise achievement as measured by standards-based assessments and make major changes in the very weakest schools will be sustained, but the NCLB rules will be replaced by somewhat more workable expectations.  The pressure to revise those rules is growing rapidly as the 2014 deadline approaches, and it will be strong enough to break the deadlock of the last several years.  In Kentucky and other states, the adjustments will be greeted as a constructive step as we move forward on developing our own new assessment and accountability methods.

Additional federal incentives to adopt Common Core Standards are less likely with added Republican power in Congress, and states that decide to slow down implementation may be under less pressure to move ahead.  The federal push for Common Core has been seen by a number of the winning candidates as a federal intrusion on local control.   In Kentucky, we've made our commitment, and we're using Common Core to implement Senate Bill 1, which originated under Republican sponsorship.  That change won't alter our course.  Enough other states are on board that the main momentum will continue, and opportunities for collaboration will grow.


Federal incentives to allow and expand charter schools will probably continue with the changed federal balance of power, because that strategy has strong Republican backers.  In Kentucky, where no charter bill has ever been reported out of committee, where no group of educators or parents has announced that it is working on a concrete concept for a first such school, and where no donor group has offered to provide start-up funds to get even one charter off the ground, the charter prospects will remain weak.


Federal funding to bridge another year of slow economic growth will not happen.  The 2009 stimulus bill and the 2010 EduJobs bill prevented devastating education cuts, but there won't be another installment like that.  In Kentucky, next school year will be the toughest financially in a long time, with boosted federal support running out, state SEEK funding slated for only tiny growth, and local districts very wary of voting for any additional revenue.  Used to low funding, we may not see as much disruption as some other states, but there is still significant pain ahead.  


Apart from federal initiatives, Kentucky will stay the course on implementing higher standards, supported by stronger assessments and more focused accountability rules, while struggling to provide state-level assistance for schools and districts to implement the new standards and take other steps to equip teachers with stronger skills to support students. The financial limitations on that state support will come in part from the economy and in part from state political leadership unconvinced of the need for further investment.  The election results have changed neither the strong parts of our statewide vision nor the weak parts of our commitment to make that vision into reality.

College completion puzzle (southern states edition)

Does Alabama have 14 percent or 32 percent college completion among young adults?  The State of the South 2010 report offers both numbers, just four pages apart, each time reflecting both associate's and bachelor's degrees.   For Kentucky, the choice is 18 percent or 34 percent, and the numbers for all states are shown below:

The pipeline column shows data presented using the familiar method, pioneered by Tom Mortenson, that begins with "Out of 100 ninth graders" and works through who completes high school, goes directly to college, and finishes college within 150 percent of expected time.

The adults 25 to 34 column shows data taken from the Census Bureau, and the final column shows the difference that I think genuinely deserves to be puzzled over.

Since I've been puzzling over the mismatched numbers for several years, I'll share the best clues I've found.  I think the pipeline method misses key issues in its first and last steps:

  • At the beginning, it uses the state's total reported ninth grade enrollment, including those repeating the grade. A student who is held back is counted as enrolled in two different years, but can only be counted as graduating from high school once.
  • At the end, it uses each state's reported college completion rate, which is the percent of full-time students who graduate from the school where they first enrolled.   That means that students who transfer at any point in their undergraduate careers are not counted as graduating, no matter how quickly they actually complete their degrees. 
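The two distortions above can be made concrete with a small, purely illustrative calculation. Every number here is invented for the example; none comes from the report, and the function simply chains the pipeline's steps the way the Mortenson method does.

```python
def pipeline_completion(ninth_grade_enrollment, hs_graduates,
                        direct_to_college_rate, first_school_grad_rate):
    """Completions per 100 ninth graders, chained pipeline-style."""
    hs_grad_rate = hs_graduates / ninth_grade_enrollment
    return 100 * hs_grad_rate * direct_to_college_rate * first_school_grad_rate

# Invented cohort: 90 actual students, but 10 repeaters inflate the
# reported ninth-grade enrollment to 100. Suppose an additional 15% of
# enrollees finish a degree after transferring, so their first school
# never counts them as graduates.
naive = pipeline_completion(100, 70, 0.60, 0.50)

# Correcting both steps: use the true cohort size, and credit the
# transfer students who finish (0.50 becomes 0.65).
corrected = pipeline_completion(90, 70, 0.60, 0.65)

print(round(naive, 1))
print(round(corrected, 1))
```

With these made-up inputs, the two corrections alone move the estimate by roughly nine points per 100 ninth graders, which is the same order of magnitude as the pipeline-versus-Census gaps in the table.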
Without saying that either column offers results I like, I think the difference matters: two respected sources produce quite different evidence about educational outcomes for young adults across the South.