Another area the book examines with its database is the relative predictive value of grades and standardized tests like the SAT and ACT for college graduation rates. The overall conclusion of the section is that high school grades have more predictive value than standardized tests do, and that the additional predictive value from tests is quite small (although slightly greater at the most competitive institutions). I've added emphasis for the key point: it's way too easy to overvalue the tests and undervalue the high school work as predictors of who's likely to succeed in college.
The finding could be significant for several reasons. First, the authors intentionally adopt a new measure of testing validity -- graduation rates -- rather than focusing on the measure the College Board uses for the predictive validity of the SAT, which is first-year grades. Bowen said that since the goal should be graduation (and on-time graduation), testing should be measured that way.
While the limited value they find for testing might be seen as an anti-testing stance, the authors are careful not to go there. They say that they don't want to focus on "to test or not to test" but on how testing could or should be used. Generally, the book offers praise for the SAT II (the subject tests) and the Advanced Placement tests, noting that both of these tests are based on what students actually learn in academic areas.
Bowen said that "we're not anti-testers," but that colleges -- especially those that aren't at the most competitive levels -- need to "think about weighting" so that it's clear that "if you have done well in high school, but not on the SAT," you can still enroll.
While this finding is based on graduation rates, similar conclusions based on early college grades are pretty much old hat. Indeed, when our newest Supreme Court Justice claimed that affirmative action got her into Princeton, I thought she was likely wrong. Valedictorians are a good bet even if their scores aren't stellar, and the tenacity to get there from a housing project was a further indication of readiness to go the distance.
In Kentucky's push toward college readiness, I'm glad SB 1 has us moving to curriculum standards and tests linked to those standards, rather than merely assuming that existing tests measure what we need.
On the surface it makes sense that grades are a good predictor of postsecondary success. Grades are a compilation of being able to demonstrate knowledge (exams), being responsible and conscientious (showing up for class, doing homework and turning it in), and being able to share your thoughts (class participation). Those are certainly ingredients of success.
However, for students with diagnosed or undiagnosed specific learning disabilities, grades are a nightmare of inaccurate information. I have seen these students get As when they aren't exposed to challenging work and end up having no grasp of the content; and I have seen these students get Ds despite a grand grasp of the content and never-wavering persistence, because they receive no credit for work turned in late, for being disorganized, or for taking a too-literal interpretation of the instructions. These are deficits inherent to their disability which are not addressed in the classroom.
It would also make sense that grades received in Advanced Placement classes would be a good predictor, except when you see a student earn an A (96%) in an AP class and then get a 1 (the lowest score possible) on the AP test. How can that be? For a student with a learning disability it can easily happen when the school fails to follow through on accommodations in the testing process, or when students don't know that there are free online practice tests available as study tools.
When quality teaching happens, which in my book includes students with disabilities getting the supports and services they need, grades might be a consistent indicator of postsecondary success for all. But right now, that's not happening by a long shot.