In 2005, the College Board unveiled the most dramatic changes to the SAT in years. The dreaded analogies were removed. Mathematics questions were updated. A writing section was added, making the test longer. The moves came at a time when a growing number of critics were questioning whether the SAT really added enough to admissions officers’ knowledge to justify the stress, time and money of millions of students. Other critics raised issues of fairness, noting the gaps among gender and racial groups — and the ability of wealthy students to coach themselves to better scores.
The new and improved SAT was supposed to respond to many of those critics. On Tuesday, the College Board released “validity studies” in which, for the first time, results on the new SAT were correlated with first-year grades earned by students who enrolled at four-year colleges. (That’s the key measure for judging the SAT because the College Board says it is designed to predict first-year grades, not long-term success in college.)
College Board leaders in a telephone press conference hailed the results as great news. Gaston Caperton, president of the board, said that the studies contained “very important and positive news” for colleges, in particular that the writing test’s addition had worked and had brought much more attention to writing instruction.
But the reports themselves suggested that the SAT’s strengths and weaknesses were not much different from before the big changes. “The results show that the changes made to the SAT did not substantially change how well the test predicts first year college performance,” said one report, which examined overall reliability of the SAT. This study also found — and this is unchanged from studies of the old SAT — that the single best way to predict a high school student’s performance in the freshman year of college is through high school grades, not the SAT. (While that data point is clear in the report, College Board leaders stressed that by combining high school grades with the SAT, still more predictive value was found, although not different from that of the old SAT).
The other report focused on “differential validity” — the question of whether the SAT is equally accurate in predicting the college success of different kinds of students. Here, many defenders of the SAT had hoped that the addition of the writing test might have made a difference, especially in the trend in which the SAT has tended to underpredict the abilities of women who take the test and to overpredict the skills of men. But here, too, the new SAT appears to have the same problems as the old SAT.
The College Board’s report said: “The findings demonstrate that there are similar patterns of differential validity and prediction by gender, race/ethnicity, and best language subgroups on the revised SAT compared with previous research on older versions of the test.”
In terms of race, the report found exactly the same patterns as studies of the earlier version of the SAT. Scores of black, Latino and American Indian students overpredict first-year performance in college. (That may sound surprising, because many advocates for minority students say that many who excel in college do so despite low SAT scores, but those comparisons tend to focus on overall college records, not the freshman year.) Scores of white and Asian students tend to be accurate or to slightly underpredict performance.
On another equity issue — whether the tests are coachable, giving an edge to those who can pay for tutors and classes — the College Board has already admitted that the new writing test is in fact coachable.
The studies were based on the records of 150,000 students at a wide range of institutions.
College Board officials defended the results as a success and said that they were not alarmed by the gaps in predictive validity. Laurence Bunin, senior vice president of the board, said that “the SAT is the most well designed and researched test in the world,” and noted that questions are reviewed by multicultural groups of educators. “We know the questions on the test are fair,” he said.
Wayne Camara, vice president of research and development, noted that predictive gaps also appear when high school grades are used, and that in many cases those gaps are larger than the SAT’s. Camara and others said that the most encouraging result was that the new writing test had predictive validity across ethnic and racial groups, at higher levels than the rest of the SAT.
Given the importance of writing, this suggests a real improvement in the test, he said.
Long-time critics were not impressed. Robert Schaeffer, public education director of the National Center for Fair and Open Testing, known as FairTest, said that the College Board’s slogan should be: “Meet the new test, same as the old test — only longer and more expensive.”
The College Board’s emphasis on writing comes in part because many colleges have held off on requiring the SAT writing test. According to the board, 44 percent of “moderately selective” colleges either require or “recommend” the writing test. The ACT, which also has a new writing test, has had similar results, both in the percentage of colleges requiring it and in the small additional predictive value gained from the new section.
It is unclear whether the results released Tuesday will prompt a groundswell from writing instructors to place more emphasis on the SAT. While some writing instructors have praised the College Board for adding writing, saying that the move sent a strong message, many think that the test encourages the worst kind of writing.
Les Perelman, director of the Writing Across the Curriculum program at the Massachusetts Institute of Technology, thinks the writing test is so bad that he coaches students to write abysmal essays, sprinkled with words the College Board likes (“plethora” is key), that end up earning great scores. (The story of one of his successful efforts is here.)
Perelman said that it’s absolutely no surprise that students who do well on the SAT writing test do well in college. The College Board favors the traditional “five paragraph essay” format taught to high school freshmen, and those who are going to succeed in college have generally mastered the format and picked up the various tricks that earn good scores on the essay. (One of Perelman’s students, to show how the scoring favors quotations from famous people, accurate or not, took the test using various quotes that happened to be visible in the testing room, and attributed all of them to Lee Iacocca — and she earned great scores.)
“The writing test is teaching students a lot of bad habits,” said Perelman. “Its real predictive value, in terms of writing, is nil.”