THE NAVAL ACADEMY AND THE SAT
As a former reporter who often jumped from story to story, I sometimes wonder why I keep writing about testing and merit in America. Why don't I get off the education beat and write, say, about the pharmaceutical industry? Then something else comes along, yet again, to convince me that the struggle for justice never ends when it comes to the gross misuses of mental testing in our society.
This time, we learn in a recent Sunday New York Times piece (based on documents obtained by FairTest) that the U.S. Naval Academy told a promising student, Daniel Wurangian, who lives near Los Angeles, that his modest SAT score wasn't good enough even to allow him to submit an application to the academy.
No matter that he had earned a 3.64 GPA and had spent four years as a cadet in the Naval Junior Reserve Officer Training Corps, serving as the school's highest-ranking cadet.
That's not all. In a recent letter, Daniel told me that his congressman had already nominated him for admission to the Air Force Academy.
But, owing to the intense competition for admission to the Naval Academy, officials opted to make their sorting job easier by setting a minimum cut-off score an applicant must reach to merit consideration. The policy might be bureaucratically convenient for the Navy, but it's an outrageous offense to professionally acceptable testing practice.
What's more, the academy admitted to the Times's reporter that it had not even done validity studies to prove that the SAT is the "effective predictor" of success at the school that officials claim it is.
But this lameness is nothing new, really. Time and again, we find cases of institutions simply assuming that test scores are good predictors of future performance, without doing the hard work of demonstrating the validity of the assumption.
Examples abound. The NCAA sets minimum SAT scores for athletic eligibility. The state of Michigan restricts "merit" scholarships to students who score sufficiently high on that state's Michigan Education Assessment Program (MEAP) high school exit exam.
When institutions set minimum scores for admission or scholarships, they are also suggesting that actual performance on endeavors of substance, such as Mr. Wurangian's classroom record or his accomplishments as a junior cadet, doesn't matter, that it isn't evidence of achievement as valid as a few hours spent on a pencil-and-paper test.
In fact, just the opposite is the case, and the research literature is quite clear on this point. Accomplishments on endeavors that are very similar to the desired traits one wishes to predict -- high school classroom performance as an indicator of college freshman performance, for instance -- are virtually always better predictors than standardized test scores.
When one adds to the mix the harsh effects of test-score cut-offs on students of color and those from modest economic backgrounds, such public policies become all the more nonsensical and unjust.
But at the very least, institutions like the Naval Academy are obligated to run the numbers and prove to themselves and the public that their use of test scores as a gatekeeper to the school makes sense and is an educationally effective tool. In other words, they must prove that they're using test scores as substantially more than a mechanistic sorting device of illusory validity.
So that's why I keep writing about this stuff.