Kudos to the poster above for noting the restriction-of-range issue. It would not occur to one casual reader in a thousand of these studies, and casual readers are really the target audience. The College Board, in its research series, typically adjusts for range restriction, but that is not the case with the “fair test” types, whose goal is to eliminate testing because it reveals differences in intelligence.
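For readers who haven’t seen the adjustment: a minimal sketch of Thorndike’s Case II correction for direct range restriction, which is the standard textbook fix (I’m not claiming it’s the exact procedure the College Board uses; the function name and example numbers are mine):

```python
import math

def correct_range_restriction(r_restricted: float,
                              sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r_restricted   : validity coefficient observed in the selected sample
    sd_unrestricted: SD of the predictor (e.g. SAT) in the applicant pool
    sd_restricted  : SD of the predictor among admitted students
    """
    u = sd_unrestricted / sd_restricted  # ratio > 1 when range is restricted
    r = r_restricted
    return (r * u) / math.sqrt(1 - r**2 + (r**2) * (u**2))

# Example (made-up numbers): an observed r of .35 among admits, where the
# admits' SAT SD is 120 points versus 210 in the applicant pool, corrects
# to roughly .55.
print(round(correct_range_restriction(0.35, 210, 120), 2))
```

The point is simply that an “SAT barely predicts college GPA” result computed on admits alone can understate the pool-wide validity by a large margin.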
Another huge methodological flaw that goes unnoticed - or is only casually mentioned - is collinearity in the data. Simply put, a study like the Ithaca College paper looks only at admitted students, each of whom presented both an HSGPA and an SAT score. Presumably, someone who presented a 4.0 GPA but only a 900 SAT was rejected; likewise, someone presenting a 1.5 HSGPA was rejected despite a 1500 SAT. These are extremes, but more generally there is going to be some correlation between HSGPA and SAT, because each metric partly reflects intelligence, which is not measured explicitly and can only be imperfectly inferred from either one.
Therefore, it is not very surprising that disaggregating the primary components of the admissions decision - in particular GPA, SAT, and some amorphous “rigor” score assigned by a college admissions office clerk - fails to show huge differences in predictive ability. First, these measures are all correlated with one another through the omitted causal variable, intelligence. Second, the most anomalous, idiosyncratic data points (high SAT with low GPA, or vice versa) will already have been filtered out by the admissions process. The simulation below illustrates both effects.
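To make this concrete, here is a toy simulation - the factor loadings are numbers I made up, not anything calibrated to the Ithaca data. Both predictors load on an unobserved intelligence factor, admission screens on a blend of the two, and the correlations among admits come out smaller and closer together than in the full applicant pool:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent intelligence, never observed directly
g = rng.normal(size=n)

# Both predictors load on g plus their own noise (loadings are guesses)
sat   = 0.8 * g + 0.6 * rng.normal(size=n)
hsgpa = 0.6 * g + 0.8 * rng.normal(size=n)

# College GPA is also driven largely by g
cgpa = 0.7 * g + 0.7 * rng.normal(size=n)

# Admissions filter: keep the top ~25% on a blended score, which removes
# the high-SAT/low-GPA (and vice versa) discordant applicants
composite = 0.5 * sat + 0.5 * hsgpa
admitted = composite > np.quantile(composite, 0.75)

def r(x, y, mask=None):
    if mask is not None:
        x, y = x[mask], y[mask]
    return np.corrcoef(x, y)[0, 1]

print("applicant pool: r(SAT,cGPA)=%.2f  r(HSGPA,cGPA)=%.2f  r(SAT,HSGPA)=%.2f"
      % (r(sat, cgpa), r(hsgpa, cgpa), r(sat, hsgpa)))
print("admits only:    r(SAT,cGPA)=%.2f  r(HSGPA,cGPA)=%.2f  r(SAT,HSGPA)=%.2f"
      % (r(sat, cgpa, admitted), r(hsgpa, cgpa, admitted),
         r(sat, hsgpa, admitted)))
```

Among admits, both validity coefficients shrink and the SAT-HSGPA correlation drops sharply - the familiar consequence of selecting on a combination of the two - so the disaggregated predictors look more interchangeable than they are in the applicant pool.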
To truly test the predictive ability of a test within a particular college environment, a school would need to admit one cohort based on all the “non-testing” factors and a separate cohort solely on the basis of a test score, and then compare results - preferably first-year, or at the latest second-year, college GPA. (By the third year, the students will have self-sorted, with the less intelligent going into the easier majors and vice versa, on average of course.)
This was not done in the Ithaca study, but by now they should have some preliminary data from their first “no test” admissions cohorts, so perhaps we will see something like this in the future - even though the “testing” cohort will presumably be a noisy comparison group, since its members were also screened on the other measures. But perhaps not. The study was quite explicit that the reason for going test-optional was to increase applications and enrollment, especially from minorities (who test poorly):
“In 2009, the College decided to strategically position itself for breaking away from the predicted rapid decline of the high school graduate population in Northeast. The strategies laid out include…propos[ing] a test-optional admission policy in order to increase applications not only from its primary markets, but also from more racially diverse communities.” (p.4)
In other words, this is basically a marketing strategy, and kudos to Ithaca for being up front about it. Highly selective institutions like Bowdoin are likely using this approach to increase the admissibility of low-scoring groups. So if you happen to be Asian, don’t even think about withholding your test scores!
For my part, I would bet the farm that test scores will be the best single predictor of success, because they are the measure most correlated with intelligence. Of course, smarts alone do not guarantee success - many other factors go into it - so smarts should be viewed as a necessary but not sufficient condition. Perhaps a hybrid system would be most appropriate in college admissions: a minimum score on standardized testing would serve as a “first screen” that gets the applicant to the second round, during which HSGPA and the various “soft” measures - leadership, perseverance, special talents, and so on - are weighed to reach a final decision.
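For concreteness, a minimal sketch of what that two-stage rule might look like - the floor, weights, and bar below are arbitrary placeholders of mine, not a proposal for actual cutoffs:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    sat: int       # standardized test score (e.g. out of 1600)
    hsgpa: float   # high school GPA on a 4.0 scale
    soft: float    # 0-1 composite of leadership, perseverance, talents, etc.

def admit(a: Applicant, sat_floor: int = 1200,
          w_gpa: float = 0.6, w_soft: float = 0.4,
          bar: float = 0.7) -> bool:
    # First screen: a hard minimum test score
    if a.sat < sat_floor:
        return False
    # Second round: weighted blend of HSGPA and the soft measures
    score = w_gpa * (a.hsgpa / 4.0) + w_soft * a.soft
    return score >= bar

print(admit(Applicant(sat=1350, hsgpa=3.7, soft=0.8)))  # True
```

The design point is that the test functions only as a gatekeeper; once past the floor, the decision turns entirely on the other measures, so nobody is admitted on a score alone.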