<p>Piper - it isn’t so much that I think standardized tests are everything; far from it. I’m a mediocre test-taker myself, so if anything I’m slightly biased against them. My point, which I’ll now clarify, is quite different from saying that standardized tests and the like provide better indicators of undergraduate performance.</p>
<p>It is, rather, that such indicators ARE used by admissions to an extent, and that the quality of these indicators can be improved. Admissions makes the best decision it can based on the indicators it has available and values. For instance, beyond clearing a 700 on certain portions of the SAT, MIT admissions doesn’t seem to care much - differences above that threshold don’t appear to matter, though up to that point they matter some. What I would say, and it isn’t very controversial, is that such measures are flawed in many ways (which I believe admissions fully recognizes), and that better measures would help those who have no choice but to make decisions based on the data they’re given.</p>
<p>If that sounded like gibberish, the simple version is this: admissions does make the best decision it can (based on its own values!) from the data given, but that data comes from some source, and the source can be refined to produce better data.</p>
<p>It is a very different argument to start talking about what admissions should and shouldn’t value.</p>
<p>And in case it was confusing: when I suggested that some of what might be considered old-school meritocratic measures should probably be taken more seriously, I didn’t mean they should necessarily be weighted more heavily in the admissions process. Rather, I meant that improving the measures themselves would make their contribution more definitive - clearer in what it actually means - so that the measures become less of a laughing matter.</p>