There is a range for the scaled score. The answer is still no: a 99 is not meaningfully more desirable than a 96. In fact, I would posit that a 99 with a less well-rounded profile is at a disadvantage to a superb 90th percentile with great ECs, grades, a hook, etc.
@Center I mean a 2400 vs a 2320
I’ll say it again. I don’t think that a 2320, which is usually 99th percentile (but just barely) and means the tester got approximately 12-14 questions wrong in total, is meaningfully different from a 2400 achieved by a tester with, say, 7-10 wrong answers in total.
However, and maybe I am wrong about this because I haven’t spoken directly with anyone in a decision-making role on this specific topic, I do think a 2400 with 0 wrong answers on the entire test would carry some additional weight over “just” a 2400. If you look at the distribution of SAT scores, at least since the recentering of the scale in the mid-1990s, you will notice there are 3-4 times as many “perfect” 2400s (now 1600s) as 2380s or 2390s, something you wouldn’t expect in a normal distribution unless the test was not difficult enough to capture the full range of abilities out there. Similarly, I would think that a zero-wrong 2400 on the SSAT would support the idea that the tester is significantly beyond the ability of the test to make distinctions - literally “off the scale”.
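That pile-up of perfect scores is a classic ceiling effect, and it can be sketched with a quick simulation. All of the numbers below (mean, standard deviation, section range) are invented for illustration, not actual College Board parameters: draw a normally distributed “true ability,” clip it at the test’s maximum, and count how scores stack up at the top.

```python
import random

random.seed(0)

# Illustrative only: simulate a normally distributed "true ability"
# (hypothetical mean 1050, SD 210), round to the nearest 10, and clip
# to the 400-1600 scaled range. Every test taker whose ability exceeds
# the ceiling is reported as a 1600, so perfect scores pile up there.
CEILING, FLOOR = 1600, 400
scores = []
for _ in range(100_000):
    ability = random.gauss(1050, 210)
    scaled = min(CEILING, max(FLOOR, round(ability / 10) * 10))
    scores.append(scaled)

at_ceiling = scores.count(1600)
just_below = scores.count(1590)
print(at_ceiling, just_below)
```

With these made-up parameters the count at 1600 comes out several times the count at 1590, exactly the pattern described above: in an untruncated normal distribution the top score would be the rarest, so an excess of maximum scores suggests the test can’t distinguish among the strongest testers.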
No one is saying that SSATs are all-important, of course. But they must have some value, or else they wouldn’t be required. I’m just saying that while there may be little difference to an AO among a 96% with 16 wrong, a 99% with 12 wrong, and a 2400 (99% too) with 10 wrong, I am guessing that presenting a 2400 with 0 wrong answers is different. Good preparation and a reasonably bright kid will get you to 99% and maybe even a 2400. But 0 wrong puts you off the charts, imo. I really don’t know what to think about a 2400 with 2, 3, or 4 wrong in total compared to a 2400 with 7-10 wrong.
Last, the score reports do list the numbers of questions answered correctly, answered incorrectly, or omitted. There must be some reason to report these, which again shows that there is information in the raw scores that is not fully communicated in the scaled scores.
@Personof2017, I believe that a scaled score of 2400 vs. 2320, with both at 99% (or 98%, for that matter), would not make a difference in prep school admission.
@SatchelSF the percentiles shift from test date to test date as the scales are recalibrated.
@Center - Oh sure, I understand that the percentile rank for any given raw score will vary from one test administration to another. But presumably the tests are constructed so that you don’t get wild variance. For instance, 3 wrong on the math sections on the October test might still result in an 800 (and 99th percentile), while 3 wrong on the November test might result in a 780 and 98th percentile because, as it turned out, the test was easier for the population taking it (I’m just picking numbers here). This is all just normal variance for a test that is reasonably well constructed for the intended test-taking group. But if 3 wrong resulted in a 720 and a low-90s percentile, the test makers didn’t do their jobs correctly!
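The equating idea here can be sketched with two hypothetical raw-to-scaled conversion tables. The numbers below are entirely made up (section length, table entries, and form names are all assumptions, not real conversion tables): each test form gets its own table, so the same raw score can land on slightly different scaled scores depending on how hard that form turned out to be.

```python
# Hypothetical conversion tables for two test forms (invented numbers).
# On the harder "October" form, up to 3 wrong still converts to 800;
# on the easier "November" form, the same raw score converts lower.
TOTAL_QUESTIONS = 58  # assumed section length, for illustration

october_table = {58: 800, 57: 800, 56: 800, 55: 800, 54: 790}
november_table = {58: 800, 57: 800, 56: 790, 55: 780, 54: 770}

wrong = 3
raw = TOTAL_QUESTIONS - wrong  # 55
print(october_table[raw], november_table[raw])  # same raw, different scaled
```

A few points of scaled-score movement for the same raw score is the “normal variance” described above; a table where 3 wrong dropped all the way to 720 would be a badly constructed form.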
I’ve only been talking about the OP’s original question about the weight of a “perfect” 2400 versus other 99% scores, and I think I agree with most posters that there is little or no difference. I’m just positing that for perfect raw scores - that is, no questions omitted or incorrect - that truly “perfect” 2400 might carry a little more weight, as it would support the idea that the test taker is off the scale to the right side of the distribution.
MODERATOR’S NOTE:
Closing thread; you cannot expect that the answers will change just by rephrasing the question. The question has been answered as accurately as possible, considering that no one here is an AO.