SSAT Superscore percentile weighting

Curious if anyone has any insight into how the overall percentile is calculated when superscoring is involved?

If the “high” scores come from three different tests, is there a way to get a sense of what the overall percentile would be?

Honestly, I have no personal experience with this, but I have read many threads here and elsewhere noting that superscoring is accepted by most schools, though schools can still see your low scores. So if you’re worried about AOs not being able to “unsee” those scores, then superscoring may not be the way to go.

I actually used to think that superscoring meant they just took the overall percentile rather than using the different components (assuming there was a weakness) as a way to penalize an applicant. However, as I understand it, if you take it three times, they use your top scores to give you the best cumulative score possible. But they do see all scores…

Yes, they will certainly see all scores. What I’m curious about, and perhaps no one really knows, is what mathematical algorithm is used to determine the overall percentile.

If you get V80, R82, and Q89, what would the overall percentile be? It is not (80 + 82 + 89) / 3 ≈ 83.7. There is some sort of mathematical weighting that takes place.

Definitely some algorithm that uses the numbers of missed questions across populations, genders, ages, etc. Often test questions are included that are not scored. I have been through this twice, and one child took both the ISEE and the SSAT. Sometimes the numbers seem very wonky.

I have four sets of SSAT scores and one set of ISEE scores. The ISEE scores are not really helpful, but after quite a bit of work in Excel, there seems to be no clear algorithm to get from the three section percentiles to an overall percentile.

I theorize that the three sections are not equally weighted and that some questions are worth more. Not sure. Reading comprehension is more subjective and nuanced, whereas math is more black and white. Not sure if those differences are factored in.

They are not. I tried to derive the weighting between two tests in Excel and couldn’t figure it out. It must include something based on which questions came from the bank and who was taking that particular session. I couldn’t find a way to compute an actual superscore.
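For anyone who wants to try the same check outside Excel, here is a minimal sketch in Python of testing the fixed-weights hypothesis, assuming you have collected a few (verbal, reading, quant, overall) percentile rows from real score reports. The numbers below are made-up placeholders, not actual scores:

```python
# Hypothesis check: is the overall percentile a fixed linear blend of the
# three section percentiles? Fit weights by least squares, then compare
# predictions to the reported overall numbers. Placeholder data only.
import numpy as np

# Columns: verbal, reading, quantitative section percentiles
X = np.array([
    [80, 82, 89],
    [75, 70, 91],
    [88, 85, 84],
    [60, 72, 95],
], dtype=float)
# Reported overall percentiles for the same four score reports
y = np.array([85, 81, 88, 79], dtype=float)

weights, residuals, _, _ = np.linalg.lstsq(X, y, rcond=None)
print("fitted weights:", weights)
print("predicted overall:", X @ weights)
print("reported overall: ", y)
# If no weights keep the residuals small across many real reports, a
# fixed weighting scheme is ruled out -- consistent with this thread.
```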

Another factor may be the number of kids taking it: the Upper test covers grades 8–11, whereas the Middle covers 5–7… right?

I think that each test is measured against a large population drawn from at least two or three years of that particular test’s administrations. Individual scores can be “scaled” based on the statistical difficulty of each question, i.e., how often it is answered correctly across a large population. But my best guess is that this math would only affect an individual section score, not how the overall number is obtained.

If I understand this thread, the main question is how a school arrives at an overall percentile when it compiles a “superscore.” I’m not sure that they do. What they most likely use is the total of the three highest scaled scores.

In fact, there is no “averaging” per se of the three component scores. The SSATB simply adds the three scaled scores together (e.g., a 700 on each component would add up to a 2100 total scaled score) and attaches a percentile to the total based on the relative performance of others who have taken that particular test for the first time.

In the case of superscoring, selecting component scaled scores from multiple tests would make it impossible to obtain an overall percentile. In other words, they can give you an overall percentile for the November test as a whole, but they can’t give you one using the November quantitative, the December reading, and the January verbal.

Consider this: a candidate could score a 700 on the verbal in January and receive a percentile of 90 on that section, while the same 700 scaled score in November may have yielded a percentile of only 88. Similarly, a total scaled score of 2100 may yield a total percentile of 80 in November but only 79 in December. Thus, the percentiles are meaningful only for that given test and cannot be combined across tests through superscoring.
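For what it’s worth, here is a minimal sketch in Python of the mechanism described above, assuming the overall number is just the percentile rank of the total scaled score within a given administration’s reference group. Both reference samples are invented placeholders; the point is only that the same 2100 total can land on different percentiles in different sittings:

```python
# Sketch: the overall number as a percentile rank of the total scaled
# score (1500-2400 range on the Upper test) within each administration's
# reference group. The reference samples below are invented.
import random

random.seed(1)
november_totals = [random.gauss(1950, 150) for _ in range(5000)]
december_totals = [random.gauss(1960, 150) for _ in range(5000)]

def percentile_rank(total, reference):
    """Percent of the reference group scoring below this total."""
    below = sum(1 for t in reference if t < total)
    return round(100 * below / len(reference))

candidate_total = 700 + 700 + 700  # e.g., 700 on each section
print("November:", percentile_rank(candidate_total, november_totals))
print("December:", percentile_rank(candidate_total, december_totals))
# The same 2100 total yields different percentiles against different
# reference groups, so percentiles from separate test dates can't be
# meaningfully combined into one superscored overall percentile.
```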

Actually, I think the significance of superscoring is exaggerated. After receiving hundreds of phone calls over the years from parents whose children have achieved widely disparate component scores from test to test, schools, I suspect, use superscoring merely as one gimmick to reduce anxiety surrounding scores and to reassure candidates. That is, it allows candidates some assurance that by submitting multiple scores they are putting their best face forward, which is really all one can reasonably ask of such an arcane process.

In fact, as the admissions process is largely opaque to outsiders, and certainly differs from school to school, it would be hard for any of us to generalize about how most schools use this test. The best we can do is ask individual schools how they use it, and then try to divine the true meaning behind the vague explanation that is offered.

For those who are concerned about less-than-stellar scores, I will share this for what it’s worth. Five or so years ago, an admissions officer at Esteemed School X, in an unguarded moment, told me that a candidate would need a minimum score of XX% to be admitted, and that score was about 20 percentile points lower than the school’s published average.

So, “middling” scores (however you define that) are not necessarily a cause for despair.

Mathematically, it would seem to me that you cannot mix percentile scores from one test date to another, since the reference group changes. Perhaps they superscore the actual scaled scores and not the percentile rankings.
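If that’s the case, the mechanics would presumably look something like the sketch below, matching the “total of the three highest scaled scores” idea earlier in the thread. The section scores here are hypothetical:

```python
# Sketch: superscoring scaled scores (not percentiles) across test dates.
# Take the best scaled score per section, then sum. Scores are made up.
reports = {
    "November": {"verbal": 680, "reading": 700, "quantitative": 710},
    "December": {"verbal": 700, "reading": 690, "quantitative": 695},
    "January":  {"verbal": 710, "reading": 680, "quantitative": 720},
}

best = {
    section: max(scores[section] for scores in reports.values())
    for section in ("verbal", "reading", "quantitative")
}
superscore_total = sum(best.values())

print(best)              # {'verbal': 710, 'reading': 700, 'quantitative': 720}
print(superscore_total)  # 2130 -- but no single administration's reference
                         # table exists to turn this total into a percentile
```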

Based on many scores reported here, it seems it is not that unusual to have a large percentile swing in the same section from one test to another; that was also the experience of a child I know (who got a 51 on one verbal test and a 77 on another a few weeks later). My own DC had a smaller but still meaningful difference (10 percentile points a month or so apart). As a third party, I would conclude that the tests have some real weaknesses. Maybe the sample size is too small, IDK. I wonder if you see swings like that on the SAT?

Anyway, having seen those kinds of swings, I can’t imagine that admissions officers really believe that, say, a 90 is different from a 94. They must consider a range.

While it’s obvious why getting a “real” overall score is difficult when superscoring, I was also asking about the algorithm that determines the total percentile from the three sections on a single test. That being said, the SSAT is actually a tougher comparison group than the SAT: nearly everyone applying to college takes the SAT, while the SSAT pool is much narrower. Same with the ISEE. I don’t think that needs explaining to this enlightened bunch. So I believe that is why schools, even at the HADES level, can and do take lower scores depending on the applicant. Nonetheless, Andover, Exeter, etc. have very high average scores.

Yes, of course the pool of SSAT students is much narrower, but I wonder if that makes it too small to get consistent rankings from test to test, and whether that is why one sees such large swings.

I don’t think it’s too small a sample at all. They have many years to draw on. I know they do comparisons year to year and across various bands of time to calibrate and compare. Further, guessing or skipping questions can cause dramatic swings for an individual. Since admissions people can see the whole report, they may look at the details behind an unusual score fluctuation.