SAT concordance table - compare old and new SAT scores

Caltech has the highest SAT averages of any school in the country. Its middle-50% range in reading and writing has been 740-800, so at least a quarter of enrolling students have gotten an 800.

So far, I’ve only seen 3 people here who self-reported an 800 on EBRW via the search function. Stanford’s CR+W/2 75% was 785 last year; Princeton’s was 790. If both are putting their 75% as a 760 this year, it doesn’t seem there are many 790s/800s to go around for all the competitive schools. For a school at the extreme high end like Caltech, they likely have more scoring 800s in the old test than the new one.

I mentioned this way back at the start of the thread, but since reading and writing are combined, students effectively have to demonstrate top-notch proficiency in both to score in the 770+ range. That pool is inherently smaller than the pools who scored 770+ on CR and Writing alone. On the old test, elite schools could mix their acceptances such that each section's 75th-percentile score would be close to 800, as long as 25% of the class scored an 800 on that section (not necessarily the same students). Now, an 800 at the 75th percentile is only possible by admitting students who are good at both sections. Furthermore, the curve is harsher at the upper end. Most old tests had -3 = 800 for Reading and -2 with an 11+ essay = 800 for Writing. Now, if you miss just one question on reading or writing, you're often brought down immediately to a 39/40 on that section and can't get above a 790.
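For concreteness, here is the arithmetic behind that 790 cap as a small sketch. It assumes the commonly described scoring model, where each section's test score runs 10-40 and EBRW is their sum times 10; the function name is just for illustration:

```python
# Sketch of the new-SAT EBRW arithmetic (assumed model: each section's
# test score runs 10-40, and EBRW = (reading + writing) * 10).
def ebrw(reading_test_score: int, writing_test_score: int) -> int:
    """Combine the two 10-40 section test scores into a 200-800 EBRW score."""
    return (reading_test_score + writing_test_score) * 10

# A perfect performance in BOTH sections is required for an 800:
print(ebrw(40, 40))  # 800

# On many curves, a single miss in either section already drops that
# section's test score to 39, capping the combined score at 790:
print(ebrw(39, 40))  # 790
print(ebrw(40, 39))  # 790
```

This is why combining the sections shrinks the pool of possible 800s: one weak question in either section takes the top score off the table.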

Sorry to quibble again, but the fact that the new SAT scores of admitted students are lower than expected based on the concordance tables is NOT evidence that the new SAT is harder than the old SAT. For the most part, we are talking about two different populations: students who took the old test and students who take the new test. It is quite possible that skills in the second group are lower on average than skills in the first group. In fact, one of the main reasons that College Board revises the test every 10 years is to deal with the overall decline in skills (especially in verbal skills) by making the test EASIER (especially in the reading section). The only valid way to show that the new test is harder than the old test (or harder than would appear from the concordance table) is to sit down the SAME population and administer both tests to the SAME students. Try giving your son or daughter an old SAT and a new one, and see if he or she scores better on the old test than would be predicted by the concordance. See if he or she can do the sentence completions. This is the only valid measure.

@Plotinus If you go back through this thread (or maybe others on CC), there are some examples of students who took both the ACT and the new SAT. When using the concordance table to convert the ACT to new SAT, the concorded score was almost always higher than the actual new SAT score. The evidence is anecdotal but interesting nonetheless.

@nostalgicwisdom I think you are saying that because the old SAT put the grammar and sentence-structure "English" questions in the Writing section (along with the mandatory essay), and many colleges ignored that section in their statistics (in many cases quoting their old SAT scores as CR+M), students who took the old SAT and were weaker in English/grammar topics scored higher under the old system. The new SAT includes the English/grammar questions in EBRW but excludes the essay (reported separately).

In summary, you are saying that CR+M on the old SAT <> EBRW on the new SAT, and the difference is that the grammar/English questions are included in the new EBRW scores but not in the old CR+M scores. If so - I agree - and good point!

Also, agreed that the curve is certainly harsher at the upper end, even compared to the ACT, which has a much smaller scale. In most cases, with ACT Reading or English you can miss at least 2 and still get a 36!

@bucketDad
As far as I know, the ACT organization has refused to accept the CB ACT-New SAT concordances. This is because the ACT-Redesigned SAT concordances were not produced by administering the ACT and the new SAT to the same group of students. Rather, the new SAT-ACT concordances were produced by stitching together the new SAT-old SAT concordances and the old SAT-ACT concordances, even though each of those concordances was produced on a different population. Thus the methodology behind the CB RSAT-ACT concordance is invalid, as ACT has asserted. The only way to produce a statistically valid concordance is to give the two tests to the same population. While the ACT-new SAT concordances published by CB are completely bogus from a scientific point of view, I don't know whether adcoms are using them in the absence of any statistically valid alternative.

The two tests do not necessarily have to be given to the same population, just equivalent populations. That is the basis of almost all scientific studies, where we have an experimental group and a control group that are equivalent, usually by some randomization procedure. In a social science study, where randomization is not feasible, the standard is usually to show that the two groups are equivalent by measuring them on various attributes. It might be interesting to give the two tests to the same group, but we would then have to make sure that half of the students see the old test first and then the new test second, and then vice versa for the other half.
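The counterbalanced design described above (half the students take the old test first, half take the new test first) could be sketched like this; the function name and group labels are hypothetical, purely for illustration:

```python
import random

def counterbalance(students, seed=0):
    """Randomly split students into two equal-sized groups with
    opposite test orders (old-test-first vs. new-test-first)."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = students[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "old_then_new": shuffled[:half],
        "new_then_old": shuffled[half:],
    }

groups = counterbalance([f"student_{i}" for i in range(100)])
print(len(groups["old_then_new"]), len(groups["new_then_old"]))  # 50 50
```

Randomizing the order controls for practice and fatigue effects, so any score gap between the two tests can't be explained by which one was taken first.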

I saw that CB had planned a study in Dec 2015 upon which to base the concordance, but I can't find any more info about what that consisted of - I'm probably not looking in the right place. Does anyone know whether CB ever gave the old and new SATs to the same (and/or equivalent) population? (The often-referenced 2014 study was supposed to be for the PSAT10 and PSAT/NMSQT, not the new SAT. The ensuing 2015 PSAT percentile issues do not inspire confidence…)

I am not a statistician, but I looked into the methodology of the ACT-old SAT concordance some time back.
Of course you can administer the two tests to two different populations if the populations are equivalent, but how do we know that they are equivalent? I doubt that test-takers in 2006 (the ones who took the ACT and SAT for the old concordances) are equivalent to test-takers in 2015 or whenever the new SAT - old SAT concordances were done.

Even back in 2006, CB underlined that the concordances were of limited validity. However, at least back then CB claimed that the best concordance came from a group of students who had taken both tests.

https://research.collegeboard.org/programs/sat/data/concordance

Anyone know why CB and ACT have not produced a joint concordance study as in 2006?

I think adcoms are aware that all concordances are of limited validity.

After all, the current concordance table is not based on any population of students; it is just hypothetical. It is just a tool to convince adcoms to use the new score and students to take the new SAT instead of switching to the ACT.

I’m confused. Could somebody shed some light on why both of these concordance tables for SAT to ACT released by the College Board are different?

https://secure-media.collegeboard.org/digitalServices/pdf/sat/cSAT-concordance-flyer.pdf

https://collegereadiness.collegeboard.org/pdf/higher-ed-brief-sat-concordance.pdf

The first says that a 1540 equals a 35 ACT and the second on table 7 says that a 1540 equals a 34.

@kimclan1 Your first link, dated 2015, appears to be the old table from 2009, concording the ACT and the old SAT. The second link is the May 2016 concordance tables for the new SAT (note that ACT does not agree with this, unlike the old table, but many colleges appear to be using it anyway, particularly when setting levels for automatic scholarships).

It is pretty obvious that CC members dissect the information far more than colleges/adcoms do lol.

@evergreen5 Thanks that makes a lot more sense.

As described in another thread posted by @scoodoo1 , the College Board released 2017 percentiles for the SAT that are higher per score than the original, research-study-based 2016 percentiles. The 2017 percentiles are based on actual test scores from class of 2017. Should this affect the concordance tables?

I think it will affect the concordance tables. In the old days, the concordance tables were based on students taking both the ACT and the SAT. The one the College Board posted last year is hypothetical, and ACT does not acknowledge it. Now, with real population data, even if not from exactly the same population of students taking the ACT, the concordance table should be revised. Without acceptance by ACT, it is not really a "concordance" table.

Using the 2017 percentile table released by CB, it is clear the concordance tables are off. For example, I took the 50%, 75%, 90%, 95%, 98% and 99% data, then looked at the SAT score (or range) where each percentile FIRST appears on the tables. Using the data from 2011-15, the scores were 1480-90, 1720, 1930, 2050, 2150-60, and 2210-20, respectively. Using the concordance table to convert those old 2400-scale scores to the new 1600 scale, I got 1085, 1235, 1370, 1435, 1490, and 1520, respectively. But using the 2017 percentile table from CB, the scores at those same percentiles were 1055, 1195, 1320, 1390, 1450, and 1480. That represents an overestimation of the new SAT scores by the concordance table across a large portion of the curve, fairly uniformly by about 40 points. This may not sound like a lot, but most parents (and students) could tell you the importance of an extra 40 points in admissions, scholarship cutoffs, etc. I doubt CB will revise the concordance table with the 2017 actual test data, but I do wish admissions offices would take notice of this discrepancy and avoid causing undue harm to students.
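The comparison above can be reproduced with a few lines, using only the figures quoted in this post:

```python
# Percentile-by-percentile comparison, using the numbers quoted above.
percentiles   = [50, 75, 90, 95, 98, 99]
concorded_new = [1085, 1235, 1370, 1435, 1490, 1520]  # old score run through the concordance table
actual_2017   = [1055, 1195, 1320, 1390, 1450, 1480]  # CB's 2017 percentile table

# How much the concordance table overestimates at each percentile:
diffs = [c - a for c, a in zip(concorded_new, actual_2017)]
print(diffs)                    # [30, 40, 50, 45, 40, 40]
print(sum(diffs) / len(diffs))  # ~40.8 points of overestimation on average
```

The gap ranges from 30 to 50 points and averages about 40, which is what "fairly uniformly by about 40 points" refers to.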

The data tables I used are here -
collegereadiness.collegeboard
prepscholar

The original concordance tables published were based on the PSAT given in Fall 2015 and on small samplings done before that. The Fall 2015 PSAT was too easy at the top levels, and the scores were high. They have subsequently tightened up the actual SAT tests given, so the concordance originally published was off. They are now publishing new concordances to reflect actual SAT sittings from 2016 and 2017. That is my theory.

That’s what it looks like, except that the scores were higher at the high end on the 2016 PSAT than on the 2015 PSAT… Maybe that was accidental.

@Mickey2Dad Good stuff. In addition, the 40-point difference comes just from comparing the new SAT with the old. When you compare the new SAT with the concordance table predictions, the difference is more like 100 points.