SAT now (2018) vs. then (1995-2004 and pre-January 1994)

So what I am getting from all this is that after the 1995 re-centering, SAT scores increased by 70 points? And after the 2016 redesign, the scores again increased by another 70 points?

Using this information:

2016 SAT: 1440
2004 SAT: 1370
1988 SAT: 1300

So should 140 points be subtracted from a current SAT score to get a 1988 equivalent?

Re: #40

Reply #7 links to the conversion tables between the various versions of the SAT.

The 2016 redesign seems odd. Did that redesign make the SAT easier by 70 points?

The reason I ask is that another thread on College Confidential claimed the 2016 concordance tables were somewhat inaccurate at the upper end, overestimating the scores that would be achieved on the New SAT.

@dexysmidnightrunners23 They were inaccurate (as I understand the process, they were not based on a proper concordance study, but simply a tool to tide over the colleges until one could be developed). The College Board and ACT jointly developed and published actual concordance tables in 2018 (and yes, there were some adjustments compared to 2016). The 2016 concordance tables are obsolete. The 2018 tables:
https://www.act.org/content/dam/act/unsecured/documents/ACT-SAT-Concordance-Tables.pdf
https://collegereadiness.collegeboard.org/pdf/guide-2018-act-sat-concordance.pdf

“Proving my point above that pretty much nobody got 700 or above.”

Just seeing this thread - feeling better about my 700/800 in ‘82, which fell behind several of my classmates. Still didn’t get me into my top choices, while they did. Perhaps my HS was more competitive than I thought.

Commenting on the above - while no penalty for guessing may increase a raw score, final scaled scores are set by the score distribution: a 600 is 1 SD above the mean, no matter how the raw scoring works. Similarly, the obsession with, and blame placed on, “tough curves” is misplaced.
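That standard-deviation point can be sketched in Python. The 500-mean/100-SD section scale and the raw-score data here are illustrative assumptions, not actual College Board numbers:

```python
import statistics

def scale(raw, all_raws, mean=500, sd=100):
    """Map a raw score to a scaled score via its z-score in the distribution."""
    mu = statistics.mean(all_raws)
    sigma = statistics.pstdev(all_raws)
    z = (raw - mu) / sigma
    return round(mean + sd * z)

# The same group of takers on an easier and a harder form:
easy_form = [40, 40, 60, 60]  # raw scores cluster higher
hard_form = [20, 20, 40, 40]  # everyone loses 20 raw points

# One SD above the mean is a 600 on either form.
print(scale(60, easy_form))  # 600
print(scale(40, hard_form))  # 600
```

Either way the raw scoring goes, the student one standard deviation above the mean lands on the same scaled score.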

@evergreen5 With that being the case, how would one find an accurate comparison between the Old SAT (pre-March 2016) and the New SAT (post-March 2016)?

@RichInPitt Wow, a 1500 in 1982! That score is nothing to be ashamed of! Was your Verbal 800? How did you know all of the vocabulary words? Also, by “tough curves” are you referring to the June 2018 SAT?

In my working class high school, “breaking 1000” was considered a good score. Of course no one retook it because you couldn’t really study for it (supposedly). There was some attempt at preparing - via the English teacher explaining how the sections worked and focusing on vocab and analogies.

@suzyQ7 What would a 1000 SAT score before 1995 convert to now?

What’s interesting (and slightly off topic) is that when you look at Johns Hopkins CTY’s Eligibility calculator, they group all SAT scores from 1999 to the present as equivalent. Why do you guys think that would be the case?

Link to calculator: https://mycty.jhu.edu/eligibility/eligibility_TS.cfm?_ga=2.36237222.764823994.1560758066-1342709118.1560758066

I took the SAT in the early 90s and I recall studying for it just a little, mostly to familiarize myself with the format. I bought either a Kaplan or Princeton Review book and took those practice tests. The book had some lists of vocabulary in it, too. It tried to build students’ confidence by arguing that you could figure out the answers with a few simple techniques. I think the test prep industry was in its infancy then, but the book seemed pretty disrespectful of the test. It had a fictional simple guy named Joe Blow who would always guess the most obvious answer, which was always wrong. I think the first step was always to identify the Joe Blow answer and eliminate it. They’d also decided that the SAT statistically had more answers that were B, and we were supposed to pick B whenever we could eliminate at least one choice but no others (since there were 5 choices and a 1/4-point penalty for guessing). They implied it was all a big game that could be manipulated with practice and their Unique System.

I think the minor studying helped me. I got a 740 on math and a 730 on English. As a girl who was high scoring on math and took calc junior year (extremely early for that time period), I was mailed free application vouchers from three schools. They were Virginia Tech (I was in-state), Caltech, and MIT. I didn’t apply to any of them. I think Caltech even sent me something after the application deadline and offered to let me apply late. They were clearly trying hard to increase their gender ratio. It’s very weird to think of that now!

I took the SAT in 1987 and scored 660V 720M, which was a very good score at the time. I went to a Seven Sisters school and peers in my very competitive public high school scored similarly or higher (we had 22 National Merit Scholarship semi-finalists in my class). Quite a few scored above 1500 and they all went to Ivy League or little Ivies. I distinctly remember the 1995 redesign got rid of the antonyms (they were killer).

Oh, and my husband who went to an Ivy League school received a lower score on the SAT than me.

The concordances provided can help translate scores - there’s no global number that can be applied to everything. A 500/500 pre-1995 translates to 1170 (+170) today. A 420/380 would be a 1010 (+210). And a 1600 is still a 1600 (+0).

+90 and +70 would be an average mid-range jump.
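Since the adjustment varies across the scale, a conversion has to interpolate between known anchor points rather than add a flat offset. A minimal Python sketch, using only the three pre-1995-to-today anchor pairs quoted above (these three points are not an official concordance table):

```python
# Anchor pairs (old pre-1995 composite, current-scale composite) from the
# examples above: 420/380 -> 1010, 500/500 -> 1170, 1600 -> 1600.
ANCHORS = [(800, 1010), (1000, 1170), (1600, 1600)]

def old_to_new(old_score):
    """Linearly interpolate between anchor points; clamp below the lowest."""
    if old_score <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    for (o1, n1), (o2, n2) in zip(ANCHORS, ANCHORS[1:]):
        if old_score <= o2:
            frac = (old_score - o1) / (o2 - o1)
            return round(n1 + frac * (n2 - n1))
    return ANCHORS[-1][1]

print(old_to_new(1000))  # 1170, a +170 jump
print(old_to_new(1600))  # 1600, the ceiling maps to itself
```

The point of the sketch is that the offset shrinks as you approach the ceiling, so no single subtraction works everywhere.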

I don’t think a test being “easier” is actually meaningful. Is a 1020 on the SAT “better” than a 30 on the ACT because it’s 34x “higher”? Is the test “easier”? The numbers all relate to percentiles and that’s what really matters. Assuming the tests are fairly well designed and tested, a 90th percentile student will score in the 90th percentile compared to peers, whether that’s 1300, 1500, 33, or 88. Users of the test scores inherently translate them into these comparative numbers. I don’t think anyone at Yale would believe 25% of their students today are smarter than 99.5% of the students in the past because of their raw 800 test numbers.

“@RichInPitt Wow, a 1500 in 1982! That score is nothing to be ashamed of! Was your Verbal 800? How did you know all of the vocabulary words? Also, by “tough curves” are you referring to the June 2018 SAT?”

800 was in Math. I found an old article that said there were 9 perfect 1600s back in those days. Harvard alone had >350 applicants with a 1600 in their lawsuit data (so from a couple of years back). The percentile argument falls apart at the test ceilings - I’m glad my D has AMC/AIME scores to differentiate.

Ran out of editing time…

My comment about “tough curves” applies to various discussions in various places, but I think the June SAT was one particularly discussed. If you got a 700 with only 4 questions wrong, and CB threw in 8 impossible questions so you got 12 wrong, along with everyone else, you would still get a 700, because the tests are normed/equated and relative. Saying you got “cheated” because you missed 6 and got a 720 last time is misunderstanding how tests are scored.
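That norming argument can be sketched as percentile-rank equating: a raw score maps to its standing within its own form, so the same relative performance earns the same scaled score even on a harder form. The raw-score distributions below are invented purely for illustration:

```python
def percentile_rank(raw, form_raws):
    """Fraction of test takers scoring below this raw score on this form."""
    return sum(1 for r in form_raws if r < raw) / len(form_raws)

# Invented raw-score distributions: the June-style form is harder,
# so everyone's raw score drops, but relative standing is unchanged.
may_form  = [30, 38, 44, 48, 50, 52, 54]
june_form = [22, 30, 36, 40, 42, 44, 46]

# A raw 54 on the May form and a raw 46 on the harder June form sit at
# the same percentile rank, hence would equate to the same scaled score.
print(percentile_rank(54, may_form))   # ~0.86
print(percentile_rank(46, june_form))  # ~0.86
```

Missing more questions on a harder form is not a penalty; the equating step absorbs the difficulty difference before scaled scores are assigned.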

@RichInPitt If the previously mentioned concordances are indeed accurate (I’ve heard some dispute their accuracy) then this new SAT is absolutely horrible! Who ever heard of a 200+ point increase between different forms of a test? I’ve constantly heard the media tout that the new test is astoundingly easier, but 200 extra points seems excessive.
Also, the reason people complained about the June 2018 tests was because the College Board either reused questions from a different administration or had people taking the experimental Section 5 serve as indicators for the questions’ difficulty levels. College Board should be able to create a fair test with a decent curve as they have for tests from early 2018/late 2017.
@marthastoo What do you think your 660 V/720 M would be today in 2019?

College Board publishes the concordances between different College Board tests, so I find it unlikely they’re incorrect. There were some issues with early SAT to ACT concordances, which were based on extrapolation and estimates, as the two organizations didn’t play well together. They finally published a joint table last year, and the early estimates were indeed a bit off.

I’m not sure I can blame the tests. Students kept getting lower and lower scores, so to keep the desired score distribution (mean and SD) and prevent too many students in too small a range, they had to re-scale the tests. 200 points is for one specific test range over a 20+ year period. It’s not the test’s fault that students do poorly on a statistically-equalized basis vs. earlier students. They could have left it alone and had today’s average SAT score be 900, but the optics would be poor and students might stop taking their test. Given the intention of relatively comparing applicants in the same high school year, it doesn’t really matter.

Astoundingly easier is a relative term. There are only about 0.03% perfect scores and others are statistically distributed to a relatively normal curve. As long as the test doesn’t have a ceiling that impacts a significant number and the results provide differentiation, I’m not sure what “too easy” or “too hard” really means.

There may have been the complaints that you mention, but the vast majority of the press that I saw (and read on CC) was along the lines of “I got 10 wrong last year and got a 700, this time I only got 5 wrong and still got a 700. That’s not fair!”. As if it’s a fixed scale of right/wrong = score. In fact, this shows the student performed exactly the same, relative to the test population, which would be expected.

@RichInPitt I thought that the College Board only redesigned the test, and the score increase was natural (the only re-centering I’ve heard of is from 1995). I’ve also heard that the two groups of students taking the old SAT and the new SAT were different in terms of ability, with smarter students taking the older one and dumber students taking the new one. That might make the concordance inaccurate, and a much more precise concordance could be achieved by having the same group of students take both tests, although that might be hard to achieve. What do you think the concordance would look like then?

In any case, it is quite depressing for myself and other students to know that our scores are worthless (my 1440 would be a 1390?). As a result, I don’t use the concordances and instead use percentiles (e.g., a 670 English is in the 90th percentile, a 650 Critical Reading is in the 90th percentile, and a 660 Verbal from before 2005 is in the 90th percentile).

One question: I remember that for the new/old PSAT there was a preliminary and an updated concordance offered, with the updated one being much harsher. Do you believe that the updated one is much more accurate than the preliminary one? Why?

My understanding is that the 2016 concordance tables for New SAT vs Old SAT and New SAT vs ACT were both based on the 2015 study data that proved inaccurate. By the time that became clear, when the joint 2018 SAT ACT tables were published last summer, the New vs Old tables no longer had relevance for college admission and no update of New vs Old tables was planned, if I recall from the College Board representative interviewed in the NACAC webinar last summer (should be available online someplace).

If the College Board’s difficulty determinations reported in the SAS are accurate, tests over the past school year have been measurably “easier” (for example, the number of hard math questions was reduced nearly by half) compared to the tests during the first two years of the New aka Redesigned SAT. At some level, differences in difficulty will create issues with standardization. According to College Board’s technical manual, the equating process requires that tests be as alike as possible in difficulty. However, the differences between March 2016-May 2018 and June 2018-May 2019 (excluding test forms that were reuses from the past, like Aug 2018) were entirely intentional rather than incidental. I wouldn’t know what constitutes a difference that is significant enough to impact equating quality; it would be interesting to hear from a psychometric expert on that. I do not know whether anyone has compared difficulty differences between New and Old - that would be interesting.

“My understanding is that the 2016 concordance tables for New SAT vs Old SAT and New SAT vs ACT were both based on the 2015 study data that proved inaccurate.”

How was the data inaccurate? Were the two testing groups much too different to be compared?

“By the time that became clear, when the joint 2018 SAT ACT tables were published last summer, the New vs Old tables no longer had relevance for college admission and no update of New vs Old tables was planned, if I recall from the College Board representative interviewed in the NACAC webinar last summer (should be available online someplace).”

I agree that for the new SAT/ACT tables, the old ones are useless, but how would this affect the old to new SAT concordance tables?