SAT concordance table - compare old and new SAT scores

@Connections: It is true that there are plenty of good test-optional schools. However, what is the likelihood that a student who doesn’t submit test scores receives a merit scholarship? And who needs a merit scholarship? Middle-class kids (especially kids whose parents own a small business) who may not have the money for test prep, college counseling, etc.

Test-optional schools may be motivated by reasons that have little to do with the students themselves, as this Washington Post article describes: http://voices.washingtonpost.com/class-struggle/2009/07/what_the_sat-optional_colleges.html

To sum up the article: the author points out that test-optional schools may use the policy to raise their average SAT scores and improve their USNWR rankings, because it is usually low-scoring kids who don’t submit scores.

I just don’t see an easy way for students to avoid the stress of these tests. I’m sure the new tests, with their new expectations, only add to the stress.

I would hope that US News would discount the SAT scores of test-optional schools. Taking them at face value would suggest a gross misunderstanding of how to use the reported college scores.

@brian-scorebeyond Brian, when you refer to the shift in “percentile performance”, are you referring to the percentiles of the actual population of students who took the test in March, or are you talking about a population from some prior College Board study?

Just focus on the percentiles and not the numbers. There’s no way they can inflate percentiles. While a 1400 on the new test may be a 1250 on the old test, 99th percentile is 99th percentile. It’s really no big deal.

Hah, you’re not nearly cynical enough… :slight_smile: If the percentiles were computed among the people who took the same test, then yes. But they’re big on “research populations” and other imaginary groups. For the PSAT, the percentiles were grossly inflated… So it’s hard to trust much of anything from CB, TBH! :frowning:

@annana @thshadow I’m not trusting CB, either. When new SAT scores start flowing into admissions offices, the real percentiles from actual test takers will become more apparent, and the concordance song-and-dance will become less relevant.

@bucketDad The whole new SAT thing just seems shady. I think I’ll probably just take the ACT (unless I qualify for NMF, which makes the SAT compulsory).

Good idea, @annana. The College Board has made an absolute mess of this. Focus on the ACT.

@thshadow

Coleman gave every signal that he would do this when that long NYT piece about him was published and the changes were previewed.

@hebegebe, US News does discount the scores from Test Optional schools.

“If less than 75 percent of the fall 2014 entering class submitted SAT and ACT scores, their test scores were discounted in the ranking calculations. This policy was also used in the 2015 edition of the rankings.”

http://www.usnews.com/education/best-colleges/articles/how-us-news-calculated-the-rankings?page=4

@epiphany - any idea why? I still don’t get it…

@thshadow

He said he wanted to make college more accessible to all. He’s on an egalitarianism campaign.

@thshadow To make lower-scoring students feel like they will do better on the SAT than the ACT… and therefore choose the SAT over the ACT. This is like colleges inflating the sticker price of tuition and then giving discounts back to 80% of students. Even though you know the full price is bogus, it makes you feel good to go to a more expensive school at a discount. Similarly, even though you know that your new 1200 is really only worth an old 1140, you feel like you did better on the SAT… especially since it’s less time-constrained.

And if he’s the Common Core guy, doesn’t he have a vested interest in the SAT “validating” that Common Core “works”?

I do think that in addition, yes, most of it was motivated by more money for CB. And I think all his talk about egalitarianism was a cynical rationalization for what @suzyQ7 said.

That NYT article was written about two years ago, and it doesn’t address any of the issues pertaining to norming the test. It does detail Coleman’s philosophy and goals, why he thought there should be an overhaul, and the problem of prep being inaccessible to disadvantaged communities and income levels, but there is no discussion of changing the curves (in fact, the numerical scales haven’t changed). Coleman is known, as the article mentions, for being pretty harsh about the “feelings-based” descent of modern education into mediocrity. If they really wanted to make everyone feel better about the SAT, they could have just tweaked it a bit and rescaled the thing rather than go through an entire revision. THAT would be truly cynical, of course.

But @epiphany might have a point in that the weird “inflation” may be a by-product of Coleman’s approach to this whole redesign. This is obviously a very different test from the old SAT (would that tempt the College Board to pay less attention to how the new norm actually compares to the old one?). They were clearly on a tight timeline (perhaps in a rush to get this out?). Finally, Coleman’s confidence a couple of years ago in how diligent and “fact-based” they were all being is at odds with one very crucial fact: there are serious problems with, at minimum, the PSAT percentiles. So while they very well could have built higher scoring into the test deliberately (as some test prep guys are maintaining), they could also have just designed a really crappy test that didn’t go through sufficiently rigorous beta procedures, and by the time they realized this it was simply too late to fix it.

Related topic: College Board used the analogy of “Fahrenheit vs. Celsius” to describe measurement differences between the old and new SAT, which seems like a pretty poor analogy given that, unlike the various temperature scales (devised by different people under different circumstances), both tests were created by the CB and use the same scaling. Is this pretty much all they’ve said so far? They need to come up with a better explanation, or their credibility will decline further.

That IS my point. (Your first paragraph wasn’t necessary, and in fact I would argue with some of it, because in many ways Coleman doesn’t know what he’s talking about, and he is NOT an educator: not a good one, not a bad one, not a neutral one. He talks a lot about education and testing but knows little about either and pontificates about both. I consider him narrow-minded and not by any means the kind of expert who should be heading up the CB, as much as I dislike the organization.)

Who CARES that the NYT piece was written two years ago? Unless he’s gone through a transformation, what was revealed in that article was (1) his ignorance of many things, (2) some of his motivation, both direct and indirect, and (3) his own personal biases, which are many.

The one area where I agree is the whole issue of rushing the test. It was entirely unnecessary, and I still think that in the verbal areas the old SAT was a better test of college readiness, however imperfect it was and however idiotic its essay portion. The rushing of the new test may have affected its norming (as well as lots of other things, such as quality control), and it’s pretty clear that the rush was precipitated by the ACT’s historic surpassing of the SAT. The new essay portion is the only good part of the new test.

@bucketDad I think he was referring to the performance of students at any specific percentile. The percentiles haven’t shifted as much as the scaled scores attached to each percentile. At least, that’s what I “interpreted” from his post, which is gone for some reason. It seems the percentiles are never determined from the performance of students on any one SAT but over a cumulative sample and projection. Maybe it includes March testers, maybe it includes only previous samples, maybe it includes both:
http://wisdom.scorebeyond.com/prep/how-to-interpret-new-sat-scores
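
To make that concrete, here’s a toy sketch (made-up score lists, nothing from the CB) of how the same scaled score lands at a different percentile depending on which reference group you count it against. This is just an illustration of the reference-group point, not how the CB actually computes its percentiles:

```python
# Toy illustration (made-up scores, not College Board data): the same scaled
# score earns a different percentile depending on the reference group used.

def percentile_rank(score, reference_scores):
    """Percent of the reference group scoring at or below `score`."""
    at_or_below = sum(1 for s in reference_scores if s <= score)
    return 100.0 * at_or_below / len(reference_scores)

# Hypothetical March test takers (a strong, self-selected group).
march_takers = [1000, 1080, 1150, 1200, 1250, 1300, 1350, 1400, 1450, 1500]

# Hypothetical "research population" padded with weaker scores.
research_population = march_takers + [850, 900, 920, 950, 980, 990]

print(percentile_rank(1300, march_takers))         # 60.0
print(percentile_rank(1300, research_population))  # 75.0 (looks "inflated")
```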

And the number of schools that previously required the essay and are now dropping that requirement continues to rise. It seems clear that they are losing confidence in the worthiness of the results. And who can blame them?

@triedntested You said “the percentiles haven’t shifted as much as the scaled scores attached to each percentile.” I think the CB would like us to believe this… that a score on the March test corresponds to a concorded old SAT score with the same percentile. Unless that concordance was performed with the March test takers alone (and not a bunch of experimental testers), I’m not sure I believe it.
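
For what it’s worth, here’s roughly what an equipercentile concordance could look like in miniature (hypothetical score lists and a hypothetical `concord` helper, not the CB’s actual data or method): each new-test score is mapped to the old-test score sitting at the same percentile in its own distribution. The whole argument above is about which two distributions get used for that mapping:

```python
# Minimal equipercentile-concordance sketch (hypothetical score lists, not
# real data): map a new-test score to the old-test score at the same
# percentile rank within its own distribution.

def percentile_rank(score, scores):
    return 100.0 * sum(1 for s in scores if s <= score) / len(scores)

def concord(new_score, new_scores, old_scores):
    """Return the old-test score whose percentile rank is closest to
    the percentile rank of `new_score` among `new_scores`."""
    target = percentile_rank(new_score, new_scores)
    return min(old_scores,
               key=lambda s: abs(percentile_rank(s, old_scores) - target))

# Hypothetical distributions in which the new test "runs higher".
new_scores = [1050, 1150, 1250, 1300, 1350, 1400, 1450, 1500, 1550, 1600]
old_scores = [900, 1000, 1100, 1140, 1200, 1250, 1310, 1380, 1450, 1540]

print(concord(1400, new_scores, old_scores))  # 1250: same percentile, lower number
```

Swap in a different reference population for either list and the concorded values move, which is exactly why it matters whether the concordance comes from actual March takers or from a research sample.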