^^ I agree. I just think that each student made up their own mind about which test was best for them, based on their own reasons. For my kid these were the key considerations:
Needed 3 good SAT Subject Tests for Georgetown. Given the British Sixth-Form schedule and the need to learn the SAT material from scratch, the best time to do Subject Tests was autumn of Junior Year.
The opinion that the truly smart kids would do well on the new SAT, as there were fewer chances for kids to overprepare and take the tests multiple times to superscore (although international kids are at a slight disadvantage, as the March '16 SAT was not given overseas).
The numerous cheating scandals on the old SAT, which made certain sittings dubious.
The ability to use the '15 PSAT as a practice
The ACT was not an option given the much higher number of questions per minute. The SAT was just a better fit.
I don’t think that we can ever know whether the old SAT kids were smarter than the new SAT or ACT kids. It doesn’t matter anyway.
Don’t forget that the new SAT offered few opportunities to practice the new format and, more importantly, to SUPERSCORE.
It makes sense that the new scores would be lower rather than higher.
@Akqj10 : I think your numbers must be slightly off, because I know someone who got 5 wrong out of 58 (none omitted) on the new SAT Math section and got a 770.
That really depends on whether you think NMS is important. If your goal is to go to University of Oklahoma on a full scholarship, it’s really important. If your goal is to go to Carleton (they knock a few grand off for NMS) it’s kind of important. If your goal is to go to any of the top Ivies, Stanford, MIT, Chicago… then it’s not important at all.
Still, I grant you that many kids are unaware of how unimportant NMS is at top schools, and some therefore may have opted for the new SAT (often combined with ACT)—this argument is at least true in some cases, and has the benefit of logic behind it, unlike that other lady’s wishful thinking. But my argument was never that all of the top kids ditched the new SAT; just a healthy amount. If, e.g., half of the top kids opted for old SAT or ACT (compared to the previous year), that has a significant impact. As I’ve said twice before, a 36 score on the ACT was a 1 in 12,000 score in 2001, but only 1 in 935 or whatever last yr. Not because the test got easier, but because the strength of the testing pool changed, due to increased recognition among top students of the “legitimacy” of the ACT at top schools. It would be naive to think that many strong students in class of 2017 weren’t concerned about the new SAT and didn’t opt for old SAT and/or ACT.
There’s been plenty of speculation about what tests the stronger students chose…what about the weaker students? What test would a poor test-taker choose? The brand new test with no preparation materials, or the older established exams? If I needed a crutch…I would have avoided the new SAT.
Re: how important was NMS - For us it was absolutely vital. S is now a likely NMF, but if he hadn’t scored well enough, we’d be looking at the subset of autostats-for-full-tuition schools. Ivies and the like won’t be affordable for us. Neither are the UCs or CSUs. We need full tuition or better, and we’ll take it where we can get it.
Aside from the NMF schools, Tulane is the only private so far where I think S has a decent enough shot at full tuition to justify the application fee.
@DiotimaDM I think he gave you a great answer that is worth quoting in full (emphasis added):
[quote]
This is a topic I’ve looked at closely the last couple of weeks as we developed a presentation for college counselors. In addition to the schools you mentioned, I looked at 2020 and 2021 ED data for GaTech, Dartmouth, and Georgetown. I’ve also analyzed PSAT and SAT data for multiple classes and sub-groups.
My evidence is that it is not a problem with the concordance, per se. In other words, all of the pure testing evidence shows the expected increase in scores (at least within a reasonable range). It’s not a familiar role for me to defend College Board, but it seems that they did a reasonable job. I’ll add the caveat that the place on the scale where it is hardest to verify the success of the concordance is in the 750-800 range that comes into play at many of these colleges.
We are left with student behavior and college behavior/policies to best explain what we are seeing. I’m not sure that we’ll ever be fully able to explain things without a research study involving colleges and the College Board. I doubt that will happen, because the old SAT is a non-issue going forward.
Some parts of the explanation are less speculative than others. First, there has been a significant shift to the ACT in the applicant pool. Among the high scoring students at these schools, it represents the biggest shift in history. Similarly, there was a burst of activity of students taking the old SAT pre-March. Few of the colleges provide a distribution of results for both the class of 2020 and 2021 across the different tests. If we assume that there was a bias among high scoring students toward the ACT or toward the old SAT, then we would expect to see lower than expected new SAT scores. This bias would also be more likely with ED/EA students, as they often want to get testing done early, and the new SAT represented a real problem with that plan. There is also a chance that the self-selection bias led to sub-optimal decisions in testing patterns and in preparation. Did the student who would have tried to go from a 700/700 old SAT decide not to retake with a 730/740 new SAT? And even if they wanted to, did they have the time? There is also the possibility that students’ preparation for the new SAT was inadequate. At minimum, they didn’t have Oct-Jan junior year tests to inform their new SAT decisions.
Score choice and superscoring effects would be interesting to parse out. The latter certainly worked against SAT takers this year. ED/EA applicants were probably fairly evenly split between old SAT and new SAT testing, yet their scores are in separate buckets for superscoring. ACT early testers and ACT late testers had the opportunity to superscore all of their dates. The impact of Score Choice is less clear, but it’s yet another place for sub-optimal decisions. Did students release the “right” scores?
Also unclear in most cases are the definitions used by colleges. If a student submitted old and new SAT scores, how did colleges report them in their press releases? If they based it on “best scores,” were those best scores determined via concordance?
The area of behavior that we are all most intrigued by is how colleges thought about the new scores. Did they, in a sense, misuse them? Did admission officers, for instance, retain hard-coded pathways in their brains that treated everything above 750 as interchangeable? Some have speculated that because colleges did not explicitly use the concordance — Georgetown and UVa being obvious examples — that this automatically disadvantaged one group or the other. That’s not necessarily the case. Some colleges choose not to use an SAT/ACT concordance, yet they are able to come to reasonable conclusions through intra-group comparisons.
I’d like to say that this will all be sorted out with Regular Decision, but I’m certain that it won’t be. It looks like the class of 2018 will represent the first opportunity to see where ACT and new SAT scores really fall out in the new landscape.
[/quote]
The number of students getting a perfect ACT score has increased in part because students have been spending more time on preparation in recent years, due to increased competition. It is possible that in 2016 more students, scared by people like the “navigation” guy, flocked to the ACT (or OSAT), but then we also see news headlines like “ACT SCORES DROP AS MORE TAKE TEST” in 2016. I think we — including the navigation guy, who had more questions than answers in that blog — are mostly talking theories.

The evidence we have is the reports from 5 or 6 colleges suggesting that the concordance was more or less ignored, which even the navigation guy acknowledges (“Georgetown and UVa being obvious examples”). Assuming these universities weighed test scores heavily and followed the concordance table, why didn’t they dig deeper into the ACT pool rather than the new SAT pool? On the other hand, if these universities were selecting “holistically,” then how did the NSAT scores end up lower than the concordance table predicted? Are these pools somehow equal? Unfortunately, we don’t know the number of students admitted from each pool, and so far no college has come forward stating that it used the concordance table strictly, even implicitly.

It is a known fact that CB was pathetically wrong in its 2015 NMSQT percentile predictions, so one may wonder whether that led to an overcorrection in coming up with the concordance table. If not outright incorrect, the concordance tables are at least less useful, since there are many ways to use them that result in entirely different scores. I am sure CB will keep tweaking the tests; some believe they have already started doing so, hence the release of two additional sample tests in the middle of last year, which were significantly different from the previously released tests. I agree with the navigation guy on one thing: “I’d like to say that this will all be sorted out with Regular Decision, but I’m certain that it won’t be.”
Looking back, how do you think College Board could have done better? Was there any way they could have let the students taking the old SAT in fall of senior year finish out on it, while giving all juniors no option but the new SAT? Or, at some single point in time, simply said that all 2017 graduates had to switch to the new SAT? My history teacher used to say, “we study the past to understand the present, to prepare ourselves for the future.” I did pay attention to something in school.
Even though some of us have had disagreements on this thread, I hope you all get positive results in the next few weeks. It’s obvious to me that we all have children we can be proud of, and parents who really care about their future, all nitpicking aside.
^ I think that the publishing and explicit promotion of the Concordance did everyone a disservice. Why couldn’t they have just published the percentile tables and been done with it?
To be honest, my dd did well enough on the new SAT plus her SAT Subject Tests. The thing that really irks me is that she was barred from taking the March 2016 new SAT. Assuming that sitting was the most level playing field, taking it would have been great. Instead, she took the test in May and June, right at the beginning and end of her AS Level exams, respectively. At that point, the AS Levels have to be the priority. Overall, this whole standardised testing business just seems to carry too big an impact on, and too much of the focus in, college admissions.
I agree that there’s a discrepancy between what CB says should be happening, namely that NSAT scores should be higher than old, and the results various colleges are reporting on the ground, namely that NSAT scores seem to be coming in lower than the old.
What I’m not ready to comment on or have a personal opinion on is why there’s a discrepancy.
Is it possible that the concordance is flawed? Yes. (Most likely reason here would be too few administrations in the test phase and/or a test population that was too small or not demographically representative of real test takers.)
Is it possible that the new test is actually harder somehow, especially at the upper levels? Yes.
Is it possible that demographic shifts (e.g. to the ACT) are responsible for at least part of the difference? Yes.
Is it possible that lack of prep and/or limited superscoring opportunities are responsible? Yes.
What would it take to know for sure? Probably a large factor analytic study that would show some weight on ALL of the above factors - which is part of what That Guy means when he says we’ll never know, because nobody is going to do that study.
I don’t like that those of us with 2017 / 2018 kids are in this position, but the only thing my particular kid can do about it is to either A) retake the SAT (very likely), or B) take the ACT (not a chance in heck).
That’s cold comfort for sure, but one of the things we’ve been teaching the kid is that sometimes excrement happens and life isn’t always fair. And when excrement happens, sometimes you just have to say “Yeah, it sucks,” and keep moving forward.
So this? Yeah, it sucks. I wish we had a better idea, contextually speaking, about how strong my S’s score really is. That said, we don’t, and no amount of teeth-gnashing and Google searching will change that. End result - my kid retakes the SAT. Not all that big of a deal in the scope of an entire life, you know?
Disclaimer: the above is only meant to apply to me and my kid. I am not trying to invalidate anyone’s feelings or advise a course of action for anyone but us.
[quote]
Is it possible that the concordance is flawed? Yes. (Most likely reason here would be too few administrations in the test phase and/or a test population that was too small or not demographically representative of real test takers.)
Is it possible that the new test is actually harder somehow, especially at the upper levels? Yes.
Is it possible that demographic shifts (e.g. to the ACT) are responsible for at least part of the difference? Yes.
Is it possible that lack of prep and/or limited superscoring opportunities are responsible? Yes.
[/quote]
I agree. We’re never going to know, and arguing about one or two of the above factors won’t change anything and won’t get us anywhere. I would also suggest that even a “large factor analytic study” wouldn’t resolve things, since the individual psychologies of 1+ million students in the different testing pools would be impossible to know, much less quantify. For that reason, I still don’t believe we can assume anything about high-scoring students shifting to the old SAT or ACT, but the question is moot because the admissions decisions are rolling in. May the odds be ever in your kids’ favor!
What matters more to me is what happened in the AdCom rooms (like the Peabody blog), not what some tutoring company that has been pushing students toward the ACT says. E.g., UVa: “Way more students submitted the new SAT than the old, so I’m dropping the stats about the old exam.” In any case, even though the NSAT is a far better test than the OSAT, CB did a bad job of implementing it.
@akqj10, good point. I guess they could have started giving the NSAT sooner, say from Oct 2015 (for the 2017 batch), alongside the OSAT (for the 2016 batch), and produced the final concordance table/percentiles based on real data. I guess, like @londondad said: “publishing and explicit promotion of the Concordance did everyone a disservice”.
On a separate note, one has to take concordance tables with a pinch of salt, because some students do better on the ACT and others on the SAT (and similarly NSAT vs. OSAT). The tests measure skills somewhat differently (like comparing marathon runners to 100m sprinters), and most adcoms know that.
^ With all of the chaos of this year, most kids next year will take both the SAT and ACT; if both scores are decent, they’ll send both to the colleges and let the colleges sort them out! This will probably hasten the decline of the SAT Subject Tests, as many schools either don’t require them or, like Tufts, will take the ACT instead.