December and March are very close together. To capture real academic progress, the student would need to test less frequently, annually at most.
Also, what you are describing sounds like comfort with the test format combined with focusing on one section at a time in each sitting; in other words, a testing strategy.
Scores are not as comparable between institutions, or even between individual students, when students can strategize in this manner while taking the tests. That is a different thing from overall improved performance.
It’s all a question of how you are using the tests. If your child’s goal is a Caltech or similar or they are chasing academic merit scholarships at selective schools that offer only a few, that test taking strategy makes a lot of sense. She’s obviously very well prepared for college. Best of luck to her.
Oh, that LIFE article is so fun! Thanks for linking to it! Fascinating to see which schools have reputations that are still intact vs. changed. The description of Carleton is still spot on 60+ years later. But not NYU or Columbia.
However, selecting more or less heavily for SAT (or ACT) scores does have noticeable effects – see the USC/UCLA and Alabama/Purdue examples noted upthread.
Weighting the SAT more or less heavily could alter the percentage of high-SAT-discrepant and low-SAT-discrepant students admitted and enrolled at the college ("discrepant" meaning that the student's SAT score is higher or lower than the range suggested by the student's other admission indicators, such as high school courses and grades).
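To make the "discrepant" definition above concrete, here is a minimal sketch in Python. The predicted range and the classification thresholds are hypothetical, purely for illustration; no real predictive model is implied.

```python
def classify_discrepancy(actual_sat: int, predicted_low: int, predicted_high: int) -> str:
    """Label a student relative to a hypothetical predicted SAT range.

    The range would come from other admission indicators (e.g. HSGPA,
    course rigor); here it is simply passed in as two made-up bounds.
    """
    if actual_sat > predicted_high:
        return "high-SAT-discrepant"   # scored above what other indicators suggest
    if actual_sat < predicted_low:
        return "low-SAT-discrepant"    # scored below what other indicators suggest
    return "consistent"                # score matches the predicted range

# Illustrative values only:
print(classify_discrepancy(1480, 1250, 1400))  # high-SAT-discrepant
print(classify_discrepancy(1180, 1250, 1400))  # low-SAT-discrepant
```

A college that weights the SAT more heavily would, all else equal, admit more of the first group and fewer of the second.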
It is quite interesting that ranking plays such a large role. Your post helped me a lot, because there are a few colleges comparable to my personal choices that I was not aware of until today. For example, I was well aware of Dickinson but not at all aware of Centre (KY). Thank you for this post; I am glad to see how much ranking can matter, while at the same time a difference of a few ranks hardly matters at all.
That’s something I’m curious about. I can understand why girls have higher GPAs on average than boys because girls tend to mature earlier and be more organized and diligent than boys, so why do they have lower SAT scores? You’d think girls would at least have higher verbal scores.
The relatively large gap between Barnard and Columbia cannot be explained by sex differences in SAT scores, which are relatively small (males outperform females on Math by 18 points, while females outperform males on Verbal by 5 points.)
Instead, it comes down to institutional priorities. Barnard has long advertised that they are open to students with lower stats on paper as long as they bring something special to the community.
Even after accounting for the effects of the 1995 recentering and 2016 redesign on SAT scores, the score ranges listed for 1960 do not look particularly high by the standards of today's "most selective" colleges. Perhaps that is because many of the private ones had a higher percentage of hooked students, with greater hook effects easing academic standards for admission (those students would be the "gentleman C" students of the time), while the public ones were oversized relative to their state populations compared to today.
In 1960 students took the SAT once and did not prepare for it. That alone accounts for most of the difference.
I am actually astonished by your assertion that the lower SAT scores at Barnard vs. Columbia College are due to gender. They are far more likely due to a difference in the quality of the applicant pool and the school's selection process. Plenty of smart girls get very high SAT scores, and girls are not bringing down the average scores at places like Columbia, etc.
https://files.eric.ed.gov/fulltext/ED562878.pdf says that female students are more likely to be HSGPA-discrepant than SAT-discrepant, while male students are more likely to be SAT-discrepant than HSGPA-discrepant.
These SAT test redesigns bumped the “middling” (relatively speaking, of course) scores into the upper range, so there’s a much higher proportion of students whose scores are now in the upper band. Since these “most selective” colleges admit applicants in this upper band, they see a big jump in average scores (even before TO).
Looking at the Life magazine article was definitely very interesting. If only the tuition prices were still in the $2k/year ballpark!
Returning to the list by SAT score in the first thread (and hopefully bringing the thread back to topic), these are some other thoughts I had while perusing the list:
Soka (CA) is within 5 points of Furman, Wheaton (MA), Hobart & William Smith, TCU, Marquette, Pepperdine, Loyola Chicago, and U. of San Diego.
#164 has 21 schools, some of which get much more attention than others: Pepperdine, Baylor, Texas Christian, Marquette, U. of Colorado-Boulder, U. of San Diego, Loyola Chicago, Auburn, U. of Alabama-Huntsville, DePauw (IN), Beloit, Muhlenberg, Gustavus Adolphus (MN), Wofford, CUNY Baruch, Elon, Truman State (MO), Kettering (MI), Cal Poly-San Luis Obispo, and Taylor (IN).
U. of Puget Sound (WA) and SUNY-Geneseo are within 5 points of the 21 schools at #164, as well as Clark, UC-Davis, Virginia Tech, U. Mass-Amherst, The College of New Jersey, and Rochester Institute of Technology.
Does anyone else want to share anything that struck them while looking at the list?
If I’m not mistaken, I believe that some people claim Soka (CA) is run by a cult.
There are some unusual things going on there, regardless. I think the student body has a large number of wealthy Japanese international students whose families are acolytes of their movement’s leader. I’m not sure what accounts for the SAT scores.
The article you cited above looks like a perfect example of “correlation does not imply causation”. These sorts of biased perspectives are found throughout education, government, and society.
Why would that be unlikely to be true for “elite” schools? Every study I’ve seen that compares the gender balance of test submitters and test-optional admits at test-optional schools finds a higher rate of women among the test-optional kids. The reason is that test scores are more likely to be a relative weak point for women than for men. For example, the Bates study lists the following ratios. I wouldn’t expect this pattern to change for “elite” schools among non-athletes (revenue-earning men’s teams may have greater admissions flexibility than women’s teams).
Test-Optional Admits – 60% female
Test-Submitter Admits – 48% female
That said, I don’t think test scores being more likely to be a relative weak point for women than men is the primary reason why Columbia averages higher scores than Barnard. Instead I think the primary reason is that Columbia College + Columbia Engineering is more selective than Barnard. For example, other stats show differences between Columbia and Barnard. It’s not just scores. Some example numbers are below from the 2015 IPEDS (OP article is from 2015). Columbia’s admit rate was 1/3 of Barnard’s admit rate.
Admit Rate: Columbia = 6.6%, Barnard = 20%
ACT 25 to 75: Columbia = 31 to 35, Barnard = 29 to 32
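As a quick arithmetic check of the "1/3" claim, using only the 2015 IPEDS figures quoted above (values copied from the post, not re-pulled from IPEDS):

```python
# Admit rates as quoted in the post above.
columbia_admit_rate = 0.066  # 6.6%
barnard_admit_rate = 0.20    # 20%

# Ratio of the two admit rates; roughly one third, as claimed.
ratio = columbia_admit_rate / barnard_admit_rate
print(f"Columbia/Barnard admit-rate ratio: {ratio:.2f}")
```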