And you’ll be able to keep doing that. Bates, however, has had a different experience.
There are, of course, also many studies that find the opposite of what the Macaulay study found. And what that study does not measure is students who opt out of the SAT/ACT completely due to SES or other issues.
But I think I’ve pretty much made my position here clear: the more test-optional options for students, the better.
The lower the costs of testing (via, for instance, self-reporting of test scores), the better.
The less money spent on Princeton Review, the better.
The more SES leveling, especially at selective schools, the better.
I’m confident that the academic world will slowly inch toward more test-optional schools, if for no other reason than the changing demographics of the next few years.
I think this will be a good thing for many kids and families.
If you are referring to the “No SAT scores” group in the years prior to going test optional, those are students who took the ACT instead. Colleges that require test scores give students the option to take either test. For example, in the year before going test optional, 91% of the class submitted SAT scores and 34.5% submitted ACT scores (the groups overlap, since some students submitted both). The fact sheet indicates that 100% - 91% = 9% had “No SAT Scores” for this year, presumably all of whom are in the 34.5% ACT-submitter group.
Ithaca has a high admit rate of nearly 70%, with most admits who submit test scores having scores under 1200. I attended a predominantly white, typical public HS in upstate NY, the type that gets a huge number of applications to Ithaca, and the type where Ithaca gets the bulk of its student body. My HS had a ~80% acceptance rate to Ithaca, with the overwhelming majority of those 20% rejections being in the bottom quarter of my HS class. Students who had GPAs above the bottom quarter of the class were usually accepted, regardless of scores. Another poster in this thread mentioned his HS shows a ~99% acceptance rate to Ithaca with 296/300 accepted, and the few rejections having extremely low GPAs. Barron’s can call this highly competitive, highly selective, or whatever; but I’d be surprised if anyone else in the history of the CC forum has used these words to describe Ithaca prior to this thread.
Test scores make up a very small portion of the USNWR rankings, far less than things like graduation rate. Minor changes in test scores are unlikely to have much impact on USNWR rankings, and they may even cause a decrease in ranking if they result in a decreased graduation rate. According to Wikipedia, Ithaca has ranked among the top 10 in Regional North in every year since 1996, never far from its current #8.
I noticed that many colleges had more ACT than SAT scores submitted this past year, some in surprising locations (the heart of SAT country). I think many students chose the ACT because the SAT was new that year, but ACT testing has already overtaken SAT testing nationwide.
IDK what that means for those who think the SAT is measuring intelligence but the ACT isn’t.
@OHMomof2 - none of these tests explicitly measures intelligence, but each is highly correlated with it. You can see this by looking at the distributions of scores by gender, race, SES, etc. By examining the means and variances, you can see that they are all correlated with the same thing. For instance, on all of these tests, females will score a little lower than males on the quantitative aspects, and with lower variance; the familiar gaps will appear among races/ethnicities; and the expected higher kurtosis will show up in the score distributions of the expected groups (Asians, Blacks), etc. This stuff has been researched to death; there really is no controversy here. Perhaps @SAY will chime in.
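For concreteness, here is a minimal sketch of that kind of moment comparison, using purely simulated data (the group labels, means, and spreads below are invented for illustration; they are not real test results):

```python
# Hypothetical illustration only: comparing two *simulated* score
# distributions by their moments (mean, spread, excess kurtosis).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)

# Invented parameters: group A has a slightly wider spread than group B
group_a = rng.normal(loc=500, scale=110, size=100_000)
group_b = rng.normal(loc=520, scale=95, size=100_000)

for name, scores in [("group A", group_a), ("group B", group_b)]:
    print(f"{name}: mean={scores.mean():.1f}, "
          f"sd={scores.std():.1f}, "
          f"excess kurtosis={kurtosis(scores):+.3f}")
```

The real analyses use actual score distributions, of course; the point is only that a handful of moments is enough to compare the shapes of two distributions.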
You asked upthread a question about how the ACT can be measuring intelligence when the same kid could score 6 points differently within 2 years or so. If you provide some more details, I could give you my perspective with some specificity. But a few general observations:

1. The ACT is not intended to be a direct measurement of intelligence, so you should expect more variance at the individual level than in the aggregate.
2. All tests have measurement error inherent in the tasks presented, because some of the questions (or reading passages) just happen to “click” with a particular test taker. The first score could have been a little lower than “true” ability and the second (or third?) a little higher, so the true spread of implicit ability might be smaller than it appears.
3. Similarly, there are idiosyncratic environmental effects at the individual level which wash out in the aggregate (for instance, the test taker feels a little sick but not enough to cancel the score the first time, glucose levels are different at different times, etc.).
4. Especially if we are talking about females around adolescence, actual intelligence matures and becomes crystallized roughly between 14 and 17 (boys a little later); of course, this is all on average. The test taker may literally have grown smarter over two years.
5. The test taker is likely in that sweet spot in which practice and true test experience can result in gains. This is likely to be seen most strongly between about 115 and 130 IQ; much above this level and the test taker would likely be within 6 points of the maximum with even minimal preparation on the first try (we regularly see 12 and 13 year olds with 36 on at least parts of the ACT), while much below would typically mean the test taker lacks the focus and discipline to commit to such a large improvement (although it’s still possible).

Those are just a few ideas; I am sure there are others.
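To put some toy numbers on the second and third points, here is a sketch of the classical true-score model (observed score = true ability + sitting-specific noise). Every parameter below is hypothetical, chosen only to show how large individual swings can coexist with a stable aggregate:

```python
# Toy classical test theory model: observed = true + noise.
# All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_students = 100_000

true_ability = rng.normal(loc=21, scale=5, size=n_students)  # ACT-like scale
noise_sd = 1.5                                               # per-sitting error

sitting_1 = true_ability + rng.normal(0, noise_sd, n_students)
sitting_2 = true_ability + rng.normal(0, noise_sd, n_students)

swing = sitting_2 - sitting_1
print(f"mean, sitting 1: {sitting_1.mean():.2f}")  # aggregate barely moves...
print(f"mean, sitting 2: {sitting_2.mean():.2f}")
print(f"students swinging 3+ points: {(np.abs(swing) >= 3).mean():.1%}")
```

Even with a modest per-sitting error, a noticeable fraction of retakers moves several points without their underlying ability changing at all.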
@Data10 - “If you are referring to the “No SAT scores” group in the years prior to going test optional, those are students who took the ACT instead.”
You are obviously closer to the data than I am, but I find this implausible. Here are the raw numbers of “No SAT” enrollees at Ithaca College beginning in the indicated entry year. The official change in policy to “test optional” is noted after 2012.
It’s pretty clear to me that increasing numbers of ACT takers could not account for the stepwise increase and subsequent stabilization in “No SAT” between 2008 and 2009. The format change in the SAT (new SAT) is irrelevant here as it didn’t happen until 2016. Perhaps I am missing something?
Here are the aggregate numbers of black and Hispanic students at Ithaca College (all undergraduate years - unfortunately, I didn’t extract the freshman numbers) for those same years.
I look at data all the time, and I know there are many on this site who do the same. It looks to me like a policy change happened in 2009 which resulted in an increase in black and Hispanic enrollment (the change played out over the next four years as the increased representation in each freshman class rolled through the grades). Black and Hispanic enrollment looks to have stabilized at approximately 14.5% (a target?), which represents a more-than-doubling of share since 2006. Over this entire period, Asian enrollment is roughly stable between 3 and 4% (perhaps a slight upward trend), while Native American enrollment is vanishingly small in all years.
I have no issue with the idea that Ithaca is not very selective, despite its rankings. But if that is the case, why the need to create a document like that report? And if the primary concern is eventual graduation, wouldn’t it have made infinitely more sense to examine the students who WEREN’T around at the end of the 6th semester (or even at the end of the 2nd or 4th) to see if their SAT scores had had any predictive ability as regards their failure to continue? This is just the most obvious of the glaring methodological errors in the report, well known to all practitioners as the problem of “survivorship bias.” Here is a fun digression on the subject: http://clearthinking.co/survivorship-bias/
In some ways, the model errors and variable conflations and confounds (including the backfitting of “made up” data) are even more egregious, but those are more technical and probably only interesting to math geeks like me.
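For what it’s worth, here is a toy simulation of the survivorship problem described above. Every number in it is invented; it illustrates the mechanism only, not Ithaca’s actual data:

```python
# Toy survivorship-bias simulation (all parameters hypothetical).
# Scores predict both an outcome (GPA) and dropout risk; if we study
# only the students still enrolled, the score's apparent predictive
# power shrinks.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

sat = rng.normal(1100, 150, n)                   # invented score distribution
gpa = 1.0 + 0.002 * sat + rng.normal(0, 0.4, n)  # outcome partly score-driven

# Lower scores -> higher dropout probability (made-up logistic link)
p_dropout = 1 / (1 + np.exp((sat - 950) / 60))
survived = rng.random(n) > p_dropout

corr_all = np.corrcoef(sat, gpa)[0, 1]
corr_survivors = np.corrcoef(sat[survived], gpa[survived])[0, 1]
print(f"correlation, all entrants:   {corr_all:.2f}")
print(f"correlation, survivors only: {corr_survivors:.2f}")  # attenuated
```

Conditioning on survival throws away exactly the low-score students whose outcomes carried most of the signal, which is why a survivors-only report understates the test’s predictive ability.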
I’ve listed the CDS-reported percent submitting SAT and ACT scores in each year at Ithaca. Note that the number of “No SAT scores” reported in the fact sheet matches, to within ~1%, the CDS-reported percent who did not submit the SAT and instead submitted the ACT. The slight <1% difference may relate to different classification of special students, such as non-first-time freshmen and part-time students.
Most colleges in New York State show a similar trend with decreasing submitted SAT rate between 2006 and 2012, as reported in their CDS. For example, Cornell’s rates are below.
The New York State ACT participation rate is listed for the same years below, which shows a similar trend. As more students take both the SAT and ACT, more students are likely to find that they do significantly better on the ACT, and it’s to their advantage not to submit their SAT scores, so the submitted-SAT rate decreases. There is also a larger portion of students who only take the ACT.
New York State ACT Participation Rate
2006: 17% took ACT
2007: 21% took ACT
2008: 23% took ACT
2009: 25% took ACT
2010: 27% took ACT
2011: 28% took ACT
2012: 29% took ACT
This same trend occurs nationally, with an increasing percentage of students taking the ACT during this period. Today more students take the ACT than the SAT nationwide. However, New York State lags behind and is still primarily an SAT state.
I might argue that it isn’t just score context: the very idea that Khan “practice” can bump a score 100+ points, as Coleman has claimed, calls into question the reliability of scores within the narrow bands at the top, the very places where elite colleges are looking and attempting to split hairs. I’m in favor of testing, and perhaps there’s no perfect test, but the long-held reputation of the (old-old-old?) SAT may lend more credence to student score differences than the new test deserves.
The older SATs claimed to be less coachable/preppable, but there was still SAT-specific test preparation that could have a significant effect. For example, a test taker who knows the expected value of guessing (with the incorrect-answer penalty that the SAT used to have) knows that guessing helps on average even when only one possible answer is known to be incorrect. The older SATs’ verbal sections were heavily vocabulary-based, so there were SAT prep books with large lists of words found on old released SATs. On the math section, test prep books and courses would teach tricks like plugging the answers back into the question, which was often faster than solving the question the regular way.
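As a sketch of that expected-value arithmetic (assuming the old five-choice format with its 1/4-point wrong-answer penalty):

```python
# Expected value of guessing on an old-style SAT question:
# 5 choices, +1 for a correct answer, -1/4 for a wrong one.
def guess_ev(eliminated: int, choices: int = 5, penalty: float = 0.25) -> float:
    remaining = choices - eliminated
    p_correct = 1 / remaining
    return p_correct * 1.0 - (1 - p_correct) * penalty

for k in range(4):
    print(f"eliminated {k} choice(s): EV per guess = {guess_ev(k):+.3f}")
# eliminated 0 -> +0.000 (random guessing is neutral on average)
# eliminated 1 -> +0.063 (guessing is already worth it)
```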
Does this Khan practice-effect essentially render practice a requirement for many students to score at their highest potential compared to other students? I suppose it depends on how prevalent practice is. It seems as though the advertising of Khan by Coleman and the College Board could potentially make prep more widespread - it would be interesting to see data on proportions of kids in various score ranges who prep (with Khan or anything else) and whether that has changed in recent years.
Came across this today and thought of this thread. UTD had an auto-admission for SAT 1200+ and found that it wasn’t a great indicator of college performance, so will be using a more holistic measure (still considering test scores but not only test scores).
Looks like UT Dallas now has top 10% as the only automatic admission path, unlike many other Texas public universities that use a combination of rank and test scores for additional automatic admission paths. About 32% of frosh come from the top 10%.
Curious: are there stats from the test-optional schools on what the admit rates are for test-optional applicants vs. those who submitted? Wondering if this is just a way for schools to get more kids to apply and appear more selective.
@vhsdad when we visited Wake Forest, the presentation included a portion on their being test optional. The admissions officer was very clear that when they say test optional, they mean it. He said, “No, it won’t hurt your chances of being admitted.” He said it is exactly what it is.
It certainly came across as an authentic option. When I was listening to the presentation, it wasn’t as if I was biased in that direction for my student, because she had high test scores and was only visiting because of her grandfather, who is an alum. (She didn’t even apply.) But I left there thinking that if I had a student who didn’t want to submit test scores, it wouldn’t be a problem.
I’m happy about this trend. It helps highly capable students who suffered a medical condition during the high school years and whose SAT/ACT scores suffered as a result. I think it is wise to put all available energy into completing high school well and being accepted to colleges, without being penalized based on a moment-in-time standardized test.
Funny, I would have thought just the opposite. If someone recovers from a medical condition they can retake the SAT/ACT, but they usually don’t get a do-over on classes already completed.
^Boggles my mind that the WSJ article fails to mention the history of the SAT, the change in what it measures over the last two decades, the Redesigned SAT that debuted in 2016, or the fact that the College Board now touts a study claiming 100+ point increases due to practice on Khan.
They are relying on old research about an SAT that no longer exists. Naturally, the article is adapted from their new book. $$
It’s true that the ETS says the SAT no longer tries to identify innate ability. That’s a shame, because one of the primary benefits of intelligence testing is that it offers the promise of identifying ability wherever it may reside. After all, intelligence is distributed much more democratically than wealth and income. ETS should be front and center making this argument, rather than retreating from it because of political pressure.
Nevertheless, it’s clear that despite the changes and ETS’ dancing around the issue, the new SAT will be highly correlated with intelligence, just as the old one was, and just as any cognitively sensitive test is always going to be. If you look at the latest results from the SAT (https://collegereadiness.collegeboard.org/pdf/sat-percentile-ranks-gender-race-ethnicity.pdf), you will see the same gaps by race, the same differential ability in quantitative measures between males and females, the same greater variances for males and for whites generally, the same kurtosis in the score distributions for blacks, East Asians, etc. that we have seen for decades in the SAT, ACT, LSAT, UKCAT, MCAT, GRE, and just about every other test (including decades of IQ testing) I’ve ever seen.
Whatever the SAT measures has not changed appreciably over the years, although, as I implied, it will no longer be as good at identifying those aspects of intelligence that are less sensitive to coaching and other environmental advantages. Which is a shame. Chalk up another victory for the privileged class.