Make a Fearless Prediction: How will colleges use the SAT/ACT for the class of 2026?

The problem with using AP scores is that most students do not have many before 12th grade (and those who do may not have taken the same ones as other applicants).

Making greater use of AP scores would mean moving to a British-style system, where your admission is contingent on earning a sufficiently high score on the AP exam for the course you are taking in 12th grade. Or perhaps compressing the college admission process into the summer (AP scores available 7/1, colleges give admission decisions 7/15, admitted students must decide by 7/31, classes start just a few weeks later).

1 Like

It depends on a variety of factors, including the specific college. Regarding Cornell specifically: prior to COVID-19, Cornell used to state the following on its website about how it used test scores in admission decisions:

First and foremost, we look at your high school record, the rigor of your coursework, your grades, and your rank-in-class (don’t worry if your school doesn’t rank – that’s quite common). The personal application you write (essays, extracurriculars, etc.) is also a very important piece of Cornell’s selection process. Standardized testing plays a role, but probably not as much as you think.


Standardized test scores are only a small part of your application for admission. Let me say that again, standardized test scores are only a small part of your application for admission. SAT or ACT scores represent one Saturday in your high school career.


The little information available about actual decisions seemed reasonably consistent with the above. For example, several years ago I looked up Cornell admit rate by ACT score among Parchment users (largely self-reported) who had a high GPA + high course rigor. There didn’t appear to be much difference in admit rate across scores for applicants with a >30 ACT + high GPA + high course rigor. Several other colleges on Parchment, such as Vanderbilt, did not follow this pattern to the same degree as Cornell.

The upstate NY HS I attended sends a huge number of applicants to Cornell. Naviance for this HS shows a similar pattern to Parchment above. According to my HS’s Naviance, the majority of Cornell applicants with a 96%+ UW GPA were accepted regardless of SAT score, although the sample size for 96%+ UW GPA combined with <1300 SAT was very small. There was only a small increase in acceptance rate as score increased within this high-GPA group. In contrast, admit rate dropped dramatically below a 95% GPA at all SAT ranges. There were also some applicants rejected with stats as high as a 99% UW GPA + 1600 SAT. No stat range appeared to guarantee acceptance among applicants from my HS.
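To make that kind of tabulation concrete, here is a minimal sketch using an invented toy table of self-reported applicants (the column names and numbers are mine; neither Naviance nor Parchment exposes raw data like this) of computing admit rate by SAT bucket within a high-GPA group:

```python
import pandas as pd

# Invented toy data: each row is one self-reported applicant.
df = pd.DataFrame({
    "uw_gpa":   [97.1, 96.4, 98.0, 94.2, 96.8, 93.5, 99.0, 96.2],
    "sat":      [1280, 1450, 1560, 1510, 1340, 1460, 1600, 1490],
    "admitted": [True, True, True, False, True, False, False, True],
})

# Bucket SAT scores, then look at admit rates within the high-GPA group only.
df["sat_bucket"] = pd.cut(
    df["sat"],
    bins=[0, 1300, 1400, 1500, 1600],
    labels=["<=1300", "1310-1400", "1410-1500", "1510-1600"],
)
high_gpa = df[df["uw_gpa"] >= 96]

# A roughly flat "mean" column across buckets would match the pattern
# described above: score adds little once GPA + rigor clear the bar.
print(high_gpa.groupby("sat_bucket", observed=True)["admitted"].agg(["mean", "size"]))
```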

I realize self-reported data makes precision low, different HSs show different patterns, and there is a lot of self-selection (lower/higher scoring kids might be more likely to apply to particular Cornell schools and/or be more/less likely to be hooked). However, the point is that while the available sources are far from ideal, the limited information that is available suggests Cornell admission decisions did not track scores closely. I do not doubt that scores really were only a small part of the application at Cornell prior to COVID, as the Cornell website claimed.

1 Like

You are assuming that AP tests are “properly designed objective test[s]”.

As a college professor who has taught students who got credit through AP testing, I question that assertion.

But in complete honesty, I question the deeper assertion that there can ever be a single “properly designed objective test” even for a single subject that would work across all groups of students. I mean, I know my own exams don’t do it, which is why I’ve shifted as much of my teaching to project-based learning as possible—but that requires multiple assessment points.

Which in my utterly unhumble opinion is better, but it moves us away from the convenience that @ucbalumnus (correctly, I would argue) notes is so seductive about tests like the SAT and ACT.

1 Like

There is a false binary underlying your reasoning here—you’re presenting it as an either/or, where an objective test is either used or not used.

Much better, I would argue, is an array of assessments, so that you can get not only the regurgitation of facts but also reasoning ability, and analytical skill, and creativity.

“Wait—multiple assessments?” you ask. “You mean like the ones that go into a student’s GPA?” you inquire. “Yes, precisely,” I answer, “which is why nearly all colleges rely on GPA much more than single-point standardized test scores.”

p.s. What objective test would be necessary for someone going into a career as a novelist or a sculptor? Serious question, that.

1 Like

Pretty much any profession that does not require a licensing exam would be an answer to the question posed by @jasperfant . For example:

  • Producing art and literature, as noted above.
  • Computer software and hardware design.
  • Engineering under the industrial exemptions from PE licensing.
  • College faculty.
  • Journalism.
  • Many “general business” jobs.
  • Politics.

Obviously, in the course of education for some of the above, someone may encounter tests. But these are not necessarily standardized or objective tests that everyone going into the profession is required to take.

1 Like

I’m struggling with how a school can walk away from a data point like standardized test scores. Schools can weigh that data point however they like, but it seems like it would always have some value, because there are years of data and it’s standardized.

Perhaps schools are using their own data from past years to get to the same spot? At my kid’s school, about 30 people apply to Cornell every year, so they should have a solid database to work off of when they look at my D’s transcript.

I can definitely believe that GPA combined with rigor is a better indicator of potential success, but from what I have read and seen, there is continual, climbing grade inflation that somehow needs to be adjusted for, IMO. AP exam scores would seem a suitable adjustment, but as many have pointed out, they are not always available.

From what we have seen this year and last year, I do not think the SAT/ACT were playing a massive role, at least not to the extent I would have imagined. T20 colleges have a bar for standardized test scores that is much lower than most people think is required, and once you are over that bar, there seems to be a randomness to it. Perhaps it comes down to the fact that you were a soccer player and your AO was too, etc.

On CC, many like to say that we only know a small part of the application for certain, which is true. But most kids have a very good idea of how well someone writes, know the teachers, and can guess pretty well who will get the best recs, etc. These kids, pre-COVID at least, were around each other for years, and among the top kids there are not that many secrets.

I was told this past weekend by a friend that at her daughter’s HS (among the best public HSs in NJ), practically “everyone” gets an A if s/he showed up for classes this past year. It may be an exaggeration, but still…


1 Like

Yes, the SAT started out trying to be primarily an aptitude test, but it has gradually moved away from that objective, at least partly due to competition from the ACT, a decidedly achievement-type test. Achievement tests are much more preppable. As a result, students could improve their test scores much more easily by prepping, and more and more of them gravitated toward the ACT over time. With both the ACT and the newer SAT as achievement tests, there’s less of a need for other achievement tests such as the SAT subject tests.

I suspect that your definition of “weedend” differs from mine. :sweat_smile:

2 Likes

:stuck_out_tongue_winking_eye:

And yet even before the current crisis, a growing number of colleges were doing precisely that.

Some colleges may have been dissatisfied with the existing options for US standardized testing in admissions (the SAT/ACT not being sufficiently predictive or useful for their students, and requiring other tests posing too much of a barrier to applicants), rather than opposed to all standardized tests.

2 Likes

Exactly. Close to 1,000 four-year schools were test optional before the pandemic.

Columbia just announced they are staying test optional for next year’s applicants: https://undergrad.admissions.columbia.edu/content/one-year-extension-test-optional-admissions-2021-2022

1 Like

The “randomness” likely means that non-stat factors, into which the student has little visibility, are an important contributor to admission decisions. This is more than just whether you were a soccer player and your AO was too. For example, a kid might have some idea whether his LOR is good or bad, but I expect applicants rarely have a good idea of how their LOR compares to others within the national pool of applicants.

Regarding LORs, the Harvard lawsuit analysis found that the combination of GC + teacher LOR ratings was the most influential analyzed component of the application, meaning that if the GC + teacher LORs were removed from the model, predictive accuracy decreased by a greater factor than occurred when removing any other analyzed factor (a toy sketch of this kind of drop-one comparison follows the rating scale below). Harvard reader guidelines list the following LOR rating scale. Roughly 3/4 of admits were among the top 1/4 of applicants in LOR ratings, compared to roughly 1/2 of admits among the top 1/4 of applicants in AI stats (the AI includes GPA + SAT + SAT II subject tests).

1. Strikingly unusual support. “The best of a career,” “one of the best in many years,” truly over the top.
2. Very strong support. “One of the best” or “the best this year.”
3+. Well above average, consistently positive.
3. Generally positive, perhaps somewhat neutral or generic.
3-. Somewhat neutral or slightly negative.
4. Negative or worrisome report.
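As promised above, here is a minimal toy sketch of that drop-one-factor comparison, with entirely invented feature names, data, and model (nothing here reflects the actual lawsuit analysis). The idea is simply that the factor whose removal most reduces predictive accuracy is the most influential:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Invented applicant ratings: academic index, LOR rating, EC rating.
X = rng.normal(size=(n, 3))
# In this synthetic data, admission leans most heavily on the "LOR" column.
latent = 0.5 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2]
y = (latent + rng.normal(size=n)) > 1.0

features = ["academic_index", "lor_rating", "ec_rating"]
full_acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
for i, name in enumerate(features):
    reduced = np.delete(X, i, axis=1)  # drop one factor from the model
    acc = cross_val_score(LogisticRegression(), reduced, y, cv=5).mean()
    # The bigger the accuracy drop relative to the full model,
    # the more influential the removed factor.
    print(f"without {name}: accuracy {acc:.3f} (full model {full_acc:.3f})")
```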

1 Like

^That the LoRs are among the most influential in an application isn’t surprising. On the one hand, they’re arguably the most objective elements in some ways. And on the other, few get the top ratings because of their recommenders’ lack of familiarity, time constraints, or reputational concerns.

However, despite their importance, I still don’t think LoRs are what’s driving that “randomness” (I prefer the term “unpredictability”), because the same LoRs were presumably sent to all the schools an applicant applied to, and often, s/he got some mixed results.

LORs were an arbitrary example. At highly selective colleges, there are many other influential contributors to the decision besides just stats + LORs. Considering these many other factors does not mean the decision is “random,” but it may seem “unpredictable” to the overwhelming majority of applicants, who do not have a good sense of how they compare to the applicant pool on those factors, or of how the colleges use those factors in admission decisions.

Regarding different results at different colleges: different colleges use different admission systems that emphasize different criteria to different degrees. They also have different applicant pools and different degrees of selectivity in different fields or for different subgroups. The colleges have different institutional needs/goals. At highly selective colleges, the applicant also generally writes different essays, has different interviews, has different ED/RD application status, etc. It should come as no surprise that admission decisions for a particular applicant vary among colleges.

While a not-top student is unlikely to get a top LoR because of recommender weaknesses, a top student may “lose” a top LoR because of recommender weaknesses, or because they were “rationed out” of their best recommenders by high school policies meant to avoid overloading recommenders.

However, before applying, the applicant does not know how their LoRs compare to those of other applicants in each college’s applicant pool. Many applicants do not know what their LoRs say in the first place, so it is one of the least visible and least comparable parts of the application, as well as being less under the applicant’s control than other parts. Hence, to applicants, the effect of LoRs appears “random” even though it is far from random when seen from the inside of a college admissions office.

2 Likes

It’s accurate that GPA is a better indicator than test scores.

What many who point this out seem to forget to mention is that the combination of the two is a better indicator than either individually.

Why is “best single indicator” better than “best indicator”?

This is like the argument I see about government working on what they don’t consider to be the #1 priority, as if it’s impossible to tackle more than one problem at a time.
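For what it’s worth, here is a toy numeric illustration of the “combination beats either alone” point, on synthetic data with made-up coefficients (no claim about real validity-study numbers):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
# Synthetic students: HS GPA and test score are correlated but not redundant.
hs_gpa = rng.normal(size=n)
test = 0.5 * hs_gpa + rng.normal(size=n)
# Outcome (say, first-year college GPA) depends on both, plus noise.
college_gpa = 0.6 * hs_gpa + 0.3 * test + rng.normal(size=n)

for name, X in [("HS GPA only", hs_gpa[:, None]),
                ("test only", test[:, None]),
                ("both", np.column_stack([hs_gpa, test]))]:
    r2 = LinearRegression().fit(X, college_gpa).score(X, college_gpa)
    print(f"{name:12s} R^2 = {r2:.3f}")
```

Because the two predictors overlap only partially, each explains some variance the other misses, so “both” comes out ahead of either alone.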

I also suspect very, very few LoRs fall outside the top 3 categories listed above. And there probably aren’t that many in bucket 3.

1 Like