The problem with using AP scores is that most students do not have many before 12th grade (and those who do may not have taken the same ones as others).
Making greater use of AP scores would mean moving to a British-style system, where admission is contingent on earning a sufficiently high score on the AP exams associated with the courses you are taking in 12th grade. Or perhaps compressing the college admission process into the summer (AP scores available 7/1, colleges issue admission decisions 7/15, admitted students must decide by 7/31, and classes start just a few weeks later).
It depends on a variety of factors, including the specific college. Regarding Cornell specifically, prior to COVID-19, Cornell stated the following on its website about how it used test scores in admission decisions:
First and foremost, we look at your high school record, the rigor of your coursework, your grades, and your rank-in-class (don't worry if your school doesn't rank; that's quite common). The personal application you write (essays, extracurriculars, etc.) is also a very important piece of Cornell's selection process. Standardized testing plays a role, but probably not as much as you think.
…
Standardized test scores are only a small part of your application for admission. Let me say that again, standardized test scores are only a small part of your application for admission. SAT or ACT scores represent one Saturday in your high school career…
The little information available about the actual decisions seemed reasonably consistent with the above. For example, several years ago I looked up Cornell admit rate by ACT score among Parchment users (largely self-reported) who had a high GPA + high course rigor. There didn't appear to be much difference in admit rate across scores for applicants with >30 ACT + high GPA + high course rigor. Several other colleges on Parchment, such as Vanderbilt, did not follow this pattern to the same degree as Cornell.
The upstate NY HS I attended sends a huge number of applicants to Cornell. Naviance for this HS shows a pattern similar to Parchment above. According to my HS's Naviance, the majority of Cornell applicants with a 96%+ UW GPA were accepted regardless of SAT score, although the sample size for 96%+ UW GPA combined with <1300 SAT was very small. There was only a small increase in acceptance rate as score increased within this high-GPA group. In contrast, admit rate dropped dramatically below 95% GPA at all SAT ranges. There were also some applicants who were rejected with stats as high as 99% UW GPA + 1600 SAT. No stat range appeared to guarantee acceptance among applicants from my HS.
I realize self-reported data makes precision low, different HSs show different patterns, and there is a lot of self-selection (lower/higher score kids might be more likely to apply to particular Cornell schools and/or be more/less likely to be hooked). However, the point is that while the available sources are far from ideal, the limited information that is available suggests Cornell admission decisions did not track scores well. I do not doubt that scores really were only a small part of the application at Cornell prior to COVID, as the Cornell website claimed.
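For what it's worth, the kind of slice I described is simple to compute if you have the raw data. Here is a minimal sketch, assuming a hypothetical CSV of self-reported results with gpa, sat, and admitted columns (the file name and column names are made up; neither Parchment nor Naviance exports data in this form):

```python
import pandas as pd

# Hypothetical self-reported results; the file and columns are assumptions.
df = pd.read_csv("self_reported_results.csv")  # gpa (UW %), sat, admitted (0/1)

# Restrict to the high-GPA group, then compare admit rates across SAT buckets,
# keeping the count so tiny cells (e.g. 96%+ GPA with <1300 SAT) are visible.
high_gpa = df[df["gpa"] >= 96]
sat_bucket = pd.cut(high_gpa["sat"], bins=[400, 1300, 1400, 1500, 1600])
print(high_gpa.groupby(sat_bucket, observed=True)["admitted"].agg(["mean", "count"]))
```

If the claimed pattern holds, the "mean" column would be roughly flat across buckets within the high-GPA group, which is exactly what a flat admit rate conditional on GPA looks like.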
You are assuming that AP tests are "properly designed objective test[s]".
As a college professor who has taught students who got credit through AP testing, I question that assertion.
But in complete honesty, I question the deeper assertion that there can ever be a single "properly designed objective test" even for a single subject that would work across all groups of students. I mean, I know my own exams don't do it, which is why I've shifted as much of my teaching to project-based learning as possible, but that requires multiple assessment points.
Which in my utterly unhumble opinion is better, but it moves us away from the convenience that @ucbalumnus (correctly, I would argue) notes is so seductive about tests like the SAT and ACT.
There is a false binary underlying your reasoning here: you're presenting it as an either/or, where an objective test is either used or not used.
Much better, I would argue, is an array of assessments, so that you can get not only the regurgitation of facts but also reasoning ability, and analytical skill, and creativity.
"Wait, multiple assessments?" you ask. "You mean like the ones that go into a student's GPA?" you inquire. "Yes, precisely," I answer, "which is why colleges nearly all rely on GPA much more than single-point standardized test scores."
p.s. What objective test would be necessary for someone going into a career as a novelist or a sculptor? Serious question, that.
Pretty much any profession that does not require a licensing exam would be an answer to the question posed by @jasperfant. For example:
Producing art and literature, as noted above.
Computer software and hardware design.
Engineering under the industrial exemptions from PE licensing.
College faculty.
Journalism.
Many âgeneral businessâ jobs.
Politics.
Obviously, in the course of education for some of the above, someone may encounter tests. But these are not necessarily standardized or objective tests that everyone going into the profession is required to take.
I'm struggling with how a school can walk away from a data point like standardized test scores. Schools can weigh that data point however they like, but it seems like it would always have some value because there are years of data and it's standardized.
Perhaps schools are using their own data from past years to get to the same spot? At my kid's school about 30 people apply to Cornell every year, so they should have a solid database to work off when they look at my D's transcript.
I can definitely believe that GPA combined with rigor is a better indicator of potential success, but from what I have read and seen, there is continual, climbing grade inflation that somehow needs to be adjusted for, IMO. AP exam scores would seem suitable for that, but as many have pointed out, they are not always available.
From what we have seen this year and last year, I do not think SAT/ACT were playing a massive role, at least not to the extent I would have imagined. T20 colleges have a bar for standardized test scores that is much lower than most think should be required, and once you are over that bar, there seems to be a randomness to it. Perhaps it comes down to the fact that you were a soccer player and your AO was too, etc.
On CC, many like to say that we only know a small part of the application for certain, which is true. But most kids have a very good idea of how well someone writes, know the teachers, and can guess pretty well who will get the best recs, etc. These kids, pre-COVID at least, were around each other for years, and for the top kids there are not that many secrets.
I was told this past weekend by a friend that at her daughter's HS (among the best public HSs in NJ), practically "everyone" gets an A if s/he showed up for classes this past year. It may be an exaggeration, but still…
Yes, the SAT started out trying to be primarily an aptitude test, but it has gradually moved away from that objective, at least partly due to competition from the ACT, a decidedly achievement-type test. Achievement tests are much more preppable, so students can improve their scores much more easily by prepping, and more and more of them gravitated toward the ACT over time. With both the ACT and the newer SAT being achievement tests, there's less of a need for other achievement tests such as the SAT subject tests.
Some colleges may have been dissatisfied with the existing options in US standardized testing for admissions (the SAT/ACT not being sufficiently predictive/useful for their students, and requiring other tests being too much of a barrier to applicants), rather than opposed to all standardized tests.
The "randomness" likely reflects non-stat factors, into which the student has little visibility, being important contributors to admission decisions. This is more than just whether you were a soccer player and your AO was too. For example, a kid might have some idea whether his LOR is good or bad, but I expect kids rarely have a good idea of how their LOR compares to others within the national pool of applicants.
Regarding LORs, the Harvard lawsuit analysis found that the combination of GC + teacher LOR ratings was the most influential analyzed component of the application, meaning that if the GC + teacher LOR ratings were removed from the model, the predictive accuracy decreased by a greater factor than when removing any other analyzed factor (a toy sketch of this kind of ablation comparison follows the rating scale below). Harvard reader guidelines list the following LOR rating scale. Roughly 3/4 of admits were among the top 1/4 of applicants in LOR ratings, compared to roughly 1/2 of admits among the top 1/4 of applicants in AI stats (the AI includes GPA + SAT + SAT II subject tests).
1. Strikingly unusual support. "The best of a career," "one of the best in many years," truly over the top.
2. Very strong support. "One of the best" or "the best this year."
3+ Well above average, consistently positive.
3. Generally positive, perhaps somewhat neutral or generic.
3- Somewhat neutral or slightly negative.
4. Negative or worrisome report.
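To make the "removed from the model" comparison concrete, here is a minimal sketch of that kind of feature-ablation measurement. Everything here is hypothetical: the data is synthetic, the coefficients are made up, and the feature names are stand-ins, not the actual lawsuit dataset or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for application components (not real data).
lor_rating = rng.normal(size=n)   # combined GC + teacher LOR rating
ai_stats = rng.normal(size=n)     # academic index (GPA + test scores)
essays = rng.normal(size=n)

# Hypothetical outcome in which LOR ratings carry the most weight.
logit = 1.5 * lor_rating + 1.0 * ai_stats + 0.5 * essays
admit = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

features = {"lor_rating": lor_rating, "ai_stats": ai_stats, "essays": essays}
full_X = np.column_stack(list(features.values()))

def cv_accuracy(X, y):
    return cross_val_score(LogisticRegression(), X, y, cv=5).mean()

baseline = cv_accuracy(full_X, admit)
print(f"full model accuracy: {baseline:.3f}")

# Ablation: drop one feature at a time and measure the accuracy loss.
# The feature whose removal hurts accuracy most is the most "influential".
for i, name in enumerate(features):
    reduced = np.delete(full_X, i, axis=1)
    print(f"without {name}: accuracy falls by {baseline - cv_accuracy(reduced, admit):.3f}")
```

In this toy setup, dropping lor_rating costs the most accuracy because it was given the largest coefficient, which is the same kind of evidence the lawsuit analysis used to call LOR ratings the most influential analyzed component.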
^That the LoRs are among the most influential in an application isn't surprising. On the one hand, they're arguably the most objective elements in some ways. And on the other, few get the top ratings because of their recommenders' lack of familiarity, time constraints, or reputational concerns.
However, despite their importance, I still don't think LoRs are what's driving that "randomness" (I prefer the term "unpredictability"), because the same LoRs were presumably sent to all the schools an applicant applied to, and often, s/he got some mixed results.
LORs were an arbitrary example. At highly selective colleges, there are many other influential contributors to the decision besides just stats + LORs. Considering these many other factors does not mean the decision is "random", but it may seem "unpredictable" to the overwhelming majority of applicants, who do not have a good sense of how they compare to the applicant pool on those factors, or of how the colleges use those factors in admission decisions.
Regarding different results at different colleges: different colleges use different admission systems that emphasize different criteria to different degrees. They also have different applicant pools, different degrees of selectivity in different fields or for different subgroups, and different institutional needs/goals. At highly selective colleges, the applicant also generally writes different essays, has different interviews, has different ED/RD application status, etc. It should come as no surprise that admission decisions for a particular applicant vary among colleges.
While a not-top student is unlikely to get a top LoR because of recommender weaknesses, a top student may "lose" a top LoR because of recommender weaknesses, or because they were "rationed out" of their best recommenders due to high school policies meant to avoid overloading recommenders.
However, before applying, the applicant does not know how their LoRs compare to those of other applicants in each college's applicant pool. Many applicants do not know what their LoRs say in the first place, so it is one of the least visible and least comparable parts of the application, as well as being less under the applicant's control than other parts. Hence, to applicants, the effect of LoRs appears "random" even though it is far from random when seen from the inside of a college admissions office.
It's accurate that GPA is a better indicator than test scores.
What many who point this out seem to forget is that the combination of the two is a better indicator than either individually.
Why is "best single indicator" better than "best indicator"?
This is like the argument I see about government working on what they don't consider to be the #1 priority, as if it's impossible to tackle more than one problem at a time.
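A toy illustration of the "combination beats either alone" point, using synthetic data and made-up noise levels rather than any real admissions dataset: when GPA and test score each measure underlying ability with independent noise, combining them predicts the outcome better than either one alone.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000

# Synthetic students: GPA and test score each partly reflect ability,
# with independent noise (grade inflation vs. one-Saturday variance).
ability = rng.normal(size=n)
gpa = ability + rng.normal(scale=0.8, size=n)
test = ability + rng.normal(scale=1.0, size=n)
college_gpa = ability + rng.normal(scale=0.7, size=n)  # outcome to predict

def r2(*predictors):
    X = np.column_stack(predictors)
    return LinearRegression().fit(X, college_gpa).score(X, college_gpa)

print(f"HS GPA alone: R^2 = {r2(gpa):.3f}")
print(f"Test alone:   R^2 = {r2(test):.3f}")
print(f"GPA + test:   R^2 = {r2(gpa, test):.3f}")  # higher than either alone
```

Because the two predictors' errors are independent, each one corrects some of the other's noise, so the combined R^2 exceeds both single-predictor values, which is exactly the case for using both rather than the single best one.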
I also suspect very, very few LoRs fall outside the top 3 categories listed above. And there probably aren't that many in bucket 3.