More Colleges Backing off SAT and ACT Admissions Rule

@Data10 Thanks for that; very interesting.

But I think it can be argued that other factors held as important carry similar “weight” if given the same litmus test. In the 2007 study, if we do the same exercise of removing predictors to estimate their significance, predictors 7+8 (HS workload + demographics) exhibit a 2.6% change.
(This is similar to what you did, but comparing models #20 and #13 in the paper, noting a change from 30.8% to 28.2%. Strangely, the study does not use a similar pair of models where only HS GPA is switched out.)

Comparing models with fewer predictors (models #11 vs. #1/#2), we see that GPA adds ~9.2% and SAT I without Writing adds ~7.4%. GPA is a better guide, but the SAT is not far behind. Using SAT I + Writing, we can compare models #13 vs. #1/#4 and get GPA adding ~8.2% while SAT + Writing adds ~8.6%.

It would seem that GPA + SAT I + Writing is a good combination, or at least not out in the weeds.
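For anyone who wants to replicate that kind of comparison, here’s a minimal sketch of the incremental-R² exercise on made-up data; the column names (hs_gpa, sat, fygpa) are stand-ins, not the study’s actual variables:

```python
# Minimal sketch of an incremental-R^2 comparison on synthetic data.
# None of this reproduces the 2007 study; it only shows the mechanics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
ability = rng.normal(size=n)  # unobserved common factor
df = pd.DataFrame({
    "hs_gpa": ability + rng.normal(scale=1.0, size=n),
    "sat":    ability + rng.normal(scale=1.0, size=n),
})
df["fygpa"] = ability + rng.normal(scale=1.0, size=n)  # first-year college GPA

def r2(formula):
    """R^2 of an OLS fit for the given formula."""
    return smf.ols(formula, data=df).fit().rsquared

# How much does each predictor add on top of the other?
print(f"GPA adds {100 * (r2('fygpa ~ sat + hs_gpa') - r2('fygpa ~ sat')):.1f}% R^2")
print(f"SAT adds {100 * (r2('fygpa ~ sat + hs_gpa') - r2('fygpa ~ hs_gpa')):.1f}% R^2")
```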

The UCLA study shows that the CIRP questionnaire provided data sufficiently correlated with the SAT I results that the SAT’s inclusion was not necessary. However, this would seem to be a subjective set of data that is easily gamed if used as an admissions tool (namely, questions asking students to rate themselves, hours spent studying, college plans, students’ goals and values, and student and parent background). Avoiding the issues (bias, gaming, etc.) related to such self-reported data leaves us with… testing, I’d think. If we consider the ideal of testing to be a bias-free method of assessing college success, then trying to achieve that is just as reasonable as - and more implementable than - trying to achieve such assessment through other means.

It’s worth noting that, for regression analyses involving people, this level of explained variance is considered moderately predictive.

I’m not sure if anyone recalls the quote from the Harvard admissions dean on what best predicts college performance at Harvard. He said that AP/IB tests were the best predictors, followed by subject tests, grades, SAT/ACT, with the writing test being as predictive as the subject tests.

This seemingly innocuous quote started the AP frenzy we’re in, and I think that while the top schools agree on APs being a good predictor, they didn’t anticipate students (and parents) taking ten or more of them.

I don’t mean to look down on students with low ACT/SAT scores and high GPAs, but there is no way in my mind that a student with a 4.0 GPA in the most rigorous classes cannot score a 30 on the ACT. Either you’ve cheated your way through high school or you are an extremely slow test taker (which will come back to bite you in college). After spending $20 on the ACT red book and studying the material for 20 hours, there is no way someone who is Harvard material will score below the 90th percentile.

To put it simply, if you can’t do decently on a test like the ACT or SAT, 99% of the time you aren’t ready for academics at an elite university.

@freepariah agreed, and it supports the notion that not requiring the ACT or SAT allows grade inflation to succeed and paves the way for more preferences to prevail.

However, a small number of selective test-optional or no-test colleges can free ride on any effect of standardized testing resisting grade inflation because most other colleges do require the SAT or ACT.

Kudos to the poster above for noting the restriction-of-range issue. It wouldn’t occur to even one out of a thousand casual readers of these studies (who are really the target). The College Board in its research series typically adjusts for range restriction, but that is not the case with the “fair test” types, whose goal is to eliminate testing because it reveals differences in intelligence.
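For reference, one standard adjustment is Thorndike’s Case II correction for range restriction; a minimal sketch, with invented numbers:

```python
# Thorndike's Case II correction: given the correlation r observed in the
# restricted (admitted) sample and the ratio of unrestricted to restricted
# predictor SDs, estimate the correlation in the full applicant pool.
import math

def correct_for_range_restriction(r, sd_full, sd_restricted):
    k = sd_full / sd_restricted
    return (r * k) / math.sqrt(1 - r**2 + (r**2) * (k**2))

# Invented example: r = 0.35 among admits, SAT SD of 75 among admits
# vs. roughly 200 nationally -> corrected r of about 0.71.
print(correct_for_range_restriction(0.35, sd_full=200, sd_restricted=75))
```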

Another huge methodological flaw that goes unnoticed - or is only casually mentioned - is collinearity in the data. Simply put, a study like the Ithaca College paper looks at admitted students, each of whom presented an HS GPA and an SAT score. Presumably, if someone presented a 4.0 GPA but only a 900 SAT, he was rejected. Similarly, someone presenting a 1.5 HS GPA was rejected despite a 1500 SAT. These are extremes, but more generally there is going to be some correlation between HS GPA and SAT, because each metric has an input factor of intelligence, which is not measured explicitly but can only be imperfectly inferred.

Therefore, it is not very surprising that, when looking at the data, disaggregating each primary component of the admissions decision - in particular GPA, SAT, and some amorphous “rigor” score assigned by some college admissions office clerk - would not show huge differences in predictive ability. First, these measures are all going to be correlated with one another, the omitted causative variable being intelligence. And second, very anomalous, idiosyncratic data points (high SAT, low GPA, or vice versa) will have already been filtered out by the admissions process.
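A quick simulation makes both points concrete. Everything here - the latent “intelligence” factor, the noise levels, the admissions bar - is invented for illustration:

```python
# Toy model: HS GPA and SAT share a latent ability factor (collinearity),
# and admissions keeps only applicants whose composite clears a bar
# (which throws out the high-GPA/low-SAT and low-GPA/high-SAT extremes).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
g = rng.normal(size=n)                        # latent ability (never observed)
hs_gpa  = g + rng.normal(scale=1.0, size=n)
sat     = g + rng.normal(scale=1.0, size=n)
col_gpa = g + rng.normal(scale=1.0, size=n)   # college outcome

admitted = (hs_gpa + sat) > 1.5               # crude composite admissions rule

print("corr(hs_gpa, sat): pool =", round(np.corrcoef(hs_gpa, sat)[0, 1], 2),
      " admits =", round(np.corrcoef(hs_gpa[admitted], sat[admitted])[0, 1], 2))
for name, x in [("hs_gpa", hs_gpa), ("sat", sat)]:
    r_pool  = np.corrcoef(x, col_gpa)[0, 1]
    r_admit = np.corrcoef(x[admitted], col_gpa[admitted])[0, 1]
    print(f"{name} vs college GPA: pool r = {r_pool:.2f}, admits r = {r_admit:.2f}")
```

Both predictors come out weaker, and more alike, among the admits than in the full pool.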

In order to truly test the predictive ability of a test within a particular college environment, a school would need to admit one cohort based on all the “non-testing” factors and a separate cohort simply on the basis of a test score, and then compare results - preferably first or at latest second year college GPA. (By third year, the students will have self-sorted, with the less intelligent going into the easier majors and vice versa, on average of course.)
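If a school ever ran that experiment, the analysis itself would be trivial; a sketch with placeholder GPA numbers (nothing here comes from any real cohort):

```python
# Compare first-year GPA between a cohort admitted on non-test factors
# and one admitted on test score alone. The numbers are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
gpa_no_test   = rng.normal(3.1, 0.5, size=500)  # admitted without test scores
gpa_test_only = rng.normal(3.2, 0.5, size=500)  # admitted on test score alone

t, p = stats.ttest_ind(gpa_test_only, gpa_no_test, equal_var=False)
print(f"mean difference = {gpa_test_only.mean() - gpa_no_test.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```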

This was not done in the Ithaca study, but by now they should have some preliminary data from their first “no test” admissions cohorts, so perhaps we will see something like this in the future, even though presumably the “testing” cohort will be noisy with the other measures. But perhaps not. The study was quite explicit that the reason for going test-optional was to increase applications and enrollment, especially from minorities (who test poorly):

“In 2009, the College decided to strategically position itself for breaking away from the predicted rapid decline of the high school graduate population in Northeast. The strategies laid out include…propos[ing] a test-optional admission policy in order to increase applications not only from its primary markets, but also from more racially diverse communities.” (p.4)

In other words, this is basically a marketing strategy, and kudos to Ithaca for being up front about it. Highly selective institutions like Bowdoin are likely using this approach to increase admissibility of low scoring groups. So, if you happen to be Asian, don’t think of not providing your test scores!

For my part, I would bet the farm that test scores will be the best single predictor of success, because they will be most correlated with intelligence. Of course, smarts alone do not predict success - there are many other factors that go into it. One should view smarts as a necessary but not sufficient condition. Perhaps a hybrid system would be most appropriate in college admissions. A minimum score on standardized testing would be a “first screen” that gets the applicant to the second round, during which HSGPA and the various “soft” measures like leadership, perseverance, special talents, etc. are weighed in order to reach a final decision.
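As a toy illustration of that two-round idea - the floor, the weights, and the field names below are all arbitrary:

```python
# Hypothetical two-stage screen: a hard test-score floor, then a weighted
# score over HS GPA and "soft" measures for the second round.
def rank_second_round(applicants, sat_floor=1300):
    finalists = [a for a in applicants if a["sat"] >= sat_floor]
    def soft_score(a):  # arbitrary weights for illustration
        return 0.5 * a["hs_gpa"] / 4.0 + 0.3 * a["leadership"] + 0.2 * a["talent"]
    return sorted(finalists, key=soft_score, reverse=True)

pool = [
    {"name": "A", "sat": 1450, "hs_gpa": 3.7, "leadership": 0.9, "talent": 0.4},
    {"name": "B", "sat": 1250, "hs_gpa": 4.0, "leadership": 0.8, "talent": 0.9},
    {"name": "C", "sat": 1350, "hs_gpa": 3.9, "leadership": 0.3, "talent": 0.8},
]
print([a["name"] for a in rank_second_round(pool)])  # ['A', 'C']; B misses the floor
```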

Med schools seem to strictly use a GPA cutoff and then use the MCAT and other aspects to discern differences. I never understood why. It might be because the resultant pool of applicants using the MCAT as a cutoff doesn’t provide sufficient diversity (in a personality-and-interest sense). Perhaps something similar applies here with SATs?

Law schools (used to, anyway) use a formula involving GPA, LSAT, and school of origin to determine admission, making it somewhat similar to undergrad admission.

I see a need for tests, but I am skeptical that the current tests - the New SAT for numerous reasons, the ACT for its reliance on tight timing - demonstrate very meaningful differences between students above, say, the 95th percentile. Or at least I get the feeling that admissions offices treat small score differences as more meaningful than they really are, although I suppose they have so many applicants that they need to do some arbitrary slicing and dicing, and using scores at least provides some semblance of fairness.

However, those other factors that contribute to HS GPA also contribute to college GPA, which is why HS GPA is found to be a stronger correlate of college GPA than the SAT or ACT.

Harvard’s admission director did say a few years ago that the strength of predictors was AP scores > SAT subject tests > HS GPA > SAT or ACT. I.e., standardized tests measuring actual academic achievement were stronger predictors than those targeting aptitude, which is only part of what contributes to college success.

@PurpleTitan

I’m happy to see stats; I’ve not seen them. I’d especially like to see stats that look at socioeconomically similar schools/students.

My guess is, as I noted, that most schools these days in high-achieving areas provide lots of AP classes due to community pressure.

But it still means that if 8% more students got a 3 on at least one AP in 2013 than in 2003, then, if your theory is accurate, those 8% of US students should have done better in college in 2014 than they did in 2004. I doubt stats suggest that to be true, but I’m happy to look at any that support that claim.

What is missing from this discussion is the absurdity of the argument that “if a kid who has good GPA can’t just buy a book, study more and get a 30 there is something wrong…”

What is missing is the waste of time and money. IF GPA is as good, or even nearly as good, an indicator, kids are ALREADY GOING TO SCHOOL! Why add a useless hoop to jump through? Same with APs. Most of the time, for students, it is just “a game” to goose GPA (you can hardly get into a top-5 UC without APs, for instance, due to GPA thresholds).

I know loads of kids who spend the summer in SAT camp, do tutoring, etc. Aside from providing a nice “gig economy” job to some master’s and PhD students around town and to the Test Industrial Complex, it’s a complete waste, as far as I can tell. I mean, maybe, if you’re Harvard/Stanford/MIT/CIT and need to differentiate between the best of the best of the best, but for your “average” good school? I mean, just look at the SAT/ACT scale for admits.

Brown and many others publish theirs. They take plenty of <700 SAT takers when they could load up on 800s. You think they’re taking kids they think will fail?

The time and money wasted on SAT/ACT/AP testing has to be considered. I tend to think it’s make-work.

As notable as this may be, it would be nice to have his corroborating data. He could be relying on some of the data we’re scrutinizing here.

Standardized testing is absolutely necessary to differentiate between high schools. A 4.0 at Stuyvesant and a 4.0 at my high school are not comparable at all. The AVERAGE Stuyvesant student will score a 33 on the ACT, compared to only 1 or 2 students scoring 33+ at my school - this despite the fact that the grade distributions are similar across both high schools. Do you think colleges should not have a way of taking this into account? @CaliDad2020

AP tests were (and still mostly are) used for a very different purpose than the SAT/ACT - to allow colleges a more convenient, standardized way to offer advanced placement to entering students who are advanced in a subject.

Of course, the proliferation of AP tests of lower value (from the point of view of advanced placement in college) has clouded that purpose (although some of those lower-value tests have induced high schools to offer courses, like CS Principles, that are valuable to high school students in terms of learning).

Absolutely correct.

In my city with 8 public high schools, one has a policy that students can re-take exams as many times as they want - literally - with no cap on their test grade. At another school, more than one honors teacher makes their exams so hard they have to curve them, then limits the number of A’s to less than 50%.

How do you compare kids from these two schools with just a GPA? How do you compare two kids from the first school, one who gets 95s on each test the first time and another who takes 2-3 retests to squeak out a 90?

Standardized tests are far, far from “fair,” and the cost of expensive SAT/ACT prep classes puts those who can’t pay for them at a disadvantage compared to those who can. But all kids can study for these exams, and the exams allow a comparison of performance on the same material.

@FreePariah I think they already have plenty of ways of taking it into account, especially for the large number of US high schools they have already seen kids from.

Again, colleges have a VAST WEALTH of real live experiments to analyze. Your average “big” school gets 30k or more apps a year, admits 6k(?), enrolls 2-3k (or whatever); wash, rinse, repeat. Every. Year. Some since 1890.

They have Adcoms, and regional adcoms and alumni reps and legacy reps and so on and so on…

Again, even top schools don’t take exclusively, or even mostly, top SAT/ACT scores. Why? Because they want a majority of their student body to be less well prepared? Or because they know there are other metrics that tell them who will succeed, thrive, and contribute more interesting skills and talents than rote test taking to their campus.

Again, if you ask most Adcoms in private, they will tell you they can select a class perfectly well without tests.

Now, homeschool, perhaps a small school that rarely sends kids to college, etc. etc. can benefit from testing. But your basic Stuyvesant, Parochial School, public anywhere kid can be pegged with great accuracy without test scores.

I’d bet, if you wanted to try, you could wipe out the test scores on 90% of the apps and the Adcom could give you the student’s SAT range within 50-75 points and the ACT range within 2.

Certain kinds of kids consistently get 700-800 SAT scores. A slightly different group consistently gets 600-700… It’s pretty baked in at this point.

Regarding Ithaca, you only included test-score submitters in your range, who tend to have higher test scores than the overall average. And you listed percentiles estimated across the full distribution of both test takers and non-test takers, rather than just “test takers,” as you mentioned in the quote. The study lists the SAT range as follows for students included in the calculations. The range includes 300s to ~800 on each section, although the bulk were closer to a mean of just under 1200.

Verbal SAT: Range = 370 to 800, Mean = 597, SD = 81
Math SAT: Range = 390 to 790, Mean = 600, SD = 69
Writing SAT: Range = 340 to 800, Mean = 583, SD = 76

The CDS lists M+V 25th and 75th percentiles, in the year before going test optional, of 1050 and 1270, which correspond to the 51st and 85th national percentiles - so half the class came from 34% of test takers… That’s restricted somewhat from a random distribution, in which half the class would come from 50% of test takers, but far less restricted than the numbers in your post.
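For anyone checking that conversion, here’s a rough sketch assuming a normal national M+V distribution; the mean and SD are round-number guesses, and the 51st/85th figures above would come from actual College Board tables:

```python
# Approximate national percentile of an M+V score under a normality
# assumption. Real percentiles come from College Board tables.
from scipy.stats import norm

nat_mean, nat_sd = 1020, 210  # assumed national M+V mean and SD
for score in (1050, 1270):
    pct = norm.cdf(score, loc=nat_mean, scale=nat_sd) * 100
    print(f"M+V {score} ~ {pct:.0f}th percentile")
```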

While the bulk of Ithaca students do have SATs above the overall national average, a similar statement could be made for GPA, or almost any section of the application used in admissions decisions or used in the study. The trend also occurs in other linked studies that include colleges with wider test score ranges.

@soxdog but almost any college knows this.

Colleges know their applicant schools. And if they don’t, they can get the info. It’s not some great mystery.

If you have a 3.7 from St. Albans and good LORs, UWashSTL knows what kind of student you are. If you have a 4.0 from Venice High and good LORs, NYU knows what kind of student you are.

They SEE these kids. Every year. They can watch them for 4 years. I don’t get why folks think that is beyond their ability, but add some also-gameable standardized test scores and the sword comes out of the stone.

@ucbalumnus It started that way, but it is moving more and more toward the GPA goosing effect.

You know UCs well. It is impossible to get into competitive UCs without a good # of AP/honors courses. Kids are not taking them exclusively 'cause they’re interested. They are taking them 'cause they know it is the only way to stay in the game.

I know this first hand from many students. Some at my kids’ school take 5-6 a year. And they hate it. And it’s dumb.

@CaliDad2020 how do college admissions officers get all this data about individual high schools? I know that GCs submit a school profile with each application, but it seems like statistics don’t always convey the complete picture. For example, some middle school gifted programs get lumped in with low-performing schools so that the group seems average as a whole. Or there could be a high school with a “haves and have nots” mentality, where some kids have everything handed to them on a silver platter while others struggle to even consistently attend.