More Colleges Backing off SAT and ACT Admissions Rule

SAT scores correlate pretty well with income, so the “monied elites” are not that worried about the SAT - they actually prefer it (compare the SAT averages at your basic high-tuition prep school vs. most high-performing publics). It’s much easier to get (most) monied kids to a good SAT score than to a good GPA at a competitive school.

Though they help my kids, the reality is the tests are probably not necessary for any college admissions (and certainly not worth the Sturm und Drang and cost). They benefit the wealthy (who can test prep, take the test multiple times, and attend schools that “teach to the test”), are a pointless “bridge to nowhere” that takes up time kids could use elsewhere, are abused by some admissions offices (SAT Math in engineering/STEM in particular is overused/abused - see CIT, the UCs, etc.), and are not particularly more accurate than dozens of other metrics.

But they are a big business with lots of built-in features (National Merit scholarships, admits, etc.).

But I doubt the college admissions process would be hurt at all if you got rid of them (and I’m one of those helped by stronger SATs than GPA back in the day).

In general, it’s kind of absurd to think that any school, if it’s paying attention, with all the very specific data points it gets each year, can’t do a fairly decent job of predicting success. Top-100 schools admit 10-40% of 10-50K applicants and can track their grades through the next four years. They all have stats departments. If they wanted, they’d have very precise metrics for what says a kid will thrive.
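To make that concrete, here’s a minimal sketch of the kind of model a school’s stats department could fit with data it already holds. The features, coefficients, and synthetic records are all hypothetical assumptions, not any school’s actual data:

```python
# Hypothetical sketch: predicting first-year college GPA from admissions
# data a school already collects. All column names and the synthetic data
# are illustrative assumptions, not any school's actual model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000  # e.g., a few admitted cohorts

# Synthetic admissions features (stand-ins for real data points).
hs_gpa = rng.normal(3.6, 0.3, n).clip(0, 4.3)
course_rigor = rng.integers(0, 10, n)       # e.g., count of advanced courses taken
sat = rng.normal(1300, 150, n).clip(400, 1600)

# Simulated outcome: first-year college GPA driven mostly by the HS record.
college_gpa = (0.6 * hs_gpa + 0.03 * course_rigor + 0.0004 * sat
               + rng.normal(0, 0.35, n)).clip(0, 4.0)

X = np.column_stack([hs_gpa, course_rigor, sat])
X_train, X_test, y_train, y_test = train_test_split(X, college_gpa, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out students:", round(model.score(X_test, y_test), 3))
print("Coefficients (hs_gpa, rigor, sat):", model.coef_.round(4))
```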

@CollegeIsBad

It would be awesome if your story of the gardener with the 1580 were true very often, but in reality, the vast majority of the time the SAT and ACT protect trust-fund Johnny (and if they don’t, “need-aware” admissions schools will). SATs rise with income - it’s an established fact. College admits would be much more “meritocratic” if only GPA, quality of instruction, course rigor, ECs, essays, LORs, and maybe an interview were considered. But don’t expect it soon. Too much invested in the standardized-test-industrial-complex…

https://www.washingtonpost.com/news/wonk/wp/2014/03/05/these-four-charts-show-how-the-sat-favors-the-rich-educated-families/?utm_term=.f974d18bd7f0

@Center

Huh? I don’t think there is much actual, you know, evidence for that. Many schools claim they have found that GPA/rigor/LORs/essays are a fine predictor (if contextualized properly) - which is why they are dumping standardized test scores.

@Center and @Data10 - Before really commenting, I want to spend some time with the paper linked regarding the predictive power of standardized tests in the context of Ithaca College’s grades. But just an initial observation or three: standardized tests have been validated for decades with respect to GPA prediction, with extraordinarily large sample sizes. So a threshold question might be to examine Ithaca College’s grading system and whether it is meaningfully capturing performance. Relatedly, the grades themselves need to be normed before being thrown into the regression hopper, something that is almost never done. (In simple terms, this means adjusting for course difficulty and trying to account for idiosyncratic effects by transforming the raw data into statistically valid measures like d- or z-scores.)
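For anyone curious what that norming step looks like mechanically, here’s a toy sketch - the courses, grades, and scale are invented for illustration:

```python
# Minimal sketch of the grade-norming step described above: converting raw
# course grades to z-scores within each course, so an A in an easy class and
# an A in a hard class aren't treated identically. Data is illustrative.
import pandas as pd

grades = pd.DataFrame({
    "student": ["a", "b", "c", "d", "e", "f"],
    "course":  ["chem101", "chem101", "chem101", "lit200", "lit200", "lit200"],
    "grade":   [3.0, 3.3, 2.7, 4.0, 3.7, 4.0],  # on a 4-point scale
})

# z-score each grade against the mean/std of its own course.
by_course = grades.groupby("course")["grade"]
grades["z"] = (grades["grade"] - by_course.transform("mean")) / by_course.transform("std")

print(grades)
# A 3.3 in a harshly graded course can come out *above* a 3.7 in an
# inflated one - which is the whole point of norming before regression.
```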

Another problem I often see is referred to as “omitted variable error” (or “omitted variable bias”). Most of these studies are conducted by social scientists who have a very weak command of statistical methods. I am not saying that this paper is an example, only that any study of academic achievement that fails to account for intelligence in its model - and most do fail to do so - is, of necessity, going to conflate all sorts of supposedly explanatory variables (like SES, “first gen,” high school GPA, etc.) with the omitted variable, intelligence. As the SAT, despite its many flaws, is the best standardized measure we’ve got of intelligence in the context of college applicants, the fact that the subject paper eliminates its score data with no significant effect on the prediction is highly suspect - especially since the substituted predictor variables are themselves so imprecise and noisy. High school GPA, for instance, is worthless without accounting for course selection and rigor - ask any parent with a child at one of the super-competitive public magnet high schools, top suburban public high schools, or notoriously difficult private high schools like Exeter and Harker. High school GPA has also suffered from relentless grade inflation, and the simple facts that the most common grade in high schools is an A (typically A-) and that something like 50% of high school seniors have cumulative GPAs in the “A range” suggest that restriction-of-range issues need to be addressed - and of course they typically aren’t.
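As a sanity check on the omitted-variable point, here is a tiny synthetic simulation (all numbers invented) of how an omitted ability factor makes a correlated stand-in like SES look predictive when it has no true effect:

```python
# Toy simulation of the omitted-variable problem described above. We generate
# data where an unmeasured ability factor drives both an SES-correlated input
# and the outcome, then fit a regression that omits ability. The SES
# coefficient absorbs credit that belongs to the omitted variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 10_000

ability = rng.normal(0, 1, n)                  # the omitted variable
ses = 0.5 * ability + rng.normal(0, 1, n)      # SES correlates with ability here
outcome = 1.0 * ability + 0.0 * ses + rng.normal(0, 1, n)  # SES has NO true effect

# Model 1: omit ability -> SES looks predictive.
m1 = sm.OLS(outcome, sm.add_constant(ses)).fit()
# Model 2: include ability -> the SES coefficient collapses toward zero.
m2 = sm.OLS(outcome, sm.add_constant(np.column_stack([ses, ability]))).fit()

print("SES coef, ability omitted: ", round(m1.params[1], 3))  # ~0.4, spurious
print("SES coef, ability included:", round(m2.params[1], 3))  # ~0.0
```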

Last, I am always suspicious when hot-button issues like “race” and “gender” are highlighted, and differential results on standardized tests are treated as ipso facto evidence of “cultural bias.” A quick look at the paper seems to indicate this is the case here. Again, most researchers in this field do not understand basic concepts like bias, or are not familiar with the literature. It’s widely accepted that standardized tests like the SAT are biased in favor of lower performing groups - that’s true for almost all standardized tests and all lower performing groups. Bias, in the testing world, is a measure of the degree to which a test score fails to accurately predict subsequent performance. So, for instance, it is known that women test slightly lower than men on mathematical measures, but with narrower variance in the distributions. When men and women are matched for scores, though, the men subsequently perform better on the math tasks that the test was supposed to predict. In that sense, the test “overpredicted” the women’s performance and is therefore “biased” in favor of the lower scoring group. The same goes for whites versus Asians (tests are biased in favor of whites). So the mere appearance of differential results says nothing about bias.
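To make the mechanics concrete, here’s a toy version of that over/under-prediction check - one shared regression line, then group-wise residuals. The two groups, the score gap, and all effect sizes are fabricated purely for illustration:

```python
# Sketch of "bias" in the psychometric sense used above: fit one prediction
# line to everyone, then check whether residuals differ by group. If a
# group's actual outcomes sit systematically below the common line, the test
# "overpredicts" for that group. Entirely synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 20_000
group = rng.integers(0, 2, n)          # 0 and 1 are hypothetical groups

score = rng.normal(0, 1, n)            # standardized test score
# Same slope for both groups, but group 1's outcomes run slightly lower
# at any given score (an intercept difference the common line can't see).
outcome = 0.6 * score - 0.15 * group + rng.normal(0, 1, n)

common = sm.OLS(outcome, sm.add_constant(score)).fit()
resid = common.resid

for g in (0, 1):
    print(f"mean residual, group {g}: {resid[group == g].mean():+.3f}")
# Group 1's mean residual is negative: the shared line overpredicts its
# performance - "bias in favor of" group 1, in testing terminology.
```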

Well, enough, let me read the paper. I’d love to see other posters’ reactions to the linked paper - especially those posters who understand basic statistics. Thanks @Data10 for sharing the work!

BTW, the linked WaPo article (post #24) is a typical example of the confounding of dummy variables (ethnicity, income, parental education) with intelligence, which is omitted.

@satchelSF I’m interested in the SAT Math component. There is not a lot of publicly available info, but in looking at college GPAs I was struck that UCSD publishes their engineering GPAs, and women consistently outperform men.

Since the UCs are prohibited by law from looking at gender, and since UCSD admits straight to the major and has a low percentage of women in those majors, I’d assume that women and men have similar SAT Math scores (perhaps women’s are even a tad lower, as they might have slightly higher HS GPAs to offset their structurally lower SAT Math scores).

What do you think would account for the fairly persistent higher GPA achievement of women at UCSD?

I have to look at UCSD, @CaliDad2020 - and I will! Could you point me in the right direction? My guess is that there is self-selection, and that the smaller group of women at UCSD is actually higher in math ability than the larger group of UCSD men. Do they provide the SAT quantitative breakdown by sex for these specific students? Then perhaps we could test the general observation that tests overpredict subsequent performance for lower scoring groups.

If you are interested in a fun discussion of sex differences in mathematics at the highest levels of achievement, I recommend La Griffe Du Lion’s work, see here, for example: http://www.lagriffedulion.f2s.com/math.htm. All of his or her stuff is great - politically incorrect but great. It’s a shame the posts stopped. http://www.lagriffedulion.f2s.com/

However, most of these other factors also favor those from wealthier families, who are less likely to be stuck in limited-opportunity situations (high school does not offer advanced courses, cannot afford some types of extracurriculars, counselors and teachers are unfamiliar with helping students go to college other than the local community college, etc.).

Granted, some colleges may have admissions policies to compensate, such as favoring first-generation-to-college applicants, favoring work experience as an EC, and otherwise considering achievement in the context of opportunity (i.e., between applicants with similar achievements, seeing the one who started from a limited-opportunity situation as more meritorious than the one who started from a high-opportunity situation). But many may not.

Historically, the SAT was an ability test. How much testing of ability remains in the New SAT that debuted in March 2016 is another question - SAT is no longer an acronym; it stands for literally nothing. I think the major changes to the SAT make it hard to apply older studies correlating SAT (with anything) to the current admissions situation.

It seems likely that the math is too easy, yet the math section is more language-heavy than planned (double or triple the number of language-heavy questions intended), which Coleman admitted to, but I haven’t seen any discussion of whether that error was ever fixed (not holding my breath).

It also seems that College Board has ratcheted up the difficulty of the reading section on the New test beginning last August, more than a year after the debut of the New test, making score comparisons a little questionable, in my opinion.

@SatchelSF

Below are the links. I don’t think they break down SAT by gender - but UC admissions are not allowed to consider gender, so at minimum the SAT scores would have to be on par. (Since statistically women have better GPAs on average, if women had to have higher SATs that would be discriminatory, I’d guess.)

Here’s my personal summary of the info. I think it’s correct, but tbh it was just for my own edification so I didn’t triple check:

In 2013-14, UCSD engineering graduates were 20.9% female, and the women had an average GPA of 3.25 compared to 3.13 for the men. A higher percentage of women grads had GPAs over 3.5 as well.

UCSD Engineering degrees conferred by gender:

| Year    | % Women | Mean GPA (Women) | Mean GPA (Men) |
|---------|---------|------------------|----------------|
| 2013-14 | 20.9%   | 3.25             | 3.13           |
| 2012-13 | 22.0%   | 3.15             | 3.13           |
| 2011-12 | 18.0%   | 3.18             | 3.13           |
| 2010-11 | 23.3%   | 3.18             | 3.11           |
| 2009-10 | 23.8%   | 3.20             | 3.17           |

BTW, 2014-15 came out since I put my list together, but it shows a similar breakdown (GPA 3.23 vs. 3.18).

https://ir.ucsd.edu/undergrad/publications/degrees.html

(FWIW, most of the engineering stats seem to end up on p. 10.)

(And thanks for the link above. I’m not sure I’m interested in “achievement at the highest level” (depending on your definition), but I am interested in how much the structural, built-in SAT Math gender disparity restricts/discourages women’s entrance into engineering, and how good a predictor it is for success in the field - so it’s “high level” but probably not “highest level.”)

My basic theory, especially regarding the UCs - since they are legally prohibited from considering gender, yet have a very unusual system of admits for engineering (and, for the most part, engineering only) - is that UC engineering departments overweight SAT Math scores, thus depressing the number of female admits (even though, statistically, female applicants ought to have slightly higher GPAs over time).

The question is whether there is an overweighting of SAT Math scores. I have found no stats to be sure, but the fact that UC liberal arts colleges - which have different admissions standards - are over 50% women makes me think the vast disparity between colleges within the university (75-25 vs. 45-55 in some cases) may be due, in some part, to the engineering adcoms overweighting SAT Math.

Then the question is how good a predictor the SAT Math is for success in a UC engineering program. If women are being admitted to UCSD engineering with slightly lower math scores (due to slightly higher HS GPAs - which is only an assumption based on broad national trends; I don’t have the actual numbers) but performing consistently better, that has to call into question how good the SAT Math is at predicting success, at least at UCSD.

But again, this is not my world and I don’t have a lot of data, but I thought it was interesting when I stumbled across it a couple of years ago.

Sometimes SAT scores can be helpful. My son has moved to three different schools over the last four years (no fault of his - my husband changed jobs). For that reason his GPA looks lower, because it gets recalculated at each school with different metrics. His SAT score, however, shows his abilities and strengths. We are hoping in our case that his SAT will be weighed heavily, because his extracurriculars and recommendations are weaker due to all the moves.

It was intended as an “aptitude” test. However, it is impossible to completely remove environmental factors, like the quality of schools that the test taker attended. Or the gaming that went on back then – when most of the verbal section was vocabulary, high school English teachers would give vocabulary words every week to learn and be tested on.

^My guess is that it’s still impossible to remove those same factors. If anything, College Board now admits that the test can be prepped for, explicitly advertising that a student can increase his score by practicing on Khan.

To the extent that the New test is testing achievement of Common Core standards (as apparently Coleman wanted it to), that relies even more on the quality of the school than the old aptitude test that some of us took in the 1980s.

Aren’t there enough “test optional” admits at enough schools to have good stats on whether those who submit tests with X GPA perform better than those who don’t with the same GPA? At this point you’d even be able to break it down by type of school, etc. It seems so strange, given that these schools get excellent data every year, year in and year out, that you wouldn’t have a very good indication of whether test-optional admits hurt their in-school performance.
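A study along those lines wouldn’t be hard to run if a school shared its records. Here’s a rough sketch of the comparison, with entirely fabricated data standing in for the real thing (a real analysis would also have to worry about selection effects in who submits):

```python
# Rough sketch of the comparison proposed above: within narrow HS-GPA bands,
# compare college outcomes of students who submitted scores against those
# who didn't. All data here is fabricated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 8000
df = pd.DataFrame({
    "hs_gpa": rng.normal(3.6, 0.25, n).clip(2.0, 4.0),
    "submitted_scores": rng.integers(0, 2, n).astype(bool),
})
# Fabricated outcome with no true submitter effect once HS GPA is held fixed.
df["college_gpa"] = (0.7 * df["hs_gpa"] + rng.normal(0, 0.3, n)).clip(0, 4.0)

# Bin by HS GPA, then compare group means within each bin.
df["gpa_band"] = pd.cut(df["hs_gpa"], bins=[2.0, 3.0, 3.4, 3.7, 4.0])
summary = (df.groupby(["gpa_band", "submitted_scores"], observed=True)["college_gpa"]
             .agg(["mean", "count"]).round(3))
print(summary)  # near-identical means within bands -> no submitter advantage
```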

A college may have the data, but not announce it publicly because it may see it as a proprietary advantage when competing with other colleges for desirable students.

@ucbalumnus

Certainly, the idea would be that ECs are also evaluated holistically. I meet a lot of kids who have to work a decent number of hours just to help the family along - or at least to get gas money for school. That is often (and where it isn’t, it should be) weighed heavily by adcoms.

There are a few reasons schools don’t want/can’t put more economic equality into admissions:

  1. Aid is limited, even at full need schools.
  2. Wealth tends to be generational - so admitting students who come from families of means increases long-term family involvement.
  3. It’s easier for a high income student to graduate on time - making life easier for any given school.

At the end of the day, I’m fairly certain 95% of US colleges could manage admissions with very little trouble without standardized scores. (To those worried about the relative difficulty of courses, for instance: I’m not sure there is a lot of evidence that a kid with an A in AP Chem is actually that much better a student than a kid with an A in “regular” Chem or Honors Chem at a school that doesn’t offer AP. I’m no expert, but I would bet relative rigor - i.e., rigor compared to the courses offered - is a more accurate predictor than the level of the course alone. That should be easy to test as well; see the sketch below. Actually, given all the admissions info schools have been gathering for the past 50 years, it’s kind of amazing we don’t know the answer to these questions down to a gnat’s eyelash.)
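Here’s a hedged sketch of that test - whether rigor relative to what a school offers predicts better than the absolute course level. Everything is synthetic, and the relative-rigor effect is built in by construction, just to show how simple the comparison would be to run:

```python
# Hedged sketch of the test proposed above: does "rigor relative to what the
# school offered" predict college GPA better than the absolute level of the
# course taken? Synthetic data; the hypothesis is assumed true by design.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 6000
max_level_offered = rng.integers(1, 4, n)             # 1=regular, 2=honors, 3=AP
level_taken = rng.integers(1, max_level_offered + 1)  # can't exceed what's offered
relative_rigor = level_taken / max_level_offered      # took the hardest available?

# Outcome driven by relative rigor, not absolute level (the hypothesis).
college_gpa = 2.5 + 0.8 * relative_rigor + rng.normal(0, 0.3, n)

for name, feat in [("absolute level", level_taken), ("relative rigor", relative_rigor)]:
    X = feat.reshape(-1, 1)
    r2 = LinearRegression().fit(X, college_gpa).score(X, college_gpa)
    print(f"R^2 using {name}: {r2:.3f}")
```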

  • And I was suggesting more in terms of a more general study. I agree many schools are unlikely to give out that kind of info unless they can spin it positively somehow - although if they could make a case that their test-optional kids do just as well as their test-taking kids, it might make a lot of test-phobic kids and their parents more interested...

There is definitely a difference between an A in college prep Chem and an A in AP Chem. Give the AP Chem exam to the college prep class and see how they do.

^^ but is there a big difference in how the kids who took Chem (where no AP options existed) and the kids who took AP Chem do in their first college Chem class?

@lastone03

Of course there is a difference in the COURSE, but is there, as @OHMomof2 notes, a difference in what it says about the student and their ability to succeed and thrive in college?

I don’t know enough to tell you what the studies say, but my guess is that if “regular” Chem or Honors Chem was the top class at a particular school, the As in that class are likely on par with the “As” in an AP class at a similar school, in terms of indicating the ability to succeed and thrive at a given college - which is what we’re trying to measure, I think.

Well, many of the AP kids would place out of the level I Chem class so it would be hard to gauge. The added bonus of passing the AP exam with a 3+ would mean the student frees up some credits to help with graduation requirements.

The unfair thing is that some schools like Stuyvesant and Exeter simply have an academically stronger student body. However, I’m sure that colleges are aware of this and plan accordingly.