UC slams the door on standardized admissions tests, nixing any SAT alternative

The 6-year graduation rate for African American students at Georgia Tech averages in the mid-to-high 70s, according to this report:
https://oue.gatech.edu/sites/default/files/GEORGIA%20TECH%20CCG%20STATUS%20REPORT%202019%20FINAL.pdf

So more than 20% of them didn’t graduate in six years. Additionally, not all majors at Georgia Tech are in STEM, and not all STEM majors are highly quantitative. The definition of success is always relative.

I agree with you that the problem with low SAT math scores (and other educational issues) lies in the K-12 system. The US is one of the most segregated countries educationally as well as economically. The way we primarily fund our schools with local property taxes, along with the lack of uniform standards, lies at the root of the problem. IMO, that’s where we should be focusing our energy.

4 Likes

Of course not. The point is that adequately prepared STEM students come from an extremely wide range of math experiences.


Almost half of current UC STEM graduates were never required to submit test scores.

Here are some of the things my college precalculus students say about their high school math preparation (I’m a professor at a regional university with many students from low-performing school districts). They did well in my class but could not place into calculus as freshmen.

  • “teacher went on maternity leave; the sub did not know how to teach the math class and didn’t teach us much”
  • “I had precalculus in high school but we never did any trig”
  • “I had algebra 2 but it was just a review class of algebra 1”

In NJ, the lower-performing school districts have very high teacher turnover, especially in STEM. Our state’s schools are more racially segregated than those in the South. But no legislator wants to do anything about our micro-level school districts, as that would be political suicide.

4 Likes

Income dissimilarities notwithstanding, the UCs have certainly held their own against Bates in standardized-score profiles, by which Bates would place between UCI (5th among the UCs on this measure) and UCD (6th).

1 Like

The SAT/ACT math section involves rapidly answering a series of basic algebra/geometry/trig multiple-choice questions with few careless errors, in a high-pressure situation. It is very different from a college differential equations course, which does not emphasize rapidly answering a series of basic multiple-choice questions with few careless errors… or at least it shouldn’t. While there is no doubt a correlation between the two, there are also many exceptions. Some students excel at one but not the other, leading to a weak correlation.

For test optional/blind, the more important question is how much of the information gained from the weak correlation with SAT/ACT scores overlaps with other information that will be reviewed in the test-optional application. For example, a kid who consistently aces all math classes, is taking honors/AP/… math, has a glowing LOR from his math teacher (I realize UCs do not look at LORs), excels in math/science/engineering ECs/awards outside the classroom, … probably also has a good chance of doing well in differential equations. It’s certainly possible for kids who don’t know basic algebra/geometry to slip through the cracks at a less rigorous HS with extreme grade inflation… However, kids who don’t know basic algebra/geometry are far more likely to do relatively poorly on some other portion of the application, which includes far more than just looking at GPA in isolation, so they typically get flagged in a test-optional admission system.

Test optional for engineering students is not a new and unknown frontier. For example, WPI is a test-blind engineering college. It has been test optional since 2008 and chose to switch from test optional to test blind earlier this year, as an 8-year pilot; the switch was endorsed by faculty vote. A summary of WPI’s graduation rates and percent engineering majors by year is below. Note that when WPI switched to test optional, there was little change in graduation rate or in the percentage sticking with engineering majors. Instead, the same general trends continued: graduation rate rose as the college became more selective, and the share of tech majors grew as tech became an increasingly hot field compared to alternatives like management.

Worcester Polytechnic Institute Stats

2005 – Grad Rate = 74%, 86% Engineering + CS, 1% Humanities + Social Science
Test Optional in 2008
2010 – Grad Rate = 80%, 87% Engineering + CS, 1% Humanities + Social Science
2015 – Grad Rate = 85%, 88% Engineering + CS, 1% Humanities + Social Science
2020 – Grad Rate = 89%, 90% Engineering + CS, <1% Humanities + Social Science
Test Blind in 2021

Employers of WPI new grads also do not seem to have a problem with many of the engineering graduates not having submitted test scores, such as grads who can’t pass the technical portion of an interview. Instead, salary numbers are on par with expectations for a college of WPI’s selectivity, with the vast majority of graduates successfully finding a quality job soon after graduation. The employer report suggests little change in post-grad success rates upon going test optional.

1 Like

And again you ignored the comments about the combined results from 33 other analyzed colleges showing a similar pattern to Bates, many of which had lower median family incomes than kids at the UCs. It’s not only Bates that shows the pattern of little difference in GPA or graduation rate between submitters and non-submitters. This is the norm among colleges that are similarly selective for test-submitter and non-submitter applicants.

3 Likes

The ACT has that reputation, but the SAT does not, so we need to differentiate between the two. For the SAT, a 650 math score means more than “a few careless mistakes.” Would I have preferred a more extensive and challenging problem-solving exam instead? Surely. But again, I’d bet that most who are against testing would oppose it even more vigorously.

1 Like

The ACT math section involves 60 questions in 60 minutes, so you need to maintain an average pace of one question per minute to finish the full test. The SAT math section involves 58 questions in 80 minutes, so you need to maintain a pace of one question per 1.25 to 1.45 minutes, depending on section. While the ACT does emphasize answering basic multiple-choice questions quickly more so than the SAT, both fit the description well when compared to a college differential equations course.
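
As a quick back-of-the-envelope check of those pacing numbers (a minimal sketch; the section splits assume the pre-digital 20-question/25-minute no-calculator and 38-question/55-minute calculator SAT math sections):

```python
# Rough per-question pacing for the math sections (assumed pre-digital formats).
sections = {
    "ACT math": (60, 60),                 # 60 questions, 60 minutes
    "SAT math, no calculator": (20, 25),  # 20 questions, 25 minutes
    "SAT math, calculator": (38, 55),     # 38 questions, 55 minutes
}

for name, (questions, minutes) in sections.items():
    print(f"{name}: {minutes / questions:.2f} minutes per question")

# ACT math: 1.00 minutes per question
# SAT math, no calculator: 1.25 minutes per question
# SAT math, calculator: 1.45 minutes per question
```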

For example, a UFlorida Differential Equations final is at https://people.clas.ufl.edu/cenzer/files/finalsol.pdf , with solutions. The student has hours to complete 8 questions, none of which are multiple choice. Compared to the SAT/ACT, the questions are not simple/basic and require advanced knowledge. I expect most reading this thread could not answer a single question correctly on this UF DE exam, while I expect most would answer the vast majority correctly on the math SAT/ACT. It’s evaluating something very different from both the math SAT and ACT – not just different from the ACT.

That was my point in responding to @mtmind’s earlier post that these tests shouldn’t be compared with a college course on differential equations.

In terms of speed of doing math, there are different schools of thought. One school thinks speed is part of basic math skill, because more complicated math problems can’t be solved if one doesn’t immediately see some basic connections. The other school thinks speed is less important, but it is generally referring to solving much more difficult problems. These standardized tests involve the most basic math problems, and for fairness if nothing else, they have to be timed.

Morehouse’s website lists 13 colleges that support 3-2 engineering. Georgia Tech and Michigan are among the 13 colleges, as are Alabama, Notre Dame, RPI, USC, and many others. I’m not familiar with Morehouse’s 3+2 program, but other 3+2 programs I am more familiar with all have low participation rates. As a general rule, few students transfer to a different college to pursue 3+2 engineering. I think it is extremely unlikely that 3+2 students from Morehouse compose a large portion of Georgia Tech’s Black student body.

That said, as INJParent points out, there is a significant difference in graduation rate between Black students at Georgia Tech and other races. Rather than a 3+2 program at Morehouse, I think there are a variety of other underlying contributing factors. One obvious factor is that Georgia Tech considers race and gives a boost to URM applicants. It’s not just a boost in test scores. I expect Georgia Tech’s Black students have, on average, weaker overall academic credentials – weaker GPA, weaker HS course rigor, weaker LORs, weaker ECs/awards, weaker essays, etc. Georgia Tech is also a public college and favors in-state applicants, which I expect also gives an advantage to URMs more than to other races on average.

However, I expect few kids are failing out of Georgia Tech, so it’s not just a matter of being academically qualified. There are many other reasons kids may not graduate. Another relevant one is finances. Kids who can’t afford to pay for college are less likely to persist to graduation, and I expect Black students at Georgia Tech are overrepresented among lower-income groups. There are also relevant family/social/community differences on average, such as being less likely to have family and community members who attended a college like Georgia Tech. Lack of support networks within the college and feelings of not belonging or fitting in while being a small minority of the student body can also contribute.

This difference in graduation rate is not specific to Georgia Tech or to colleges on Morehouse’s 3+2 engineering list. For example, the University of California shows the following graduation rate gap between Black and White students, in spite of not considering race in admission. There is a significant gap of ~10 percentage points, which gets more than cut in half when controlling for Pell, first-gen, and gender (a quick arithmetic check follows the tables below).

UC Graduation Rate by Race: Overall
UCB – Black = 83% / White = 93%
UCLA – Black = 81% / White = 93%
UCSB – Black = 78% / White = 87%

UC Graduation Rate by Race: Only non-Pell ,non-first gen, female
UCLA – Black = 93% / White = 97%
UCB – Black = 92% / White = 97%
UCSB – Black = 91% / White = 92%
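
As a quick arithmetic check of the “more than cut in half” claim, using only the numbers in the two tables above:

```python
# Black/White 6-year graduation rates (percent) from the two tables above.
overall    = {"UCB": (83, 93), "UCLA": (81, 93), "UCSB": (78, 87)}
controlled = {"UCB": (92, 97), "UCLA": (93, 97), "UCSB": (91, 92)}  # non-Pell, non-first-gen, female

for campus in overall:
    gap_all  = overall[campus][1] - overall[campus][0]
    gap_ctrl = controlled[campus][1] - controlled[campus][0]
    print(f"{campus}: overall gap = {gap_all}, controlled gap = {gap_ctrl} (percentage points)")

# UCB: overall gap = 10, controlled gap = 5 (percentage points)
# UCLA: overall gap = 12, controlled gap = 4 (percentage points)
# UCSB: overall gap = 9, controlled gap = 1 (percentage points)
```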

In short, the lower graduation rate among Black students is a complex issue with many contributing factors besides just differences in test scores among Morehouse’s 3+2 engineering students at Georgia Tech, and one needs a more complex analysis to estimate which factors are primarily driving that difference in graduation rate.

Interesting… I mentioned a specific subset from my HBCU who end up at GT (mostly Black, all male, and all in engineering disciplines, with a majority below your threshold SAT math score), and you compare the 6-year graduation rate for all African-Americans at GT without a breakdown of graduation rates by standardized test score (which is what we would really need to compare things). The data provided by some LACs show very little difference in college graduation rates and GPAs between students reporting standardized test scores and those who do not, but it looks like we will never see those comparisons for UC students.

4 Likes

Graduation rate would no doubt be correlated with test score. A better comparison would be a regression analysis that compares graduation rates among students with similar family income, major, HS GPA, HS course rigor, admission reader ratings, etc., and then checks whether the score and race contributions to graduation rate remain significant after controlling for the other criteria that admissions would be aware of in a test-blind system. I expect many colleges do this type of analysis for internal purposes.
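
As a rough sketch of what I mean (purely illustrative; synthetic data and made-up column names, not any college’s actual internal model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic, made-up data purely for illustration; not real admissions records.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "test_score":    rng.normal(600, 80, n),           # e.g., SAT math
    "hs_gpa":        rng.normal(3.5, 0.3, n),
    "family_income": rng.normal(90, 40, n),             # $K
    "course_rigor":  rng.integers(1, 6, n),             # reader rating, 1-5
    "race":          rng.choice(["A", "B", "C"], n),
    "major":         rng.choice(["eng", "sci", "hum"], n),
})
# Toy outcome driven by GPA, rigor, and income (not by test score or race).
logit_p = -8 + 2.2 * df.hs_gpa + 0.3 * df.course_rigor + 0.01 * df.family_income
df["graduated_6yr"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Do test_score and race remain significant predictors of 6-year graduation
# after controlling for what a test-blind reader would already see?
model = smf.logit(
    "graduated_6yr ~ test_score + C(race) + family_income + C(major) + hs_gpa + course_rigor",
    data=df,
).fit(disp=False)
print(model.summary())
```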

An example of a study that does some of these things is at https://journals.sagepub.com/doi/pdf/10.3102/0013189X20902110 . It found that the racial gap in graduation rate went away after controlling for the student’s stats, the HS attended, gender, poverty, and other factors. Black students were actually more likely to graduate in 6 years than White students after controlling for these factors. A summary is below.

Estimated Degree of Influence in Chance of Student Graduating in 6 Years (Mag/SE)
1. Student’s HS GPA – 19
2. High School’s Average ACT – 18
3. College’s Average Grad Rate – 10
4. College’s Student/Faculty Ratio – 5
5. College Size – 5 (smaller is better)
6. Neighborhood Poverty Rate – 3 (lower is better)
7. College’s Percent Full Time – 3
8. High School’s Average GPA – 2 (higher is better)
9. Black – 2 (Black is associated with higher grad rates, after controls)
10. Female – 2

#. Student’s ACT Score – <1 (lower ACT score was associated with an insignificantly higher chance of graduating, after controls)

The average ACT score of the high school the student attends was one of the most influential evaluated criteria in predicting chance of graduating in 6 years, but the student’s individual ACT score had negligible influence. With the above controls, kids with lower ACT had a slightly (not statistically significant) better chance of graduating in 6 years than kids with higher ACT scores. The author writes:

As measures of individual students’ academic readiness, ACT scores show weak relationships and even negative relationships at the higher achievement levels. The negative slope among students with the highest achievement could result if people are using ACT scores to make decisions about students’ readiness for very rigorous academic programs out of a belief that they are strong indicators of readiness when they are not. Future research might investigate this further. Regardless, there is little evidence that students will have more college success if they work to improve their ACT score because most of the signal from the ACT score seems to represent factors associated with the student’s school rather than the student. In contrast, students’ efforts to improve their HSGPAs would seem to have considerable potential leverage for improving college readiness.

Test scores provide more of a signal at the school level, with school-level average test scores providing additional information about students’ likelihood of graduating above and beyond students’ individual HSGPAs. For judging college readiness, school-average ACT scores would provide a stronger prediction than students’ individual scores. This is consistent with the findings and recommendations in Koretz and Langi (2018) and Bowen et al. (2009). The same pattern is observed with school-average poverty levels (in models that do not control for average ACT scores), which echoes Rothstein’s (2004) findings. These high school effects could result from higher academic standards (e.g., more college-oriented curricula at higher-achieving, higher-SES schools). Yet, they could also represent selection effects. Families with more financial, social, and human capital might select into higher-achieving, higher-SES high schools, either by choice of residence or application, and those families would likely continue to offer support when students are in college. School effects also could come from different peer networks, advising, supplemental experiences, or broader curricular offerings available at schools with more resources.

3 Likes

What criteria will U of C and TO colleges use when the average GPA at prep schools and affluent community high schools is a 3.7 or 3.8? Spoiler alert: it’s getting there very quickly.

Do you have a source for average HS GPA at prep schools and affluent HSs?

My kid’s affluent, large public HS has an average unweighted core GPA nowhere near 3.7 (which I know is just one data point).

1 Like

This seems to really bother you. You have raised grade inflation in prep schools multiple times. Curious why this issue is particularly problematic to you in the context of standardized testing.

I am not a proponent of grade inflation. However, I am not sure it is the problem you think it is. First, students vary in how rigorous their courses are, so weighted GPAs help sort things out. Second, AOs know the schools the kids come from and consider the students in their school context - meaning the problem of grade inflation muddying the data is one within a school’s students, not between students of different schools. Third, top-performing kids in top-performing schools have a lot in common in performance and ability, and probably deserve similar grades. Spreading out their grades on a curve and deciding who gets into college based on small differences in performance doesn’t make sense - the high-performing students are for the most part equally likely to succeed in college. So either the AOs look elsewhere for data (ECs? LORs?) or the admissions decision is like a lottery among those with similar qualifications. Both paths are defensible.

Standardized testing is a data point, but seriously flawed, subject to manipulation, and doesn’t add much additional information over what the AOs have. I wish it did matter more, based on my kid’s scores, but I get that its utility is marginal at best.

5 Likes

They will use the same criteria they’ve used in the past when 30-40 kids or more from one HS are all NMFs with high test scores. Not all those kids get into UCLA or Berkeley. Not all high GPAs get in, and not all high test scores get in. Lest you think I’m anti-test, I’m definitely not. My kids had super high test scores, and neither likes the change. I’ll give UC the benefit of the doubt that they know what they are doing, and figure they can always go back to tests if it doesn’t work out.

3 Likes

This article is 11 years old and shows a rapid increase in grades, particularly at private schools.

At the IECA conference, the admissions officers from Emory, Georgia Tech and Oglethorpe discussed the distribution of grades they are seeing from school profiles. At one private high school, 97% of the grades awarded were As and Bs, 2.7% Cs and 0.3% Ds. At another private high school, 75% of the students had weighted cumulative GPAs over 88.5.

1 Like

The first article concerns grade inflation at colleges, not high schools.

The second and third articles rely on a study done by the director of the College Board. Even setting aside the fact that the organization pushing for standardized tests may not be the best source on the issue, this particular “study” relies on highly questionable assumptions and data, including self-reported survey data from SAT applications.

While I’m not denying that grade inflation exists, it may not exist to the degree you suggest. It also may not be most concentrated where you think. A 2020 paper from UC Institutional Research and Academic Planning examines the issue in the context of UC admissions. Here is the conclusion:

If grade inflation among potential selective university enrollees were more rampant at more-affluent high schools than at less-affluent schools, then the resulting potential distortion of university admission decisions in favor of higher-income applicants could possibly be cause for concern. Prior research focused on unweighted overall median grade point averages suggests that grade inflation is exaggerated at more-affluent schools. However, this brief shows that the choice of a GPA measure more relevant to selective university admissions decisions – up-weighting honors and college-level courses and focusing on college-preparation sophomore- and junior-year courses – as well as a more-relevant distributional moment (the 96th percentile) shows that grades actually rose slightly more at less-affluent California high schools between 2003 and 2011. Indeed, even static grade gaps between high schools appear to be almost wholly explained by actual educational differences as captured by standardized test scores. This evidence suggests that grade inflation does not provide a growing (or even static) concern with regard to socioeconomic stratification at American universities.

https://www.ucop.edu/institutional-research-academic-planning/_files/grade-inflation-in-california-high-schools.pdf

As for your original question about what UC would do in the event of rampant grade inflation at elite schools, the answer is that they will continue to look at grades in the context of the school. See the Eligibility in the Local Context (ELC) program to understand how this might work: Local guarantee (ELC) | UC Admissions

An interesting read, IMO.

6 Likes

This is a poorly written rebuttal to the College Board’s study that basically comes down to the argument that, sure, the College Board used a massive sample in its analysis of grade inflation, but it wasn’t massive enough. The author then argues that grades going up as SAT scores went down, across millions of test takers, is irrelevant because the College Board was measuring weighted grades. The author does not provide a shred of evidence that the College Board was actually doing that; he just asserts it.

The final argument of this article is basically that rich kids graduate, so why is anyone complaining about their privilege and inflated grades?