"… Research has shown that grades are the best single predictor of college performance and aren’t as heavily influenced as the standardized exams by income, parent education levels and race.
But the ACT and College Board, which owns the SAT, argue that a combination of grades and test scores is the best overall guide to selecting students who are likely to succeed in college. Using grades without test scores could exacerbate inequities, test officials say, because grade inflation is worse in affluent schools, according to research they have reviewed.
The UC Academic Senate, which sets admissions standards, is expected to issue recommendations on the tests by February, with Cal State to follow. The issue, which has drawn international attention because of the size and prestige of the public university systems, raises several pressing questions. How do students with high grades but low SAT scores actually do in college? What support do they need — and get? Are there drawbacks to relying more heavily on grades?" …
First off, a single anecdote about one student among millions of standardized test takers is meaningless. No doubt the author could find a 1550+ SAT student who failed or dropped out after freshman year (I can give the names of a couple of my fraternity brothers…).
Implying that the ACT/College Board position (“a combination of grades and test scores is the best overall guide”) is undercut by an analysis that only ever considered single data points in isolation is disingenuous. In fact, it’s highly likely that two predictive indicators are more powerful in combination than either is alone.
The entire premise of the article, “Grades vs. SAT scores,” implies there is some need to choose. As has been well discussed here, all of the data points in a college application - GPA, test scores, course rigor, extracurriculars, personal traits, essays, etc. - build on each other to form the best view of a student.
The article spends one sentence stating “The most successful students had both high GPAs and high test scores,” then multiple paragraphs trying to discount this fact.
In support of the thesis that “outcomes were not so different”, they state
“The first-year GPA was 2.78, a B-, for students with lower scores compared with 3.36, a B+, for those with the highest scores.”
Am I the only one who thinks that a 3.36 and a 2.78 aren’t “not so different”?
The entire article seems to have a point it wants to make, then twists the data to support that point, even when the data doesn’t support it.
For whatever reason, anecdotes resonate better with most readers than just pure data. So even articles that are really about the data will often include an anecdote to keep readers interested.
Of course, the fact that anecdotes resonate better than data also means that one outlier anecdote is more believable to many people than all of the data that shows that the anecdote is an outlier.
As far as the predictive power of HS GPA and SAT scores, both UC and Harvard have noted that HS GPA is more predictive of college performance than SAT scores. Both also found that achievement tests (SAT subject and (for Harvard) AP) were more predictive than SAT scores. But the achievement tests are the non-default tests that many students do not hear about until too late, so colleges may be reluctant to require them for that reason.
The 2019 SAT validation study found a correlation of .53 (.33) between FYGPA and HSGPA, and a correlation of .51 (.32) between FYGPA and SAT. The correlations in parentheses are raw correlations, while .51 and .53 are adjusted to account for the selectivity of the student sample. HSGPA and SAT, when used alone, had nearly identical power for predicting FYGPA. Using both together gave a correlation of .61 (.42), indicating more predictive power than either one alone.
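For readers who want to see how two predictors with nearly identical individual validity can still beat either one alone, here is a toy simulation in Python/NumPy. All of the numbers below are invented for illustration; this is not the validation-study data.

```python
import numpy as np

# Toy model: HSGPA and SAT each partly reflect a shared underlying factor,
# plus independent noise, so they overlap only partially as predictors.
rng = np.random.default_rng(0)
n = 100_000

ability = rng.standard_normal(n)
hsgpa = 0.6 * ability + 0.8 * rng.standard_normal(n)
sat   = 0.6 * ability + 0.8 * rng.standard_normal(n)
fygpa = 0.5 * ability + 0.9 * rng.standard_normal(n)

def r2(X, y):
    """Fraction of variance in y explained by a least-squares fit on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"HSGPA alone: R^2 = {r2(hsgpa, fygpa):.3f}")
print(f"SAT alone:   R^2 = {r2(sat, fygpa):.3f}")
print(f"Both:        R^2 = {r2(np.column_stack([hsgpa, sat]), fygpa):.3f}")
```

Because the two predictors only partially overlap, the combined fit explains more variance than either one alone, which matches the pattern of .53 / .51 alone versus .61 together.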
Among the many missing key controls, they don’t appear to control for college attended, so they are influenced by things like harsher grading at typical publics than at privates. I’d expect SAT to be better correlated with applying to and attending more leniently grading privates, largely due to its better correlation with income. Furthermore, GPA was self-reported and had limited granularity. For example, students could check the box for having an “A” HSGPA, but could not say their unweighted GPA was 3.43. Nearly every study that isn’t sponsored by the College Board shows that HSGPA is far more predictive of nearly any metric of college success than SAT scores. Some examples are below:
https://journals.sagepub.com/doi/full/10.1177/2332858416670601
10k Kids in CUNY System: FY GPA
SAT explains 14% of variance in first year GPA
HS NYS Regents test explains 16% of variance in first year GPA
HS GPA explains 25% of variance in first year GPA
HS GPA + SAT explains 28% of variance in first year GPA
10k Kids in Kentucky Public Colleges FY GPA
SAT explains 16% of variance in first year GPA
HS KCCT test explains 17% of variance in first year GPA
HS GPA explains 32% of variance in first year GPA
HS GPA + SAT explains 34% of variance in first year GPA
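One way to read figures like those above: each percent-of-variance number is an R², so a predictor’s unique contribution is how much R² drops when it is removed from the combined model. Taking the Kentucky rows above as a worked example:

```python
# R^2 values copied from the Kentucky rows above.
r2_sat = 0.16
r2_gpa = 0.32
r2_both = 0.34

# SAT's unique contribution once HSGPA is already in the model:
delta_sat = r2_both - r2_gpa
# HSGPA's unique contribution once SAT is already in the model:
delta_gpa = r2_both - r2_sat

print(f"SAT adds {delta_sat:.0%} of variance on top of HSGPA")    # 2%
print(f"HSGPA adds {delta_gpa:.0%} of variance on top of SAT")    # 18%
```

In other words, most of what the SAT predicts is already captured by HSGPA, while the reverse is far from true.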
https://files.eric.ed.gov/fulltext/ED502858.pdf
>50k Kids in UC System: 4 year graduation rate
SAT I explains 4% of variance in graduation rate
GPA explains 7% of variance in graduation rate
GPA + SAT I explains 8% of variance in graduation rate
>50k Kids in UC System: Cumulative GPA
SAT I explains 13.4% of variance in GPA
GPA explains 20.4% of variance in GPA
GPA + SAT I* explains 24.7% of variance in GPA
*Best prediction occurs with <=0 weight given to the math SAT unless the student is a STEM major
GPA alone did not explain 20.4% of the variance in the UC study. Model 1 did explain 20.4% of the variance, but included the additional variables of parental education, family income, and school API rank. Those last three variables, while having little correlation to unweighted HSGPA, do have significant correlations to SAT scores. If you removed those variables, and looked at unweighted HSGPA alone, you would get a lower percent of the variance explained for GPA.
One can debate whether it is relevant to control for things like income, looking at how predictive test scores are among students with similar incomes and minimizing the extent to which test scores merely predict who is high or low income. However, the first referenced set of studies, for CUNY and Kentucky, did not control for any of these variables, yet still found HSGPA alone was far more predictive of FYGPA than scores alone. Many others have come to similar conclusions.
In actual admissions at the colleges emphasized on these forums, the choice isn’t between considering HSGPA in isolation, considering SAT scores in isolation, or considering HSGPA + SAT and nothing else. Instead, highly selective colleges often consider things like course rigor, HS context, essays, LORs, out-of-classroom achievements, how all of the above fits with the planned major, etc. The more additional variables you consider, the more the different criteria overlap with one another, so any one category adds relatively little to the model. Test scores appear to be especially prone to this effect.
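That overlap effect can be sketched with a toy model: generate several criteria that all partly reflect the same underlying factor, and watch the marginal R² shrink as each one is added. Everything below (Python/NumPy) is invented for illustration; it is not admissions data, and the loadings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
factor = rng.standard_normal(n)  # shared underlying factor

def criterion(load):
    """A measure that loads on the shared factor plus independent noise."""
    return load * factor + np.sqrt(1 - load**2) * rng.standard_normal(n)

outcome = criterion(0.6)                       # stand-in for college performance
criteria = [criterion(0.7) for _ in range(5)]  # e.g. GPA, scores, rigor, essays, LORs

def r2(X, y):
    """Fraction of variance in y explained by a least-squares fit on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2s = [r2(np.column_stack(criteria[:k]), outcome) for k in range(1, 6)]
for k, v in enumerate(r2s, 1):
    print(f"{k} criteria: R^2 = {v:.3f}")
# Each added criterion raises R^2 by less than the one before it.
```

The first criterion does most of the work; by the fifth, the marginal gain is a fraction of the first one’s, which is the sense in which any one overlapping category adds relatively little.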