Why colleges are reconsidering their reliance on standardized tests for admission

@ucbalumnus Well, the stated goal is to use this new policy to entice a certain type of applicant. They explicitly stated a reason why they are going TO, so I don’t think I’m reading something that’s not there. Someone from a top high school is not who they are targeting with the policy.

To my mind, there’s something wrong with 47% of the nation’s high school graduates having an A- or better GPA. See, e.g., http://talk.qa.collegeconfidential.com/high-school-life/2005522-as-on-the-rise-in-u-s-report-cards-but-sats-flounder-p1.html. To me, that really diminishes the value of GPA, and it reinforces the need for some kind of nationally standardized measure. While studies show that GPA is a better predictor than SAT/ACT scores alone, they also show that the combination of GPA plus standardized testing is an even better measure.

Given the high percentage of A students, A students will have a pretty wide range of SAT/ACT scores. For example, per College Board tables, a 53%ile score on a national scale is about a 1030. A 1030 student in my mind is likely to be a different student than say a 1400 kid. That said, I do think scores need to be viewed with an eye towards considering a student’s individual circumstances, and that’s where holistic admissions comes in, but that’s a whole other can of worms.

Notwithstanding my comments above, I have no issue with test optional policies. I understand and appreciate the goal, and it seems we are moving that way anyway. But dispensing with standardized testing altogether feels a little bit like throwing out the baby with the bath water.

States frequently use tests to track progression through grade school all the way to junior year of high school; CA is one state that does this. I’ve always wondered why those test scores aren’t even requested on the CA public university applications. That would be one way to eliminate the SAT/ACT for in-state applicants.

Sure, and again it differs by school. Some allow coaches to offer ‘soft support’ to a candidate whereas some don’t…it all depends on how important sports are to a given school when building a class. Staying with D3 examples, Williams, with its across-the-board, long-term strength in sports, obviously weights sports recruits/sports ECs differently than Bowdoin, U Chicago, or Case Western do, to name just a few.

@diegodavis Interesting point. I can only speculate that the issue is something you alluded to, i.e., it’s testing that would be taken only by in-state applicants, and thus not a common measure for all applicants. I would add that students in private schools, parochial schools, and home school would also lack this testing (but would likely take something similar like ERBs).

A couple of random thoughts on the general topic. With regard to testing optional decisions, I wonder how well schools could assess students who don’t submit standardized testing if standardized testing was not the norm. For example, I think there’s a good argument to be made that a student with high grades in challenging classes from a school whose students have high standardized testing results (in whatever form) is as strong a candidate as anyone else at that school that profiles similarly in terms of grades and course rigor regardless of their individual testing scores. But without a benchmark for the school, I would imagine admissions offices would have a much harder time making an assessment as to that student’s ability to thrive at a particular college.

In a similar vein, with respect to studies assessing the value of GPA v the value of standardized testing, it seems to me there is a risk of extending the results beyond what the studies show. While the studies may provide insight into the value of GPA v standardized testing for predicting first year grades in a school or group of schools where the students have already been selected based in part on GPA and in part on standardized test scores (among other things), it seems wrong to extend that reasoning to the notion that a student with a 4.0 high school GPA would do equally well in colleges where the student bodies have very different ranges of standardized test scores.

I’m definitely not someone who thinks that test scores are the end-all, be-all–I personally favor holistic admissions–but it seems to me that we will need some kind of standardized measure going forward to put grades into context.

Not being the primary target of a policy does not prevent someone from using the policy.

Really need to stop equating intelligence with SAT/ACT scores (they do not have a strong correlation). Also need to stop describing the American system as an academically merit-based system; it is not, at least at the top echelon of colleges. They are holistic, which means a lot of different things, but which includes intrinsic abilities, many of which are not directly associated with academics. The top colleges know which high schools have grade inflation and which are more rigorous. Grade inflation in high school has a lot to do with AP/honors courses being worth more than an A on weighted grading scales, not necessarily with half the students in the US averaging an A- on a true 4.0 scale. GPAs naturally sort themselves out within the varying conditions/opportunities that students have at each high school, which is not true for standardized testing. Students who achieve 4.0s at high schools with little support/opportunity should be viewed with the same eye as students who come from wealthy schools with tons of support/opportunity.

It sounds like you are guessing. I have not seen evidence of the idea that Bates is rarely admitting test optional applicants unless part of a special hook group. Bates has been test optional since the 1980s, for ~35 years. Overall Bates athletic performance didn’t notably improve after going test optional, although they did have a good spell in cross country.

There are many studies. I’ll continue with the Bates example, and link to their 25-year summary at https://www.bates.edu/admission/files/2014/01/25th-Year-SAT-report-Stanford-6.3.11-wch.ppt . Below are some specific stats comparing matriculants who submitted scores to those who did not. There was little difference in GPA or graduation rate, but there were statistically significant differences in gender, race, majors, and career paths.

Mean SAT Score: Submitters = ~620/~620, Non-submitters = ~540/~535
Gender: Submitters = 48% female, Non-submitters = 59% female
Race: Submitters = 3% URM*, Non-submitters = 5% URM*
*Not counting Asian as URM

Mean Graduation Rate: Submitters = 89%, Non-Submitters = 89%
Mean College GPA: Submitters = 3.16, Non-Submitters = 3.12
Natural Science Major: Submitters = 23%, Non-Submitters = 17%
Humanities Major: Submitters = 31%, Non-Submitters = 28%

Career Outcomes:
Submitters overrepresented among doctors, lawyers, writers, and tech
Non-submitters overrepresented among finance, arts, and secondary school teachers

Not surprising that test score submitters are better represented among doctors and lawyers, professions gated by education that is gated by important standardized tests. But it is somewhat surprising that non submitters are better represented in finance, where employers are said to ask applicants for their high school SAT scores.
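As an aside, a "statistically significant difference" claim like the gender split above can be sanity-checked with a standard two-proportion z-test. This is a generic sketch, not Bates' actual methodology; the cohort sizes below are hypothetical, since the report's exact n's aren't quoted here:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)           # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided normal tail
    return z, p_value

# 48% female among submitters vs 59% among non-submitters.
# n1 and n2 are assumed cohort sizes, not figures from the report.
z, p = two_proportion_z(0.48, 3000, 0.59, 1000)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With groups this large, an 11-point gap is far outside sampling noise, which is consistent with the report calling the gender difference significant.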

Aren’t ACT/SAT scores part of how colleges decide a high school is rigorous?

Here are some more studies, linked to by DePaul, which went TO in 2015 after much research. Lots of other info can be accessed from the left-hand menu as well.

https://offices.depaul.edu/enrollment-management/test-optional/Pages/outcomes-from-adopting-institutions.aspx

It will be interesting to see what the UC system decides re: TO in the Feb/March 2020 time frame…many expect the faculty will recommend TO and the Regents will support that, but time will tell.

@homerdog, I’m not sure where you’re getting that idea. Both of my kids applied to Bates without scores and were accepted. Both attended private schools, neither was on financial aid, and neither was a recruited athlete.

The one part you did get right is that one of the goals some schools, including Bates, have expressed in going test optional has been to increase applications from black and Latino students.
https://www.insidehighered.com/news/2018/04/27/large-study-finds-colleges-go-test-optional-become-more-diverse-and-maintain

I think the biggest thing behind GPA verification is identifying some kind of inequality (socioeconomic comes to mind).

For example, my mediocre GPA, just shy of 3.7, would suggest a lower ACT score, but thanks to the rigor of my classes I was actually extremely well prepared for the ACT. I am also below the Illinois poverty line; this seems to support the idea that these tests gauge rigor and not socioeconomic status, BUT my public high school is also located in a very affluent suburban neighborhood.

This “rigor verification” is something I noticed more as I applied through QuestBridge, where the average applicant has a 3.9+ GPA but SAT/ACT scores range from 1310-1450 and 28-33, respectively. Average AP scores hover around a solid 3 - of course, being underprivileged often means living in an underprivileged area, which means a weaker high school and below-average test preparation.

I think this is also why the SAT introduced the “subscore” based on a test taker’s socioeconomic status and privilege. Clearly, it’s not just to check for grade inflation/deflation but also for inequality with respect to resources/preparation.

A test optional policy rarely impacts 34 vs 36, since both of those students are likely to submit scores. A test optional policy also doesn’t change the decision for the typical 1030 student vs the typical 1400 student, since typical students have applications that are otherwise consistent with their scores. Instead, test optional might be relevant to the minority of students for whom the rest of the application is inconsistent with their scores.

At selective test optional colleges, the rest of the application usually includes the transcript (GPA + course rigor + …), LORs, essays, and ECs/awards, among other factors. Test optional would be more likely to make a difference for the kid with great grades, impressive course rigor with AP classes, great LORs, great essays, and impressive ECs/awards…but not great scores. Then it becomes a question of why the scores were inconsistent with the rest of the application. It might relate to difficulties on that particular day (anxiety, illness, …). It might relate to English as a 2nd language. It might relate to not being able to stay focused for a multi-hour test and/or learning disabilities. Lack of prep might contribute. As others have said, grade inflation might contribute. The many possibilities make it difficult to say whether college performance will be impacted by the inconsistent score, rather than following the expectation based on the rest of the application.

The studies I’ve seen that control for both GPA and a measure of course rigor usually find that test scores do not add much to the prediction. The more additional factors are included, the less test scores contribute beyond those criteria. However, the predictive ability is usually poor with any combination of GPA+scores, only explaining a small portion of variance in any metric of college performance.

For example, the DARCU study at https://heri.ucla.edu/DARCU/CompletingCollege2011.pdf used a huge number of factors and found that the full model could explain 26.9% of the variation in 6-year graduation rate vs 26.8% with SAT scores excluded. With just self-reported average GPA they could explain 13.5% of variance, vs 16.8% with average GPA + scores. The scores appeared to add something beyond GPA alone, but nothing of significance beyond the full controls.
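This "variance explained with vs. without scores" comparison is just an incremental-R² (hierarchical regression) check. A minimal sketch on synthetic data, where scores are constructed to correlate with GPA and add little independent signal; all numbers here are illustrative, not DARCU's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: SAT tracks GPA closely and adds only a sliver of signal.
gpa = rng.normal(3.3, 0.5, n)
sat = 1.4 * (gpa - 3.3) + 0.7 * rng.normal(size=n)
outcome = 0.5 * gpa + 0.1 * sat + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 of an OLS fit with an intercept term."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared([gpa], outcome)           # GPA-only model
r2_full = r_squared([gpa, sat], outcome)     # GPA + SAT model
print(f"GPA only: {r2_gpa:.3f}, GPA + SAT: {r2_full:.3f}")
```

When the added predictor is highly collinear with one already in the model, the R² bump is small even though the predictor looks decent on its own, which is the pattern the studies keep finding.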

@Data10 Thanks for sharing. On a quick scan, what struck me as interesting (insofar as it relates to the issue at hand) is the manner in which the data is presented in Tables 5, 6, and 11. Having A/A+ as the top entry of Table 5 makes sense, as not all schools have A+ as a grade option, but it makes less sense to me for the top entry of Table 6 to be 1300+, as opposed to 100-point bands like the entries below it. Following the same trend as the rest of the data in the table, I would assume the numbers would continue to increase for each 100-point band up to 1600. And if that’s true, wouldn’t those tables suggest that, in those ranges, SAT is a more meaningful predictor of graduation rates than GPA (which caps out at A/A+)? As for Table 11, it provides data without SAT but not the corresponding data without GPA.

@sue22 I’m glad to hear that Bates doesn’t only use TO for underrepresented students. I suppose it was just general hearsay that kids from wealthier circumstances need to submit scores. I also read the U Chicago article and our GCs have told the kids at our high school that Chicago would expect their scores, that their policy is not for them.

And schools (even Bates) specifically call out that their policy will help them recruit URMs and first gens, so I’m sure that’s who they target. Obviously, the TO policy applies to everyone, and if the school really wants a student, they will be ok with him not including a score on his application.

But what the schools are putting out there in writing does not say they are looking for all kinds of kids who might have had a hard time testing. What they tell us is that they are TO so that they can accept URMs and first gens. I’m just taking them at their word. They know that test scores can be a reflection of SES and don’t want to hold that against a student.

On the admissions page about their test optional policy Bates provides a link to their 2011 report on TO. In it you’ll find this:

@Sue22 ok. So some of the kids in that list above have a hook for a school like Bates - immigrants, rural or blue collar students (read first gen or geographical diversity), students with a spike, and athletes. Women? That’s a weird demographic to call out as using TO more; I wonder why that is. As for the five-to-one ratio? That’s because a LAC in Maine just isn’t getting interest from that many students of color. It’s not because the school wouldn’t like TO to be a way to recruit more URMs.

What I’m trying to say is that, if a student comes from a very competitive high school, then colleges expect scores. If your kids were accepted from a boarding school then I assume they had stellar everything else and maybe a hook too?

They chose to print a table with the same number of rows for both GPA and SAT so the two metrics would display nicely side by side, such as in Table 7. However, that does not mean they didn’t consider SAT scores above 1300 when calculating variance explained. They considered each student’s actual score across the full range, and converted ACT scores to SAT for students who took the ACT. For example, a student who scored a 34 ACT would be treated as SAT = 1550, while a student who scored a 35 ACT would be treated as SAT = 1590.

It’s well established that GPA alone is a better predictor of nearly any college success metric than SAT alone. An example study that does divide by each individually is at https://journals.sagepub.com/doi/full/10.1177/2332858416670601 . Specific numbers are below.

CUNY System FY GPA
SAT explains 14% of variance in first year GPA
HS NYS Regents test explains 16% of variance in first year GPA
HS GPA explains 25% of variance in first year GPA
HS GPA + SAT explains 28% of variance in first year GPA

Kentucky Public Colleges FY GPA
SAT explains 16% of variance in first year GPA
HS KCCT test explains 17% of variance in first year GPA
HS GPA explains 32% of variance in first year GPA
HS GPA + SAT explains 34% of variance in first year GPA

A similar pattern occurs for the University of California studies at https://files.eric.ed.gov/fulltext/ED502858.pdf

UC System: 4 year graduation rate
SAT I explains 4% of variance in graduation rate
GPA explains 7% of variance in graduation rate
GPA + SAT I explains 8% of variance in graduation rate

UC System: Cumulative GPA
SAT I explains 13.4% of variance in GPA
GPA explains 20.4% of variance in GPA
GPA + SAT I* explains 24.7% of variance in GPA
*Best prediction occurs with <=0 weight given to math SAT unless STEM major

I think we’re talking about slightly different things. I did not in any way imply that the scores above 1300 were not considered. I’m talking about the manner in which the data was presented.

The issue I’m raising is about the top end, and it’s relevant in part because of the high number of high school GPAs in the A or A+ range. Because of the increasingly common A average, the value of using GPA seems to top out at a certain point (e.g., it gets one to 58.3% for the 4-year graduation measure), whereas it seems reasonable to assume that SAT scores would continue to have some predictive value beyond 58.3%, i.e., they would follow the same trend as shown in the 100-point bands below the 1300 line in the table. Given that the 1300+ group reflects a 62.2% 4-year graduation rate, higher than the A/A+ band, that seems like a not-unreasonable conclusion.
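This ceiling argument can be shown with a toy simulation (all numbers illustrative, not from the study): once GPA is top-coded at A/A+, it carries no information within the top band, while an uncapped score still predicts the outcome there.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

ability = rng.normal(size=n)
outcome = ability + rng.normal(size=n)        # outcome = ability + noise

# GPA is top-coded: everyone above the cutoff shows the same A/A+ ceiling.
gpa = np.minimum(ability, 0.5)
sat = ability + 0.5 * rng.normal(size=n)      # noisy but uncapped measure

top = gpa >= 0.5                              # students at the GPA ceiling
print("GPA spread within the top band:", gpa[top].std())      # exactly zero
print("SAT-outcome correlation within the top band:",
      np.corrcoef(sat[top], outcome[top])[0, 1])
```

Within the censored band, GPA has literally zero variance and so zero predictive power, while the uncapped measure still correlates meaningfully with the outcome, which is the intuition behind wanting finer bands above 1300.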

Please understand, and I’ve stated this before, I’m not against testing optional policies. I appreciate the reason for them, but I don’t think it logically follows that ALL standardized testing should be jettisoned. As far as I’m aware, every study on the subject has concluded that GPA plus standardized testing provides better predictive results than GPA alone.

On a side note, the other data in the study was a bit disheartening. I was a first gen college student who went on to obtain a couple of professional degrees, and I wish the numbers were better than reported.