Purdue to require test scores again starting with class of 2024

I’m guessing the poster was referring to the quote below, which says non-submitters had a lower first-year GPA (a 0.1 difference) that shrinks to a negligible 0.03 difference in cumulative GPA by graduation, and that non-submitters had a slightly higher graduation rate than submitters. This doesn’t sound like an exception to my earlier comment about “little difference” in cumulative GPA and grad rate.

" applicants who don’t submit scores — who are twice as likely to be low income, students of color, or the first in their family to go to college — have a lower GPA their first year at Wake Forest, but it narrows each subsequent year to a .03 difference by graduation with minimal difference in graduation rates. (Interestingly, students who withheld their scores even graduated at a slightly higher rate, at 90 percent, than those who sent scores, at 87 percent.)"

My guess is large public universities and highly ranked larger private schools use some formula of GPA and/or SAT to make the first cut and whittle the mountain of applications they receive down to a manageable pool for AOs to review. Contrary to the claim by some schools that they “review every complete application holistically”, a friend who is a college president says that few schools actually do this, and most use GPA/SAT cutoffs to make the first cut.

3 Likes

No, I was referring to an earlier paragraph that talks about more significant, but also more anecdotal, discrepancies at unidentified schools, all of which went test optional after Covid.

Things that make you go “Hmmmm”…

I posted this on the Selingo article thread cited above:

I don’t really have anything to add except that the SAT (or ACT) is just a single data point relating to a candidate. It shouldn’t be given too much weight, but I don’t think it’s bad that the testing requirement is being reinstated.

There is a reference to an unnamed college that had a small 0.1 difference in first year GPA between submitters and non-submitters. This is the same difference in first year GPA that occurred in the earlier Wake Forest example, which decreased to a negligible 0.03 difference in cumulative GPA by graduation. I didn’t see any other specific numbers that were relevant to the comment.

I think this is the section from the Selingo article that is being referenced: “At one top-ranked liberal arts college, where 60 percent of the students who enrolled last year submitted scores, the admissions dean told me that the average first-year GPA for members of the freshman class that submitted scores was 3.57; for non-submitters it was 3.47. “Institutional research tells me the difference is statistically significant,” he said.”
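
As an aside on what “statistically significant” means for a 3.57 vs. 3.47 gap, here is a minimal sketch of the kind of two-sample t-test institutional research might run. The group sizes and standard deviations are invented for illustration; the article reports neither.

```python
# Hypothetical check of whether a 0.1 first-year GPA gap is statistically
# significant. Means are from the article; sizes and SDs are assumptions.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=3.57, std1=0.35, nobs1=300,  # submitters (~60% of an assumed 500)
    mean2=3.47, std2=0.35, nobs2=200,  # non-submitters
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # t ~ 3.13, p ~ 0.002
# With samples this large, even a practically small 0.1 gap can easily
# clear the conventional p < 0.05 bar.
```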

IMO, this section is more compelling: "Schmill and Peterson came to that CUAFA meeting armed with data. To determine whether to keep the tests, MIT had taken a different approach from its peers: Rather than continuing to experiment on current classes, Schmill and his admissions team chose to look backward at historical data the school had been collecting on students since the early aughts. “We have 20 years of data where we fiddled with different levers over time,” Peterson told me.

One of those levers was the SAT itself — specifically, the range of MIT students’ scores today compared to 20 years ago. In the fall of 2020, the last year tests were required, the middle 50 percent of MIT first-years scored between 780 and 800 (out of a possible 800) on the math section. That means the top 25 percent of the class scored a perfect 800 and the bottom 25 percent scored a 780 or below with none scoring below 700. (To put these numbers in perspective: If a student missed two math questions out of the 58 on most versions of the SAT, they’d score a 770, putting them below 75 percent of the first-year class at MIT.) But in the early aughts, MIT admitted students with a wider range of scores: About a tenth of first-years scored between a 600 and 699 on the SAT math section, according to MIT’s archived Common Data Set. “They did not do well,” Peterson said. Graduation rates at the time hovered just above 90 percent, high for most colleges but not good enough for MIT.

Schmill didn’t publicly release any of the data he shared with the committee, nor would he show it to me, a stipulation he made when I approached him for this article in April. He was also reluctant to describe how this data broke down across demographic groups. (In late October, the Supreme Court would begin to hear oral arguments for Students for Fair Admissions v. Harvard, a lawsuit that used, among other factors, Asian American students’ test scores to argue against affirmative action, and nearly every admissions dean I spoke to throughout the summer and fall worried about speaking publicly about how race factored into their decision-making.) But to get a sense of what the committee saw — and what made Schmill argue in his blog post that requiring a test score supports greater diversity rather than working against it — I dug into historical data on retention and graduation rates by ethnicity in MIT’s institutional research pages. Here’s what I found: 88 percent of Hispanic students who entered in the fall of 2006 (when 13 percent of MIT first-years scored between a 600 and 699 on the SAT math section) graduated within six years. Black students who started in the fall of 2006 had an 84 percent graduation rate, the lowest among any demographic group except “American Indian/Alaska Native.” Over the following years, as MIT reduced the percentage of students it enrolled with SAT math scores between 600 and 700, the overall percentage of Black students stayed relatively steady while the percentage of Hispanic students rose. But the graduation rates for both groups started to inch up with each eclipsing the 90 percent mark by 2013.

Schmill’s definition of equity in admissions, he told me, is “not all about who comes in the door but also who goes out.” Every year, MIT sees applications from students who didn’t take rigorous math and science courses in high school — many of whom are minority or low income — and without test scores, Schmill said, admissions officers risk accepting students less likely to make it to graduation. For students who applied without test scores the past two years, admissions officers looked for other evidence of math achievement, such as Advanced Placement tests, International Baccalaureate courses, or American Mathematics Competitions. Without any of those data points, the likely result was a rejection, but access to those assessments is even more closely tied to wealth than performance on the SAT or ACT."

They do not control for the criteria that are used to admit test optional applicants. For example, suppose a highly selective college admits some B-average hooked kids with weaker scores, weaker course rigor, weaker LORs, and a generally weaker application. They then find that hooked B students were not as academically successful as the average kid. This is not good proof that test optional won’t work, since everything was weaker, not just scores.

More relevant would be to compare kids who would be admitted as test optional, rather than just comparing students with high scores to students with low scores in isolation. For example, what was the outcome for kids with a 600-699 SAT who were A students, with high course rigor including post-calc math, a glowing LOR from a math teacher, ECs/awards/AMC in math-related activities outside the classroom, etc.? Those are the kids I’d expect to be admitted test optional at a college like MIT, rather than the average applicant with a 600-699 SAT.

1 Like

That seems basically like a long-winded way of saying that students who could not do SAT math well enough to score in the top end of the range tended to have difficulty with the MIT math courses that all students are required to take.

Something like that may not be applicable to other colleges which do not have as heavy math or math-based general education requirements, or may only be applicable to math-heavy majors at other colleges (a University of Oregon study found that SAT math scores were predictive of how well math and physics majors did, but SAT scores otherwise had little predictive power).

1 Like

But in this case, Mitch followed the data and the advisors, not the political winds.

I was a bit surprised to see this; I thought it had been announced previously. I guess I had just assumed what seemed clearly going to happen.

1 Like

Are there a lot of these kids? I can see a 600-699 EBRW for kids who are amazing scholars in English and humanities but who didn’t prep for the trick questions / SAT grammar, but don’t most kids who are super strong in math score at least 700 on the math portion?

2 Likes

Probably not, which suggests there probably aren’t going to be a lot of 600-699 math SAT kids who would be admitted to MIT under a test optional system. Test optional at a highly selective college does not mean there will be a large portion of students with scores hundreds of points lower than average. More often it means admitting kids with somewhat lower scores than would be predicted by the rest of their application. For example, MIT might admit a kid who made some careless errors and scored in the low 700s (on an easy test date).

At MIT, I’d expect test optional admits are more likely to have done poorly on the verbal section than on math. This can include kids who speak English as a second language, which may slow them down.

This thread is moving fast, but some test proponents cite this College Board data as support, here. Findings show that HS GPA plus test score had a .61 correlation with first-year college GPA, while HS GPA alone had a .53 correlation. That is a 15% greater correlation for HS GPA plus test score.
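
To put those two correlations side by side (only the .61 and .53 figures come from the College Board report; the variance-explained framing is my own gloss):

```python
# Back-of-envelope comparison of the College Board correlations cited above.
r_gpa = 0.53      # HS GPA alone vs. first-year college GPA
r_gpa_sat = 0.61  # HS GPA + test score vs. first-year college GPA

print(f"relative gain in r:  {r_gpa_sat / r_gpa - 1:.1%}")  # ~15.1%, as stated
print(f"r^2, GPA alone:      {r_gpa**2:.0%}")               # ~28% of variance
print(f"r^2, GPA + score:    {r_gpa_sat**2:.0%}")           # ~37% of variance
```

In r-squared terms, adding scores moves variance explained from about 28% to about 37%, which is why the same figures can be framed as either a modest or a meaningful improvement.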

Summit test prep’s co-founder cited this College Board research in a recent article disputing that test optional/test blind policies necessarily lead to a more diverse student population and/or a greater proportion of Pell Grant students (a claim often made by test optional/blind proponents). The article ends up comparing MIT and WPI across several access factors, and is quite critical of Andrew Palumbo, WPI’s head of admissions and a staunch advocate for test blind/optional admissions: Bitter Irony in Test-Blind Admissions

Other schools that have published test optional data showing little difference in cumulative college GPAs and in graduation and/or retention rates include Ithaca, Mount Holyoke, and Dickinson; links here.

Interesting. I think grade inflation is a particular issue, although maybe somewhat less of one after the last of the Covid-affected classes graduates next year. The Purdue PowerPoint mentions grade inflation as an issue for TO admissions, particularly because grade inflation has a wealth disparity. The UC study found that high school GPA’s reliability as a predictor of freshman grades was declining in recent years, presumably also due to grade inflation, although the study did not discuss a reason.

2 Likes

At test optional colleges, test optional admits average lower income and a larger portion of URMs than test submitter admits. I am not aware of any exceptions to this rule across the dozens of colleges that have published relevant stats.

However, this does not mean that switching to test optional/blind will always lead to a more diverse student population or a greater proportion of Pell Grant kids. A selective, private college can choose to make its class whatever portion of URMs and Pell Grant kids its goals and internal rules support, regardless of test optional/blind status. It can directly control the degree of preference for these groups in admissions, indirectly control the degree of preference for criteria that are correlated with these groups, favor recruiting particular groups, or make changes that increase the number of applications from particular groups.

The reasons why MIT and WPI have different proportions of URMs or Pell Grant kids go far beyond test optional. Looking at a less loaded and more obvious example, WPI admits are 37% female, while MIT admits are 51% female. It’s been well established that women are more likely than men to be admitted test optional, since women average higher HS GPAs with similar or lower scores, so why are there far more women at test-required MIT than at test optional/blind WPI? I expect the answer is that MIT wants to maintain a 50/50 gender balance and is selective enough to do so without notably compromising quality, while WPI does not apply the same degree of gender preference. If a selective college wants to maintain a 50/50 male/female balance, it can do so regardless of test optional policies, just as a selective college can maintain a specific portion of URM or Pell Grant students regardless of test optional policies.

1 Like

Compression at the top of the HS GPA scale, perhaps due to:

  • general increase in competitiveness for UC admission
  • grade inflation
  • UC admission pre-COVID-19 heavier weighting of HS GPA over SAT/ACT (based on UC studies from a few decades ago)

likely weakens the relationship between HS GPA and college GPA.

I think we’ve well established that scores + HS GPA is better than HS GPA alone for predicting freshman GPA. However, none of the colleges discussed in this thread admit test optional applicants by HS GPA in isolation. They consider other criteria when determining whom to admit and when estimating whether the applicant is likely to be successful at the college. Having a class with the highest possible freshman GPA (prior to the effects of a curve) is also not their top priority in creating a class.

More relevant is how academically successful kids who would be admitted test optional are compared to kids who would be admitted test required. Academic success includes more than just having a slightly higher freshman GPA.

As an example, the previously linked Ithaca study found the following (a rough sketch of this kind of nested-model comparison follows the list below). SAT added little beyond the combination of GPA + HS course rigor + AP count + demographics in predicting cumulative GPA; the combination of course rigor + AP count largely overlaps with what the SAT seems to add in this example. A college that considers a measure of course rigor in addition to GPA gets less benefit from requiring scores than a college that considers HS GPA in isolation.

  • First Gen + URM + Gender: explains 8% of variance in cumulative GPA
  • Demographics + SAT score: explains 25% of variance in cumulative GPA
  • Demographics + GPA + HS course rigor + AP count: explains 43% of variance
  • Demographics + GPA + HS course rigor + AP count + SAT: explains 44% of variance
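
For anyone curious how numbers like these are produced, here is a minimal sketch of the nested-regression approach on entirely synthetic data. The variable names and effect sizes are invented; only the method (comparing R² across successively larger models) reflects the study.

```python
# Sketch of a nested-model R^2 comparison like the Ithaca figures above.
# All data is synthetic and illustrative; only the method is the point.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Invented predictors, loosely correlated the way the real ones might be
first_gen = rng.integers(0, 2, n)                 # demographic flag
hs_gpa = np.clip(rng.normal(3.5, 0.4, n), 0, 4)
rigor = rng.integers(1, 6, n)                     # 1-5 course-rigor rating
ap_count = rng.poisson(4, n)
sat = np.clip(1000 + 375 * (hs_gpa - 3.5) + rng.normal(0, 120, n), 400, 1600)

# Synthetic outcome: cumulative GPA driven mostly by HS GPA and rigor,
# with the SAT contributing little signal beyond what it shares with HS GPA
cum_gpa = (0.6 * hs_gpa + 0.05 * rigor + 0.02 * ap_count
           + 0.0003 * sat + rng.normal(0, 0.3, n))

def r2(*predictors):
    """R^2 of an OLS fit of cum_gpa on the given predictor columns."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(cum_gpa, X).fit().rsquared

print(f"Demographics + SAT:               {r2(first_gen, sat):.2f}")
print(f"Demographics + GPA + rigor + APs: {r2(first_gen, hs_gpa, rigor, ap_count):.2f}")
print(f"          ... everything + SAT:   {r2(first_gen, hs_gpa, rigor, ap_count, sat):.2f}")
# When the SAT's signal mostly overlaps the other predictors, the last two
# R^2 values land close together, mirroring the 43% vs. 44% pattern.
```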

Let’s not forget that Covid has also made HS grades much less predictive. For example, at UC Berkeley, demand for introductory/remedial math and English classes is at a record high, according to its vice provost for undergraduate education.

2 Likes