Test Optional Admission Data

Do you have any data demonstrating that non-submitters graduate at statistically significantly lower rates than those who submit?

2 Likes

What does an average/aggregate graduation rate really mean for a college with some majors that are nearly impossible to fail? Do we have graduation rate breakdowns by major?

At a lot of super selective schools you aren’t admitted by major, so if you find a particular major too difficult, you can easily change to something else. Those are the same schools where nearly all students graduate. I guess you could attempt to measure how many kids who started out in a hard major, like Engineering, went on to get a degree in that field (as opposed to graduating with another major). Unfortunately, outside of top-ranked schools, it is all too easy for students to fail to complete their degree regardless of major. If we are really concerned about student competency, we should stop worrying about what admissions are like at the top 50 schools, where there is an abundance of talent with or without TO policies, and start thinking about how we can get kids at much less highly regarded schools to finish their degrees.

3 Likes

Agree with this. This thread, though, is going down the usual path of what different people feel is “fair” in selective college admissions. Looking at graduation rates of T20 or even T50 schools is IMO not very insightful, as the differences in academic ability, especially using the low bar of 6-year graduation rates, are meaningless among students who have the GPA/transcript, ECs, LoRs, and essays to gain admittance with or without test scores. There are certainly less rigorous majors at these schools that students can either initially choose or drop down to.

Proof of how meaningless it is to use graduation rates of test takers vs. non test takers to support the supposition that both groups of students are equally academically qualified can be seen in the graduation rates of recruited athletes. It is fair to assume that as a group, recruited athletes at selective schools have lower academic qualifications (grades, test scores, etc.) than the general admitted class, with some recruits even 2.5 or more standard deviations from the general class in AI for Ivies. Indeed, we know from the litigation that only a small fraction of recruited athletes at Harvard would have been admitted but for being recruited athletes. Yet the graduation rate for Ivy athletes is at, and in some cases higher than, the overall graduation rate for each school. Failure to graduate at a T20 is not about intellectual inability to pass, but more likely about personal or family problems, finances, mental health, addiction, etc., or even grabbing an entrepreneurial opportunity.

To me the rest of the debate is about the effect of removing standardized test scores as a data point. Test scores certainly reflect various biases. But the biases are pretty well known and have historically been accounted for. So in the zero-sum game of selective college admissions, let’s look at who benefits and who is disadvantaged if test scores are not a mandatory part of the applicant’s assessment.

  • Advantaged: students who otherwise have a great file but don’t test to their desired standard. Could be any SES class

  • Advantaged: students who have less access to testing resources or cannot afford multiple sittings, and for whom testing/test scores are a barrier to even applying. Mostly applies to lower SES students

  • Disadvantaged: high-scoring students, especially those who took the test once, those from disadvantaged backgrounds, and those who might not have the best GPA/transcript (the classic school underachiever, someone with a bad year, someone with family/other burdens that affect school performance). More so under test blind than under a TO regime

  • Most advantaged: schools that want to increase their pool of applicants and more subjectively shape their class.

11 Likes

Interesting thoughts, but I don’t think graduation rates are being offered to show that “test takers vs. non test takers” are “equally academically qualified.” I think the rates are being offered to show that students from both groups are adequately academically qualified. In other words, the graduation rates suggest that about the same percentage of students from each group is adequately prepared to succeed at the colleges in question.

If we consider the issue on those terms, then the lesson learned from your Ivy athletes example may be that the range of academically qualified students is much wider (by “2.5 or greater standard deviation”) than might be suggested by the general admissions profile. And that failure to graduate isn’t about “intellectual inability to pass”; rather, the requisite “intellectual ability to pass” (as measured by grades and test scores) is a lower standard than many would assume.

Schools are looking to encourage a broader range of applicants whose qualifications more closely align with the schools’ educational missions. They are “advantaged” to the extent that TO/TB policies accomplish this.

But this approach isn’t necessarily any more or less “subjective” than heavily relying on test scores in the admissions process would be. Just as it is for applicants, the schools are looking for “fit.” In many cases TO/TB policies may increase a school’s ability to find candidates who fit their broader mission.

1 Like

Yes, but if so, it is a relatively low bar. I do think that some of the posts upthread are coming from a point of view of using graduation rates to support overall academic equivalence vs. just the ability to pass/graduate. I may be misinterpreting their thrust, but where people get their dander up on CC is over whether the most “meritorious” students get in (however they define that).

I think it is also, in some cases, to avoid critics (and even lawsuits) who have pointed to differences in the test scores of various groups. We may have different opinions on whether tests measure anything useful, but a test score is the one piece of data scored purely on the number of right answers on a test that is the same for all test takers on that date, vs. more qualitative assessments of things like essays, LoRs, and ECs, and even judgments made on rigor and potential grade inflation/deflation.

It is behind a paywall (I think), but it is a really good discussion of the true impact of removing tests, and of how fundamentally disingenuous the argument is that it is being done to help URM students.

The writer makes a powerful argument that dramatically reducing acceptance rates at the top U of C schools is not the best way to advocate for URM students. I have made this point in this thread and others. Since U of C went TO, URM percentages actually dropped at some campuses, including Merced.

The author cites another study done in 2018 to assess whether TO actually increased diversity at colleges:

In terms of racial diversity, the percentage of freshmen students of color did not change in either direction for liberal arts colleges after making the switch to test-optional admissions. In fact, we find that test-requiring institutions increased student diversity to the same degree as that of test-optional institutions. This result contradicts one of the often stated justifications institutions provide for implementing a test-optional policy, which is to diversify the student body. Our analysis suggests that institutions should not rely on a test-optional approach to admissions as a means to increasing the racial diversity of the student body. … Furthermore, this result suggests that the motivation for adopting a test-optional policy is not to diversify the student body, since student diversification appears to be related more to an institution’s desire to do so.

1 Like

Well said.

Question: Colleges publish their middle 50% SAT range. Do they do the same with UW high school GPA?

The UCs are test blind, not test optional. They are also race blind.

The UCs (and any school) have very little control over their declining acceptance rates.

I do think the author raises a number of good points. Should UCLA be proud that their acceptance rate is 10%? That means access is getting harder for in-state students of any race, not easier. There are no easy answers, but certainly one is that the UCs could dramatically increase their enrolled student numbers if they allowed students to enroll as 100% remote students…they do have the ability to do that.

I thought the Times article made some good points, particularly as they pertain to access and affordability. At the same time, his line about a hard-working student having their place “taken” by a wealthy one seemed unrelated to the data he was talking about. Whether you support testing or not, test scores have a high correlation with wealth, and I’m not at all sure that bringing them back would ensure the poor, hardworking student a different outcome. The unpleasant truth is that “elite” college admissions favors wealthy students in every way – from test scores to ECs to private college counselors and so on.

Thanks for posting; I found that thought-provoking and compelling. I hadn’t realized that the UC Regents voted unanimously against the findings of the task force, wow.

I am wondering if we are missing something about the TO story, per this thread overall. We have:

  • Does TO/TB help increase access/diversity? This data challenges that narrative effectively, in my opinion
  • Are schools motivated for TO/TB to increase apps/reduce acceptance rate/look more competitive?

Now, what else is TO/TB analysis for…? What questions are we looking at this data to answer?

1 Like

UCs were never TO. They went test blind for the 2021 admission cycle, so I doubt there is substantial data available after one admission cycle.

Yes, they have now permanently gone test blind. Their original plan was to go TO for 3 years and then test blind and eventually come up with an alternate admission test besides the SAT/ACT.

Here is one data source:

I’m not sure that being adequately prepared to succeed at an elite college is a “relatively low bar.” And even if it is, I’m not sure that this should mean it is an improper standard. By discounting an adequately-qualified-to-succeed standard, I wonder if you aren’t just reinserting the “most ‘meritorious’ students get in (however they define that)” standard.

A standard isn’t necessarily any less subjective or more appropriate just because it applies to everyone. With such standards, the “qualitative assessment” is built into the standard itself. The part you discount as “different opinions of whether tests measure anything useful” is where the subjectivity lies.

For example, suppose a school admitted students based solely on the wealth of the parents, with rich families in, and everyone else out. That standard would apply to everyone but is without a doubt a subjective qualitative assessment of questionable value. With regard to the tests, they are uniform, but no one is quite sure what exactly they are measuring, how context impacts those measurements, or how this information is of value to schools relative to a policy where tests are optional or blind.

Also, the tests cannot possibly provide a point of comparison between students who don’t apply. Given the goal is to attract more students who might not otherwise apply, this is a key consideration.

1 Like

Major would be less relevant for 2nd-year retention figures, which were also similar between test submitters and non-submitters. Graduation rates also seem similar between submitters and non-submitters at colleges that make it awkward to switch majors.

I don’t have a breakdown of graduation rate by major, but there are differences in major distribution by test score. For example, the CollegeBoard report at https://reports.collegeboard.org/pdf/2020-total-group-sat-suite-assessments-annual-report.pdf lists the following average SAT scores by intended major. Students intending math-heavy majors generally have higher average scores, so intended math majors are likely to be overrepresented among test submitters.

Average SAT by Intended Major
Math – 1247
Physical Sciences – 1206
Social Sciences – 1166
CS – 1156
Engineering – 1140
Biology – 1134
English – 1122
Psych – 1068
Business – 1066
Performing Arts – 1047
Health/Nursing – 1044
Education – 1021
Agriculture – 968

The actual major distribution in the NACAC aggregate study of many colleges is below.

Major Distribution by Submitter vs Non-Submitter
Humanities + Arts – 23% of submitters vs 27% of non-submitters
Business – 17% of submitters vs 12% of non-submitters
Social Sciences – 15% of submitters vs 17% of non-submitters
Biology – 8% of submitters vs 6% of non-submitters
Comm/Journalism – 7% of submitters vs 7% of non-submitters
Psych/Social Work – 6% of submitters vs 10% of non-submitters
Math/CS – 5% of submitters vs 3% of non-submitters
Health/Nursing – 4% of submitters vs 4% of non-submitters
Education – 3% of submitters vs 5% of non-submitters
Physical Sciences – 3% of submitters vs 2% of non-submitters
Engineering – 1% of submitters vs 1% of non-submitters
Agriculture – 1% of submitters vs 1% of non-submitters
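
As a quick arithmetic check on the overrepresentation point above, here is a minimal sketch (Python, purely illustrative; the percentages are simply the ones transcribed from the list, and a ratio above 1 means the major is more common among submitters than among non-submitters):

```python
# Percent of submitters vs percent of non-submitters, from the NACAC list above
distribution = {
    "Humanities + Arts": (23, 27),
    "Business": (17, 12),
    "Social Sciences": (15, 17),
    "Biology": (8, 6),
    "Comm/Journalism": (7, 7),
    "Psych/Social Work": (6, 10),
    "Math/CS": (5, 3),
    "Health/Nursing": (4, 4),
    "Education": (3, 5),
    "Physical Sciences": (3, 2),
    "Engineering": (1, 1),
    "Agriculture": (1, 1),
}

# Sort by how overrepresented each major is among test submitters
for major, (sub, non) in sorted(distribution.items(),
                                key=lambda kv: kv[1][0] / kv[1][1],
                                reverse=True):
    print(f"{major:<20} {sub / non:.2f}")
```

Math/CS and Physical Sciences come out roughly 1.5x to 1.7x overrepresented among submitters, consistent with the higher average SAT scores for math-heavy intended majors.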

There are few engineering students in the studies referenced in earlier posts, which emphasize LACs. Available numbers are below for WPI, a test-blind (formerly test-optional) engineering college. The portion choosing engineering majors and the graduation rate trend seem largely unchanged after a good portion of the class started being admitted without test scores.

Worcester Polytechnic Institute Stats
2005 – Grad Rate = 74%, 86% Engineering + CS, 1% Humanities + Social Science
Test Optional in 2008
2010 – Grad Rate = 80%, 87% Engineering + CS, 1% Humanities + Social Science
2015 – Grad Rate = 85%, 88% Engineering + CS, 1% Humanities + Social Science
2020 – Grad Rate = 89%, 90% Engineering + CS, <1% Humanities + Social Science
Test Blind in 2021

I agree! In this new era of TO, maybe a compromise would be staying “test optional”, but IF you submit scores, you need to submit them all:

Princeton is one example currently “encouraging” similar:
“For those who choose to submit testing, we allow applicants to use the score choice feature of the SAT and accept only the highest composite score of the ACT, but we encourage the submission of all test scores.”

IME, for the vast majority of submitters, “encourage” means “required to be seriously considered.” They state something similar about AP scores: they encourage submitting ALL of them.

It dilutes the meaningfulness of standardized testing to have a one-sitting, reasonable-prep 1400 compared against a 1400 earned after five attempts and 50 hours of prep, starting from a 1200. Maybe that is a factor in why testing has not been found to be significantly predictive in recent years: how can it be on the unlevel playing field of superscoring and multiple retakes?
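
For intuition on the retake point, here is a minimal sketch, assuming (purely for illustration, not from any study) that each sitting is a student’s underlying score plus random noise with a standard deviation of about 30 points:

```python
import random

def simulate(true_score=1300, sd=30, sittings=5, trials=10_000):
    """Compare a single sitting to the best of N sittings for a student
    whose underlying ability never changes between attempts."""
    single_total = 0.0
    best_total = 0.0
    for _ in range(trials):
        scores = [random.gauss(true_score, sd) for _ in range(sittings)]
        single_total += scores[0]   # one-and-done score
        best_total += max(scores)   # best sitting kept, retakes hidden
    return single_total / trials, best_total / trials

one_sitting, best_of_five = simulate()
print(f"Average single sitting: {one_sitting:.0f}")
print(f"Average best of five:   {best_of_five:.0f}")  # roughly 35 points higher
```

Even with identical underlying ability, simply reporting the best of several noisy sittings shifts the score upward, and section-level superscoring would inflate it further, which is the unlevel playing field being described.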

1 Like

My observations were about how the discussion upthread had been revolving around whether there was a qualitative academic difference between students that could be ascertained through test scores. We agree that test scores offer little additional insight on whether an applicant to a T20, or maybe even a T50, is academically qualified to graduate, given the other application data points in hand. Certainly selective universities are not satisfied with just admitting students who are capable of graduating. They want the best students to shape the class they feel is optimal, which these days includes consideration of diversity in interest, talent, background, and demographics. Some have decided they will have a freer hand in doing this, from a larger potential pool to select from, by going TO or TB. This is an understandable position for them to take, and there is no “right or wrong” about it. At the same time, whenever we change standards for anything involving limited spaces, it is obvious there are going to be those advantaged and those disadvantaged by the change, and this needs to be acknowledged. That was my original point.

I think a major misconception that many have about elite schools in this country (especially when compared to elite schools elsewhere around the world) is that they are only looking for the best academic talent – while that is one consideration, it isn’t the only one. These schools have a variety of other institutional goals (diversity, competitive athletics, a desire for “interesting/unique” students, keeping donors happy, etc.) that can result in the admission of students who may have lesser academic qualifications than students who are rejected. That was true before TO, remains true now, and will continue to be true if the current policies are changed. Because the schools aren’t transparent about these other goals, it leaves parents/students frustrated and angry when they feel less “deserving” students are admitted.

6 Likes

Their CDS often will include percentages showing where matriculated students ranked in their classes, and also a segmentation of GPA: https://oir.harvard.edu/files/huoir/files/harvard_cds_2020-2021.pdf Here we see in the 2020-2021 Harvard CDS that 94% of students who were ranked (39%) were in the top 10 percent; also, 76% had 4.0 GPAs and another 18% had GPAs between 3.75 and 3.99.

Forgive me if I posted this earlier; I think I may have on another thread… but my kiddo was told by 2 reps from 2 different colleges (one school) not to submit ACT scores unless they are a 35 or 36 for ED. My kiddo naively relayed the incidents as “It’s super competitive this year.” Reflecting on the Conn College post: No, it’s super manipulative this year.

4 Likes