I have no problem with a school saying “The SAT and ACT suck, and we will not consider them in the admissions process.” It is their school, and they can use any criteria they want for admissions within legal limits. Grades, ECs, essays, parents’ net worth, making parents bid for spots in the school: whatever a school wants to do, they should go for it.
My issue with TO is that it appears to be an attempt to game the rankings rather than assemble the best student body. By going TO, schools can hide the scores of their weakest (i.e. hooked) applicants and not get dinged in the rankings for taking an outstanding applicant who simply didn’t score well. But the schools still want credit for their super high SAT range (which they have made higher by going TO).
The fact that many top schools still favor legacies in admissions but are arguing that tests are too weighted towards the privileged is as hypocritical as it gets.
MIT sounds like an outlier. Perhaps there is a unique explanation, such as asking accepted + matriculating students to submit scores over the summer before attending if they have them, like Bowdoin and various other colleges do.
Below is a comparison of admit rates between test-optional applicants and test submitters at other selective private colleges for which I could find info. The median of the 9 selective private colleges with information for the full class was ~50% of applicants test optional vs ~40% of admits test optional, which suggests the test-submitter admit rate was roughly 1.5x the test-optional admit rate: for example, a 15% admit rate for submitters vs a 10% admit rate for non-submitters.
Test submitter applicants are expected to be stronger on average in the full application (expected to average higher GPA, higher course rigor, better LORs, better ECs/awards, more ALDC hooks, higher income, etc.), so I don’t think it is clear that the non-submitters were being significantly penalized for not having a score at typical selective, private colleges.
Total Class (ED/EA + RD)
Wellesley – 60% of applicants test optional, ~50% of admits test optional
Barnard – 59% of applicants test optional, 47% of admits test optional
Colgate – 59% of applicants test optional, 40% of admits test optional
Boston U – 58% of applicants test optional, 43% of admits test optional
Tufts – ~50% of applicants test optional, ~41% of admits test optional
Davidson – ~50% of applicants test optional, 36% of admits test optional
Emory – ~50% of applicants test optional, 31% of admits test optional
Amherst – 49% of applicants test optional, 37% of admits test optional
Vanderbilt – 44% of applicants test optional, 39% of admits test optional

Only ED / EA
Penn ED – 38% of applicants test optional, 24% of admits test optional
Notre Dame REA – 49% of applicants test optional, 31% of admits test optional
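The arithmetic behind the ~1.5x figure can be sketched in a few lines. The TO shares come from the data above; the overall admit rate is a hypothetical input chosen for illustration, since the per-school overall rates aren’t listed here:

```python
def admit_rates(overall_rate, to_share_of_apps, to_share_of_admits):
    """Back out test-optional and submitter admit rates from aggregate shares.

    If a fraction to_share_of_apps of applicants applied test optional and a
    fraction to_share_of_admits of admits were test optional, the two group
    admit rates follow directly from the overall admit rate.
    """
    to_rate = overall_rate * to_share_of_admits / to_share_of_apps
    submitter_rate = overall_rate * (1 - to_share_of_admits) / (1 - to_share_of_apps)
    return to_rate, submitter_rate

# Median-school shares from the list above (50% of applicants TO, 40% of
# admits TO), with a hypothetical 12.5% overall admit rate:
to_rate, submitter_rate = admit_rates(0.125, 0.50, 0.40)
# ~0.10 for TO vs ~0.15 for submitters: submitters admitted at ~1.5x the TO rate
```

The ratio of the two rates depends only on the shares, not on the overall admit rate, which is why the ~1.5x conclusion holds regardless of how selective the school is.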
Using the same methodology I laid out. Yes, it could be a range; the TO admission rate could even be 0, for example. 50% is roughly the middle of the possible range, given consistent SAT/ACT acceptance rates.
That’s making an assumption that no ranking system will make adjustments starting next year, when the first pandemic-driven TO class will be factored into the data. I’d be very surprised if USNews and others just take and weigh reported standardized test score distributions as they have this year.
I agree, though others have objected to the initial assumption when I’ve stated this.
Thanks for the summary data - that’s what I didn’t have for comparison. MIT does look to be outside the norm of this sample set.
And fwiw, I highly doubt MIT padded their “admits” stats retroactively.
This is an interesting article I found. As I and many others have asserted, TO had little to do with diversity initiatives other than happening to occur around the same time. Schools that were TO prior to the pandemic showed only tiny diversity gains compared to schools that were still requiring tests.
One other key quote highlighted an issue that I have not seen discussed anywhere else:
Colleges had to hire many more admissions staffers and application readers to sift through applications without test scores. Test scores are an efficient way to reduce the applicant pool. Based on this pre-pandemic research, it may seem that the small diversity gains from test-optional admissions are not worth the cost.
The admissions department employees at colleges are not volunteers. There is a cost to doing what are increasingly much more in-depth reviews of applications. Who pays for that? The application fees are unlikely to pay for much of that cost unless TO universities are using some other weed out tool that we are not aware of. If they are simply weeding out the additional applications, then why don’t they post those criteria so applicants don’t waste their time and money in applying?
Many schools have always hired temporary readers during the application season, well before test optional admissions. These hourly jobs pay around $15-$25/hour. With an average of 12 minutes or so spent per application and application fees of $50-$75 (of course not all colleges charge an application fee), the app fees pay for these readers, even assuming some proportion of app fee waivers.
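A quick back-of-envelope check of that claim. The pay rate, read time, and fee come from the figures above; the 20% waiver share is a hypothetical assumption:

```python
# Cost to read one application at the top of the hourly range quoted above.
hourly_rate = 25.0           # $/hour (high end of the $15-$25 range)
minutes_per_app = 12.0
cost_per_read = hourly_rate * minutes_per_app / 60.0    # $5.00 per application

# Revenue per application at the low end of the fee range, assuming
# (hypothetically) that 20% of applicants receive fee waivers.
app_fee = 50.0
waiver_share = 0.20
revenue_per_app = app_fee * (1 - waiver_share)          # $40.00 per application

# Even with these pessimistic assumptions, the fee comfortably covers
# the cost of the initial read.
assert revenue_per_app > cost_per_read
```

Even at the most expensive reader and the cheapest fee, the fee covers roughly eight reads per application, which leaves room for training, second reads, and committee time.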
I have not seen or heard that TO has caused average application read time to increase. Do you have a source for that?
I have not seen or heard that not using prescreens on applications makes the process just as efficient as using prescreens on applications. Do you have a source on that?
Logically this position is really hard to understand. How can the new definition of “holistic review” of an application not be much more time intensive?
Also, are you saying that applications are being evaluated by part time employees who make close to minimum wage?
I don’t understand this…what are you saying/asking here?
Again, I don’t understand. I am not aware of there being a ‘new definition’ of holistic review, nor am I aware that there’s only one way to read apps in a holistic manner.
Yes, at schools that hire external temporary readers. Some schools hire hundreds of readers. They do receive training, some readers have done the job for many years, and honestly evaluating apps against a given school’s criteria is just not that difficult.
Here is the complete abstract from the study underlying the article, with my bolds added:
Abstract
This study examines a diverse set of nearly 100 private institutions that adopted test-optional undergraduate admissions policies between 2005–2006 and 2015–2016. Using comparative interrupted time series analysis and difference-in-differences with matching, I find that test-optional policies were associated with a 3% to 4% increase in Pell Grant recipients, a 10% to 12% increase in first-time students from underrepresented racial/ethnic backgrounds, and a 6% to 8% increase in first-time enrollment of women. Overall, I do not detect clear evidence of changes in application volume or yield rate. Subgroup analyses suggest that these patterns were generally similar for both the more selective and the less selective institutions examined. These findings provide evidence regarding the potential—and the limitations—of using test-optional policies to improve equity in admissions.
https://journals.sagepub.com/stoken/default+domain/RMRIR6XS4QICVZDDIAZM/full
And a different article discussing the same study:
The findings associated test-optional policies with:
A 3-4 percent increase in Pell Grant recipients enrolled.
A 10-12 percent increase in first-time students from underrepresented racial/ethnic backgrounds.
A 6-8 percent increase in first-time enrollment of women.
Any testing process needs a control group, and the article I linked is the first article on the topic of TO that I have seen that found a control group of non-TO schools. It turns out all schools, TO or otherwise, have made diversity and first gen a priority, and they have all been successful over the last few years with their efforts. Linking TO to those outcomes conflates two separate initiatives to create the appearance of causation when, with the additional data from the control group, the relationship looks like mere correlation.
There was separately an increase in women attending college that does appear to be a function of TO, since I am not aware of colleges making greater enrollment of women (and, by extension, lower enrollment of men) a priority, given that women are now 60% of all college students. This outcome is worth additional discussion.
Rosinger said not to expect “dramatic gains” in diversity from eliminating testing requirements because the other qualifications that admissions departments weigh, such as extracurricular activities and advanced high school courses, “tend to privilege the same students who are privileged by test scores.” Well-to-do families can pay for extras like sports and music lessons and high schools in wealthier neighborhoods are more likely to offer advanced coursework.
There is really no other side to this point. The other criteria for admission, like AP courses and ECs, overwhelmingly favor the wealthy.
The article you linked describes the study I linked. I suggest that rather than relying on the spin in the article, you take a look at the actual study. Here is a quote:
Relying on the policy adoption timing for more test-optional institutions than any prior published research, this study offers evidence on the effects of test-optional policies across the wide variety of institutions that had come to comprise the test-optional movement as of 2016. In contrast to earlier work, I find an increase of 10.3% to 11.9% in the number of URM students who matriculated following test-optional policy implementation during this era. At the same time, according to additional analyses (available upon request), there were no detectable changes in the enrollment of White and Asian students after test-optional policies went into effect. The finding that test-optional policies increased enrollment for URM students at private institutions contributes to a broader literature on efforts to increase racial/ethnic diversity among undergraduates at selective institutions.
The discussion goes on to note that because the percentage of URM students was small to begin with, the overall percentage increase compared to the whole sample is also small, but the 10.3% to 11.9% increase is nonetheless real and associated with the test optional policy.
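To make that relative-vs-absolute distinction concrete, here is a small worked example. The 15% baseline URM share is a hypothetical illustration; only the ~11% relative increase comes from the study:

```python
# Hypothetical baseline: URM students make up 15% of the incoming class.
baseline_urm_share = 0.15
# The study's finding: roughly a 10-12% increase in URM matriculants;
# use ~11% as the midpoint.
relative_increase = 0.11

# Approximate new share, holding total class size roughly constant
# (the study found no detectable change in White and Asian enrollment).
new_urm_share = baseline_urm_share * (1 + relative_increase)
absolute_gain_pts = (new_urm_share - baseline_urm_share) * 100
# An ~11% relative gain moves overall class composition by only about
# 1.65 percentage points: real, but easy to describe as "small."
```

This is why both sides can quote the same result: a 10-12% gain within the URM subgroup is meaningful, while the shift in the class as a whole is under two percentage points.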
One can argue that a 10-11% increase in URM enrollment due to TO is only a modest gain, and that more needs to be done. But as the author of the study said, “it is a step in the right direction.”
As for the Rosinger quote, I don’t think “dramatic” gains are the appropriate measuring stick. Moderate gains count too. And we don’t yet know whether the gains will become more significant as TO and test blind become more mainstream.
As for your last sentence: yes, the wealthy have many advantages other than just test scores, and doing away with test scores alone is not going to fully address the issue.
I’m not sure people fully understand the link between wealth and test scores, but here is a graphic that helps put it in perspective.
I agree with you that attributing the increased enrollment of certain demographic groups to test optional policies is a classic example of confusing correlation with causation. And I also don’t think the increased enrollment of women in colleges is a result of test optional policies. While there have been many outreach programs to female students from male-dominated fields, there are few, if any, similar programs to male students from female-dominated fields. Test scores of male students may be more broadly distributed than their female counterparts, but I don’t think their test scores, on average, are any higher.
"I want to suggest a way to make me stop talking about standardized tests altogether. For the record, I’ve never been opposed to colleges using them if they want; I’m opposed to the misuse of tests (such as one highly selective university telling students, “we really like a 32, but we love a 33,” or the fairly draconian SAT cuts Harvard apparently uses in its admissions evaluation system, their claims of holistic review notwithstanding).
Here are the details:
IPEDS stops collecting incoming freshman class test scores
CDS publishers take test scores off the survey
NACAC issues a statement saying colleges should not publish the scores anywhere
College Board and ACT do the same. Going first would be an enormous sign of goodwill. (That sound you hear is probably people at The Agencies laughing at the idea of goodwill between them and me. However, I’ve said on numerous occasions that I honestly like almost all the people I’ve met who work at either place; I talk to them at conferences, and have even had a few beers with them over the years. I just don’t like their corporate business practices.)
Then, colleges can use them all they want. And, presumably, they’ll be free to admit students with lower scores without fear of hurting themselves in the rankings."
Since this is not a PhD defense, the conversation would likely be more productive if you avoid stating what is only implied, especially when using provocative buzzwords.
I don’t think we need to do all or most of those things. I have said repeatedly that schools should use any criteria they want to select their incoming class. They could go by parents’ net worth or applicants’ shoe size or favorite color or a coin toss for all I care. Their school, their rules. Just be upfront about how the class is being selected with both the students and services like USN&WR that help students select colleges.
I can even live with schools going test blind, as wrong as I think that approach is, in part because it is honest, and in part because the marketplace of ideas will occasionally strike oil and will occasionally drill a dry well, and we don’t know which it is until someone tries. If the University of California discovers a new Rosetta Stone of student selection, then we are all better off. If the U of C completely screws this up and burns its academic reputation to the ground in the process, then we are also all better off because we would then know that test blind was a bad idea. Either way, we are all better off.
My biggest issue is with TO, because I think it is hypocritical and makes the acceptance process more opaque. This is unfair to students on several levels: it induces students to apply to schools where they have no chance of acceptance, it is open to abuse by schools looking to reward privilege, and it denies students a clear picture of how their fellow classmates got into the school.
At the end of the day, I think the market will resolve this issue. Some schools will decide that they need the tests, some will decide they don’t, and many will probably try to navigate a middle ground. Hopefully schools that are more transparent about their process will be rewarded with better students and ultimately better rankings.