Perhaps you are making it too complicated? The ranges represent the scores of those students who submitted and were admitted. If your daughter submits, that is what she is up against. No adjustment necessary.
In other words, with regard to your daughter’s application, it doesn’t really matter what the range “would have been if we were back in the good old days and everyone was submitting scores.” If her score doesn’t compare well to other similarly situated students (similar school type, region, transcript, ECs, essays, etc.) who are currently being admitted, then her odds are longer.
This is somewhat misleading. It is not as if MIT experimented with test optional and then decided it was a failure based on the results of the experiment. MIT was never truly “test optional.” Essentially, MIT made a temporary accommodation for those who could not submit test scores because of Covid, but told students who could safely take the test that they should submit scores. After more normalcy returned, MIT resumed requiring tests from everyone.
Here is how @MITChris put it in a different thread:
And in response to the observation that “many have the misconception that MIT experimented with TO and that it was a failure, but that doesn’t seem to be the case at all,” @MITChris responded:
In short, MIT didn’t “reverse course” on Test Optional. It was never truly Test Optional.
I am recently retired and now spend part of my time helping guide students through the college application process.
What I have been telling these students is that they should try to find the 25-75 range prior to Covid, and if their current score is above the previous 75 mark, they should absolutely submit.
My reasoning is that the underlying abilities of students don’t change that much in a few years. So if the admissions department recognized that as a strong score back then, they should still do so today.
What happens if/when the Supreme Court rules against Harvard in the admissions discrimination case, and if they do so broadly rather than narrowly – does that increase or decrease the test optional trend or have no impact?
For an example of how the appearance of a standardized testing profile changed after a new policy, here are the ranges for Bates in the years just before (2019) and just after (2020) it began requiring test scores from all enrolled students:
That sounds like it’s above the previous 50th percentile, so I would probably still recommend submitting unless the student is from an overrepresented group.
An interesting contrast to MIT going back to test required (from test preferred if you can get a test safely) is that Caltech will continue being SAT/ACT blind for another two years.
Perhaps, in Caltech’s case, the SAT/ACT is not particularly useful in showing the necessary level of academic strength, in that it is too low level in comparison. Presumably, MIT does not believe that to be the case for itself.
“If your daughter submits…” That really is the crux of the issue. If your score would’ve been in the top 50% pre-TO, but now isn’t, what do you do?
My argument is that it’s only 50%, 40%, 30% - whatever - of what she’s up against. Maybe I’m making it too complicated, but I find this a pretty fascinating topic. And it would be nice to know how the majority of admitted students, who for whatever reason didn’t submit scores, did on the exams (assuming they are again able to take them). As many have said above, what good does it do when second and third tier schools are reporting artificially high test score averages that make them look like Ivies and juice their rankings?
If you look at the data, would you assume a normal distribution of scores? Hard to imagine they wouldn’t be at least somewhat skewed. It would be cool if someone came up with a way to model the data to interpolate test score averages and ranges based on the number of TO acceptances, % of varsity/scholarship athletes in the student body, other hooked acceptances, HS GPA, class rank, etc. I’m kinda surprised somebody like Niche hasn’t already done this from common data set info.
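For what it’s worth, here’s a rough sketch of the simplest version of that idea: back out the full-class median from the submitter-only percentiles in the CDS, given the share of the class that submitted. The normal-distribution shape and the 100-point gap for non-submitters are pure assumptions for illustration, not estimates from any real data.

```python
from statistics import NormalDist

def estimate_class_median(p25, p75, submit_share, nonsubmit_gap=100):
    """Estimate a full-class median SAT from submitter-only percentiles.

    Illustrative assumptions (not validated against real data):
      - submitter scores are roughly normal,
      - non-submitters follow the same-shaped distribution shifted
        down by `nonsubmit_gap` points.
    """
    mu = (p25 + p75) / 2
    # For a normal distribution, p75 - p25 = 2 * 0.6745 * sigma
    sigma = (p75 - p25) / (2 * 0.6745)
    submitters = NormalDist(mu, sigma)
    nonsubmitters = NormalDist(mu - nonsubmit_gap, sigma)

    def mixture_cdf(x):
        # Weighted blend of the two groups' CDFs
        return (submit_share * submitters.cdf(x)
                + (1 - submit_share) * nonsubmitters.cdf(x))

    # Bisection for the mixture's 50th percentile
    lo = mu - 6 * sigma - nonsubmit_gap
    hi = mu + 6 * sigma
    for _ in range(60):
        mid = (lo + hi) / 2
        if mixture_cdf(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical school: reported submitter range 1400-1540,
# 60% of the class submitted scores
print(round(estimate_class_median(1400, 1540, 0.60)))
```

With those inputs the blended median lands meaningfully below the reported submitter range, which is the “artificially high” effect people are describing. A serious version would need the skew, hooked-admit, and athlete adjustments mentioned above, plus some way to calibrate the non-submitter gap (e.g., from schools like Bowdoin that collect scores post-admission).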
If focusing on the median is the route you want to go, then IMO it makes much more sense to base the decision on the current landscape, rather than the past landscape.
What good does it do? It tells you the range of scores of those who submitted and were admitted. If you are considering whether or not to apply and/or submit then this might be relevant to your decision-making process, regardless of whether you feel these scores are “artificially high” and/or the school is trying to “juice their ranking.”
In short, don’t apply based on how you think the process should work, apply based on how it does work.
Similarly, I don’t think a formulaic before/after conversion chart would be possible or particularly useful, but then my concern is navigating the current TO environment. Other posters seem more focused on complaining about it and/or predicting its inevitable doom.
If you want to better understand how TO has worked historically, @Data10 has written about Bowdoin in the past, so you may want to search for that. (Bowdoin has long been test optional, but has asked TO students to submit scores after they have been admitted, and provides those scores to CDS.)
Truer words were never spoken, and that applies beyond the test score situation.
It’s not clear they still do this, since in the last several cycles some admitted students had no test scores to report at all. A call to admissions would clarify the current practice.