<p>OK, since all sorts of folks have been asking, PMing me, etc., here is the official statement on the Writing SAT II:</p>
<p>We will see the score. It is part of your application. We will not be "ignoring" it, any more than we can "unsee" it. </p>
<p>However, our models do not associate the same predictive power with the Writing test as with the math and verbal sections; therefore, you should not stress out about your Writing score while we continue to gather data.</p>
<p>Hopefully this clears up all of the questions!</p>
<p>Just to clarify: Chris is actually referring to the Writing section of the SAT Reasoning Test (aka the SAT I), not the old Writing Subject Test.</p>
<p>Not really; at most, it sounds like it might have a bit more influence than it has had up to now.</p>
<p>(Correct me if I’m wrong, but I think two years ago – that is, when the writing section was implemented – MIT, like many places, chose not to consider the writing score until they had an idea of its relevance.)</p>
<p>^Actually, the College Board introduced the writing section to the SAT I in 2005. It would make sense if MIT waited a year or so to gather predictive data on the writing section, and once it was noted that the other two sections are better predictors of college success, relegated writing to a third-tier consideration: looked at, but not weighted heavily. I have no evidence for this, though. I think as long as you don’t do horribly on it (sub-600/650 or so), it’s fine.</p>
<p>It’s odd that MIT hasn’t found the Writing section to be very predictive. The College Board’s study found that that section was the most predictive of the three.</p>
<p>My suspicion is that most students around the country are in HASS majors, and that writing would be far more predictive for those majors than for the engineering/science majors MIT is full of. Since the College Board’s study seems to be an overall estimate rather than broken down by field, we can’t tell for sure.</p>
<p>I posed this question to Chris a while back. He noted that while it was true in general, it wasn’t specifically true for MIT. I found it odd as well, considering the study mentioned “across all colleges and institutions.”</p>
<p>@PiperXP
If I’m not mistaken, the study’s findings were for freshman-year performance. Even at technology institutes with their scientific focus, freshman year includes a bit of everything, as it does at all colleges, so the discrepancy you’re thinking of should be smaller. It’s still a bit weird that MIT doesn’t follow the general rule (again, they said “across all institutions”).</p>
<p>I’m unsure whether “across all colleges” means this was an average or that the behavior was exactly the same at every one of those colleges. I expect the former and would be surprised if they meant the latter. I also wouldn’t expect it to mean that each major followed that pattern, only that each school did on average. But this is just speculation.</p>
<p>But let’s look at this:</p>
<p>“Each year, the College Board serves seven million students and their parents, 23,000 high schools, and 3,500 colleges[…]”</p>
<p>“Preliminary recruitment efforts were targeted based on the 726 four-year institutions that received at least 200 SAT score reports in 2005. These 726 institutions served as the population[…]”</p>
<p>I would certainly assume that “across all colleges and institutions” refers to the study’s population of 726 colleges, rather than all 3,500 colleges they serve. MIT is a very special school, and far more technical than most. I wouldn’t be surprised if it blows the normal stats out of the water ^.^</p>
<p>Keep in mind that the first full class to take the Writing component of the SAT (and not the Subject Test) only just graduated from MIT. We haven’t had full data before, at least not data we were comfortable with; hence our historical policy. I haven’t had the time to really dig through the College Board report, so I can’t speak to their methodology, only ours.</p>
<p>We also, as Piper has said, have a slightly different student body makeup than most colleges. </p>
<p>I assure you, we’re looking at this stuff very carefully, and constantly revising our models so that they’re up to date.</p>
<p>^I didn’t mean to suggest that MIT wasn’t practicing what seems to work for them, as indicated by their internal statistics; just that it was interesting that your analyses thus far have yielded a different conclusion from the one published by the College Board.</p>