<p>As a former poster here on CC used to say in beginning his responses to wild misrepresentations: </p>
<p><strong><em>Sigh</em></strong></p>
<p>This is the problem with the internet. Bad and outdated information is eternally circulated as factual and current. The study OldScarecrow references is based on data that are now over a decade old. It is also regularly misinterpreted and misrepresented by those with a particular point of view they wish to push. I am sure, however, that we will continue to see it here on CC for the next two decades or more!</p>
<p>Readers who want a current and far more accurate assessment of students' actual (as opposed to theoretical) preferences need go no further than the link below. It is based on the real decisions of tens of thousands of students, covering over 120,000 college acceptances, and is updated regularly as more responses are gathered:</p>
<p><a href="http://www.parchment.com/c/college/college-rankings.php">2012 Updated College Preference Rankings</a></p>
<p>The Revealed Preference study, on the other hand, is based on a very small data set, never received enough academic support to be accepted by a leading scholarly journal, and has been both discredited and largely forgotten by scholars of higher education since its publication nearly ten years ago. From the critiques I have read, I understand that there was a glaring oversight in the study that called into question the entire notion of yield protection as an explanation for what the authors observed.</p>
<p>For those of you unfamiliar with the study (and because I'm enjoying a nice glass of wine on a lovely holiday weekend), I'll take the time to review the details.</p>
<p>In the Revealed Preference study, the authors attempted to construct a theoretical framework, based on game theory, to rank schools using win-loss ratios over many students' matriculation decisions. They were working with a statistically limited data set (far smaller than the data set I linked to above) and were more concerned with the mathematical model than with the actual results. </p>
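<p>To make the win-loss idea concrete, here is a minimal sketch of that kind of tournament-style tally: whenever a student admitted to two schools enrolls at one, that school "wins" the matchup, and schools are ranked by their win ratio. This is only an illustration of the general approach, not the authors' actual model (which used a more elaborate statistical framework), and the school names and decisions below are invented.</p>

```python
from collections import defaultdict

def rank_by_win_ratio(matchups):
    """Rank schools by head-to-head matriculation 'wins'.

    Each matchup is (winner, loser): a student admitted to both
    schools chose the winner. Returns schools sorted from the
    highest win ratio to the lowest.
    """
    wins = defaultdict(int)
    games = defaultdict(int)
    for winner, loser in matchups:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    return sorted(games, key=lambda s: wins[s] / games[s], reverse=True)

# Hypothetical matriculation decisions (school names illustrative only).
decisions = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C"), ("C", "B")]
ranking = rank_by_win_ratio(decisions)
```

<p>With so few "games" per school, a tally like this is extremely noisy, which is one reason a small data set undermines any ranking built this way.</p>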
<p>One of the minor questions they examined had to do with the admission curves of students plotted over SAT ranges for leading schools. They showed via graphs that MIT had an almost perfect correlation between SAT scores and likelihood of admission. It was a smoothly upward sloping curve. Harvard’s curve flattened in the SAT ranges from the 93rd to the 98th percentiles and then moved upward from the 98th to the 100th percentiles. Yale (which the authors also suggested practiced some “yield protection”) showed a dip in the SAT ranges from the 93rd to the 98th percentiles and then a steep upward slope into the highest ranges. Princeton (which the authors suggested practiced more yield protection than Yale) showed a slightly deeper trough in the SAT ranges from the 93rd to the 98th percentiles and then, like both Harvard and Yale, a steep rise from the 98th to the 100th percentiles.</p>
<p>The theory proposed by the authors was that the dip for Yale and Princeton in this SAT range from the 93rd to the 98th percentiles was the result of a conscious policy of avoiding accepting the students who were less likely to matriculate. The theory was that these students (i.e. those in the 93rd to the 98th percentiles) were more likely to matriculate at Harvard and by accepting fewer of them, yield would be protected. </p>
<p>On its face this never made much sense, since it never really explained why all of these schools showed a very large increase in the probability of admission in the 98th and 99th percentiles. If the authors' theory were correct, wouldn't these students have been even more desired, and thus even less likely to matriculate at Princeton and Yale (which would lose them to Harvard), than those in the 93rd to 98th percentiles? The authors gave a not-too-convincing explanation that the schools accepted these highest-range students because not doing so would make their devious admissions strategies even more obvious.</p>
<p>This is just poor social science.</p>
<p>The most devastating critique has come from others (I'm sorry that I can't find the articles to link now) who pointed out that this interpretation of the data left MIT's curve totally unexplained. Was MIT simply less concerned about its yield? And why was Harvard's curve flat in the 93rd to 98th percentile range when MIT's showed no such flattening? Was Harvard practicing 'yield protection' against MIT?</p>
<p>What these critics have pointed out (and in the course of doing so have completely undermined the study) is that the authors failed to take two other related and critically important elements into account. Specifically, Hoxby and the others failed to consider the effect of overall student body size and the percentage of each student population made up of varsity athletes.</p>
<p>A higher percentage of varsity athletes in the student population will cause more of a dip in this ‘high’ but not ‘highest’ range of SAT scores. MIT has by far the lowest percentage of varsity athletes, and there is an unbroken upward sloping relationship between SAT scores and the probability of admission. Harvard, Princeton and Yale, all committed to the same range of athletic programs, each have approximately the same total number of varsity athletes and far more of them than MIT. However, the same total number of athletes at Harvard will constitute a lower percentage of the total student body than will be the case at the significantly smaller Yale and a far lower percentage than at Princeton where, at that time, the student population was dramatically smaller than at the other two schools.</p>
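<p>The arithmetic behind this point is easy to sketch. The athlete and enrollment figures below are invented placeholders, not the actual numbers for these schools; they serve only to show how a roughly fixed count of varsity athletes occupies a larger share of a smaller student body.</p>

```python
# Hypothetical figures, for illustration only: similar athlete counts
# divided by different undergraduate enrollments produce very different
# athlete shares of the student body.
athletes = {"Harvard": 900, "Yale": 850, "Princeton": 900, "MIT": 500}
undergrads = {"Harvard": 6600, "Yale": 5300, "Princeton": 4700, "MIT": 4300}

# Fraction of each (hypothetical) student body made up of athletes.
athlete_share = {s: athletes[s] / undergrads[s] for s in athletes}
```

<p>Under these made-up numbers, Princeton's athlete share comes out highest and MIT's lowest, which is the ordering the critics argued explains the depth of each school's 'dip.'</p>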
<p>The effect of these differences should have been clear.</p>
<p>Though all three schools admit varsity athletes who are strong students, many of whom have the highest SAT scores, it is a fact that, on average, the varsity athlete admits have SAT scores much closer to the 90th than to the 98th percentile. If a higher percentage of the student body consists of this group, then there must necessarily be correspondingly fewer offers of admission just above this group. Each of these schools, if forced to choose, will be more likely to accept the California all-state quarterback with a 4.0 GPA and an SAT score in the 90th percentile range than a very good student without significant extracurricular accomplishments who has an SAT score in the 90th to 98th percentile range. There are far more applicants in that latter group and very few of the star quarterbacks. In the very highest ranges of the SAT, all three schools showed increases in the probability of admission in part because these students, like the star athletes, are a much rarer breed and, like those athletes, highly recruited.</p>
<p>This is not my theory, but one detailed by critics of the Hoxby study. I suspect that their criticisms are well-founded as the authors of that study have been unsuccessful in getting the original paper accepted by leading scholarly journals. Given how old it is now, it’s probably a moot point. I would also guess that if similar curves were drawn for small LACs we would see even more pronounced ‘dips’ in the relationship of SAT ranges to probability of admission. At these smallest schools, varsity athletes tend to constitute even larger percentages of the student body. Williams and Amherst, in particular, are in this situation. They have some of the brightest students in the country with astronomical SAT scores but I would predict that the likelihood of admission for the high (but not ‘highest’) SAT scorer who is ‘unhooked’ dips considerably.</p>
<p>The other part of their study dealt with the likely preferences of students. As the authors themselves noted, their study did not lead to the creation of any kind of reliable ranking, or league table, but instead suggested a method for creating such a table if more data (and more reliable data) were available. Not being a mathematician, I'll take no stand on the accuracy of their model in predicting these outcomes. I would, however, question some of the conclusions resulting from the model. In fact, the outcomes their model suggests seem improbable. They predicted, for instance, that students interested in engineering, math, computer science and the physical sciences were more likely to matriculate at Yale than at MIT when given the choice. This may be true, but it runs counter to general wisdom and to what is regularly reported by students here on CC. </p>
<p>What has changed at Princeton (and is about to change at Yale with the expansion of the undergraduate student body) is a significant increase in class size and a concomitant decrease in the percentage of varsity athletes in the overall student population. Were the same curves to be drawn today for Princeton and Yale, they would almost certainly look more like Harvard’s.</p>
<p>This entire thread seems a bit silly to me, but I couldn't let the previous poster's misrepresentations stand uncorrected.</p>