<p>Expanding on my previous comment: OldScareCrow’s cited paper ranks ‘more than 100 colleges’ based on data from 3,240 students.</p>
<p>If you rank 100 colleges by head-to-head match-ups (e.g. Harvard v. Stanford, Harvard v. Georgetown, Harvard v. Rice), you would need almost five thousand different ‘competitions’, i.e. pairings in which a student chose between, say, Harvard and Stanford, or Harvard and Rice. To get any sort of statistically meaningful result, you would also need about ten match-ups per pairing (e.g. of any ten students admitted to both Harvard and Stanford, how many choose Stanford?). That means your dataset needs at least fifty thousand students in it to be meaningful.</p>
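<p>To put rough numbers on the argument above, here is a quick sketch of the arithmetic. The figure of ten match-ups per pairing is the post's own ballpark, not a formal power calculation:</p>

```python
from math import comb

n_schools = 100
pairs = comb(n_schools, 2)        # distinct head-to-head pairings among 100 schools
matchups_per_pair = 10            # rough minimum for a meaningful win rate per pairing
students_needed = pairs * matchups_per_pair

print(pairs)            # 4950 pairings -- "almost five thousand"
print(students_needed)  # 49500 -- roughly the fifty thousand students claimed
```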
<p>It is possible to use one student’s acceptances for multiple match-ups, e.g. one particular student chose between Harvard, Princeton, and WUSTL, and chose Harvard, so that would count as both a Harvard–Princeton match-up and a Harvard–WUSTL match-up. However, unless all 3,240 of these kids are getting into at least a dozen schools - which raises questions about the kind of sample you’re using - you still have the problem of statistically insignificant results. Again, at a bare minimum, you would need something like fifty thousand different match-ups to find a statistically significant revealed preference ranking, and multiple acceptances aren’t going to transform a pool of 3,240 students into a pool of about fifty thousand match-ups.</p>
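<p>The multiple-acceptances point can be checked with the same kind of quick arithmetic. A student who picks one school out of k acceptances reveals k − 1 head-to-head outcomes (the chosen school versus each school passed over); the dozen-acceptances figure below is the post's own generous assumption:</p>

```python
# Each student choosing from k acceptances yields k - 1 revealed match-ups:
# the chosen school beats each of the other k - 1 schools.
students = 3240
acceptances_each = 12             # generous assumption: a dozen admits per student
matchups = students * (acceptances_each - 1)

print(matchups)  # 35640 -- still well short of the ~50,000 needed
```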
<p>But this study had only 3,240 students, meaning that some pairs of schools may never have had any head-to-head match-up at all (for example, there may not have been a single student who chose between Furman and Tufts); nevertheless, the rankings purport to give exactly such a result. </p>
<p>The studies cited by Mastadon are (a) not a decade old, and (b) at least structurally capable of producing statistically significant results. </p>
<p>But I guess this is just me, embarrassing Tufts with my posts. ;)</p>