<p><a href="sakky:">quote</a>No, that is a completely false. The RP study NEVER relies on cross-admit data. The RP study uses information about admissions decisions (but NOT cross-admit information) as raw data.
[/quote]
</p>
<p>What you are calling "admissions decisions" I have been calling "cross-admit decisions" (matriculation decisions of students accepted to two OR MORE schools). Other posters apparently understood that, but to remove any ambiguity, let's call them "multiple-admit matriculation decisions". Given that phrasing, do you have any further disagreement with the statement that the RP rankings are derived entirely from a list of a few thousand such decisions (i.e., that those decisions are the sole data from which the ratings are computed by some statistical algorithm)?</p>
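<p>To make the raw data concrete, here is a minimal sketch (in Python, with invented school names and choices) of the kind of record I mean: each observation is the set of schools that admitted a student together with the one the student actually attended.</p>
[code]
# Hypothetical illustration of the raw data: one record per student admitted
# to two or more schools, listing the admit set and the matriculation choice.
decisions = [
    {"admitted": ["Harvard", "Stanford", "Berkeley"], "chose": "Harvard"},
    {"admitted": ["MIT", "Caltech"],                  "chose": "MIT"},
    {"admitted": ["Yale", "Brown", "Cornell"],        "chose": "Brown"},
    # ... a few thousand such records in the actual study
]
[/code]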
<p>
[quote]
It then MODELS the data to fill the missing gaps,
[/quote]
</p>
<p>Everyone understands that, and the discussion prior to your arrival was about other matters. You are attacking a strawman in belaboring this point. Notice, for instance, that in the first posting in the thread (the one you replied to when entering the discussion) I referred to the "numerical weights" of schools; those weights are the (estimated) parameters of the MODEL of the data, as you call it.</p>
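<p>For readers following along, here is a rough sketch of what "weights as parameters of the model" means. This is a generic conditional-logit formulation, not the authors' exact specification: each school gets one latent weight, and the observed matriculation choices are the data the weights are fit to.</p>
[code]
import math

# Sketch: each school s gets a latent weight theta[s].  The modeled probability
# that a student admitted to the set S matriculates at school c is
#   P(c | S) = exp(theta[c]) / sum over s in S of exp(theta[s]).
# The paper's actual model is richer (and is estimated by MCMC), but the
# weights play the same role as the "numerical weights" discussed above.
def choice_probability(theta, admitted, chose):
    denom = sum(math.exp(theta[s]) for s in admitted)
    return math.exp(theta[chose]) / denom

def log_likelihood(theta, decisions):
    # The weights are the values that make the observed choices probable.
    return sum(math.log(choice_probability(theta, d["admitted"], d["chose"]))
               for d in decisions)
[/code]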
<p>
[quote]
Hence, the modeled data is BETTER than the cross-admit data as long as the model holds, because cross-admit data, by definition, does not include missing data.
[/quote]
</p>
<p>Whether the model holds is the $64,000 question. The RP paper is silent on it: the authors do not publish the underlying multiple-admit data, and they provide no measure of how well the model predicts the matriculation decisions it was fit to.</p>
<p>If the model were good, the ranking would linearize nicely. What the confidence probabilities from the MCMC simulation seem to show is that it doesn't: there are distinguishable tiers of schools, as we knew without the RP study, but not necessarily any finer ranking resolution beyond that.</p>
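<p>By "confidence probabilities" I mean the pairwise quantities one can read off the MCMC output. A sketch, assuming the posterior draws of the weights were available as an array (which the paper does not publish):</p>
[code]
import numpy as np

# draws: hypothetical array of posterior samples of the weights,
# shape [n_draws, n_schools], one column per school.
def outrank_probability(draws, a, b):
    # Fraction of posterior draws in which school a's weight exceeds school b's.
    return float(np.mean(draws[:, a] > draws[:, b]))

# A cleanly linear ranking would put these probabilities near 0 or 1 for most
# pairs; values hovering near 0.5 within a group of schools are what I am
# calling a "tier" that the data cannot order any further.
[/code]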
<p>
[quote]
"Actual choices are the absolute CRUX of the paper"</p>
<p>Uh, wrong. There is a world of difference between somebody preferring, say, Harvard and actually having the CHOICE of Harvard. Just because you don't have the actual choice of a particular school doesn't mean that you don't want it.
[/quote]
</p>
<p>You're addressing a different issue. I'll come back to it later; this posting is too long as it is.</p>
<p>
[quote]
[quote]
Maybe you didn't read the paper. Where people apply is not a form of revealed preference that they attempt to model, and the RP ranking rewards specialty schools that are negatively "preferred" by a majority who would never apply there (BYU, Caltech, and others). In the other direction, a school that is "preferred" enough to be a favorite safety school will suffer in the rankings.
[/quote]
Perhaps you didn't read the paper. Specifically, you may not have read section 7, in which the authors explicitly discuss the notion of self-selection and perform an RP study that measures only those students who are interested in technical subjects,
[/quote]
</p>
<p>You did not understand the paper. Section 7 <em>does not</em> model the "revealed preference" information disclosed by non-applications, and neither does any other section of the paper; the authors propose no model for that question. All Section 7 does is repeat the RP ranking for the subset of students who indicated an interest in (for example) science. The RP methodology, as it stands, does not even attempt to deal with the particular form of revealed preference contained in an applicant's selection of target schools.</p>
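<p>In terms of the earlier sketch, Section 7 amounts to nothing more than the following (the "interest" field is my own hypothetical annotation of the records, not something from the paper):</p>
[code]
# Re-run the same estimation on the subset of students who declared an
# interest in science; nothing about where students chose NOT to apply
# enters the model.
science_decisions = [d for d in decisions if d.get("interest") == "science"]
# ...then estimate the weights from science_decisions exactly as before.
[/code]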
<p>
[quote]
Now, I agree with you that Caltech's ranking in the RP probably is inflated relative to the entire set of schools, and in particular, may well be inflated relative to those schools that have little overlap with Caltech (i.e. more humanities-oriented schools). But that's not the point that we're discussing. The point we are discussing is what is Caltech's RP ranking relative to MIT, both of which are obviously technically oriented schools.
[/quote]
</p>
<p>No, that wasn't the question at all. It has nothing to do with the fact that two particular schools happen to be misranked relative to each other. The problem is that the ranking methodology should be at its most stable and most accurate for the schools that end up on top. If it is possible for a school with fewer than 20 matriculation decisions to swing into second place (the other schools in the top 6 had hundreds of applications and dozens of matriculation battles covering all possible pairings), and to have its desirability mis-estimated so badly (50 Elo points higher than MIT when it is more like 200 lower), that tends to confirm that the model is unstable. If it is unstable at the very top, it only gets worse going down the list.</p>
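<p>To put numbers on that discrepancy: assuming the standard chess-Elo convention (the paper reports its ratings on an Elo-like scale, though its exact scaling may differ), a 50-point edge corresponds to roughly a 57% head-to-head win rate, while a 200-point deficit corresponds to roughly 24%.</p>
[code]
# Rough translation of Elo-point gaps into head-to-head matriculation
# probabilities, using the standard chess-Elo logistic with scale 400.
def elo_win_probability(delta):
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

print(elo_win_probability(50))    # ~0.57: Caltech favored over MIT, as the RP model estimates
print(elo_win_probability(-200))  # ~0.24: Caltech as the underdog, as I argue is closer to reality
[/code]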
<p>
[quote]
What counts for the purposes of this discussion is whether Caltech's RP ranking relative to MIT is inflated.
[/quote]
</p>
<p>No, Caltech vs. MIT alone doesn't matter; what matters is the implication for the whole model. If Caltech beating MIT were happening with both schools below number 10, it would be a minor point. That it happens at number 2 versus number 4, with a large Elo-point discrepancy, combined with other information (Yale-Stanford and others), corroborates the pre-existing suspicion, grounded in the mathematics, that the model is unstable with this amount of data.</p>