<p>
[quote]
The RP paper says it many times over, since it's obviously true: they take a list of cross-admit decisions (i.e. for each student in the sample, specify the list of schools that accepted the student and which of those was selected), and from that information alone, produce a ranking. That is what it means to "rank universities by cross-admit decisions".
[/quote]
</p>
<p>No, that is completely false. The RP study NEVER relies on cross-admit data. The RP study uses information about admissions decisions (but NOT cross-admit information) as raw data. It then MODELS that data to fill in the missing gaps, notably the missing information about schools that a student would have wanted to attend (but didn't get admitted to) or schools that the student didn't even apply to in the first place. Hence, the modeled data is BETTER than cross-admit data as long as the model holds, because cross-admit data, by definition, cannot fill in those gaps. </p>
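<p>To make the gap-filling point concrete: the study's machinery belongs to the paired-comparison family of models, the same family as the chess ratings discussed below. Here is a minimal sketch using hypothetical toy data and a plain Bradley-Terry fit, NOT the paper's actual Bayesian estimation. Schools A and C are never observed head-to-head, yet the fitted strengths still imply a ranking between them, because both are chained through B. </p>
<p>
[code]
from collections import defaultdict

# Hypothetical toy data: wins[(x, y)] = students admitted to both x and y who chose x.
wins = defaultdict(int)
wins[("A", "B")] = 8   # A usually beats B head-to-head
wins[("B", "A")] = 2
wins[("B", "C")] = 7   # B usually beats C
wins[("C", "B")] = 3
# No A-vs-C cross-admits at all: the "missing gap".

schools = ["A", "B", "C"]
strength = {s: 1.0 for s in schools}

# Hunter's (2004) MM iteration for the Bradley-Terry maximum likelihood fit.
for _ in range(200):
    new = {}
    for i in schools:
        total_wins = sum(wins[(i, j)] for j in schools if j != i)
        denom = sum(
            (wins[(i, j)] + wins[(j, i)]) / (strength[i] + strength[j])
            for j in schools
            if j != i and wins[(i, j)] + wins[(j, i)] > 0
        )
        new[i] = total_wins / denom if denom > 0 else strength[i]
    z = sum(new.values())
    strength = {s: v / z for s, v in new.items()}

print(sorted(strength.items(), key=lambda kv: -kv[1]))
# Implied head-to-head preference for the never-observed pair:
print("P(choose A over C) = %.2f" % (strength["A"] / (strength["A"] + strength["C"])))
[/code]
</p>
<p>The actual paper works over each student's full admit set rather than pairs, but the principle is the same: the fitted latent strengths imply a preference between ANY two schools, whether or not they were ever observed together. </p>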
<p>
[quote]
The RP paper models what those students might have chosen from ANY GIVEN MENU of schools (not just 2) out of the full set of about 110. This is an important point, because it constrains the possible models that can be used.
[/quote]
</p>
<p>I was talking about an interpretation of the RP study for the purposes of THIS thread. Since you were the one talking about cross-admits, which by definition means a comparison of only 2 schools, I was interpreting the RP information accordingly. </p>
<p>
[quote]
Actual choices are the absolute CRUX of the paper
[/quote]
</p>
<p>Uh, wrong. There is a world of difference between somebody preferring, say, Harvard and actually having the CHOICE of Harvard. Just because you don't have the actual choice of a particular school doesn't mean that you don't want it. </p>
<p>
[quote]
Unfortunately the amount of missing data (the sparsity of the matrix of cross-admit results) is a problem for the particular model that they use. In this application it is not a "standard social science model" as you have often claimed.
[/quote]
</p>
<p>It is no more problematic than it is for the chess-rating models referenced in Glickman (1999, 2001), which contend with the same kind of sparsity: no chess player ever faces more than a tiny fraction of the rated pool, yet any two ratings remain comparable. </p>
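<p>For anyone who doesn't know the reference: Glickman's papers are about rating chess players from sparse paired-comparison data, where strengths are identified through chains of common opponents. As a minimal illustration, here is the classic Elo update that Glickman's Bayesian models refine; the scale constant 400 and K = 32 are conventional choices, not Glickman's specific parameters. </p>
<p>
[code]
def expected_score(r_a: float, r_b: float) -> float:
    """Elo model: probability that the player rated r_a beats the player rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Revise both ratings after one game; score_a is 1 (win), 0.5 (draw), or 0 (loss)."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

# Two equally rated players; the winner gains exactly what the loser gives up.
r_win, r_loss = update(1500.0, 1500.0, 1.0)
print(r_win, r_loss)  # 1516.0 1484.0
[/code]
</p>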
<p>Nevertheless, neither I nor the authors claim that the RP study is comprehensive or complete. I simply claim that it is better than the other available ranking systems out there and is also better than raw cross-admit data. For example, to which mainstream social science model does the USNews ranking adhere? Or Gourman? </p>
<p>
[quote]
Maybe you didn't read the paper. Where people apply is not a form of revealed preference that they attempt to model, and the RP ranking rewards specialty schools that are negatively "preferred" by a majority who would never apply there (BYU, Caltech, and others). In the other direction, a school that is "preferred" enough to be a favorite safety school will suffer in the rankings.
[/quote]
</p>
<p>Perhaps you didn't read the paper. Specifically, you may not have read section 7, in which the authors explicitly discuss the notion of self-selection and perform an RP analysis on only those students who are interested in technical subjects, finding that Caltech STILL outranks MIT. </p>
<p>
[quote]
That "explanation" is reversed (it would lower Caltech's rating and you are trying to explain an inflated rating). More importantly, any explanation based on what different data might have shown is a concession that the model is unstable -- the rankings are not a reflection of reality but of accidents in the data, because the method of ranking is sensitive to accidents. That is precisely what one expects for the RP model with this amount of data.
[/quote]
</p>
<p>Again, nobody, not even the authors, is contending that the study is complete. That's not the point. The point is that the study's data is MORE complete than actual cross-admit data, precisely because cross-admit data by definition has large gaps in its information (again, the students who don't get into both schools, or who don't even apply to both schools). </p>
<p>Now, I agree with you that Caltech's ranking in the RP study probably is inflated relative to the entire set of schools, and in particular, may well be inflated relative to schools that have little applicant overlap with Caltech (i.e. the more humanities-oriented schools). But that's not the point we're discussing. The point we are discussing is Caltech's RP ranking relative to MIT, and both are obviously technically oriented schools. </p>
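<p>To see why low overlap breeds fragility, and why it matters less for this particular comparison, here is a hypothetical toy run of the same kind of paired-comparison fit: two dense clusters of schools joined by only three bridging cross-admits. Flipping a single bridging student's decision reverses the cross-cluster ordering, while the ordering WITHIN each cluster is untouched. </p>
<p>
[code]
# Bradley-Terry strengths via Hunter's (2004) MM algorithm (hypothetical toy data below).
def fit(schools, wins, iters=500):
    p = {s: 1.0 for s in schools}
    for _ in range(iters):
        new = {}
        for i in schools:
            w_i = sum(wins.get((i, j), 0) for j in schools if j != i)
            d = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                for j in schools
                if j != i and wins.get((i, j), 0) + wins.get((j, i), 0) > 0
            )
            new[i] = w_i / d if d else p[i]
        z = sum(new.values())
        p = {s: v / z for s, v in new.items()}
    return p

schools = ["T1", "T2", "H1", "H2"]  # a "tech" cluster and a "humanities" cluster
base = {
    ("T1", "T2"): 6, ("T2", "T1"): 4,  # 10 cross-admits within the tech cluster
    ("H1", "H2"): 6, ("H2", "H1"): 4,  # 10 within the humanities cluster
}

for label, bridge in [("bridge 2-1", (2, 1)), ("one decision flipped", (1, 2))]:
    wins = dict(base)
    wins[("T1", "H1")], wins[("H1", "T1")] = bridge  # only 3 bridging students
    p = fit(schools, wins)
    print(label, sorted(p, key=p.get, reverse=True))
# bridge 2-1           -> ['T1', 'T2', 'H1', 'H2']
# one decision flipped -> ['H1', 'H2', 'T1', 'T2']
[/code]
</p>
<p>Note that T1 stays above T2, and H1 above H2, in both runs; only the cross-cluster comparison swings. That is why narrowing the question to Caltech versus MIT, where the applicant overlap is dense, is the more defensible use of the RP numbers. </p>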
<p>
[quote]
If the sample population (group 2) doesn't typify the population they want to measure (group 1), the results can be qualified as being the preferences of "applicants with attributes X,Y,and Z" rather than "what applicants prefer".
That is a semantic difference, and doesn't impinge upon whether the study accurately described its data, i.e. was it effective as a model of group 2's cross-admit choices.
[/quote]
</p>
<p>See above. Again, nobody is saying that Caltech's RP ranking isn't inflated relative to the entire set of schools. It probably is. The authors say so explicitly. </p>
<p>What counts for the purposes of this discussion is whether Caltech's RP ranking relative to MIT is inflated. </p>
<p>
[quote]
However, you are right in one way that is bad for the RP study: if the results would be substantially different for the intended population (group 1) compared to the sample population (group 2) it indicates sensitivity to the sample, which means the results are not reliable.
[/quote]
</p>
<p>Again, nobody is arguing that the results are 100% reliable. Of course they are not; no ranking's results are. A few changes here and there in the USNews methodology can also produce wild swings in the rankings. </p>
<p>Again, the real value of the RP study is not that it is fundamentally perfect. Rather, it is more grounded in theory than any of the other rankings out there. What exactly is the theoretical justification and workup for the methodology of USNews? Or Shanghai Jiao Tong? Or THES? Or any of the other rankings? Whatever else you might say about the RP study, I would hardly say that it is worse than any of those others.</p>