USNWR ('07-91) Avg Rank + WSJ Feeder + Revealed Preferences

<p>How did you do this ranking? I mean technologically. If I could, I would do one based on class size, donation rate (which shows alumni have money after attending), and the WSJ ranking, and, if I could find the data, something on grad school placement. If anyone could tell me how to go about getting the stats and the technical know-how for this, please help.</p>

<p>Blue Bayou, LACs must be ranked separately anyway. LACs and research universities are too different to compare.</p>

<p>PS -- including B-schools in the WSJ raises even more statistical bias. Unlike med or law, where most kids are recent grads (ages 22-24), admission to HBS requires on average seven years of work experience. The other top B-schools require work experience, as well, just less. Therefore, the average acceptee into HBS is ~30 years old.....it takes a LOT of stars (and wealth) in alignment to allow someone to move house and home and family across the country for two years, particularly when, if they stayed locally, their employer would pay for a local B-school, which could still be Top 10.</p>

<p>"I can detect no methodological flaws in the paper. More importantly, nobody else in the academic world has ever debunked the methodology, and the easiest way to debunk any academic paper is to present flaws in the statistical methodology."</p>

<p>I’m not attacking the paper, but the whole idea of revealed preferences to begin with. Preferences are thoughts, mental states, beliefs that cannot be observed, let alone studied objectively. Suppose a child’s mother asks him whether he wants a candy bar or a banana for a snack. If he chooses the candy bar, that does not necessarily mean he prefers candy bars to bananas; in fact, the truth could be the exact opposite, and such an assertion cannot be made on those facts alone.</p>

<p>Revealed-preference studies assume that their data are “revealing” something about the behavior of those being studied. They try to rationalize the behavior of individuals who have made certain choices, when in actuality such a claim is unfounded. This practice is a pseudoscience at best; there just is not enough empirical foundation to make such claims. Again, most college “preferences” are expressed at the application stage, not at the matriculation stage.</p>

<p>The authors of the paper say:</p>

<p>“When a student makes his matriculation decision among colleges that have admitted him, he chooses which college "wins" in head-to-head competition.”</p>

<p>-This completely ignores the fact that many students did not apply to college X to begin with, and have thus already “preferred” not to attend it. The only way this study could have any semblance of validity is if everyone surveyed had applied, and subsequently been admitted, to every school on the list.</p>

<p>Sakky, you say the paper has not been debunked. But if one thinks about it, if the paper is sound in principles of revealed preference theory, maybe it can’t be debunked. This does not make the paper any more accurate, however.</p>

<p>We need some Washington Monthly in there, to give MIT a little boost :D</p>

<p>I just noticed something about the TOP 5:</p>

<p>Harvard
Yale
Princeton
Stanford
MIT</p>

<p>The ranking actually turns out to spell H-Y-P-S-M... in that order...</p>

<p>yep, I noticed that too</p>

<p>
[quote]
The flaw in the revealed preferences is that a school that has many cross-admits with the highest ranked schools loses out...

[/quote]
</p>

<p>Uh, I'm afraid that that's not really a "flaw" in the methodology. After all, if your cross-admits consistently prefer to go to other schools, then you should be ranked lower. On the other hand, if you have many cross-admits with other schools and you win those battles, your ranking will increase. That's not a flaw; in fact, that is PRECISELY what a revealed preferences ranking is supposed to do. </p>

<p>Perhaps your objection is that a school might be better off by simply having fewer applicants who apply to other schools - in other words, self-selectivity. But this point was also addressed in the paper, in my opinion in a highly satisfactory manner, by modeling the issue as a transitive self-selection issue, as detailed on p. 11 of the study and further explicated in equations 5 and 6. In essence, even if a school has only one serious competitor (and I think every school has at least one serious competitor), you can still rank that school by looking at where cross-admits with that other serious competitor choose to matriculate. </p>

<p>
[quote]
it does not contain information on head-to-head so you can't judge where a school is among its peers

[/quote]
</p>

<p>Uh, head-to-head data is EXACTLY what table 4 shows, on a modeled basis.</p>

<p>
[quote]
Also the revealed preference is a bit flawed. I think using past us news AVERAGED is the best plan. revealed preference is flawed because kids have different reasons for attending college. This helps PUBLIC SCHOOLS AGAIN BECAUSE STATE TUITION MAKES IT AFFORDABLE FOR MANY KIDS WHO MIGHT CHOOSE UMICH OVER A 40K IVY.

[/quote]
</p>

<p>Uh, no, this objection was dealt with implicitly in the model, as captured within the covariate coefficients shown in equation 6. The authors state in a footnote that they found the effect to be trivial. It would have been nice if they had included the actual covariate estimates, and they probably will when the paper is finally officially published (right now, it is still a working paper). </p>

<p>Secondly, and more importantly, the paper does not purport to discuss WHY students prefer the schools they do. The paper only analyzes what is preferred. You are correct in saying that students may prefer schools for reasons that have nothing to do with quality. In fact, the authors of the paper explicitly state on p. 1 that their measure is unlikely to be identical to quality as a free-standing concept.</p>

<p>
[quote]
I’m not attacking the paper, but the whole idea of revealed preferences to begin with. Preferences are thoughts, mental states, beliefs that cannot be observed, let alone studied objectively. Suppose a child’s mother asks him whether he wants a candy bar or a banana for a snack. If he chooses the candy bar, that does not necessarily mean he prefers candy bars to bananas; in fact, the truth could be the exact opposite, and such an assertion cannot be made on those facts alone.

[/quote]
</p>

<p>I have already said, and more importantly, the authors have said, that the paper is a simple analysis of revealed preferences, nothing more, nothing less. The authors freely concede that their model is unlikely to be identical to 'quality' as a free-standing concept, because like you said and I said, sometimes people prefer things that are bad for them. I sometimes prefer to pig out on a bag of chips rather than eat a nutritious meal. </p>

<p>
[quote]
Again, most college “preferences” are done at the application stage, not at the matriculation stage.</p>

<p>The authors of the paper say:</p>

<p>“When a student makes his matriculation decision among colleges that have admitted him, he chooses which college "wins" in head-to-head competition.”</p>

<p>-This completely ignores the fact that many students did not apply to college X to begin with, and have thus already “preferred” not to attend it. The only way this study could have any semblance of validity is if everyone surveyed had applied, and subsequently been admitted, to every school on the list.

[/quote]
</p>

<p>THIS, however, is not a valid objection. The paper addresses this point head-on in what I think is a highly satisfactory manner. The paper's model DOES NOT rely solely on having everybody apply to every school. What you are describing is the concept of self-selection: it is obviously true that not everybody applies to every school.</p>

<p>Let me put it to you this way. If everybody really did apply to every school and then matriculated at the 'best' school that admitted them, then this would be a very simple paper indeed. In fact, it wouldn't even really be a paper at all, because it would have no analysis. All you would have to do is get the statistical data and present it. You wouldn't need to model anything, because, like I said, everybody has applied to every school, and then matriculated at the 'best' school. </p>

<p>The WHOLE VALUE ADD of the paper is captured within Section III, especially in equations 3-6 and the simulation model of Section III.5. These parts of the paper address the very point you are raising. It is of course true that not everybody applies to every school, and that applications are therefore necessarily self-selected. The whole point of the model (and in fact, the whole point of the paper) is to deal with that fact by stacking comparison vectors against each other to come up with table 4. Hence, what you are objecting to is not really an objection. Rather, it's actually the whole point of the authors in writing the paper. </p>
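<p>To make the "stacking comparison vectors" idea concrete, here is a minimal Python sketch that fits a simple Bradley-Terry model to toy cross-admit outcomes. This is only an illustration of the general head-to-head ranking idea, not the paper's actual specification (the real model also handles covariates and self-selection); all school names and matchup data below are invented.</p>

```python
from collections import defaultdict

# Toy cross-admit outcomes, invented for illustration: ("A", "B") means one
# student admitted to both A and B matriculated at A (A "wins" the matchup).
matchups = [
    ("A", "B"), ("A", "B"), ("B", "A"),
    ("B", "C"), ("B", "C"), ("C", "B"),
    ("A", "C"),
]

def bradley_terry(matchups, iters=500):
    """Fit Bradley-Terry strengths with the classic MM update:
    s_i <- wins_i / sum over opponents j of n_ij / (s_i + s_j)."""
    schools = sorted({s for pair in matchups for s in pair})
    wins = defaultdict(int)
    n = defaultdict(int)  # n[frozenset({i, j})] = total i-vs-j matchups
    for winner, loser in matchups:
        wins[winner] += 1
        n[frozenset((winner, loser))] += 1
    s = {sch: 1.0 for sch in schools}
    for _ in range(iters):
        updated = {}
        for i in schools:
            denom = sum(
                n[frozenset((i, j))] / (s[i] + s[j])
                for j in schools
                if j != i and n[frozenset((i, j))] > 0
            )
            updated[i] = wins[i] / denom if denom else s[i]
        norm = sum(updated.values())           # rescale: the model is
        s = {k: v / norm for k, v in updated.items()}  # scale-invariant
    return s

strengths = bradley_terry(matchups)
ranking = sorted(strengths, key=strengths.get, reverse=True)
```

<p>Note that even though A and C met only once, A's strength is still pinned down through their shared matchups with B - which is the transitive point made above about needing only one serious competitor.</p>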

<p>Guys, I don't want to sound frustrated, but please, all I ask is that you read and understand the whole paper before you object to it. Many of the objections that y'all have raised were addressed in the paper.</p>

<p>
[quote]
With regards to Revealed Preferences, check out Simpson's Paradox.

[/quote]
</p>

<p>I agree that there may be confounding variables. However, I see no reason to believe that whatever confounding variables there are will adversely affect one set of schools over another - and it is a relative school comparison that is what is important. </p>

<p>If such confounding variables exist, then I am sure that other researchers will discover them in due course. However, this is the best paper I know of that is available right now, so it is what we ought to rely on. It's like when Einstein showed that the Newtonian classical physics model that had been in use for hundreds of years was incorrect. The model was wrong, but it was still useful enough to generate important results for centuries, and is still used today as a strong first approximation. </p>

<p>
[quote]
But, the main flaw in the RP is the statistical bias to begin with -- check out the footnotes.

[/quote]
</p>

<p>The paper uses statistical techniques that are well within the mainstream and have not elicited any serious academic objection from the econometrics community. </p>

<p>Look, every statistical study out there is subject to statistical bias. In fact, the whole study of statistics is the study of uncertainty. What is important in the field of statistics is not that you have removed the bias (which is impossible short of sampling the entire population) but that you have used generally accepted methods of containing it. I believe the authors have done so.</p>

<p>The objections were addressed in the paper precisely because they are traditional objections to revealed preference theory itself. The authors can claim to have solved the issue by adding a few more equations here and there, but the fact remains that many people, me included, find revealed preferences to be unfounded. They are using flawed logic to explain flawed logic, and I’m not buying it.</p>

<p>“The paper uses statistical techniques that are well within the mainstream and have not elicited any serious academic objection either by the econometrics community.”</p>

<p>-If the paper is not yet published, how many academics are going to begin attacking it? I guarantee that after it is published, there will be criticisms.</p>

<p>
[quote]
If the paper is not yet published, how many academics are going to begin attacking it? I guarantee that after it is published, there will be criticisms

[/quote]
</p>

<p>The paper hasn't been published OFFICIALLY. But it has been a freely available NBER working paper for a long time now, and trust me, many academics are familiar with it. It has drawn considerable interest in the academic community, as many NBER working papers do. It has also been cited rather profusely (especially for a working paper). Many other NBER working papers have been attacked for their flaws, in some cases so severely that they have effectively been withdrawn. Not this one, even though it has been freely available and widely cited for a while now. </p>

<p>
[quote]
The objections were addressed in the paper precisely because they are traditional objections to revealed preference theory itself. The authors can claim to have solved the issue by adding a few more equations here and there, but the fact remains that many people, me included, find revealed preferences to be unfounded. They are using flawed logic to explain flawed logic, and I’m not buying it.

[/quote]
</p>

<p>Now, this, I admit, could be a valid objection. If you just don't believe in the entire theory of revealed preferences, then I agree that the paper falls down. However, I would say that revealed preference is a widely accepted technique within the economics community, with considerable research backing. To doubt revealed preference theory is to doubt one of the mainstream tenets of economics. You are free to question it in general, but understand that doing so puts you well outside the mainstream.</p>

<p>kk,</p>

<p>I for one would love to hear your criticisms of revealed preferences methodology. </p>

<p>What do you find to be the shortcomings? </p>

<p>Have you seen its inferences disproved empirically? Or are your objections on theoretical grounds alone?</p>

<p>Unfortunately, the current version of the study that is online lacks several of the appendices that I remember reading last year. Thus, I cannot point to the specific bias that I saw, and I must withdraw my objection.</p>

<p>In a worthy revealed preference study, every comparable-level school that an applicant DIDN'T apply to should have been dinged in the data entry for that applicant.</p>

<p>After all, he/she didn't even think enough of that school to bother applying there. So that school was actually preferred even less by that individual than the schools that at least did merit an application.</p>

<p>I don't think they did this in the given study.</p>

<p>GIGO.</p>
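<p>The "ding" being proposed here could be tallied mechanically. Below is a toy Python sketch, assuming we somehow knew each applicant's set of comparable-level schools under consideration - an input the actual study does not have, which is exactly the practical difficulty with this proposal. All names and data are invented.</p>

```python
from collections import Counter

# Toy data, invented for illustration: for each applicant, the set of
# comparable-level schools they plausibly considered, and the subset
# they actually applied to.
applicants = [
    ({"X", "Y", "Z"}, {"X", "Y"}),   # never applied to Z -> ding Z
    ({"X", "Y", "Z"}, {"X"}),        # ding Y and Z
    ({"X", "Y", "Z"}, {"X", "Z"}),   # ding Y
]

def non_application_dings(applicants):
    """Count, per school, how many applicants who had it in their
    comparable set chose not to apply to it at all."""
    dings = Counter()
    for comparable, applied in applicants:
        for school in comparable - applied:
            dings[school] += 1
    return dings

dings = non_application_dings(applicants)
# Z was skipped by applicants 1 and 2; Y by applicants 2 and 3; X by none.
```

<p>The hard part, of course, is defining the "comparable set" for each applicant without simply assuming the preferences the study is trying to measure.</p>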

<p>I would imagine special schools, such as religiously-affiliated schools and technical schools, would be over-valued due to this methodology flaw. To pick one, Notre Dame may win a certain level of cross-admits among people who actually applied there, but Jewish students, for one, are probably proportionally less likely to apply there, and would not go there even if admitted. You'll never see this cross-admit datum, though, because they won't even bother to apply to Notre Dame. This type of non-preference would not have been picked up in the study, since it only looked at the results of people who thought enough of Notre Dame to apply to it. If the study had also evaluated the real preferences of people who didn't apply to a Notre Dame, I think it would show that such a school is actually much less preferred than the given study found.</p>

<p>This example seems fairly evident to me, but I'm sure there are other "false rankings" due to failure to ding the schools that weren't even applied to.</p>

<p>The most important preferences are expressed at the application stage, not the selection stage. This is the stage where the data, perhaps via survey, are best studied to accurately gauge real preferences, IMO. The size of financial aid awards can also distort more idealistic preferences at the final selection stage, which may introduce further distortion depending on what one is trying to prove.</p>

<p>Regional/location biases also probably enter into these preferences, in a way that makes the conclusions ambiguous. If most applicants are from the East, and they'd rather go to a comparable East Coast school than, say, Northwestern if they get into both, then Northwestern loses the cross-admit battle, for reasons that don't really reflect poorly on Northwestern. But at least that East Coast student thought enough of Northwestern to apply to it. Many other out-of-region schools didn't even merit an application from that same student. Yet Northwestern is the only midwestern school "dinged", due to the cross-admit loss, even though that student in fact preferred it to the other midwestern schools he/she chose not to apply to at all.</p>

<p>GIGO. IMO. YMMV.</p>

<p>I'm still of the mindset that the methodology is inaccurate because it doesn't give head-to-head cross-admit data... for example, if many of school A's applicants are cross-admitted with HYPSM but many of school B's applicants aren't, school B will have more luck getting those students to attend even though its accepted students are weaker.</p>

<p>To see whether this effect is present, compare how strong each school's student body is against its acceptance rate and yield. </p>

<p>Also, I think the WSJ feeder ranking is very useful, just like the averaged USNWR, but is there any sort of ranking of getting students into top grad schools?</p>

<p>The WSJ feeder ranking only covers a subset of top programs, biased towards the East Coast, and only for a single year. I would expect the results to vary from year to year, particularly for the smaller schools/LACs, depending on the particular merits of a school's applicants in a given year. It was the same at my kid's high school - the class of 2005 did a lot worse overall in college admissions than the class of 2004. I think this data would be a lot more telling if it covered maybe the last 5 years and included more top professional programs.</p>

<p>Particularly for the smaller schools.</p>

<p>thought and mony,</p>

<p>With all due respect, did you read the RP study? Most of the questions you raised are carefully addressed in the methodology. Thought, the entire study is derived from cross-admit data; far from ignoring it, that is exactly what they looked at. Mony, they considered the effects of financial aid awards and regional effects. Schools are not "dinged" for anything at all. You seem to be objecting to the study based on some assumed methodology completely different from what the authors did. There was also careful consideration of the fact that some schools (like ND and Caltech) attract applications only from a certain subset of high school students; for many able students, either or both would not even be in the universe of consideration. To be included, a school had to have enough cross-admit data in the database to analyze. Schools that attract only a limited set of applicants have relatively little cross-admit data. This increases the noise of their estimated appeal, and beyond a threshold leads them to be dropped from the study for insufficient data.</p>

<p>No study is perfect, and this includes the RP, but if you have criticisms, at least base them on the study they did.</p>

<p>Afan, that's not what I meant.</p>

<p>I explained that it doesn't show school versus school, so if one school ranks very low, it's not necessarily because nobody wanted to attend it as much as the next school; rather, its applicants were accepted into top schools and chose those instead.</p>

<p>For example, many students who get accepted to, say, Cornell also get accepted to MIT, and they choose MIT. Meanwhile, students at, say, JHU mostly got rejected from MIT, so they went to JHU. That doesn't mean JHU is stronger than Cornell, just that Cornell's applicants have a greater range of choices. Do you understand what I mean?</p>

<p>That is why looking at matriculation rates is not useful. Instead, look at head-to-head decision making to see which schools students choose over others. That way, you can actually see which schools are generally preferred over others.</p>
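<p>Tabulating that kind of head-to-head data from raw admit/matriculation records is straightforward in principle. Here is a minimal Python sketch, with invented student records: each student's chosen school counts as a "win" over every other school in that student's admit set, which separates genuine preference from mere range of choice.</p>

```python
from collections import Counter

# Toy records, invented for illustration: each student's set of admits
# and the school where he/she actually matriculated.
students = [
    ({"Cornell", "MIT"}, "MIT"),
    ({"Cornell", "MIT"}, "MIT"),
    ({"Cornell", "JHU"}, "Cornell"),
    ({"JHU"}, "JHU"),               # no cross-admit: contributes no matchup
    ({"JHU", "Cornell"}, "Cornell"),
]

def head_to_head(students):
    """Count pairwise matriculation wins: the chosen school beats every
    other school in that student's admit set."""
    wins = Counter()
    for admits, choice in students:
        for other in admits - {choice}:
            wins[(choice, other)] += 1
    return wins

wins = head_to_head(students)
```

<p>Note the fourth student, admitted only to JHU, raises JHU's overall yield but contributes nothing to any head-to-head comparison - which is exactly why matriculation rates alone mislead.</p>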