Why is Cornell the most popular Ivy?

<p>The stats IBClass posted totally ignore yield. Yield is why Harvard can lay claim to being the most popular Ivy (and why Penn can claim to be more popular than Cornell, and so on…)</p>

<p>Thanks for the link, 45 P. But does that really reflect the actual winners and losers of the cross-admit battles among these elite schools? My take on the survey is that other conditions weren’t factored in. But, yeah, the result is indicative of school desirability and attractiveness.</p>

<p>^ As I recall (it’s been a while since I skimmed the actual research paper), the numbers are based on a sampling of a particular group of high school seniors at one point in time, regarding their hypothetical preferences if they were admitted to 2 of the target schools. This is not based on actual decisions made by kids admitted to both schools, and does not account for changes in preferences over time. If you want to go through it, here’s the actual research paper (click on “Download” at the top of the page, and then on one of the download server buttons, to download a .pdf of the entire paper):</p>

<p><a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=601105">SSRN: A Revealed Preference Ranking of U.S. Colleges and Universities, by Christopher Avery, Mark Glickman, Caroline Hoxby, and Andrew Metrick</a></p>


<p>The first one is MUCH more accurate internationally. Perhaps in the US Brown and Dartmouth are somewhat better known, but they are relatively unknown internationally.</p>


<p>That’s one of the misconceptions sown by the NY Times piece. The central idea of the paper was that actual student decisions are what “reveal preferences”, so that a ranking should be based entirely on those decisions. NYT said, incorrectly, that it was from a survey asking about hypothetical decisions. </p>


<p>The study is based on actual cross-admit decisions (of the schools admitting a given student, which one was chosen), but did not produce any cross-admit data for any pair of schools. Contrary to the false information in the NY Times graphic, the Revealed Preference study does not tell us whether Harvard beat Stanford or Cornell or Nowhere State in the set of students sampled. </p>

<p>The chart consists of pseudo-cross-admit probabilities for the outcomes of hypothetical matriculation battles where financial aid, geography, legacy and other factors are equalized, and the battles are conducted not by real students but by an algorithm selected by the authors. The only role of the real student decisions is to calibrate the parameters of the algorithm. </p>

<p>If that sounds very indirect, that’s because it is. There isn’t a direct relationship between what the Revealed Preference study and the NY Times published and the cross-admit battles among particular schools. At best, the ranking is correlated in some imprecise way with the cross-admit rates.</p>
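<p>To make the indirection concrete, here is a minimal sketch of the general kind of pipeline being described: a latent “desirability” score per school, calibrated against observed matriculation choices, with the head-to-head percentages then computed from the fitted scores rather than tabulated from students actually admitted to both schools. This is a simplified stand-in (a plain multinomial logit in Python on made-up toy data), not the authors’ actual model:</p>

<pre>
# Simplified stand-in for the pipeline described above, NOT the paper's model.
# Each school gets a latent desirability parameter; the parameters are fit to
# observed matriculation choices; "cross-admit" percentages are then derived
# from the parameters instead of being counted from real admit pairs.
# The admit sets and choices below are invented toy data.

import numpy as np
from scipy.optimize import minimize

schools = ["Harvard", "Yale", "Cornell", "StateU"]
idx = {s: i for i, s in enumerate(schools)}

# Each record: (schools that admitted the student, school the student chose).
choices = [
    ({"Harvard", "Yale"}, "Harvard"),
    ({"Harvard", "Cornell"}, "Harvard"),
    ({"Harvard", "Cornell"}, "Cornell"),
    ({"Yale", "Cornell", "StateU"}, "Yale"),
    ({"Cornell", "StateU"}, "Cornell"),
    ({"Cornell", "StateU"}, "StateU"),
    ({"Harvard", "Yale", "Cornell"}, "Yale"),
]

def neg_log_likelihood(theta):
    # P(choose c | admit set A) = exp(theta_c) / sum over a in A of exp(theta_a)
    ll = 0.0
    for admit_set, chosen in choices:
        scores = np.array([theta[idx[s]] for s in admit_set])
        ll += theta[idx[chosen]] - np.log(np.exp(scores).sum())
    return -ll

def unpack(free):
    # Pin the last school's parameter at 0 so the scale is identified.
    return np.append(free, 0.0)

fit = minimize(lambda f: neg_log_likelihood(unpack(f)), np.zeros(len(schools) - 1))
theta = unpack(fit.x)

def pseudo_head_to_head(a, b):
    # "Pseudo cross-admit" win probability, derived entirely from the fitted
    # parameters -- not from counting students admitted to both a and b.
    return 1.0 / (1.0 + np.exp(-(theta[idx[a]] - theta[idx[b]])))

for a, b in [("Harvard", "Cornell"), ("Yale", "Cornell")]:
    print(f"{a} over {b}: {pseudo_head_to_head(a, b):.0%}")
</pre>

<p>The point of the sketch is the order of operations: real decisions go in only to calibrate the parameters, and every number that comes out of the chart is a function of those parameters, not a tally of actual head-to-head outcomes.</p>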


<p>Thanks for the clarification.</p>

<p>I must say that I’m shocked, SHOCKED to find that the NY Times got something wrong! :rolleyes:</p>

<p>I don’t know if the Times would correct the web site at this late stage, but I might post an article about this FAQ (Frequently Asserted Questionable) item at some point. It has been cited well over 100 times in CC as “cross-admit data” from an authoritative source.</p>

<p>Indeed, it really is the ONLY source cited for purported cross-admit data.</p>


<p>I’m not sure the NY Times got it wrong. Their small print said the data were estimated from a statistical model that was in turn based on a survey of high school seniors. </p>

<p>The author(s) may not have been overtly forthcoming that these were not actual numbers, or percentages, of students choosing one school over another. But they did not hide it either. </p>

<p>Reader beware! Read the small print.</p>

<p>^ But the Times’ presentation is inconsistent, confusing, and misleading, starting with its prominent explanation of the chart.</p>


<p>The implication of the Times’ larger, more prominent explanation is that the percentages in the chart represent actual choices made by students, and not data that “are estimated, based on a statistical model that in turn was based on a survey of 3,200 high school seniors,” as stated in the smaller print at the bottom. As I said, somewhat inconsistent, confusing, and misleading.</p>

<p>The truth is, despite siserune’s objections, I don’t really see why looking at a model created to match data from actual cross-admits is wholly inaccurate. Is it precise to the single digit? Probably not, but does it represent a good ballpark from a rather large sample size? Yes. Is it fair to remove financial aid, geography, and legacy status through regression analysis? Of course, since the idea is “all things being equal, what do people do?”</p>
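<p>For what it’s worth, here is a rough sketch of what that “all things being equal” adjustment amounts to in a choice model: school-specific covariates (aid offer, home region, legacy) enter the utility alongside a latent school effect, and the chart-style numbers are what you get with those covariates set equal across schools. The variable names and parameter values are hypothetical, purely for illustration, not the paper’s actual specification:</p>

<pre>
# Rough sketch of "equalizing" financial aid, geography, and legacy status.
# All parameter values are hypothetical and for illustration only.

import math

theta = {"SchoolA": 1.2, "SchoolB": 0.8}           # latent desirability (hypothetical)
beta_aid, beta_home, beta_legacy = 0.05, 0.6, 0.4  # covariate effects (hypothetical)

def utility(school, aid_thousands, in_home_region, legacy):
    return (theta[school]
            + beta_aid * aid_thousands
            + beta_home * in_home_region
            + beta_legacy * legacy)

def p_choose_a(u_a, u_b):
    return 1.0 / (1.0 + math.exp(-(u_a - u_b)))

# "As observed": SchoolB offers this student $15k more aid and is in-region.
u_a = utility("SchoolA", aid_thousands=0, in_home_region=0, legacy=0)
u_b = utility("SchoolB", aid_thousands=15, in_home_region=1, legacy=0)
print(f"Raw win probability for SchoolA:       {p_choose_a(u_a, u_b):.0%}")

# "All things being equal": covariates zeroed out, only the latent effect remains.
print(f"Equalized win probability for SchoolA: {p_choose_a(theta['SchoolA'], theta['SchoolB']):.0%}")
</pre>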

<p>That being said, I’m still not sure what measure makes Cornell the most popular except for the number of students actually there. This thread should have ended when IB posted the number of applications per spot.</p>

<p>If you wanted to know international popularity amongst applying students, then you should look at the number of international applications per spot. Of course, this will hurt schools like Brown because we are not need-blind for internationals-- but that will clearly affect our popularity.</p>

<p>RML and username, my favorite Brown haters, are defining “popularity” as prestige and name recognition. I think most people in this thread are looking at popularity based on number of people who want to apply each year.</p>


<p>They did state what was what, albeit in a way that was “confusing and misleading.” Their highlighted box was extrapolating from their data, period. </p>

<p>I am not arguing for or supporting how the NY Times reported information. I am just saying that they did not report false information.</p>

<p>It is up to the reader, buyer, etc. to be aware of what they are “buying.”</p>

<p>Further evidence of Cornell’s popularity is the growth in applications from 2001 to 2008 (from IPEDS).</p>

<p>Increase in applications, by school:</p>

<p>16535 Cornell University
8448 Harvard University
7716 Princeton University
6436 Yale University
5181 Columbia University in the City of New York
4027 Brown University
3988 Dartmouth College
3782 University of Pennsylvania</p>

<p>What does that look like as percentage growth, CH? When are the 2009 numbers for IPEDS coming out? I know Brown had a 21% increase from last year to this year alone (which would nearly double the number above).</p>

<p>Cornell University is more popular because it offers 7 schools, 3 of which are state contract schools. New Yorkers tend to apply more to those 3 schools because they serve New York residents and offer reduced tuition. More spots + research university = more people!</p>


<p>Be assured that the Times got it wrong, including in the fine print. They falsely presented the statistical model as attempting to estimate cross-admit data when it was estimating something entirely different: latent desirability. </p>


<p>Again, the words “estimate” and “statistical model” are totally misleading, because the NY Times falsely (and quite clearly) states that the numbers being estimated are the ones from real or hypothetical cross-admit battles. </p>

<p>NYT readers know that a national political poll involves some “estimates” and “statistical models” that might crunch the numbers in a more complicated way than just adding up the statewide totals, so if the survey says 55 percent support Obama, that is not necessarily the fraction observed in the poll. However, everybody also understands that the sole purpose of the statistics is to accurately estimate that number. That picture is false when applied to the model estimates in the NYT chart, but it is the one created by the Times’ statements about estimates, statistical models, and student decisions.</p>


<p>For modestmelody, Brown is the most popular school in the whole universe. I have no further comment.</p>


<p>modestmelody, where did you see an objection to using models? </p>


<p>How do we know that it’s a good ballpark estimate of cross-admit results? They didn’t publish goodness-of-fit information or anything else that would help evaluate the model, such as the cross admit rates observed in their sample, or the data set itself. This is not a standard ranking problem where you just crunch numbers with a plug-and-play stats application. </p>

<p>The sample size wasn’t large; it was 3,200 data points for over 100 parameters (and possibly more like 200 to 300, including all the unlisted schools they say they couldn’t reliably rank). As they admit somewhat in the paper, the data get depleted as you go down the list, with the parameter estimates becoming ever more meaningless. The top wasn’t so good, either; their simulations showed that there was trouble linearizing the ranking within the first 6 schools (Caltech beating Stanford, MIT, Princeton, and Yale, and beating Harvard in 30 percent of the simulations), and even more trouble with the next group (where to place Brown, which ranked below number 7 in a majority of simulations but seemed to do better than number-8 Columbia?). They also got odd results when they cut the sample size: Yale beating MIT among science and engineering students!</p>

<p>So no, they didn’t have a large sample. That’s why they needed a high-powered statistician, to help the economists squeeze all they could from the data they had. </p>
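<p>For anyone who wants to see why a few thousand choices spread over that many parameters leaves the ordering wobbly near the top, here is a toy resampling experiment in the same spirit as their simulations. The data are synthetic (generated from made-up scores, not the study’s data set): schools whose latent scores are closely bunched keep swapping ranks from one resample to the next.</p>

<pre>
# Toy illustration (synthetic data, not the study's) of ranking instability:
# refit a simple choice model on bootstrap resamples and count how often each
# school comes out on top. Closely bunched schools trade places constantly.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_theta = np.array([2.0, 1.9, 1.85, 1.0, 0.5, 0.0])   # top three nearly tied
n_schools = len(true_theta)

def simulate_choices(n_students):
    data = []
    for _ in range(n_students):
        admits = rng.choice(n_schools, size=rng.integers(2, 4), replace=False)
        p = np.exp(true_theta[admits])
        p /= p.sum()
        data.append((admits, rng.choice(admits, p=p)))
    return data

def fit(data):
    def nll(free):
        theta = np.append(free, 0.0)       # last school pinned at 0
        ll = 0.0
        for admits, chosen in data:
            ll += theta[chosen] - np.log(np.exp(theta[admits]).sum())
        return -ll
    return np.append(minimize(nll, np.zeros(n_schools - 1)).x, 0.0)

data = simulate_choices(400)
top_counts = np.zeros(n_schools)
for _ in range(30):                        # bootstrap resamples of the same data
    resample = [data[i] for i in rng.integers(0, len(data), len(data))]
    top_counts[np.argmax(fit(resample))] += 1

print("Share of resamples in which each school ranks #1:", top_counts / 30)
</pre>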


<p>As a model (or rather, a definition) of latent desirability, equalizing is defensible. As a model of cross-admit performance, you want to not equalize in that way, because “a win is a win”; if the sample is representative, it reveals what happens in the true cross-admit game, whether those victories ultimately come from prestige or merit scholarships or whatever else.</p>

<p>“I think the “Revealed Preferences” study’s main flaw is a lot of students self-select…”</p>

<p>Exactly; a mere glance at Notre Dame’s placement (and BYU’s) clearly highlights this.
The priority those schools held among the sample of applicants who applied to and chose them would not likely have been shared by the many people who didn’t care for those schools enough to bother applying there.</p>

<p>In other words, the applicants represent a biased sample of the underlying population, not a random sample, therefore caution must be exercised in extrapolating results beyond those individuals.</p>

<p>1,000 Mormons love BYU, apply there, and pick it over Harvard. Does that mean anyone who didn’t apply there would make the same choice? No. Does RP make such an extrapolation? Yes. The priorities of a biased sample (those who liked the school well enough to even apply there) are presumed applicable to the underlying population, but this is not necessarily the case, and likely isn’t.</p>
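<p>The mechanism is easy to see in a small simulation (all numbers invented, just to illustrate the selection effect): if only a school’s devoted fans ever apply there, every observed head-to-head decision involving that school comes from the fans, and the “revealed” preference flips relative to the population as a whole.</p>

<pre>
# Toy simulation of the self-selection problem described above. Invented numbers.

import random
random.seed(1)

students = []
for _ in range(10_000):
    fan = random.random() < 0.10                        # 10% are NicheU devotees
    prefers_niche = random.random() < (0.95 if fan else 0.05)
    applies_to_niche = fan                              # self-selection: only fans apply
    students.append((prefers_niche, applies_to_niche))

# What the cross-admit sample sees: only students who applied to NicheU.
observed = [prefers for prefers, applies in students if applies]
print(f"Among observed cross-admits, NicheU wins {sum(observed) / len(observed):.0%}")

# What the underlying population would actually do.
print(f"Among all students, NicheU would win     {sum(p for p, _ in students) / len(students):.0%}")
</pre>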

<p>Percent growth in applications 2001-2008. I am not sure when the 2009 numbers will become available in IPEDS.</p>

<p>Cornell University 100%
Princeton University 57%
Yale University 50%
Harvard University 45%
Dartmouth College 39%
Columbia University in the City of New York 32%
Brown University 24%
University of Pennsylvania 20%</p>
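<p>For anyone checking the arithmetic, the conversion from the raw increases above to these percentages just needs the 2001 baseline count; for example, back-solving from the posted figures, Cornell’s 16,535-application increase at 100% growth implies a 2001 baseline of roughly 16,500 applications (approximate, derived from the numbers in this thread, not pulled directly from IPEDS):</p>

<pre>
# Percent growth = (apps_2008 - apps_2001) / apps_2001.
# Baseline back-solved from the posted increase and percentage; approximate.

apps_2001 = 16_535
apps_2008 = apps_2001 + 16_535          # posted increase for Cornell

growth = (apps_2008 - apps_2001) / apps_2001
print(f"Cornell growth, 2001-2008: {growth:.0%}")   # -> 100%
</pre>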