Dartmouth vs. Brown vs. UPenn

<p>Penn. I’m a city-slicker for starters, and I have personally benefited enormously from Penn’s alumni network (Dartmouth’s is wonderfully loyal, but ultimately limited by its size in comparison to a school with more than double the undergrads and more than 5x the grads…).</p>

<p>@siserune:
To quote myself: “There isn’t a whole lot of difference between the three [Brown, Dartmouth, Penn] in terms of prestige.”
Or, I might add, in selectivity. </p>

<p>I agree in spirit with your quibble about the algorithmic “Revealed Preference Ranking” that put Brown five places above Penn. And in any case, Ivy League selectivity fashions change from year to year – below the Big Three, at least. </p>

<p>On the other hand, your statement that “The Revealed Preferences study was not a study of selectivity” is semantically disingenuous. The first sentence of the NBER study is: “In this study, we show how to construct a ranking of U.S. undergraduate programs based on students’ revealed preferences – that is, the colleges students prefer when they can choose among them.” </p>

<p>The NBER study is in fact an explicit project for promoting undergraduate selectivity above the hotch-potch of U.S. News statistics, which are catch-as-catch-can. </p>

<p>(This year, for example, U.S. News’s selectivity column goes completely off the rails, with, e.g., Stanford and Brown ranked below Wash U, despite Stanford and Brown’s firm residence in the topmost coast-to-coast applicant pool – Stanford’s pool comprises the Big Three, Brown, MIT, Columbia, and Duke – while Wash U’s pool overlaps are parochially Midwestern, and Wash U occurs in none of the most selective schools’ pools. None.)</p>

<p>I have to say that the “Revealed Preference Rankings” is, in the end, an actual ranking. It does put Brown five places above Penn and not the other way round. </p>

<p>My own experience in twenty-odd years of college placement consulting is that Brown has usually been more desirable than Penn, with the notable exception of a period from roughly 2000 to 2005 – at least among clients in New York City and its suburbs. </p>

<p>On the other hand, Penn’s faculty and graduate departments are inarguably more distinguished than Brown’s, Penn is wealthier than Brown, and it is that wealth that the U.S. News rankings have lately rewarded. Penn rose and Brown fell when U.S. News reweighted away from selectivity and toward wealth in the late 1990s. </p>

<p>Both schools are needlessly defensive about their reputations these days – Brown because it is no longer the “hottest” college in the country, as it was from the late 1970s to the mid 1990s (see a good book called “The College Admissions Mystique” by Bill Mayher); Penn because for decades before the late 1990s it was seen as the vocational Ivy that was comparatively easy to get into. </p>

<p>At my famous prep school in 1970, literally everyone who applied to Penn got in; ditto for Cornell; and, interestingly, ditto for Stanford. </p>

<p>Bottom line: For undergraduate business go to Penn, of course. For a livelier student body go to Brown. For graduate work go to Penn. In my line of work I have been partial to Dartmouth and Brown (and Yale and Princeton) all these years for their superior undergraduate experience.</p>

<p>A rather useless one at that.</p>

<p>All of this talk about cross-admits and yields and revealed preferences really is a bunch of nonsense. Students who are so easily swayed by the decisions of others have much larger issues to worry about. </p>

<p>I’d be very surprised if one couldn’t say the same for HYP if your prep school was that good. Yale admitted roughly 30-33% of all applicants in the early 1970s.</p>

<p>This is, of course, quite separate from the fact that Harvard and Brown admitted greater percentages of students back then because they had not yet merged with Radcliffe and Pembroke.</p>

<p>@IBclass06:
The notion that H-Y-P were aristocratic rather than meritocratic 40 years ago has been greatly exaggerated, and is a journalistic cliche. </p>

<p>Again, at my famous prep school in 1970 it was hard to get into H-Y-P, and easier to get into Penn, Cornell, Stanford (and Brown). Dartmouth fell between the two groups in selectivity. Now, all the Ivies are hard to get into except Cornell and Penn for early decision (where – N.B. you prospectives herein – the odds are still quite good). </p>

<p>@siserune:
Addendum to the above: Applicants to Penn’s class of 2013 numbered 22,845 (according to “The Daily Pennsylvanian”), while Brown attracted 24,988. </p>

<p>Penn has been static for three years while Brown has jumped by 7,000 applicants. But as I said above, Ivy selectivity fashion is changeable.</p>

<p>I didn’t imply anything of the sort. I’ll explain it in a different fashion. </p>

<ul>
<li>HYP were easier to get into back then, with ~30% admit rates.</li>
<li>Roughly 16 prep schools have long been massive feeder schools for HYP and do well even now, with ~7% admit rates.</li>
<li>If your prep school was famous, then quite a few students should have been admitted to HYP.</li>
</ul>
<p>Of course, you could say that even then Harvard was more selective than Brown or Penn. I would agree. That begs the question, however – why worry about 1970s selectivity at all? </p>

<p>It seems to me that the only thing people can find to gripe about Penn is its supposedly nouveau riche status, which is an absurd way of looking at colleges.</p>

<p>That’s usually what a wise parent thinks. :wink: </p>

<p>Yeah, it’s weird how some prefer a knockoff. <em>shakes head</em></p>


<p>The US News selectivity ranking is an absolute joke. 40% of it is based on the percentage of enrolled students who were in the top ten percent of their hs class. This does not take into account the strength or reputation of the hs in question. </p>

<p>The RP rankings are much more reliable. The schools that can better attract students will end up with better student bodies.</p>


<p>The RP rankings show which students choose which schools. NOT WHY!!!</p>

<p>Schools that win cross-admit battles will end up with stronger students (i.e. those who were accepted and could’ve gone elsewhere).</p>

<p><em>snorts</em></p>

<p>The RP list has absolutely zero usefulness as a measure of selectivity. By your reasoning, you would claim that</p>

<p>– Notre Dame is more selective than Duke
– Wellesley is more selective than Pomona
– Illinois is more selective than Haverford
– Maryland is more selective than WUStL
– Furman is more selective than Carleton
– Arizona State (90% admit rate!) is more selective than Hamilton</p>

<p>Pure and utter nonsense.</p>

<p>This is in addition to RP’s obsolescence. The survey was done 10 years ago. Of course, one is hardly surprised supporters of certain schools continue flaunting it (sadly impressing no one).</p>

<p>The RP rankings measure “popularity,” which while correlated with selectivity is not necessarily the same. But it does tell us about the PREFERENCES of hs students. Hence its name.</p>


<p>It was published as a working paper in December 2005, and it was last revised in September 2006.</p>

<p><a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=601105">SSRN: A Revealed Preference Ranking of U.S. Colleges and Universities by Christopher Avery, Mark Glickman, Caroline Hoxby, Andrew Metrick</a></p>

<p>The desirability of a school does say something about the caliber of the students who apply and ultimately decide to enroll there.</p>

<p>“Yale admitted roughly 30-33% of all applicants in the early 1970s.”</p>

<p>My data are circa 1970: Yale accepted 17%, which made it the most selective college in the nation that year. Source: Cass & Birnbaum college guide, circa 1971.</p>

<p>As for the other point: you can only pick a school in a cross-admit battle if you thought enough of both to bother applying to each in the first place – in preference to all the other schools you disliked more and therefore never applied to. That’s an important nuance the rankings don’t capture: the cross-admit loser may still be a winner compared to the schools left out altogether. The pool of applicants to a particular college is a biased sample of the underlying applicant population, not an unbiased one. Go Notre Dame!!! Go BYU!!!</p>

<p>But even if school A is popular with more students than school B, that doesn’t mean their thought process applies equally to you. Maybe more applicants live within 4 hours of school A, but you actually live closer to school B. Or most applicants don’t want to study Geography, but you do. Or school A gives better financial aid, but you wouldn’t qualify anyway. What matters is popularity to you, by your own criteria.</p>

<p>That’s surprising if true, since Yale’s admit rate in 1976 was 27% and steadily decreased to 17% in 1987. (Not that I doubt you; I just find it odd.)</p>

<p>I looked through Yale’s reports that go back to the turn of the century, but unfortunately admit rates were not recorded. I did find it interesting that more current students at Yale attended prep schools (55%) than in 1970 (41%). The percentage of legacies (14%) is also slightly higher than in 1970 (10%).</p>

<p>“That’s surprising if true, since Yale’s admit rate in 1976 was 27% and steadily decreased to 17% in 1987.”</p>

<p>Go check for yourself: take out the Cass & Birnbaum book from your library, or get it through inter-library loan.</p>

<p>I used that book myself to apply to colleges, and I took it out again several years ago, to compare with US News, when D1 told me my frame of reference for colleges was completely obsolete.</p>

<p>The dynamics of the baby boom were such that admissions were more competitive in the early ’70s than in the mid-’70s. </p>

<p>Get the 1970 and/or 1971 editions. Please report your findings here after you do that.</p>


<p>It’s the essence of what the study did and didn’t find, not a quibble. It isn’t cross-admit data at all; the rankings correlate negatively with preference in important ways; and one of the main findings that can be read from the study, if you understand the math, is that it can’t linearly rank anything except at very low resolution. The only major ranking output was the dominance of the top 5-6 schools, and even that has a huge caveat: the authors couldn’t quite determine whether Caltech should be on that list, while at the same time not being able to figure out whether it beats Harvard for the national championship. That both problems coexist indicates that this method has huge trouble producing a credible linear ranking, which may simply reflect the heterogeneity of the preferences. People deciding between Caltech, Stanford, and Harvard don’t share the preferences of those comparing Princeton with Amherst, and the panoply of all such values may or may not stitch together into an overall ranking. </p>
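<p>To make the point concrete: the study’s family of models rates schools from pairwise matriculation “wins,” much like chess ratings. Here is a minimal Bradley-Terry-style sketch (every matchup count below is invented purely for illustration; it is not the study’s actual method or data) showing how such wins turn into strengths, and how a close matchup yields only a low-confidence ordering:</p>

```python
# Toy cross-admit "wins": wins[(a, b)] = students admitted to both a and b
# who chose to enroll at a. All numbers are made up for illustration.
wins = {
    ("Harvard", "Caltech"): 7, ("Caltech", "Harvard"): 5,
    ("Harvard", "Penn"): 9,    ("Penn", "Harvard"): 1,
    ("Caltech", "Penn"): 8,    ("Penn", "Caltech"): 2,
}
schools = ["Harvard", "Caltech", "Penn"]

# Bradley-Terry model: P(i beats j) = s_i / (s_i + s_j).
# Fit the strengths s_i with the standard fixed-point (MM) update.
s = {k: 1.0 for k in schools}
for _ in range(200):
    for i in schools:
        total_wins = sum(w for (a, _), w in wins.items() if a == i)
        denom = sum((wins.get((i, j), 0) + wins.get((j, i), 0)) / (s[i] + s[j])
                    for j in schools if j != i)
        s[i] = total_wins / denom
    norm = sum(s.values())                      # normalize so strengths sum to 1
    s = {k: v / norm for k, v in s.items()}

p = s["Harvard"] / (s["Harvard"] + s["Caltech"])
print(f"P(Harvard beats Caltech) = {p:.2f}")
```

<p>The near-even 7-5 Harvard-Caltech split leaves the model’s win probability only modestly above 50%, which is exactly the sort of ordering it can’t assert with confidence, even while the 9-1 and 8-2 blowouts let it confidently place both schools above Penn.</p>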


<p>You’re just wrong. Demonstrably, unequivocally, and insultingly wrong. The Revealed Preference study makes a point of distinguishing selectivity (an attribute of college behavior) from the concept you incorrectly conflate it with, desirability (an attribute of student opinion). </p>

<p>The words SELECTIVE and SELECTIVITY are used unambiguously in the Revealed Preference study. They refer exclusively to the stringency of the selection applied by a given college. More specifically, these words are used to indicate <em>what level of applicants are accepted</em> (p.9, p.10, p.23, p.49) and (quoting the term as used by an admissions officer) <em>what is the admissions rate per application</em> (p.9). </p>

<p>Even more particularly, on pages 8-9 the authors implicitly define selectivity as “[the degree of] stringent criteria [that a college] actually applies”, and explicitly contrast this with its “real desirability” (what students prefer, when given the choice) and the misleading (because manipulated) nature of the raw admission rate as an index of either selectivity or desirability. On p.49 they again contrast admission rate with “real selectivity”. Clearly, wherever they write selectivity they mean stringency of the selection. </p>

<p>In short, I and the study authors conform to the standard usage of “selectivity”. You don’t, and call it a disingenuous semantic game to insist on the rather large difference between a measure of colleges’ admissions practices and a measure of students’ preferences. That’s amazing, and not in a good way.</p>


<p>NBER and US News agree on the meaning and usage of “selectivity”: how difficult is it to get in, what type of applicant gets in, how exacting are the colleges in allocating the admissions. US News’ selectivity column is an attempt to quantify the concept, not redefine it. </p>


<p>Anything with numbers is a ranking.
Anything that also correlates well with other metrics (endowments, research grants, grad school placements, alumni salaries) has a claim to be an “actual” ranking. Revealed Preference is nothing special in this respect, and I see no reason to prefer it over objective measures based on verifiable data.</p>


<p>The RP surely does not measure popularity. It doesn’t analyze which schools get more applications, and in most cases additional applications will lower a school’s RP ranking, punishing it for the extra popularity. </p>


<p>Caltech was number 2 and nearly beat Harvard (the model assigned only 70 percent confidence to Harvard winning). What that tells us about student preferences is that they don’t correlate so well with the RP rankings. </p>


<p>RP doesn’t tell us anything about the patterns of who chose what. It’s a very indirect aggregation of invisible data.</p>


<p>Caltech > MIT > Harvard in average student ability, which is the very opposite of what happens in cross-admit battles (H beats M; H and M both beat C by more than 3 to 1). </p>


<p>In that case, why not rank by SAT scores, National Merit Scholar enrollment or other easily available data? What is the big deal about the not exactly workable RP rankings?</p>


<p>By “strengths,” I am including non-academic talents and experiences as well (e.g., leadership skills, volunteering, athletic or artistic abilities).</p>

<p>Yes, I know Caltech students have higher average SAT scores than Harvard students. But Harvard students bring more to the table, which makes them “stronger students” in my book. Only if you have a very narrow sense of “student ability” would you find inconsistencies in the cross-admit battles.</p>


<p>As I alluded to earlier, SAT scores and other quantifiable academic data do not fully capture what makes up “better student bodies.” The RP rankings do that, IMHO.</p>

<p>In the end, you seem to disregard the RP rankings for their relative lack of statistical rigor. But I find that, even so, meaningful conclusions can be drawn from them, at least in an intuitive sense. They clearly show that HYPSMC are a cut above the rest, which is in keeping with our understanding of the hierarchy in higher education.</p>

<p>“They clearly show that HYPSMC are a cut above the rest, which is in keeping with our understanding of the hierarchy in higher education.”</p>

<p>Caltech should be taken out of HYPSMC because it is not a well-rounded university but more of a specialty school.</p>


<p>I concur. Plus it wastes a precious slot in the USNWR rankings ;)</p>

<p>Whether Caltech is a “well rounded university” or a “specialty school” is completely irrelevant to this discussion. We’re talking about which schools win the most cross-admit battles and attract the strongest students. By these measures, Caltech is superior to any school that is not HYPSM.</p>