<p>@Coriander23 Why is Wesleyan over Middlebury surprising? Are you just looking at US News as a comparison? Wesleyan has been higher than Middlebury in “the rankings” for most of its history, except for Middlebury’s recent surge up the US News ladder in the past decade or so, and the stats of the schools are very similar. Same average standardized test scores, graduation rates, etc. US News emphasizes financial resources more, so it’s no surprise that Middlebury’s better endowment/student spending puts it higher than Wesleyan in that ranking. Where does Middlebury have an advantage over Wesleyan? Even including Amherst, Williams, and Swarthmore, the other top LACs like Bowdoin, Middlebury, Wesleyan, Vassar, Hamilton, Haverford, etc. are all pretty comparable. The various rankings tease out minutiae, but there isn’t much difference in terms of the education offered between the top schools.</p>
<p>@ormdad
Here’s a detailed breakdown of how each college and university fared in the subcategories.</p>
<p>@smartalic34</p>
<p>It’s clear that whatever Wesleyan’s financial difficulties may have been in the past, it’s been able to “stay in the game” mainly due to its amazing ability to attract and produce people with certain leadership qualities. In the “American Leaders” subcategory there’s a tight group of six LACs at the top followed by a big drop after Wesleyan:</p>
<p>Swarthmore - 5
Amherst - 6
Williams - 13
Oberlin - 15
Barnard - 16
Wesleyan - 17</p>
<p>Wellesley - 27
Pomona - 30
Vassar - 31
Smith - 32
Haverford - 33</p>
<p>Then another big drop</p>
<p>Bowdoin - 41
Reed - 46
Colgate - 47
Carleton - 50</p>
<p>Then another big drop</p>
<p>Middlebury - 77
Colby - 89
Bates - 154</p>
<p>Of course, there is always the possibility that this is partially based on class and gender. Of the top six LACs, three (Wesleyan, Williams, and Amherst) had long histories as all-men’s colleges. However, there is evidence that this is not just a matter of resting on past laurels. The 2014 Time Magazine alumni ranking limits its outputs to “living alumni” with current Wikipedia entries, and the results there are similar (although I did spot a few recently departed Wesleyan alumni in the mix):
TIME’s Most Influential Schools, Ranked By Alumni | TIME</p>
<p>One problem with ranking schools in terms of alumni achievement is that, just like the stock market, “past performance is not a guarantee of future results.” Universities such as Vanderbilt that have more recently transformed are not going to have the alumni depth of long-time heavy hitters. So when it comes to assessing what the classroom experience will be like, as well as the future value of the diploma, I would judge Vanderbilt and other “recently arrived” schools (such as Wash U) more by the caliber of their current students than their alumni. Likewise, for schools that have become less selective over time.</p>
<p>@fondmemories:</p>
<p>Valid point for the “American Leaders” category.</p>
<p>Not such a valid point for student awards, PhD production, or the percentage going on to an elite professional school (the WSJ once ranked by that), which depend solely on the performance of recent alumni and on data that don’t go back more than one or two decades (a period during which WashU has been where it is now in the USN rankings).</p>
<p>If Forbes is not giving heavy weight to faculty/academic quality, it’s just another crummy ranking completely missing the point of education, IMO. USN underweights faculty strength too.</p>
<p>UWUR does not.</p>
<p>I disagree about the importance of PhD production. Getting a PhD is hardly a holy grail of education! The vast majority of jobs don’t require a PhD, and PhD programs are a waste of time and energy for most students.</p>
<p>Given the exceedingly poor job prospects of many PhDs, one suspects high PhD production is symptomatic of shoddy advising at least as much as of quality academics. Articles like “Graduate School in the Humanities: Just Don’t Go” have become infamous in academic circles for pointing out the hazards of leaping into graduate school without thinking through the consequences.</p>
<p>Interesting that there has been so much complaint on this site about the use of the USN ranking. But is it so bad to have a different methodology that can in some ways be merged with others? Are we also subject to prestige over substance, so that when we see a school that doesn’t match our expectations we tend to question the MO of the study? There is no way to legitimately rank schools, since, like the admissions process, there are so many variables that are evaluated subjectively. For example, if the starting salary is higher for a school like Cornell than for Duke, how is the location of alumni factored in when comparing the Northeast to the South? It is hard to deny that a certain group of schools does well regardless of the source of the rankings. College is still an individualistic choice, and what one person finds best may not fit another’s interpretation. The nice thing is that there are a lot of great schools out there.</p>
<p>@warblersrule:</p>
<p>True about a humanities PhD (though you can say the same thing about law school). An analytical or quantitative PhD can give you a leg up in industry. I have noticed that it is very difficult to get tenure these days. It seems that, in the analytical disciplines, those folks who try to make a go of it in academia have a much tougher time than those who go into industry. With an analytical or quantitative PhD, if you have some business savvy and soft skills, you can go very far. McKinsey has a ton of PhDs, for instance.</p>
<p>I swear that the names for Forbes are just drawn out of a hat each year. USNWR, on the other hand, is more of a prestige dropping. </p>
<p>Most of the Mudd PhDs are in engineering or computer science, I think. They are hardly going to have “poor job prospects” or be underemployed.</p>
<p>9 of the top 10 in the Forbes list are the same as last year, so I’m not sure what you’re talking about, @bradybest (Amherst replaced Columbia).</p>
<p>Let me just show two examples that would expose the flaws of this college ranking system.</p>
<p>UChicago was ranked at #4 in 2012; it was ranked at #14 in 2013 and now it’s ranked at #24 this year. Go figure.
Pomona College was ranked #9 in 2012; last year it was ranked #2 and now it’s ranked at #8 this year. </p>
<p>It is just not credible to move the rank of a university or college in a ranking system so drastically from one year to another. It’s just ludicrous, to be honest.</p>
<p>Not really. If you look at the statistical data compiled, the difference between being ranked 1 and being ranked 30 is very little (93 vs 85; see <a href="http://centerforcollegeaffordability.org/uploads/Published-Final-650.pdf">http://centerforcollegeaffordability.org/uploads/Published-Final-650.pdf</a>). Retention rates, graduation rates, and debt default rates change each year, so some change is to be expected. Furthermore, Forbes has changed its methodology over the years, now relying less on RMP and using its own compiled list of America’s Leaders instead of Who’s Who. I think it would be better to argue about the reputability of the current methodology (which does change) than to look for some sort of year-to-year consistency, which wasn’t the goal to begin with. And even then, the list is quite consistent: almost all of the top 50 schools were ranked in the top 50 last year too, and, as has been pointed out, the top 10 last year is almost the same list of schools as this year’s top 10. Minute differences like 2 vs 9 and 14 vs 24 aren’t that important.</p>
<p>‘Getting a PhD is hardly a holy grail of education! The vast majority of jobs don’t require a PhD, and PhD programs are a waste of time and energy for most students.’</p>
<p>If you want to become a scientist, you have to get a PhD. </p>
<p><a href="http://www.swarthmore.edu/institutional-research/doctorates-awarded.xml">Doctorates Awarded :: Institutional Effectiveness, Research & Assessment :: Swarthmore College</a></p>
<p>The science and engineering part is relevant. Reed, for instance, has an overall PhD production rate of 20.6% and a science and engineering rate of 14.2%, so only about 6.4% go on to get PhDs in other fields (what you call the product of ‘bad advising’, I see as a desire to do what one ‘loves’). PhD production rates in science and engineering for HMC and Caltech are consistent with their overall rates.</p>
<p>Unless you consider science PhDs to be a waste of time??</p>
<p>It’s amazing that Bates went from 110-ish to 60 this year, though.</p>
<p>Intparent, I didn’t mean to imply that all PhDs were worthless or that they lead to underemployment. I’m almost finished with my own PhD, so let’s hope not.</p>
<p>Mudd is an outlier in its enormous per capita production of PhDs, along with the similar Caltech, but let’s pair it with an outlier of my own. Emerson specializes almost exclusively in communication and the arts. How many of those students eventually get PhDs?* How many relative to Harvey Mudd? There is obviously a problem applying the same criterion to such wholly disparate schools.</p>
<p>The PhD production rate isn’t alone in being a clumsy measure of student output. There is a similar problem with the “notable people” ranking criterion. Schools that focus almost exclusively on STEM fields are less likely to produce large numbers of notable politicians and actors. </p>
<p>*For a reference point, the US awarded 93,956 BAs and 1,646 PhDs in the arts in 2011. In communications, the numbers were 83,274 and 577, respectively.</p>
<p>Not at all! I noted that PhDs are not necessary for most students. Not “most science majors” or “most students who want to be scientists.” Most undergraduates. Unless you want to argue that most students do, in fact, choose to pursue PhDs, or that those wishing to become scientists make up the majority of undergraduates, your point is neither contradictory nor relevant to my claim.</p>
<p>Incidentally, the majority of STEM majors do not pursue PhDs. More get PhDs than students in certain other fields like the arts and communications (see above), sure, but nevertheless most do not pursue PhDs. Below are some of the numbers of BS and PhD degrees, respectively, awarded in 2011.
Biology: 90,003 / 7,693
Engineering: 76,376 / 8,369
Math: 17,182 / 1,586
Physical sciences: 24,712 / 5,295
Most of these BS graduates are not trying to get into PhD programs. They pursue jobs that don’t require a PhD and/or pursue other advanced degrees. For example, biology majors go into graduate programs in medicine, veterinary medicine, dentistry, public health, nursing, and pharmacy, among others. </p>
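<p>As a rough check of those figures, here is a minimal Python sketch. It uses only the 2011 counts quoted above and ignores the multi-year lag between a BS cohort and the year its members would finish a PhD, so treat the ratios as a crude gauge rather than a true per-graduate rate:</p>
<pre>
# Ratio of PhDs awarded to bachelor's degrees awarded in 2011, by field.
# Counts are the 2011 figures quoted above; this ignores the lag between
# earning a BS and finishing a PhD, so it is only a back-of-the-envelope check.
degrees_2011 = {
    # field: (bachelor's degrees awarded, PhDs awarded)
    "Biology": (90_003, 7_693),
    "Engineering": (76_376, 8_369),
    "Math": (17_182, 1_586),
    "Physical sciences": (24_712, 5_295),
}

for field, (bs, phd) in degrees_2011.items():
    print(f"{field}: {phd / bs:.1%} PhDs per bachelor's degree")
</pre>
<p>That works out to roughly 8.5% for biology, 11% for engineering, 9% for math, and 21% for the physical sciences, consistent with the point that most BS graduates in these fields do not go on to PhDs.</p>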
<p>Given that this is only one tiny portion of the ranking system (2.5%), it’s not a huge weakness in the methodology. When you start adding other, even more dubious measures like Rate My Professors (a whopping 10%!), however, a significant percentage of the ranking starts looking rather questionable. </p>
<p>The reality is that no ranking is free of issues. Most rankings of colleges (Forbes, Washington Monthly, USNWR, etc.) attempt to force very different institutions into a one-size-fits-all ranking. That’s not terribly useful to anyone, particularly high school students. </p>
<p>@warblersrule</p>
<p>Sorry, but I had to LOL at that one. I’ve been following CC for a long time, and I can’t recall the last time I read the acronym STEM when it wasn’t in the same sentence as “holy grail”. Nevertheless, MIT does pretty well in the AL subcategory, placing seventh (between Dartmouth and Amherst), and Caltech is #9 (between Dartmouth and Cornell). The biggest mystery continues to be Harvey Mudd, which doesn’t place well in most of the Ph.D. lists I’ve seen (even STEM Ph.D.s) and placed #294 in the Forbes “American Leaders” subcategory. Granted, it’s a tiny college, but many of these systems are based on per capita output, not raw numbers. So, what do most of their graduates do after college?</p>
<p>@circuitrider
Actually, the Forbes ranking system does Mudd the huge favor of making its system per capita based. There, Mudd places #2 for PhD production.</p>
<p>Of all available outcome metrics, the one I like best (despite its problems) is PhD production, for the following reasons:
- It measures an outcome that is directly related to the academic subjects colleges actually teach.
- It measures an outcome that is plausibly related to the treatment effects of undergraduate education. Colleges with the highest PhD production rates are not necessarily the very most selective colleges. Colleges with the highest rates do seem to share certain features in common (such as small average class sizes and a reputation for academic rigor).
- It measures an outcome that, in most cases, occurs within 10 years of college graduation (unlike lifetime achievements such as senior leadership positions or Nobel prizes, which presumably reflect the effects of many factors besides undergraduate academic quality)
- It can be measured either for broad interdisciplinary areas (e.g. science and math), or in many cases for individual disciplines
- It measures a population that is not very sparse (unlike Nobel or Rhodes production data, for example)
- The NSF has collected many years of data and made it searchable on the internet through its WebCASPAR database.
- It can be expressed easily as a number that has clear, objective meaning (unlike, say, “faculty quality”, “student engagement”, or “administrative effectiveness” – all of which are admittedly very important)</p>
<p>Problems with this metric:
- The NSF reports baccalaureate origins of PhDs in absolute numbers, without normalizing either for institution size or program size. Consumers/researchers who make these adjustments can’t cite authoritative sources and methods for their adjustments.
- Although the outcomes can be attributed plausibly to treatment effects, these effects cannot easily be separated from confounding selection effects of student choices or college admission selectivity. An excellent college may have low production rates simply because many of its students choose other career paths. It may enroll many students in engineering, nursing, or other fields that do not frequently lead to doctoral degrees. Socioeconomic selection effects may be confounded with academic treatment effects.
- The percentage of PhDs earned in strong v. weak programs cannot easily be measured and compared across colleges.
- It isn’t an outcome that many people strongly desire for themselves or their children.</p>
<p>It is possible to control for program size and student selectivity. I’ve done this for a limited number of schools. The ones that seem to do best are schools with small average class sizes, reputations for academic rigor (sometimes backed up by thesis or comprehensive exam requirements, core course requirements, grade deflation, etc.), and understated Greek and sports programs. These include some of the Colleges That Change Lives, technical institutes, liberal arts colleges in general, and (lagging behind the others) some of the Ivies. </p>
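<p>For what it’s worth, here is a minimal sketch of the kind of program-size adjustment described above, dividing alumni PhDs in a field by the bachelor’s degrees the college awarded in that field rather than comparing raw counts. The college names and numbers are purely illustrative, not real NSF or IPEDS figures, and the sketch does not attempt the harder step of controlling for student selectivity:</p>
<pre>
# Sketch of a program-size-adjusted PhD production rate:
# PhDs earned by a college's alumni in a field, divided by the number of
# bachelor's degrees the college awarded in that field over the same window.
# All names and counts below are made up for illustration; real inputs would
# come from NSF baccalaureate-origins data and IPEDS completions counts.
from dataclasses import dataclass

@dataclass
class FieldRecord:
    bachelors_awarded: int  # bachelor's degrees in the field (cohort window)
    alumni_phds: int        # PhDs later earned by those alumni in the field

def production_rate(record: FieldRecord) -> float:
    """PhDs per bachelor's degree in the same field."""
    return record.alumni_phds / record.bachelors_awarded

example = {
    "College A / Chemistry": FieldRecord(bachelors_awarded=400, alumni_phds=60),
    "College B / Chemistry": FieldRecord(bachelors_awarded=2000, alumni_phds=120),
}

for name, rec in example.items():
    print(f"{name}: {production_rate(rec):.1%}")
</pre>
<p>On raw counts College B looks stronger (120 PhDs vs 60), but per bachelor’s degree College A comes out well ahead (15% vs 6%), which is exactly why the raw NSF numbers need this sort of normalization before colleges of different sizes can be compared.</p>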
<p>What scares me most about this list is that anything under 40k has started to look like a bargain!</p>