Forbes Best Colleges 2015

It’s baaaaaack :slight_smile:

http://www.forbes.com/sites/carolinehoward/2015/07/29/americas-top-colleges-2015/

The “Recipe” section is interesting: retention, student-faculty ratio, grad rate, financial aid/debt, coastal location (weird, but they note it), age of institution, and acceptance rate. This is not the same as their methodology, which has some weird stuff like “Rate My Professor,” but it notes that the top-ranked schools share these traits.

…and here is a piece in defense of rankings, by the “ranker” (UK people can stop laughing now):

http://www.forbes.com/sites/ccap/2015/07/29/confessions-of-a-college-rankings-guru/

Coastal? It’s really the East Coast, and that’s because they have the history and a lot more time to build up wealth and brand names.

The Midwest and entire West have the same number in the top 20 (3 each).

@PurpleTitan - they clarify:

The things I like about the Forbes top colleges list are:

  1. they put all schools together - no separate lists of national and liberal arts colleges.
  2. they at least try to focus on outcomes

I could spend the better part of the morning writing about what I don’t like in their list & methodology, but that isn’t worth the effort. Rankings sell magazines and draw eyeballs, so they are going to be a fixture for the foreseeable future.

They mentioned that Amherst is just a few miles away from Harvard. Yeah, right; it’s a two-hour drive.

I don’t put much stock in these rankings, but it’s interesting to see Forbes combine LACs and research universities into a single ranking.

It’s also interesting to disaggregate the Forbes rankings, however, and see how Forbes ranks the various LACs relative to other LACs, and how Forbes ranks the various research universities relative to other research universities. It turns out that when you do that, the rankings for many schools are quite similar to their US News ranking—many colleges and universities are within a few spots in the Forbes rankings of where they are in the US News rankings for their “weight class.” For example, Forbes’ top 4 LACs—Pomona, Williams, Swarthmore, and Amherst, in that order—rank #5, #1, #2, and #3 respectively in the U.S. News rankings. Tweedle-dum and tweedle-dee. Similarly, Forbes’ top research universities—Stanford, Princeton, Yale, and Harvard, in that order—rank #4, #1, #3, and #2 respectively in US News. This pattern extends pretty far down the list, at least among Forbes’ top 100 schools (I didn’t have time to go further). But there are exceptions. Here are some schools that rank either significantly higher or significantly lower in Forbes than in US News:

Research Universities ranking higher in Forbes than in US News:
Wisconsin +14
Maryland +14
Tufts +12
Brown +11
William & Mary +11
Illinois +11
Texas +11
Boston College +10
U Washington +9
Notre Dame +8

Research Universities ranking lower in Forbes than in US News:
Emory -20
Johns Hopkins -18
WUSTL -17
Georgia Tech -11
Vanderbilt -9
USC -9
Chicago -8
Caltech -8
NYU -8

LACs ranking higher in Forbes than in US News:
Reed +51
US Military Academy +19
Wheaton (IL) +15
Barnard +13
Whitman +13
DePauw +13
Bucknell +12
US Air Force Academy +10
Wesleyan +9
Lafayette +8

LACs ranking lower in Forbes than in US News:
U Richmond -21
Bates -18
Harvey Mudd -15
Grinnell -15
Scripps -15
Macalester -12
Hamilton -10
Middlebury -9
Smith -9

Not sure what to make of it. Thoughts?

@bclintonk, just focusing on RUs, note that Emory, JHU, and WashU are all very pre-med heavy. If many high-stats kids at those schools just go on to become doctors instead of doing great things in various other fields, that would hurt them. Also, some of those underachieving RUs are known to game the USNews rankings quite heavily, while many of those publics do things or have characteristics that would hurt them in USNews.

USNews distributes about 10% of its total points among an amazingly broad smorgasbord of items called “expenditures per student.” Unlike RUs, LACs don’t have a lot of big-ticket budgetary items other than salaries and financial aid. A broad category like “student services” can cover just about anything. Wesleyan, which is about twice as large as its nearest competitors (and thus doesn’t benefit from efficiency of scale), makes up lost ground in the Forbes poll because research spending is one of its strong suits.

@circuitrider, you mean that Wesleyan does benefit from efficiency of scale. Though note that Richmond is a big LAC as well.

It’s harder to make sense of the LACs, but note that Reed, until very recently, was like a miniature version of the old UofC: openly disdaining the USNews rankings game, not that hard to get into, but tough academically (so many transferred out, as at the old UofC), while those who graduated tended to do well.

Isn’t the objective of magazines that want to “compete” against the established one to end up with a list that is essentially similar but contains a couple of twists that appear to make it different, and this despite a methodology that is entirely questionable? In the end, it would take a herculean effort to knock HYPS and WASP from their pedestal.

Of course, that does not stop certain rankings, like the Mother Teresa one, from ending up focusing on “apparent” student costs and listing an academic wasteland such as UT El Paso at the same level as Harvard. In this vein, the Vedder outfit is smarter about hiding its trail (despite having a similar objective). In fact, it is hard to dispute the presence of most schools listed in the top tiers. It’s not different from the list presented by Atlantic Monthly a dozen years ago.

As far as finding much logic in the outcome, one could go bonkers trying to 'splain why there are differences. Some might cling to a theory about the research “expenditures per capita”** in the case of Wesleyan, or come to a much simpler conclusion: Forbes is simply trading a number of objective metrics that can be verified over the years for a hopeless hodgepodge of self-reported “data” at the level of what the Princeton Review relies on. In a few words, a bunch of unadulterated horse manure reported as “outcome” was added at the expense of the metrics that define the entering class in terms of selectivity.

In the end, the story remains the same. Most people who happen to see their favorite school do well in a particular ranking will applaud the new methodology. And the opposite when the impact goes the other way.

On a personal note, despite “my” favored schools doing extremely well in the wannabe rankings, I see few reasons to think those rankings offer anything superior to USNews, and this despite the massive flaws of the latter.

At least this type of ranking attempts to measure the undergraduate “experience,” as opposed to pretending that metrics that are mostly foreign to UG life (such as faculty measurements and research output in obscure STEM journals) should … mean much to students looking at US … colleges after high school.

** Is that metric not a mere footnote in the budgets of “real” LACs?

A ton of the data that USNews relies on is self-reported as well (by the schools) and thus can be lied about or gamed, such as entering freshmen SAT scores, freshmen acceptance rate, and student-faculty ratio. For example, USC takes 50% as many transfers as they do freshmen. Northeastern games in all sorts of ways (including not counting the test scores of internationals). Being test-optional will almost certainly raise 75/25 test scores. As @bclintonk noted, various schools fudge their student/faculty ratio. So just how fair is it to use those metrics to compare schools when one school games hard and another doesn’t? What do the USC SAT numbers represent anyway, when a big chunk of their student body isn’t represented by those numbers?

Forbes also uses a bunch of silly/useless criteria, which is why I ignore them and look only at their alumni achievement rankings that can’t be gamed (American Leaders, which is a ranking of alums with significant positions or achievements in business, non-profit, government, arts, media, and the sciences, plus student awards and PhDs produced).

As stated, there ARE shortcomings in the data used by USNews (and in its interpretation), but I am not sure how one could compare the absolute junk compiled by RateMyProfessor, Payscale, and the Princeton Review surveys to the numbers reported by the schools in the Common Data Set and to the US government.

We all know that schools were caught fibbing about the numbers (most often through internal audits followed by actions by the school), and that MANY continue to report data that has been massaged to produce the necessary impact, sometimes with the tacit approval of USNews, which maintains nebulous instructions.

However, massaging the data that produces the selectivity index is an effort that is mostly futile and sometimes counterproductive, as it impacts the expected graduation rate. In a case that garnered much attention in the press (and led to expulsion by Forbes), Claremont McKenna offered such an example. Inasmuch as the more favorable numbers were “meant” to impress insiders at the school, they had ZERO impact on the … USNews report, as the unaltered numbers would NOT have changed the ranking; the next-ranked school was beyond reach. In so many words, the cheating was neither warranted nor effective. Fwiw, it takes a modicum of understanding of the USNews methodology to appreciate such details, something well beyond the press and the occasional observer or the agenda-laden fanboys of a competing school.

In the end, those rankings are what they are. The ranking itself is hardly helpful, but the same cannot be said about the underlying data (when disclosed), which represents good value for anyone who is prepared to scratch the surface and recognize what might be important to their own case. Some might like to focus on the “prestigious” elements. Others might pay attention to the sketchy “People magazine” popularity surveys (passed off as the PA survey), and others might simply look at the big picture to fuel the cocktail hour bragging scene.

One of the easiest rankings to game would be Rate My Professor. As an experiment, I got on RMP and submitted two ratings for the same school, chosen at random. I submitted ratings at the average of those already submitted by real students, but I suspect I could have skewed the results in either the positive or negative direction, as many of the current ratings are all 5’s and one is all 1’s. Both of my reviews were approved and posted. I didn’t need to log in, provide any proof I was a student, or even give a screen name. The school for which I submitted fake ratings had very few reviews, so, had I chosen to, I could have singlehandedly changed the overall RMP rating for the college. I was even able to “like” my own rating repeatedly.

@xiggi

Ten million dollars a year represents about 5% of the operating budget at Wesleyan. By comparison, I think it would be fair to say that at a typical RU the figure would be closer to 25% of the annual operating budget.

Circuitrider, what would your conclusions be after reading these two reports:

https://www.cmc.edu/sites/default/files/treasurer/annual-reports/2013-2014-CMC-Annual-Financial-Report.pdf
Revenues, government and foundation grants: 910
Expenses, research: 8,598
Total expenses: 97,562

http://www.wesleyan.edu/finance/annualreporting/2013-2014.pdf
Revenues, government and foundation grants: 7,763
Expenses, research: 7,753
Total expenses: 184,655
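
Quick back-of-the-envelope arithmetic on those two lines (a rough sketch; the amounts in the reports appear to be in thousands of dollars):

```python
# Research spending as a share of total expenses, using the FY2013-14
# figures quoted above (assumed to be in thousands of dollars).
cmc_research, cmc_total = 8_598, 97_562
wes_research, wes_total = 7_753, 184_655

print(f"CMC:      {cmc_research / cmc_total:.1%}")   # -> 8.8%
print(f"Wesleyan: {wes_research / wes_total:.1%}")   # -> 4.2%
```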

How does that spending per capita translate at the Forbes level? Does it even show? The devil is in the details.

Hint: Here’s what Vedder uses: http://centerforcollegeaffordability.org/wp-content/uploads/2015_Methodology-Final1.pdf

@xiggi

I’d still say those research figures are more than mere “footnotes” relative to the size of each of those operating budgets. I would point out, however, that the figure for CMC’s research budget should really be $8,298 and is from the 2014 fiscal year while its total expenses figure for that year should be $100,715. It still makes for a hefty 8% of the CMC budget devoted to some sort of research. I guess I’d be curious about the source of most of CMC’s funds since, at first blush, it looks as if it’s mostly from the college itself.

Well, could you not say that funds from the college itself (or the direct funding of CMC’s research institutes) are of a different nature than funds from grants that usually come with all kinds of restrictions? But that is a debate from another day. And I hope you’ll keep the figures shared by CMC in mind when making statements about the research budgets at various LACs.

If we confine this discussion to the context of this thread, and specifically to the question posed by BClintonK, I am afraid that the research expenses (be they more than a footnote or not) are not directly relevant to the approach of Richard Vedder. While they might play a small role in the determination by Robert Morse and USNews, I think we would have to agree that such an element is not part of the Forbes deck of cards.

I think it is pretty simple to point to better reasons why Forbes differs from USNews.

How much difference do you think the student debt rating makes between the two rating systems? Could that explain the wide swings shown in an earlier post? That might at least explain NYU’s difference.

The “swings” noted above really aren’t that wide, and the ones that are relatively wide (~15 - 20 positions) are relatively few.

Engineering strength should tend to increase the Payscale salary scores.
State schools with strong engineering (Wisconsin, Maryland, Illinois, Texas), and LACs with engineering programs (the service academies, Bucknell), are among the schools that Forbes bumps up by 10 positions or more over US News.

Two of the 9 top RUs that get bump-downs in the Forbes ranking (Chicago and Emory) don’t have full-fledged undergraduate engineering programs. A 3rd, NYU, does not make the USNWR list of top 50 RUs for undergraduate engineering.

However, 6 other bumped-down RUs do have more-or-less strong engineering programs. I don’t know why Hopkins, WUSTL or GaTech would get hammered (relatively speaking) in an outcomes-focused ranking.

But whenever you change the ranking criteria, you’re going to get a few schools that move up or down more than the others. To me, the take-away message from comparing the US News and Forbes rankings is that they do point to pretty much the same set of top schools, and very often assign them nearly the same positions (+/- ~5 positions). So for the most part, the two different sets of criteria are mutually corroborating. A couple of other rankings (Parchment, stateuniversity) also point to pretty much the same set of top ~20 schools. To get a very different list of top schools, you have to shift to a very different set of criteria (like the finance-focused criteria Money magazine uses, or the social justice criteria Washington Monthly uses). Even then there is a lot of overlap at the top. Rich, well-endowed colleges have more resources to buy the best of everything, so they tend to show up high in many rankings.