USNWR ranking methodology: the nuts & bolts... or is it just nuts?

Finally, if you consider possible alternatives (like more outcome-based measures: how graduates fare years later), you run into other problems. The data may be harder to collect and more subject to error, and you still have fuzzy decisions to make.

High salaries can be nice, but they are not the be-all and end-all for many who go to college. And in any case, such measurements can become, de facto, a measure of how many engineers you graduate and how many graduates go on to Wall Street.

Many students aspire to be doctors, and that can be a very noble (and lucrative) career indeed. But while your average GP may have a happy, successful career, he or she may not make a lot of lists based on outsized success in business, politics, or the entertainment field. If you somehow weight doctor outcomes highly on their own, then you favor colleges with a lot of pre-meds.

===

Bottom line, there really isn’t a perfect measurement that can be implemented in a practical way by a publication such as USN&WR. But just because the measurements we DO have (USN and others) are imperfect doesn’t mean they’re useless. They conform, roughly, to what other measures would suggest to us. Harvard may not be #1 in any given year, but it’s probably in the top 5. Rutgers may be higher or lower in a given ranking, but it’s unlikely to be above, or near, Harvard.

My beef with these criteria is that some of them make a big difference in some college tiers and essentially none in others. Graduation and retention rates, for example, might be important criteria when evaluating mid-level state universities, but they are pretty useless for evaluating highly selective schools. If Princeton beats out Harvard because it has a slightly higher 4-year graduation rate, that’s stupid.

There are lots of different ranking systems out there that measure different things. Pick one that lines up with what’s important to you.

USNWR primarily measures “Yalie-ness”: admissions selectivity, prestigiosity, and financial resources. Since it measures Yalie-ness, Yale tends to rank high.

Colleges That Change Lives measures different things – Yale doesn’t make the list. Yale also doesn’t make the Playboy Party School list.

Things change a lot, though, with the international reputation and research/awards-based rankings – not for HYPSM, but for many other schools, including Ivies like Brown and Dartmouth… and for many top public schools.

Wisconsin is something like 17th in the world in one big ranking that recently came out, yet it is 41st in the US according to US News.

It depends on what you want to measure and how much weight you give to it.

What makes HYPSM (and Columbia, Chicago, maybe Penn) special is that no matter what the ranking or metrics used, they are always near the top. Even Duke and Northwestern suffer (relatively speaking… way down in the 20s or 30s, lol) somewhat in some world-based or research-based rankings.

Maybe, but how much difference does it really make if Princeton beats out Harvard on the list in a given year? Yeah, Princeton and Harvard alums, and those on the campuses (as faculty or students or whatever), may cheer or groan, but I think most of the rest of the world has the two already sitting neck and neck, with perhaps Harvard at the pinnacle, but Princeton a bit smaller, quirkier, and/or shining in its own ways.

I doubt the Wall Street recruiters or grad school admissions folks will say, “Well gosh, we WERE gonna pick Harry Harvard, but did you see the latest USN&WR list? I think we’ll go with Polly Princeton instead.”

The main value of the list for me, as a dad to 3 kids who are approaching the process but still a bit away (oldest is in 10th grade), is to get a general sense of where schools sit, particularly schools a bit further down the list. Until I started looking at these lists again a year or two ago (many years after looking during my own school days), I didn’t quite realize how strong certain schools were (Vanderbilt, Rice, Emory, I think), and my rough ranking of various state universities was also relatively far out of sync with what USN&WR says.

Your comment suggests you buy into the notion that everyone who gets into Harvard and Yale must be brilliant, and that therefore they should all just cruise through to graduation. The reality is that, for example, 25% of students at Harvard scored below 700 on SAT Critical Reading. That’s kind of meh…

Moreover, even bright, ambitious, hard-working kids (as measured over ages 15-17 or so) sometimes take a turn for the worse from ages 18-22.

Of course, as parents, we generally WANT them to keep it together (and want them at universities that will be helpful in that regard), but still…

There recently was a list of bar passage rates for California law schools, which, I think, is a valid measure. After all, if your law school doesn’t prepare you to pass the bar, you cannot work as a lawyer.

But most degrees don’t have that sort of bright line test to measure the value of the education.

@MWDadOf3, depending on what you are looking for, USNews may be pretty poor at ranking publics.

The key thing your kids and you have to decide is what is important to them.

Also, any rankings are fine, but I prefer criteria that can’t be gamed. For that, reputational rankings and outcome-based rankings are better. Note that USNews data is self-reported, and different schools may report, for instance, how much they spend on students differently.

I imagine the grad school tests – GRE, GMAT, LSAT, MCAT, etc. – might be a decent measure of the value of a particular school’s education: just compare students’ percentile scores on the grad school test to their percentile scores on the ACT/SAT to get some idea of the relative strength of the education they received (a rough sketch of the arithmetic is below). Obviously it also depends on the work habits of the students, but… not sure how we could control for that…
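A minimal sketch of that percentile-delta arithmetic, in Python. Every school, student, and percentile below is invented for illustration; real percentiles would come from College Board / ETS concordance tables:

```python
# Hypothetical records: (school, entrance-test percentile, grad-test percentile).
from statistics import mean

students = [
    ("State U", 62, 71),
    ("State U", 55, 66),
    ("Small LAC", 48, 70),
    ("Small LAC", 51, 69),
]

def percentile_delta_by_school(records):
    """Average (grad-test percentile minus entrance-test percentile) per school."""
    deltas = {}
    for school, entry_pct, grad_pct in records:
        deltas.setdefault(school, []).append(grad_pct - entry_pct)
    return {school: mean(ds) for school, ds in deltas.items()}

print(percentile_delta_by_school(students))
# {'State U': 10, 'Small LAC': 20}
```

Of course, a delta like this still conflates what the school taught with who chooses to take which grad test, which is exactly the work-habits problem mentioned above.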

“Your comment suggests you buy into the notion that everyone who gets into Harvard and Yale must be brilliant, and that therefore they should all just cruise through to graduation. The reality is that, for example, 25% of students at Harvard scored below 700 on SAT Critical Reading. That’s kind of meh…”

700 critical reading is the 95th percentile. Your definition of meh is pretty high.

Graduation rates are highly correlated with academic selectivity, which in turn correlates highly with high SES.

So sky-high graduation rates mostly tell you what the kids already have when they arrive on campus; 700-SAT kids tend to be kids who graduate from college on time. They tell you much less about what the schools do or don’t do with those kids after they arrive.

While the USNWR rankings themselves are pretty useless, I think it’s clear from this thread that there is real value in discussing them as a concept, particularly discussing the methodology and the influence various factors have on undergraduate education. I think there’s a lot we can all learn from people in different fields, and hopefully people can use these rankings more appropriately in the future.

For example, comments I’ve seen made that I disagree with based on my experience as an MD/PhD student:

Value of publishing record:
For a grad student, the publishing record of faculty is CRITICAL. But for an undergraduate, I don’t think it’s that important. Undergraduates don’t need to be generating publishable data, and frankly they are generally pretty bad at it. As an undergrad, I would get more out of a professor who was willing to take the time to teach me how to perform research than out of one with a publish-or-perish mentality, unwilling to let himself, his postdocs, or his grad students detract from their work by educating an undergraduate.

MCAT scores:
The MCAT is largely a critical reasoning test that doesn’t require a knowledge base beyond the AP/intro college level. I would gather the best way to increase MCAT scores would be either to offer classes specifically geared toward it (like Kaplan does), which isn’t really the purpose of a university, or just to be stricter about SAT scores in admissions.

@PurpleTitan I suspect that reputational rankings, both stand-alone (I assume there are some) and as incorporated into rankings like USN’s, are largely a measure of two things:

  1. USN and similar rankings (thus, reputational rankings/sub-scores are sort of second-order reflections of the previous few years of general rankings).

  2. Research-oriented rankings: citation counts and the like. A high citation count (properly adjusted for factors like institution size; a toy illustration follows this list) is probably a reasonable measure of the research quality and output of an institution, but THAT, in turn, is probably only a weak proxy for the quality of education delivered to the average undergraduate at that institution.
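To make the size adjustment in point 2 concrete, even something as crude as citations per faculty member can flip the comparison. The numbers below are entirely made up:

```python
# Raw citation counts vs. citations per faculty member (invented figures).
schools = {
    "Big Research U":  {"citations": 500_000, "faculty": 4_000},
    "Small Tech Inst": {"citations": 150_000, "faculty": 600},
}

for name, s in schools.items():
    per_capita = s["citations"] / s["faculty"]
    print(f"{name}: {s['citations']:,} raw citations, {per_capita:,.0f} per faculty")

# Big Research U wins on raw counts (500,000 vs. 150,000),
# but Small Tech Inst wins per capita (250 vs. 125).
```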

The ideal educational assessment would fully and accurately measure outputs in relation to inputs, i.e., how much education the institution delivers to its students on top of (or adjusted for) what they came in with: their preparation, their student body’s IQ, and so on.

It could be that the best college in the country, on such a measure, is some small, obscure LAC in the rural Midwest that accepts students at the 30th percentile of college applicants, but, over 4 years, delivers graduates at the 70th or 80th percentile of students graduating nationwide.

But it would be VERY difficult to measure this in real life or to implement a ranking system based on it (a toy version of the arithmetic is sketched below).
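For what it’s worth, one crude way to operationalize a value-added measure like this is to regress exit scores on entry scores across a pooled student sample, then credit each school with its mean residual. The sketch below uses invented names and numbers, and with only two schools the pooled fit is statistically meaningless; it is only meant to show the mechanics:

```python
# Toy value-added calculation: mean regression residual per school.
import numpy as np

# (school, entry percentile, exit percentile) -- all invented.
records = [
    ("Obscure Midwest LAC", 30, 72),
    ("Obscure Midwest LAC", 28, 68),
    ("Flagship State U", 60, 65),
    ("Flagship State U", 55, 61),
]

entry = np.array([r[1] for r in records], dtype=float)
exit_pct = np.array([r[2] for r in records], dtype=float)

# Fit exit ~ a * entry + b over the pooled data.
a, b = np.polyfit(entry, exit_pct, 1)
residuals = exit_pct - (a * entry + b)

value_added = {}
for i, (school, _, _) in enumerate(records):
    value_added.setdefault(school, []).append(residuals[i])

# Positive = graduates score higher than their entering stats predict.
print({school: round(float(np.mean(r)), 2) for school, r in value_added.items()})
```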

And of course, there are other reasons why we use ranking systems - they’re stamps of approval that the graduates of college X are among the brightest of the bright, and that if you recruit from college X, you’ll be picking from this elite pool. For that kind of measurement, value added is less important than average student academic quality upon graduation, which, in turn, is largely a function of average student academic quality upon matriculation.

This might be a good opportunity to yet again tout my own highly scientific reputational rankings: http://talk.collegeconfidential.com/college-search-selection/978040-ranking-colleges-by-prestigiosity-p1.html

@iwannabe_Brown

Good call on the MCAT. Having never taken it – nor spoken about it with anyone who has – I was unaware that it wasn’t based on things like Biology, Anatomy, and Chemistry.

@MWDadOf3, reputational rankings (for instance, the one by hiring managers that I posted earlier: http://finance.yahoo.com/news/hiring-managers-25-best-schools-161911937.html) mostly, but not completely, correlate with the USNews rankings. They correlate better with the alumni achievement tiers that I also posted earlier.

USNWR rankings are just another data point, and getting worked up about the scoring methodology doesn’t seem like a constructive use of time. When you compare graduation rates at a large public university (one that may be intentionally more inclusive, like ASU) and a highly selective private university, I don’t think the right conclusion is that the college with the higher graduation rate is necessarily “better,” because the student bodies are going to be very different. Also, when you look at mid-career Payscale compensation figures, normalizing for college major or for higher-cost areas of the country might be helpful. Hopefully students aren’t picking the colleges they apply to solely based on their USNWR ranking, and alumni aren’t donating more or less as their alma mater’s ranking goes up or down.

Years ago the Harvard admissions dean said anyone with a 600 or 650 CR was perfectly capable of doing Harvard-level work. What I’d like to see is programs that make sure the diamonds in the rough they accept get the support they need so they can graduate. (I remember the story of Cedric Jennings related in *A Hope in the Unseen*. If I remember correctly, he had SAT scores in the 400s and really did not get the support he should have, but he did manage to graduate despite being so poorly prepared.) You’ll never get 100% - there will always be some students who fall ill, or who just realize Harvard isn’t right for them after all. And that’s okay.

I like the idea of measuring outcomes, but it’s easier said than done. My music major friend is an Episcopalian priest, my econ major friend is a doctor, and my applied math major friends all ended up in the computer industry, except one who went to law school. My Visual and Environmental Studies buddies are doing lots of different things: making films, designing fabric, becoming architects or graphic designers. Another classmate became governor of Massachusetts. A bunch went on to get PhDs and are professors, some at well-known colleges and others at less well-known ones. I actually didn’t know anyone who ended up on Wall Street, but plenty of people went to Harvard to do just that even back then. So were we successful or not?

My GRE scores were almost exactly the same as my SAT scores. The verbal was a little higher, which I attribute to doing the NY Times Sunday crossword puzzle with friends. I only took one English class.

Hunt #34

My plebeian college is so lacking in prestigiosity that no one has ever even bothered to calculate its pathetic mH grade.

For the hopelessly unaware: one mH equals one milliHarvard, so one Harvard scores at 1,000 mH.