USNWR ranking methodology-- the nuts & bolts... Or is it just nuts?

The 2016 rankings are out. Rather than focus on the schools, let’s discuss methodology.

Definitions of the ranking criteria are on the USNWR website
http://www.usnews.com/education/best-colleges/articles/ranking-criteria-and-weights

Here are the criteria & their weights:

  15.0%   Peer assessment survey
   7.5%   HS counselors' ratings

** 22.5% SUBTOTAL: Undergraduate reputation **

   1.3%   Acceptance rate
   3.1%   HS class standing in top 10%
   0.0%   HS class standing in top 25%
   8.1%   CR+M of SAT; composite ACT

** 12.5% SUBTOTAL: Student selectivity **

   7.0%   Faculty compensation
   3.0%   Percent faculty with terminal degree
   1.0%   Percent faculty that is full time
   1.0%   Student-faculty ratio
   6.0%   Class size, 1-19 students
   2.0%   Class size, 50+ students

** 20.0% SUBTOTAL: Faculty resources **

  18.0%   Average graduation rate
   4.5%   Average 1st-yr student retention rate
   7.5%   Graduation rate performance

** 30.0% SUBTOTAL: Graduation and retention rates **

  10.0%   Financial resources per student
   5.0%   Average alumni giving rate

** 15.0% SUBTOTAL: Financial resources **
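For anyone who wants to check my transcription, here's a quick sanity check of the table above. The category and sub-weight numbers are copied straight from the list; the script just confirms each subtotal and the overall 100% total add up:

```python
# Sanity check of the USNWR criteria weights listed above.
# Values are taken directly from the table; nothing here is USNWR's code.
weights = {
    "Undergraduate reputation": {
        "Peer assessment survey": 15.0,
        "HS counselors' ratings": 7.5,
    },
    "Student selectivity": {
        "Acceptance rate": 1.3,
        "HS class standing in top 10%": 3.1,
        "HS class standing in top 25%": 0.0,
        "SAT CR+M / composite ACT": 8.1,
    },
    "Faculty resources": {
        "Faculty compensation": 7.0,
        "Faculty with terminal degree": 3.0,
        "Full-time faculty": 1.0,
        "Student-faculty ratio": 1.0,
        "Class size, 1-19 students": 6.0,
        "Class size, 50+ students": 2.0,
    },
    "Graduation and retention rates": {
        "Average graduation rate": 18.0,
        "First-year retention rate": 4.5,
        "Graduation rate performance": 7.5,
    },
    "Financial resources": {
        "Financial resources per student": 10.0,
        "Alumni giving rate": 5.0,
    },
}

# Sum each category, rounding to dodge floating-point crumbs.
subtotals = {cat: round(sum(parts.values()), 1) for cat, parts in weights.items()}
total = round(sum(subtotals.values()), 1)
print(subtotals)
print(total)  # 100.0
```

The subtotals come out to 22.5, 12.5, 20.0, 30.0, and 15.0, matching the table.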

Some observations to reflect upon:

Homecoming Queen
Nearly a quarter of the ranking is based on “undergraduate academic reputation,” i.e., a popularity opinion poll among peer colleges and HS counselors (akin to voting for homecoming queen).

Graduation rate double-dipping?
Nearly a third of the ranking is based on graduation/retention performance, with a strange double-dipping of “average graduation rate” and “graduation rate performance”. “Graduation rate performance” is defined as the school’s actual graduation rate compared to what USNWR’s magic crystal ball thinks it ought to be.
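Stripped of the crystal-ball mystique, "graduation rate performance" is just actual rate minus predicted rate. Here's a minimal sketch; the predicted value in USNWR's version comes from their own model of what a school's entering class "should" achieve, so the numbers below are purely hypothetical:

```python
# Sketch of "graduation rate performance" as described above:
# actual graduation rate minus USNWR's predicted rate.
# How USNWR computes the prediction is their model, not shown here.
def graduation_rate_performance(actual, predicted):
    """Positive values mean the school over-performed its predicted rate."""
    return round(actual - predicted, 2)

# Hypothetical school predicted to graduate 85% that actually graduates 91%:
print(graduation_rate_performance(0.91, 0.85))  # 0.06 -> over-performance
```

Note the double-dipping: a school with a high actual rate gets credit twice, once under "average graduation rate" (18%) and again here (7.5%) if the crystal ball lowballed it.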

You get what you pay for
Faculty compensation weighs in at 7% of the criteria weighting.

Kneel to the SAT God
Schools with ambitions to climb the rankings know what really counts: SAT/ACT scores trump all, at 8.1% weighting!!! The SAT Writing section is totally worthless. Standing in the top 10% of your class carries a 3.1% weighting, but since many schools don’t report rank, it’s a bogus metric. Being in the top 25% of the class counts for a big fat zero.

A rising tide lifts all yachts
15% of the ranking is based on how rich a school is and how fast it’s getting richer. If a school doesn’t have an 11-figure endowment, it’ll probably never break the top 5.

What counts for little or nothing in the USNWR ranking:

  • Acceptance rate: 1.3%
  • % full-time faculty: 1% (this metric, combined with a 7% weighting for faculty compensation, tells me schools are rewarded for having high-priced, celebrity, cameo lecturers)
  • Student-faculty ratio: 1%
  • Diversity: 0%

Your thoughts on the methodology? What can & should schools do to play the game?

Maybe after a morning cup of coffee, the US News ranking metrics will be worth thinking about for a few milliseconds. And then again, maybe not. Your review is interesting, but giving any time to the US News rating system just feeds the dragon, IMO.

Also worth noting that faculty compensation is not adjusted for the cost of living in a particular area.

Hey, I’m just pointing out that Smaug is missing some scales…

It makes me nuts when some posters think there’s a hard distinction between a school that’s ranked #20 vs a school that’s ranked #21. 8-|

Seems not too bad. For example, of course student quality should be weighted more than acceptance rate.

Smaug should be slain.

You get points for having classes of 1-19 students; that makes sense, since small classes are better. You get 0 points for classes of 20-49, but then you get points for having classes of more than 50?

So basically a school that doesn’t have the ability to go smaller than 20, but does manage to keep all its classes in the 20-49 range gets a 0/8, but a school that literally only has 300 person lectures gets a 2/8?

EDIT: just realized, or is that 2% a negative score? I.e. you get 2 points if you have 0 classes greater than 50 and the more classes you have the closer you get to 0 points?

I’d give more weight to admission rate and yield. Ultimately, selectivity is the metric that matters the most.

It’s not a 2% score; it’s a 2% weighting.
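To make the score-vs-weighting distinction concrete, here's a toy composite. The min-max normalization, and the choice to invert the "classes of 50+" indicator so that *fewer* large classes scores higher, are my assumptions for illustration, not USNWR's published formula:

```python
# Toy illustration of a 2% *weighting* vs. a 2% *score*.
# Normalization scheme is assumed, not USNWR's actual method:
# the 50+ indicator is inverted so fewer large classes scores higher.
def normalize(value, lo, hi, higher_is_better=True):
    """Map a raw indicator onto a 0-1 scale."""
    score = (value - lo) / (hi - lo)
    return score if higher_is_better else 1.0 - score

# Hypothetical school: 60% of classes have 1-19 students, 10% have 50+.
small_class_score = normalize(0.60, 0.0, 1.0)
large_class_score = normalize(0.10, 0.0, 1.0, higher_is_better=False)

# Each 0-1 score is multiplied by its weight. The weight caps how much the
# indicator can move the composite; it is not the score itself.
contribution = 6.0 * small_class_score + 2.0 * large_class_score
print(round(contribution, 2))  # 5.4 out of a possible 8.0
```

So a school with zero 50+ classes earns the full 2 points on that indicator, and the more big lectures it runs, the closer its contribution falls toward 0.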

Odd that the reason most people go to college in the first place rates not at all. That is:

(1) How many get jobs in their field after graduation? How do employers or graduate schools rank the college? Salaries after 2 years, 10 years, 20 years, etc?

and/or

(2) How do current and former students rate their college experience?

The argument against that is that it encourages bringing in better students but doesn’t reward actually providing anything for them.

[quote]How do current and former students rate their college experience?[/quote]

USNWR explains that alumni giving is a proxy for customer satisfaction.

@TatinG, here’s a ranking that’s based on hiring managers and SAT scores:
http://finance.yahoo.com/news/hiring-managers-25-best-schools-161911937.html

You can look at Payscale for salary rankings (though they don’t adjust for field).

Here’s my tiering by alumni achievements: http://talk.collegeconfidential.com/college-search-selection/1682986-ivy-equivalents-p3.html

USNWR and the other ratings are created solely to sell magazines or, in today’s world, to get more clicks. The vast majority of people would be better off ignoring them.

Graduation-rate statistics should have a threshold value (perhaps 92%) beyond which a higher value does not increase the ranking. Often I see rigorous tech schools penalized because they don’t automatically funnel students to graduation, e.g., 6-year graduation rate for:

MIT – 91%
Caltech – 92%

A school with 100% graduation is a school that isn’t appropriately weeding out weak students.

Maybe it’s because I’m faculty, but I don’t quite see the same effect here. That said, it seriously disturbs me that faculty compensation rates more than twice as heavily as faculty having a terminal degree, and seven times as much as not relying on adjuncts to staff your classrooms. If they’re actually concerned about faculty quality, degree should rank above salaries (adjusted for cost of living, as suggested upthread), at the very least.

I’d like these added:

  • Research/internship opportunities
  • Faculty awards & publications

These speak to the quality of education and opportunities for students.

That they attribute 7% of the entire ranking to faculty salary is unfathomable. Among other questions, are they taking cost of living into account? $1 in South Dakota buys more than $1 in Boston.

I’d remove that metric entirely, trim down a couple others, and make both of my suggestions worth 5% each.

Finally, this might not be popular, but I think research spending influences the academic energy at a school. There is also often trickle-down from grad to undergrad, once students get into their majors. Plus, it is one of the main bases for academic prestige; here at least they could quantify it somewhat, instead of their unreliable polling of college deans and guidance counselors; reduce both of those and give 5% to the Research Spending metric.

One can pick nits with pretty much every one of these criteria:

Reputation: How much do HS counselors and peers know about the fine differences between Chicago and Northwestern? (Especially if they’re not from the Chicago area. Extend this comparison to other universities across the nation.)

Acceptance rate: Just encourages overmarketing to unqualified kids and yield management

HS Rank: So many HSs have less than full disclosure of this stuff. Plus, top 10% from various competitive HSs is almost certainly a far greater achievement (academically), than top 10% from various lesser (rural, inner city, etc) HSs.

SAT/ACT: Easy to see why these are criticized - impact of test prep, etc…

Faculty compensation: 7% is a big # here. Are highly compensated faculty being paid more for research than teaching? Does USN&WR adjust fairly for cost of living differences between New York City and, say, New Hampshire or Austin?

% with terminal degree, full-time faculty: Perhaps not a big differentiator among elite universities, but still… Is a tenure-track professor with a PhD necessarily better than an adjunct without one? One of my better classes was a marketing class taught by (I think) a retired business exec who I assume was part time and lacked a PhD. One of my worst was a management class taught by (I think) a tenure-track professor with a PhD but very little real-world management experience.

Student faculty ratio: Actually, this one’s pretty reasonable, IMO, but perhaps gameable by various classification systems.

Class size 1-19, 50+: Again, reasonable, but I assume that the heavy weight here is going to lead to (or has already led to) plenty of classes with a hard cap at 19 and 49 students. Ideally, you’d want a more fine grained measurement here…

Graduation rate & performance: HUGE weight, at 18% + 7.5% = 25.5%. It’s not a terrible measure, but as another poster mentions, it may penalize schools that emphasize difficult technical degrees (MIT, Caltech). The heavy weight here may encourage schools to push some kids along who might be better off reconsidering their life/educational paths after a semester or two (or may discourage universities from admitting risky cases in the first place).

First year retention rate: Similar issues to graduation rate.

Financial resources per student: Hopefully, this measures SPENDING, not endowment size (a big endowment doesn’t do much for students if it just piles up endlessly without being spent). Similar possible issues to the faculty compensation metric. Also, you don’t really NEED massive financial resources to teach history and English majors, relative to what you might need (or want, anyway) for certain sciences.

Alumni giving rate: Not bad, but my guess is that this probably favors private unis over public (I think many folks have a greater reluctance to support institutions that are already receiving significant government support).

All that said, you could also pick on the various measures that colleges, in turn, use to select their admits (GPA, test scores, extra curriculars, various tip factors, etc).

Choosing (or ranking) contenders where inputs and/or outputs are somewhat fuzzy can be like trying to nail Jell-O to a tree.

“A school with 100% graduation is a school that isn’t appropriately weeding out weak students.”

Totally.

Look at Harvard (98%) and Yale (97%).

Shameless diploma mills…