US News rankings are out...

Agree that these rankings won’t affect Michigan’s standing in academic circles, but year after year with no improvement is certainly not acceptable for a school with Michigan’s resources.

I agree with your points, and I find it baffling that the school hasn’t done more to improve its USNWR ranking. God knows there are enough administrators to do it.

According to the AAUP 2016-17 faculty compensation survey, the average full professor at UCLA makes $195,000 while the average full professor at Michigan makes $168,200. If US News doesn’t adjust for cost of living, they’re calculating that full professors make about 16% more at UCLA than at Michigan. Actually, they’re probably using the “total compensation” figure, which comes out to $259,900 at UCLA versus $204,200 at Michigan—a 27.3% difference. This compounds the cost-of-living problem because it gives schools in high-cost areas an additional bonus for paying more for benefits like health insurance. This would easily be enough to account for UCLA’s higher ranking in “faculty resources” despite Michigan’s doing as well or better in all other components of that ranking.
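For what it’s worth, a quick Python sanity check of those gaps, plugging in the AAUP figures quoted above (nothing here beyond the numbers in the post):

```python
# AAUP 2016-17 figures quoted in the post above.
ucla_salary, mich_salary = 195_000, 168_200
ucla_total, mich_total = 259_900, 204_200

# Percentage by which UCLA exceeds Michigan on each measure.
salary_gap = (ucla_salary / mich_salary - 1) * 100  # ~15.9%
total_gap = (ucla_total / mich_total - 1) * 100     # ~27.3%

print(f"Salary gap: {salary_gap:.1f}%")
print(f"Total compensation gap: {total_gap:.1f}%")
```

So the “about 16%” and “27.3%” figures in the post check out.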

This is a completely bogus comparison. According to salary.com’s cost-of-living calculator, someone making $168,200 in Ann Arbor would need to make $263,801 in Los Angeles to enjoy the same standard of living—a 56.8% difference in the cost of living. In short, relative to cost of living, faculty at Michigan are actually compensated much more handsomely than faculty at UCLA. But UCLA gets a big boost in the US News ranking because LA is a very expensive place to live.
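Same sanity check for the cost-of-living point, using the salary.com equivalence quoted above (the $263,801 figure is the post’s, not mine):

```python
# salary.com equivalence quoted above: $168,200 in Ann Arbor ~ $263,801 in LA.
mich_salary = 168_200
la_equivalent = 263_801
ucla_salary = 195_000

col_gap = (la_equivalent / mich_salary - 1) * 100  # ~56.8%

# UCLA's average full-professor salary expressed in Ann Arbor purchasing power.
ucla_real = ucla_salary * mich_salary / la_equivalent  # ~ $124,000

print(f"Cost-of-living gap: {col_gap:.1f}%")
print(f"UCLA salary in Ann Arbor dollars: ${ucla_real:,.0f}")
```

By that measure a UCLA full professor’s $195,000 buys roughly what $124,000 buys in Ann Arbor, well under Michigan’s $168,200, which is exactly the post’s point.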

bckintonk, the US News rankings are a joke. They are designed purely to serve private universities on the East Coast or in urban areas in other parts of the country—the only ones who benefit from the laughably flawed methodology. I have been saying it for over a decade now: in a ranking with a reasonably good methodology, and with stringent data auditing ensuring consistency and accuracy, Michigan would be ranked in the top 15 for undergraduate education.

@JW1231 I am gratified to see that Michigan isn’t “taking steps” to address its ranking. It has experienced year-on-year growth in application numbers and is having to reject thousands of exceptionally well qualified candidates. This despite its “slipping” ranking. I agree that some candidates focus on these rankings just as some candidates only consider Ivy League universities to be worthy of their application (most of whom end up rejected). I’d say focusing on the rankings is more a commentary on the candidate than on the university.

I prefer this ranking:

http://time.com/money/best-colleges/rankings/best-colleges/

Oddly enough, I like it too TooOld4School! :wink:


Another example of US News’ sloppy math:

In calculating the Student Selectivity Index:

  1. SAT/ACT Test Scores (65%)
     Michigan: 29-33 ACT (ca. 1360-1500) | Boston College: 1260-1460 | UC-San Diego: 1140-1420 | Wake Forest: 1240-1440
  2. Top 10% in Class (25%)
     UC-San Diego: 100% | Boston College: 80% | Wake Forest: 78% | Michigan: 74% (only 19% reporting)
  3. Acceptance Rate (10%)
     Michigan: 29% | Wake Forest: 30% | Boston College: 31% | UC-San Diego: 36%

Student Selectivity Rank (according to US News’ math):
Boston College: 28
UC-San Diego: 29
Wake Forest: 33
Michigan: 37

Anyone here understand this new math?
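US News doesn’t publish its exact formula, so here’s a naive sketch of what a straightforward weighted index would give: min-max normalize each component across these four schools and apply the stated 65/25/10 weights. The SAT midpoints are my own assumption, taken from the mid-50% ranges quoted above (Michigan’s 1430 comes from the hypothetical 1360-1500 ACT conversion), so treat this as illustration, not US News’ method.

```python
# (SAT midpoint, % in top 10% of class, acceptance rate %) -- from the post above.
schools = {
    "Michigan": (1430, 74, 29),
    "Boston College": (1360, 80, 31),
    "Wake Forest": (1340, 78, 30),
    "UC-San Diego": (1280, 100, 36),
}

def norm(vals, lower_is_better=False):
    """Min-max normalize to [0, 1] across the group."""
    lo, hi = min(vals), max(vals)
    scaled = [(v - lo) / (hi - lo) for v in vals]
    return [1 - s for s in scaled] if lower_is_better else scaled

names = list(schools)
sat = norm([schools[n][0] for n in names])
top10 = norm([schools[n][1] for n in names])
accept = norm([schools[n][2] for n in names], lower_is_better=True)

# Stated US News weights: 65% scores, 25% top-10%, 10% acceptance rate.
scores = {n: 0.65 * s + 0.25 * t + 0.10 * a
          for n, s, t, a in zip(names, sat, top10, accept)}

for n, sc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{n}: {sc:.3f}")
```

Under this naive model Michigan comes out first of the four, not last—which is exactly why the published ranks above are so puzzling.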

No doubt top-10% class rank doesn’t work anymore with so few reporting. But for accuracy, ACT 29-33 is more like new SAT 1300-1460. I’d be surprised if anyone (or maybe more than a few) looked at the USNWR and said that’s it, it’s one of the other schools listed above instead of Michigan.

Actually turtle17 and GoBlue81, there are several sources that convert the ACT to the new SAT. Below are the two extremes I have seen for a 29-33 range:

Worst case scenario:
1290-1440

Best case scenario:
1380-1510

However, I would say reality is somewhere in between. Probably more like 1340-1480.

My bad. I took my numbers from College Board’s Concordance Table. I suppose they may be biased.
https://collegereadiness.collegeboard.org/pdf/higher-ed-brief-sat-concordance.pdf
Table 15 (p.15), last page.

Still, I’d like to know the formula US News used to come up with their Student Selectivity Rank. Apparently a rather significant difference in SAT/ACT scores is not sufficient to offset a 4% difference in Top 10% Class Rank … as in the case of Michigan vs. Wake Forest. Or in the case of Michigan vs. UC-San Diego.

I doubt the accuracy of the SAT scores they used, as little admissions data has been published with new SAT scores and only a small fraction of students admitted for 2016 submitted the new score. Any conversion data are crap, including the so-called concordance table. If anything, use the percentile charts published recently from the students who actually took the new SAT for comparison with ACT scores.

I agree with that. I’d still like to return to my point: I have a hard time imagining what potential applicant would choose to attend BC, UCSD, or Wake Forest instead of UM because of USNWR. The two privates offer a different experience, and the public is in a different state and is also different in all sorts of ways that impact the undergrad experience.

@billcsho I agree. At this point, percentile is a better way to compare. Michigan’s 29-33 ACT converts to 90.62%-98.08%, compared to Wake’s 85.91%-97.61%. But how do you ‘quantify’ that for comparison? If you take the average, Michigan at 94.34% trumps Wake at 91.76%. A 2.6-point difference in average percentile surely beats a 4-point difference in top-10% class rank, given the weight factors of 65% vs. 25%. Yet Wake is ranked 33rd compared to Michigan at 37th. Go figure…
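A quick check of that averaging, using only the percentile endpoints quoted above (averaging the two endpoints is a crude summary, but it’s what the post does; the post’s 94.34 vs. 94.35 here is just rounding):

```python
# Mid-50% ACT/SAT percentile endpoints quoted in the post above.
mich = (90.62 + 98.08) / 2   # ~94.35
wake = (85.91 + 97.61) / 2   # ~91.76

print(f"Michigan avg percentile: {mich:.2f}")
print(f"Wake Forest avg percentile: {wake:.2f}")
print(f"Difference: {mich - wake:.2f} points")
```

Weighted per the post (2.59 points × 0.65 ≈ 1.7 vs. 4 points × 0.25 = 1.0), Michigan should indeed come out ahead of Wake on these two components.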

If we take the average percentile, Michigan at 94.34% is ahead of BC’s 92.95%, Wake’s 91.76%, and UCSD’s 85.72%. I’d really like to know the formula US News’ technicians used to calculate the Selectivity rank … or any other rank.

One of the biggest fallacies is that US News’ technicians are trying to use a one-size-fits-all formula for the top 50 universities and the bottom 50 universities alike. Similarly for privates vs. publics, and for large universities with a broad range of disciplines vs. small colleges. Factors that are important for lower-tier universities are not necessarily important for the top tier … like % of professors with the highest degrees.

Basically it means one school put more weight on test scores than the other. Note that the low end of the mid-50% range may be skewed by recruited athletes or other factors, which are more prominent at a small school.

I agree, GoBlue81. That’s part of the problem. The methodology is very flawed, but data integrity and consistency is another part of the problem. If US News at least ensured that the data were accurately and consistently reported and fed into the formula, you would already have a much different result. But in the end, the methodology itself is very flawed because, as you point out, there is no way one methodology can serve both a large public university in a small Midwestern or Southern town and a tiny urban private at the same time. How is it possible to use one methodology to compare Caltech and Penn State, or MIT and Michigan, or Rice and Cornell, etc.?