<p>Several thoughts:</p>
<ol>
<li>These rankings are kind of silly in that they compare schools that are inherently incomparable in size, funding, and mission. Does it really make sense to compare much smaller schools like Brown and Northwestern to much larger ones like Cornell, Berkeley, and Michigan?</li>
<li>There is, or was, a degree of spurious precision: institutions separated only in the n-th decimal place were treated as different when they are essentially at the same ranking level.</li>
<li>UM’s ranking in the mid-1980s was in or near the top ten (around 1985), then dropped 15+ places the following year due to a change in methodology that I have never seen adequately explained.</li>
<li>The current rankings contain ties (removing some of the aforementioned spurious precision), but ties do not appear to cascade, so we see MIT ranked 7th after rankings of 1, 2, 3, 4, 4, 4 for the “better” schools. In my cosmology, the other schools would keep those ranks (i.e., they are roughly comparable), and MIT would be ranked 5th.</li>
</ol>
<p>Cascading ties seems quite reasonable to me: if the distinctions within a cohort cannot be determined, the whole cohort should take the same rank and the next school should pick up the next integer. Without that process, Michigan’s ranking is 29th; with it, given the roughly ten ties further up the list (I didn’t count exactly), Michigan would rank 19th. Not as high as we might like, but perhaps a bit closer to reality. That is the case for Michigan, but I wonder how large the impact is for schools further down the list (say, below #150). And what does such an ordinal ranking tell us about departments, or about things like citation strength? UM’s departments are typically ranked in the top 10 nationally, and even those nominally outside the top 10 frequently land around 7th once ties are cascaded (versus around 13th before). What is the impact of not cascading ties at that level of granularity on the overall ranking of each school, not just UM?</p>
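<p>For concreteness, here is a minimal sketch in Python (with made-up scores rather than the published data; the function names are mine) contrasting the two schemes: the published behavior, where tied schools share a rank but the next school skips ahead, versus the cascading behavior proposed above, where the next school simply takes the next integer.</p>
<pre><code># Hypothetical overall scores; the three 92s stand in for the three-way tie
# ahead of MIT described above.
scores = [100, 98, 95, 92, 92, 92, 90]

def competition_rank(scores):
    """Ties share a rank; the next distinct score skips ahead (1, 2, 3, 4, 4, 4, 7)."""
    ordered = sorted(scores, reverse=True)
    return [ordered.index(s) + 1 for s in ordered]

def dense_rank(scores):
    """Ties share a rank and cascade; the next distinct score takes the next integer (1, 2, 3, 4, 4, 4, 5)."""
    ordered = sorted(scores, reverse=True)
    distinct = sorted(set(scores), reverse=True)
    return [distinct.index(s) + 1 for s in ordered]

print(competition_rank(scores))  # [1, 2, 3, 4, 4, 4, 7] -- the seventh school lands at 7
print(dense_rank(scores))        # [1, 2, 3, 4, 4, 4, 5] -- the seventh school lands at 5
</code></pre>
<p>The same logic drives the Michigan figure: each school that ties with one above it shaves one place off every cascaded rank below, so roughly ten such ties above #29 bring it to about #19.</p>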
<p>I don’t have the figures in hand, but I see UVA typically ranked above UM, yet I’m fairly certain that UM has a far larger research budget, greater citation strength, more faculty awards, a larger library, and students who are just as well qualified and who succeed to the same degree as UVA graduates. Many of these measures can’t be meaningfully “normalized” by size: UM has a larger library, for example, but is it meaningful to divide that by student headcount? So what, precisely, is embedded in the methodology that leads to the disparity in rankings?</p>
<p>So, rankings make for controversy and are fun to debate, but it is not clear to me how one can meaningfully use such a synoptic measure. I think the proliferation of alternative rankings is a clear play for economic advantage by the publishing entities, but I would argue that measures of outputs (as some alternative rankings use) should be counted rather than inputs alone, and further that measures such as net ROI and social utility can and should play a role in constructing any new rankings. Beyond the fun factor, my take is that this dominant ranking service does more harm than good and lends credence, among those who follow the rankings, to a notion of some sort of magical determinism.</p>