<p>I took apart a couple of different rankings and looked at what they describe as their methodologies. My conclusion was that they really only group schools roughly: a top group, a middle group, a next group. It wasn’t possible to estimate a meaningful error for each component, and thus an overall error for the actual rankings, but some of the underlying data I found showed very small differences between schools.</p>
<p>To explain: some of the components translate to simple scales, mostly 5-point, which means you get a bunch of schools with the same score on a component and, if there’s any error at all, an even larger bunch that could have the same score. Even taking the numbers as given, the differences are small. Some of the data I found had a huge number of schools in a small band - which is to be expected given how many colleges there are in the US.</p>
<p>I assume people who know statistics work on this stuff - some clearly do, as in a diversity index that converts reported data onto a 0 to 1 scale. But I see no discussion of standard deviations or of running multiple trials.</p>
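<p>As a sketch of what such an index might look like: a Gini-Simpson-style index is one common way to map reported group counts onto a 0-to-1 scale. The formula choice and the enrollment numbers below are my assumptions, not any ranking’s actual method.</p>

```python
# Hypothetical sketch: the Gini-Simpson index, 1 - sum(p_i^2),
# is the probability that two randomly chosen students belong
# to different groups. The actual rankings may use a different
# formula; this just shows how raw counts become a 0-to-1 score.

def diversity_index(counts):
    """Gini-Simpson index over a list of group counts."""
    total = sum(counts)
    if total == 0:
        return 0.0
    shares = [c / total for c in counts]
    return 1.0 - sum(p * p for p in shares)

# A school reporting enrollment by group (made-up numbers):
print(round(diversity_index([500, 300, 150, 50]), 3))  # 0.635
```

<p>Note how coarse the output is in practice: many schools with broadly similar mixes land within a few hundredths of each other, which is exactly the small-band problem described above.</p>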
<p>People have a need to list things in order. Maybe it’s as basic as Genesis, where God brings the animals to Adam one by one so he can name them. </p>
<p>Ever look at how they weight the metrics? How sensitive are the results to those weights? Who do the weights favor, and who do they penalize? What do the rankings look like if you exclude one metric, then another, then another? These are basic things to do with data. </p>
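<p>The leave-one-out check is easy to sketch. Everything below - school names, metric names, scores, weights - is invented for illustration; the point is only that dropping a single metric can reorder the list.</p>

```python
# Hypothetical sketch: re-rank schools after dropping each metric
# in turn, to see how sensitive the order is to the weighting.

def rank(scores, weights):
    """Return school names ordered by weighted score, best first."""
    totals = {
        school: sum(weights[m] * vals[m] for m in weights)
        for school, vals in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

scores = {
    "A": {"grad_rate": 0.9, "selectivity": 0.4, "reputation": 0.8},
    "B": {"grad_rate": 0.7, "selectivity": 0.9, "reputation": 0.6},
    "C": {"grad_rate": 0.8, "selectivity": 0.7, "reputation": 0.7},
}
weights = {"grad_rate": 0.4, "selectivity": 0.3, "reputation": 0.3}

print("all metrics:", rank(scores, weights))
for dropped in weights:
    reduced = {m: w for m, w in weights.items() if m != dropped}
    print(f"without {dropped}:", rank(scores, reduced))
```

<p>With these made-up numbers the full ranking is C, B, A, but dropping selectivity alone flips it to A, C, B - the kind of sensitivity result the published methodologies never show.</p>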
<p>No. They list a basic methodology and weighting, then a list of results. No 10,000 trials. No confidence intervals - which I’d love to see for each metric and for the overall score. </p>
<p>One could easily make the case, as noted above, that number 63 is actually number 38 in a different simulation. (And I’m not picking an outlier, the way that if you simulate a baseball game enough times you’ll eventually get not only a perfect game but all strikes.) </p>
<p>I always say to find a place you like that has a program you like, and go from there. But kids argue over number 42 versus number 47 and the “prestige” difference.</p>
<p>Regarding prestige, which is the real focus of this thread: there is a prestige advantage, but the data says it’s really you, not the school, that makes you successful. I’ve posted about this before, but the data says that kids who get into “prestige” schools and go to “lesser” schools make as much money. And, interestingly, the large studies of earnings show relatively small differences across a wide variety of schools - with the main factor being location; you make more in NJ or CA than in MS or AR (and that’s not a cost-of-living effect). </p>
<p>The story is different at the grad school level. A good study - I think - looked at the earnings of academics and found that more prestigious grad programs led to better jobs, and that the advantage lasted almost 10 years. The effect shrank as the prestige gap narrowed, which makes the finding less dramatic but still interesting.</p>
<p>As to prestige, it is also local. A degree from the state school generally means more in that state because the alumni network matters more there. The idea of a “prestige” school is that its prestige extends across borders. That’s true for name recognition - certainly everyone has heard of Yale - but name recognition doesn’t automatically give you the pull of a local alumni network.</p>
<p>(I also wanted to note that some rankings are really weird, as in a program ranking compiled from surveys sent to deans and 2 senior faculty. I still can’t figure out how that’s useful except by almost accidental correlation.)</p>