Graduate school physics ranking of a few schools

<p>What is the USNWR grad dept. ranking in physics for the following schools? I misplaced the 2011 USNWR guide I recently bought. Thanks.</p>

<p>u of chicago</p>

<p>uw - madison</p>

<p>tufts</p>

<p>carnegie mellon</p>

<p>^ You can get that for free online at the US News website. </p>

<p>But in the event anyone else is interested, they rank as follows:</p>

<p>Chicago #7 (tied with Cornell)<br>
Wisconsin #17 (tied with Penn)<br>
Carnegie Mellon #30 (tied with Brown, Duke)<br>
Tufts #77 (tied with Boston College, Colorado School of Mines, LSU, Oregon State, U Delaware, Georgia, and Washington State)</p>

<p>You didn’t ask, but here are their NRC rankings (R 5th-95th%, S 5th-95th%)</p>

<p>Chicago 4-14, 4-22<br>
Wisconsin 13-32, 21-69<br>
Tufts 73-130, 56-124<br>
CMU 27-54, 10-47</p>

<p>Thanks, all. On the NRC rankings, what do R and S mean? Also, what do the nn-nnn ranges mean?</p>

<p>RANKINGS METHODOLOGY IN BRIEF </p>

<p>• Rankings are given in ranges to reflect the inherent uncertainty associated with establishing ordered quality rankings of graduate programs.<br>
• The study committee (see roster below) identified characteristics that, when appropriately weighted for their relative importance in contributing to a high-quality program, would serve as a basis for ranking programs.<br>
• The study offers ranges of rankings for overall program quality that derive from two methods: survey-based (S Rankings) and regression-based (R Rankings).<br>
• S Rankings (for survey-based rankings) are based on how faculty weighted—or assigned importance to—20 characteristics that the study committee determined to be factors contributing to program quality. The weights of characteristics vary by field based on faculty survey responses in each of those fields. Programs in a field rank higher if they demonstrate strength in the characteristics carrying greater weights.<br>
• R Rankings (for regression-based rankings) depend on the weights calculated from faculty ratings of a sample of programs in their field. These ratings were related, through a multiple regression and principal components analysis, to the 20 characteristics that the committee had determined to be factors of program quality. The resulting weights were then applied to data corresponding to those characteristics for each of the programs in the field.<br>
• Programs are also ranked on three “dimensional measures” of program quality—on faculty research activity, on student support and outcomes, and on faculty and student diversity. These rankings are based on specific subsets of characteristics relating to each of the dimensional measures, with the weights of the characteristics normalized (i.e., re-calculated to add to one).<br>
• For every program variable, two random values are generated—one for the data value and one for the weight. The product of these, summed across the 20 variables, is used to calculate a rating, which is compared with other programs’ ratings to produce a ranking. The uncertainty in program rankings is quantified, in part, by calculating the S Ranking and R Ranking of a given program 500 times, each time with a different, randomly selected half-sample of respondents. The resulting 500 rankings are numerically ordered and the lowest and highest five percent are excluded; the 5th and 95th percentile rankings in the ordered list of 500 define the range of rankings shown in the table.</p>
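<p>The resampling procedure above can be sketched roughly in code. This is a simplified illustration, not the NRC’s actual implementation: all names and data shapes are invented, and it randomizes only the respondent half-sample (the real study also perturbs data values and weights on each trial).</p>

```python
import random

def nrc_ranking_range(program_data, faculty_weights, trials=500, seed=0):
    """Estimate a 5th-95th percentile ranking range for each program.

    program_data:    {program_name: [20 characteristic values]}  (illustrative)
    faculty_weights: list of 20-element weight vectors, one per respondent
    """
    rng = random.Random(seed)
    names = list(program_data)
    rank_samples = {n: [] for n in names}

    for _ in range(trials):
        # Draw a random half-sample of respondents and average their weights.
        half = rng.sample(faculty_weights, len(faculty_weights) // 2)
        w = [sum(resp[i] for resp in half) / len(half) for i in range(20)]

        # Rating = weighted sum of the 20 program characteristics.
        ratings = {n: sum(wi * xi for wi, xi in zip(w, program_data[n]))
                   for n in names}

        # Higher rating -> better (numerically lower) rank.
        ordered = sorted(names, key=lambda n: -ratings[n])
        for rank, n in enumerate(ordered, start=1):
            rank_samples[n].append(rank)

    # Trim the lowest and highest 5% of each program's trial rankings.
    ranges = {}
    for n in names:
        s = sorted(rank_samples[n])
        lo = s[int(0.05 * trials)]
        hi = s[int(0.95 * trials) - 1]
        ranges[n] = (lo, hi)
    return ranges
```

<p>A program whose rating depends heavily on which half of respondents was drawn ends up with a wide range (like Tufts’ 73-130), while a program that wins under almost any weighting gets a narrow one (like Chicago’s 4-14).</p>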

<p>^ These can be converted to ordinal rankings if you have the patience for it; I don’t. A school with rankings like Chicago’s (4-14, 4-22) is probably somewhere around the 5th highest-ranked program in that field, give or take a couple; you just need to compare those scores to other highly ranked programs in the field.</p>
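<p>If you do want a rough ordinal ordering, one crude heuristic (my own back-of-the-envelope shortcut, not part of the NRC methodology) is to sort programs by the average midpoint of their R and S ranges, using the four schools’ numbers from above:</p>

```python
# Crude ordinal ordering from NRC (R, S) ranking ranges: sort by the mean
# of the two range midpoints. A rough heuristic, not an NRC statistic.
programs = {
    "Chicago":   ((4, 14),   (4, 22)),
    "Wisconsin": ((13, 32),  (21, 69)),
    "CMU":       ((27, 54),  (10, 47)),
    "Tufts":     ((73, 130), (56, 124)),
}

def midpoint_score(ranges):
    # Average the midpoints of the R and S ranges.
    return sum((lo + hi) / 2 for lo, hi in ranges) / len(ranges)

ordered = sorted(programs, key=lambda p: midpoint_score(programs[p]))
# Chicago comes out first and Tufts last, matching the eyeball reading above.
```

<p>This throws away the uncertainty information that the ranges exist to convey, so treat the result as a conversation-level ordering, not a ranking.</p>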