USNWR Rankings - The Metrics

<p>Many of the USNWR criteria are mutually reinforcing (redundant). Together they tend to emphasize exclusivity, wealth, prestige and reputation. Not that these have no relationship to quality. But to really shake up the results you have to go to a very different set of criteria as in Forbes, Kiplinger, Washington Monthly, or the various rankings based on publication and citation volume. These rankings tend to emphasize outcomes and value. I like Washington Monthly because their site allows you to click-sort on different columns to generate rankings that better reflect your own priorities.</p>

<p>But CC’ers will howl when they see a ranking that places the likes of Centre College in the top 20. They will dismiss an excellent in-state public university in favor of a private school costing more than 2x as much, with two magic words (“iBank recruiting”) and no numbers to demonstrate the superior value of the costlier school.</p>

<p>Bc,
I’m not sure how appropriate it is to extrapolate the admissions practices of law schools to judge what goes on with undergraduate admissions. My sense is that there are some significant differences (or at least I hope so!). </p>

<p>In addition, IMO comparing the practices of what goes on with an entering class of 214 (such as at Yale Law) and an entering class of 1510 (such as at Wash U undergrad) is also ill-advised. Even if the practices you allege were ongoing in undergraduate admissions, the larger size almost certainly means a significant diminution of the scoring impact. </p>

<p>Alex,
Re your statement in # 35 about the inability to measure the quality of classroom teaching across a university, I’m not sure that I can agree. There are some publicly-available sources that evaluate the quality of teaching. These sources consider more than 300 opinions of those with direct exposure, i.e., the students at an individual institution. They are the ones paying the bills and have a vested interest in getting value for their dollars. They care about the product delivered in the classroom and their comments can often be enormously insightful.</p>

<p>By contrast, the PA scoring is done by academic administrators dispersed across the entirety of the USA. Their familiarity is frequently very low with regard to institutions that they “grade” and their criteria in assigning their “grades” are non-standard and arbitrary. Also, as we have seen in multiple public disclosures of the ballots, their “grades” can be a total fraud.</p>

<p>Student reviews can be useful. But one problem there is normalizing the ratings of students across very different institutions. Another is ensuring you get a representative response. Read some of the comments on St<em>d</em>nt R<em>vi</em>ws.com. You get clusters of extreme negative comments from a few disgruntled students interspersed with unqualified praise. </p>

<p>Peer Assessments of graduate departments should be more reliable than the USNWR undergraduate PAs, in my opinion. The problem with the undergraduate PAs is exactly as hawkette describes. The graduate PAs (US News, NRC) at least are provided - I assume - by professors in the same fields they are rating. Presumably they are familiar with the work of counterparts at other schools. How in the world an undergraduate President, Dean of Admissions, or Provost can make a valid assessment of scores of other colleges is beyond me.</p>

<p>If one were to take out the PA, reweight the other factors to be out of 100% and recalculate, what does that list look like compared to current rankings?</p>

<p>This has been done before here on CC. </p>

<p><a href="http://talk.collegeconfidential.com/college-search-selection/851132-peer-reputation-skews-rankings-ok-here-usnwr-rankings-w-peer-assessment-removed.html">http://talk.collegeconfidential.com/college-search-selection/851132-peer-reputation-skews-rankings-ok-here-usnwr-rankings-w-peer-assessment-removed.html</a></p>

<p>Hawkette, of course in all surveys there will be a few questionable ballot submittals. But it is an opinion survey and there is no right or wrong answer. </p>

<p>However, I believe the aggregate survey results of ~2,000 submissions accurately reflect what is being asked: what colleges offer the most distinguished academic programs?</p>

<p>Besides, USNWR discards the handful of outlier ballots before computing the final average.</p>

<p>Pizzagirl-
re post #64
You could take out the PA and re-weight the other factors, and the rankings would stay almost exactly the same, though the exact result would depend on the weights chosen.</p>

<p>Thanks, rjfofnovi. So, essentially, nothing fundamentally changes.</p>

<p>
[quote] The PA scores from a single individual might be askew but the PA scores reported by US News represent the average collective professional judgements of many individuals.
PA scores seem to capture just about everything that is important about a college.
[/quote]
</p>

<p>ROFL. Yes, I’m supposed to believe that people have objective means of assessing a multitude of other colleges across the country. Like the head of Bryn Mawr knows what’s going on at UC Davis or Drake or Butler or SMU and vice versa. What naivete.</p>

<p>“Yes, I’m supposed to believe that people have objective means of assessing a multitude of other colleges across the country. Like the head of Bryn Mawr knows what’s going on at UC Davis or Drake or Butler or SMU and vice versa. What naivete.”</p>

<p>Only the head of Bryn Mawr is not asked to rate UC Davis. She would be asked to rate schools that are peers to her own. Schools like Smith, Mount Holyoke, Wellesley, Barnard, Haverford, Swarthmore, Lafayette, Bucknell, Franklin and Marshall, Gettysburg College, Dickinson College, Connecticut College, Wesleyan, Colgate etc… The survey clearly instructs her to leave out any college she is not familiar with and evaluate only colleges that she has been exposed to.</p>

<p>^ What does “familiar with” mean? What does “exposed to” mean? I bet what half of them do is look up the other schools in last year’s US News ranking.</p>


<p>OK, so our Bryn Mawr person is going to be aware of what’s going on at Smith, Mt H, W, Barnard, Haverford and Swarthmore. I’ll buy that. Four of them are her Seven Sisters buddies and the other two are part of her Bi-Co / Tri-Co. Fine. </p>

<p>How is she possibly going to know what’s going on at these other places? Come off it. I don’t buy it for a minute that there is sufficient REAL knowledge to make a judgment. There will be perceptions and relationships (“Hey, I know so-and-so at Colgate, I’m sure he runs a good program”). But true knowledge? Nope.</p>

<p>pizzagirl, the “other” places that you are referring to are: “Lafayette, Bucknell, Franklin and Marshall, Gettysburg College, Dickinson College, Connecticut College, Wesleyan, Colgate”</p>

<p>So are you saying that the President of Bryn Mawr, together with the faculty of Bryn Mawr, many of whom received degrees from some of these colleges and are in constant contact with colleagues from these other colleges, would not know much about these colleges?</p>

<p>So tell us, who else would know enough about these other colleges to give USNWR an informed opinion and quality ranking?</p>


<p>do y’all really think that academics don’t move around? Speaking of the Prez of BM, she also served as a dean at Emory and at Georgetown…the academic dean at BM obtained her PhD from UVa…</p>

<p>what is “BM”?</p>

<p>Yes, of course academics move around. Which means they’ll have their impressions formed by when they were someplace which may or may not accurately reflect today.</p>

<p>Probably no one person, johnadams. It’s possible someone might know about their colleagues in the same department, but I don’t buy that these people can accurately and fairly evaluate all these other schools. So the brand name serves as the signifier, which is self-fulfilling. </p>

<p>You have “opinions” on most top 20 schools, right? Have you attended them? Spent any significant time on their campuses beyond college visits and maybe visiting friends? I bet your opinions of these other colleges are based mostly on general impressions, not objective knowledge.</p>

<p>Pizzagirl, then are you suggesting that the Peer Assessment 25% part of the USNWR rankings be completely eliminated?</p>

<p>have you reviewed how little effect the PA has on overall rankings?</p>

<p>this, of course, is because of the “tightness” in the Peer Assessment rankings. The following are the points (out of 5.0) separating every ten positions on the PA rankings:</p>

<p>#1 compared to #10 = 0.4 (out of 5.0)
#10 to #20 = 0.3
#20 to #30 = 0.3
#30 to #40 = 0.3
#40 to #50 = 0.1
#50 to #60 = 0.1
#60 to #70 = 0.3</p>

<p>so, for instance, #40 is separated from #60 by only 0.2 points: that is, 20 spots on the PA rankings separated by only 0.2 points out of 5.0. That is 4.0%, and since the PA represents 25% of the overall USNWR ranking, these 20 spots of PA translate to only a 1.0% difference in the overall ranking.</p>

<p>In essence, the “tightness” of these PA rankings almost neutralizes their effect on the overall USNWR rankings.</p>
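<p>The arithmetic behind this “tightness” claim can be sketched in a few lines. This is only an illustration of the percentages quoted above; it assumes, as stated in the thread, that PA is graded on a 5.0 scale and carries 25% of the overall USNWR weight:</p>

```python
# Sketch of the "tightness" argument: how much of the overall USNWR
# score does a given PA-score gap represent? Assumes PA is scored
# out of 5.0 and weighted at 25% of the overall ranking, per the thread.

PA_SCALE = 5.0
PA_WEIGHT = 0.25  # PA's assumed share of the overall ranking

def overall_impact(pa_gap):
    """Fraction of the overall score explained by a PA-score gap."""
    return (pa_gap / PA_SCALE) * PA_WEIGHT

# Example from the post: PA #40 and PA #60 differ by only 0.2 points.
print(f"{overall_impact(0.2):.1%}")  # 0.2/5.0 = 4% of PA, times 25% = 1.0% overall
```

<p>Running this reproduces the 1.0% figure in the post above.</p>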

<p>here are the latest PA rankings used for the table above:</p>

<p>4.9 , Harvard
4.9 , Princeton
4.9 , MIT
4.9 , Stanford
4.8 , Yale
4.7 , UC BERKELEY
4.6 , Caltech
4.6 , Columbia
4.6 , U Chicago
4.5 , U Penn
4.5 , Johns Hopkins
4.5 , Cornell
4.4 , Duke
4.4 , Brown
4.4 , U MICHIGAN
4.3 , Dartmouth
4.3 , Northwestern
4.3 , U VIRGINIA
4.2 , Carnegie Mellon
4.2 , UCLA
4.1 , U N CAROLINA
4.1 , Wash U
4.1 , U WISCONSIN
4.0 , Emory
4.0 , GEORGIA TECH
4.0 , Rice
4.0 , Vanderbilt
4.0 , Georgetown
4.0 , U ILLINOIS
3.9 , USC
3.9 , U TEXAS
3.8 , Notre Dame
3.8 , NYU
3.8 , WILLIAM & MARY
3.8 , UC SAN DIEGO
3.8 , UC DAVIS
3.8 , U WASHINGTON
3.8 , PENN STATE
3.7 , PURDUE
3.6 , Tufts
3.6 , UC IRVINE
3.6 , U FLORIDA
3.6 , OHIO STATE
3.6 , U MARYLAND
3.6 , U MINNESOTA
3.6 , INDIANA U
3.5 , Wake Forest
3.5 , Brandeis
3.5 , Boston College
3.5 , Case Western
3.5 , Rensselaer
3.5 , UC S BARBARA
3.5 , TEXAS A&M
3.5 , U IOWA
3.4 , U Rochester
3.4 , George Washington
3.4 , Boston University
3.4 , U PITTSBURGH
3.4 , U GEORGIA
3.4 , MICHIGAN ST
3.3 , Tulane
3.3 , Syracuse
3.3 , RUTGERS
3.3 , VIRGINIA TECH
3.2 , Lehigh
3.2 , U CONNECTICUT
3.1 , U Miami
3.1 , Pepperdine
3.1 , Fordham
3.1 , CLEMSON
3.1 , U DELAWARE
3.1 , UC S CRUZ
2.9 , SMU
2.9 , BYU
2.8 , Yeshiva
2.7 , Worcester</p>

<p>It’s fair to assume the assessors know something about the colleges they assess. What kind of knowledge do they have? I don’t believe that the average PA responder is likely to have knowledge about other colleges, department by department, similar to what an economics professor at Stanford might have about graduate-level economics research at Princeton. Do the presidents, provosts, and admissions deans who submit peer assessments have deep knowledge about how peer schools are administered? Maybe. About the quality of a typical required freshman course? I doubt it (not beyond a couple of schools.)</p>

<p>If the PA scores are predictable from objective measurements, what value do they add? Their biggest impact seems to be that they raise the rankings of a few public universities. Berkeley, with a PA score of 4.7 (6th), is an example. If Berkeley’s high PA score is more nearly the truth than what all the objective indicators predict (a 20-something ranking), then USNWR should improve the model; then the PA would be unnecessary. Unless you believe the PAs always reflect some deep truth that a data-driven model cannot possibly capture.</p>

<p>JohnAdams - You may be right that this “tightness” means the PA has little impact, but one really cannot say that just because the scale is only a 5-point scale and the numbers therefore tend to vary by a few tenths. The real test of your assertion is to perform a sensitivity analysis. For example, let’s assume we have some omnipotent absolute knowledge and that, for whatever reason, Wake Forest is being underrated at 3.5 and really should be 3.7. How much does that change its ranking? Do small changes make a fairly big difference? I don’t know the answer, although it probably is rather easy to calculate. I just don’t have time to track down all the particulars. It is rather like batting averages in baseball: even late in the season, one or two hits can move a player several slots in the batting-average “rankings,” because the average is measured to 3 decimal places or further if need be. There is a fair amount of sensitivity to small changes.</p>

<p>Perhaps you can perform this calculation and see what happens.</p>
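<p>The sensitivity check proposed above could be run roughly like this. To be clear, this is a toy model: the actual USNWR formula and the schools’ non-PA component scores are not public in this thread, so the non-PA composite values below are invented placeholders, and only the Wake Forest PA bump (3.5 to 3.7) comes from the post:</p>

```python
# Toy sensitivity analysis: bump one school's PA score and see how far
# it moves in a weighted ranking. The non-PA composite values are
# invented placeholders, NOT real USNWR data.

PA_WEIGHT = 0.25

schools = {
    # name: (PA score out of 5.0, hypothetical non-PA composite out of 100)
    "Wake Forest": (3.5, 75.0),
    "Brandeis":    (3.5, 74.0),
    "Tufts":       (3.6, 75.0),
    "U Rochester": (3.4, 73.0),
}

def overall(pa, rest):
    # Rescale PA to 0-100, then apply the assumed 25%/75% split.
    return PA_WEIGHT * (pa / 5.0) * 100 + (1 - PA_WEIGHT) * rest

def ranking(data):
    # Best overall score first.
    return sorted(data, key=lambda s: overall(*data[s]), reverse=True)

before = ranking(schools)

# Hypothetical correction from the post: Wake Forest's PA 3.5 -> 3.7.
bumped = dict(schools)
bumped["Wake Forest"] = (3.7, schools["Wake Forest"][1])
after = ranking(bumped)

print(before.index("Wake Forest"), "->", after.index("Wake Forest"))  # 1 -> 0
```

<p>In this particular toy setup, a 0.2-point PA bump moves Wake Forest up one slot, but the result depends entirely on how bunched the non-PA composites are, which is exactly the point of doing the sensitivity analysis on the real data.</p>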