USNews 2011 - New Methodology and Stanford's strange result

<p>“Clinton, are the GC scores really MORE ridiculous than the PA?”</p>

<p>Here is your answer, Xiggi:</p>

<p>“Indiana University-Purdue University-Indianapolis (IUPUI) is on the same plane with Tulane and Wisconsin”</p>

<p>That these counselors obviously have no clue about the location of IU or PU, which I must assume they were ranking as one school, tells me all I need to know. IUPUI is not even in the same league as the other two aforementioned schools. Once again, our nation’s educators fail at simple geography! There must have been many of these so-called “guidance counselors” who were confused about which school they were rating.</p>


<p>I’ve always maintained the PA score at the university level is mainly a reflection of the scholarly reputation of the faculty: which schools have the most impressive rosters of scholars in the most academic disciplines. Among true research universities (let’s say, the R1s) this is not only meaningful but it’s something every university president and provost MUST know about their own school’s peer institutions and everyone above it in the pecking order, as well as everyone who might be gaining on it. I also maintain it’s something valuable for undergraduates to know about a school, because it reflects the strength, depth, and breadth of faculty intellectual resources available to them. It’s not everything, but it is one crucial dimension of a research university’s strength that is reflected nowhere else in the US News rankings.</p>

<p>I’ve always been less clear about how to interpret PA at the LAC level, where as a rule there’s much less emphasis on faculty scholarship (though there certainly are some very strong scholars here and there), and consequently various schools’ scholarly output and reputations might be harder for college officials to track. But assuming PA at the LAC level does reflect more or less the same kinds of factors that go into it at the research university level, I have no problem saying Haverford’s PA score (4.0) is probably in the right ballpark relative to Smith’s (4.3) and some other schools. Smith’s faculty is its greatest strength, and it has a darned good faculty; maybe on balance a little better than Haverford’s, not least because it’s bigger and has greater breadth. Haverford has other strengths that more than compensate, which is why it always outranks Smith in US News. Haverford’s PA score nonetheless puts it in some pretty good company, same as Claremont McKenna and within hailing distance of schools like Wesleyan, Pomona, and Harvey Mudd.</p>

<p>I’m not sure what the GC score represents at this level, however, apart from sheer name recognition. That would explain why you’d get, say, a Scripps scoring at exactly the same level as Pomona and Harvey Mudd (4.5), with CMC and Pitzer only a smidge behind (4.4). GCs who can identify all five Claremont Colleges are likely to regard them all favorably and not make fine distinctions among them, while college presidents and provosts would rate Pomona and Harvey Mudd at the top, CMC a small step back, and Scripps and Pitzer a distinct notch or two lower. I like Scripps and encouraged my D to look at it, but it clearly doesn’t belong in quite the same company as Amherst, Bowdoin, Pomona, Vassar, and Harvey Mudd (all 4.5). I also like Earlham and I encouraged my D to consider it as a safety, but it’s really not the same caliber as Haverford (both rated 4.1 by the GCs); nor, for that matter, are Skidmore, Lawrence University, or St. Olaf, all very good LACs about which I would not speak ill except to say that the GCs either overrated them or underrated Haverford when they placed them all at the same level.</p>

<p>And I have no idea what the GCs were thinking when they rated Judson College in Marion, AL (PA 2.0, admit rate and SAT/ACT scores n/a, average freshman retention rate 54.0%, 6-year grad rate 46.3%) as the equal of Bucknell and Macalester (4.3) and ahead of Carleton, Colgate, Grinnell, Mt. Holyoke, Oberlin, Reed, Rhodes, Sarah Lawrence, Bryn Mawr, and Haverford. Similar anomalies abound. No, this is no improvement to PA, nor is it a necessary corrective. It’s just bizarre, and further undermines the credibility of the US News ranking.</p>

<p>US News top 30 LAC PA scores / GC scores</p>

<p>Williams 4.7 / 4.6
Amherst 4.7 / 4.5
Swarthmore 4.6 / 4.6
Middlebury 4.3 / 4.4
Wellesley 4.5 / 4.6
Bowdoin 4.3 / 4.5
Pomona 4.2 / 4.5
Carleton 4.3 / 4.2
Davidson 4.1 / 4.4
Haverford 4.0 / 4.1
Claremont McKenna 4.0 / 4.4
Vassar 4.2 / 4.5
Wesleyan 4.1 / 4.5
Smith 4.3 / 4.5
Washington & Lee 3.8 / 4.1
US Military Academy 4.1 / 4.6
US Naval Academy 4.1 / 4.8
Grinnell 4.3 / 4.2
Hamilton 3.7 / 4.0
Harvey Mudd 4.1 / 4.5
Bates 4.1 / 4.4
Colgate 4.1 / 4.2
Colby 4.0 / 4.3
Oberlin 4.1 / 4.2
Scripps 3.6 / 4.5
Barnard 3.9 / 4.4
Colorado College 3.7 / 4.1
Macalester 3.9 / 4.3
Mt. Holyoke 4.1 / 4.2
Bryn Mawr 4.1 / 4.1
Bucknell 3.8 / 4.3</p>

<p>Schools getting the biggest boost from the new “reputational” methodology (GC score > PA score): Scripps + 0.9; US Naval Academy + 0.7; Barnard + 0.5; Bucknell + 0.5; US Military Academy + 0.5; Claremont McKenna + 0.4; Colorado College + 0.4; Harvey Mudd + 0.4; Macalester + 0.4; Wesleyan + 0.4; Bates + 0.3; Colby + 0.3; Davidson + 0.3; Hamilton + 0.3; Pomona + 0.3; Vassar + 0.3; Washington & Lee + 0.3; Bowdoin + 0.2; Smith + 0.2.</p>

<p>Schools taking the biggest hit: Amherst – 0.2; Carleton – 0.1; Grinnell – 0.1; Williams – 0.1; and all schools whose GC score matched their PA score or went up only slightly, because so many other schools made larger gains and therefore improved their relative position.</p>
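<p>For anyone who wants to check or extend these numbers, here is a minimal sketch (in Python, since nothing here depends on the language) of how the GC-minus-PA deltas fall out of the table above. The scores are transcribed straight from the list; the variable names and print format are just illustrative choices of mine.</p>

```python
# LAC scores transcribed from the table above: school -> (PA, GC).
lac_scores = {
    "Williams": (4.7, 4.6),          "Amherst": (4.7, 4.5),
    "Swarthmore": (4.6, 4.6),        "Middlebury": (4.3, 4.4),
    "Wellesley": (4.5, 4.6),         "Bowdoin": (4.3, 4.5),
    "Pomona": (4.2, 4.5),            "Carleton": (4.3, 4.2),
    "Davidson": (4.1, 4.4),          "Haverford": (4.0, 4.1),
    "Claremont McKenna": (4.0, 4.4), "Vassar": (4.2, 4.5),
    "Wesleyan": (4.1, 4.5),          "Smith": (4.3, 4.5),
    "Washington & Lee": (3.8, 4.1),  "US Military Academy": (4.1, 4.6),
    "US Naval Academy": (4.1, 4.8),  "Grinnell": (4.3, 4.2),
    "Hamilton": (3.7, 4.0),          "Harvey Mudd": (4.1, 4.5),
    "Bates": (4.1, 4.4),             "Colgate": (4.1, 4.2),
    "Colby": (4.0, 4.3),             "Oberlin": (4.1, 4.2),
    "Scripps": (3.6, 4.5),           "Barnard": (3.9, 4.4),
    "Colorado College": (3.7, 4.1),  "Macalester": (3.9, 4.3),
    "Mt. Holyoke": (4.1, 4.2),       "Bryn Mawr": (4.1, 4.1),
    "Bucknell": (3.8, 4.3),
}

# Delta = GC score minus PA score; a positive delta means the school
# gains ground under the new counselor-weighted methodology.
deltas = {school: round(gc - pa, 1) for school, (pa, gc) in lac_scores.items()}

# Print the biggest winners first.
for school, delta in sorted(deltas.items(), key=lambda kv: -kv[1]):
    print(f"{school:22s} {delta:+.1f}")
```

<p>Running it reproduces the winners and losers listed above, from Scripps at +0.9 down through Amherst at -0.2.</p>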


<p>US News top 31 PA scores / HS counselor scores</p>

<p>Harvard 4.9 / 4.9
Princeton 4.9 / 4.9
Yale 4.8 / 4.9
Columbia 4.6 / 4.8
Stanford 4.9 / 4.9
Penn 4.5 / 4.6
Caltech 4.6 / 4.6
MIT 4.9 / 4.9
Dartmouth 4.3 / 4.7
Duke 4.4 / 4.7
Chicago 4.6 / 4.5
Northwestern 4.4 / 4.6
Johns Hopkins 4.5 / 4.8
Wash U 4.1 / 4.4
Brown 4.4 / 4.8
Cornell 4.5 / 4.8
Rice 4.1 / 4.4
Vanderbilt 4.1 / 4.5
Notre Dame 3.9 / 4.6
Emory 4.0 / 4.4
Georgetown 4.1 / 4.8
UC Berkeley 4.7 / 4.6
Carnegie Mellon 4.2 / 4.6
USC 4.0 / 4.4
UCLA 4.2 / 4.3
UVA 4.3 / 4.3
Wake Forest 3.5 / 4.3
Tufts 3.6 / 4.5
Michigan 4.4 / 4.4
UNC Chapel Hill 4.1 / 4.4
Boston College 3.6 / 4.4
William & Mary 3.8 / 4.3</p>

<p>Schools getting the biggest boost from the new “reputational” methodology (HS counselor score > PA score): Tufts + 0.9; Boston College + 0.8; Wake Forest + 0.8; Georgetown + 0.7; Notre Dame + 0.7; William & Mary + 0.5; Brown + 0.4; Carnegie Mellon + 0.4; Dartmouth + 0.4; Emory + 0.4; USC + 0.4; Vanderbilt + 0.4; Duke + 0.3; Johns Hopkins + 0.3; UNC Chapel Hill + 0.3; Wash U + 0.3; Columbia + 0.2; Northwestern + 0.2.</p>

<p>Taking the biggest hit from new methodology: Chicago – 0.1; UC Berkeley – 0.1; and all schools whose HS Counselor score equaled their PA score, because so many schools got much higher scores from the HS Counselors that merely matching last year’s reputational score meant falling behind in a relative sense.</p>

<p>I guess the GCs really like Boston (Tufts = + 0.9, BC = + 0.8). BU would be right in there, too, 3.4 / 4.2 = + 0.8, as would Northeastern, 3.1 / 4.0 = + 0.9, but they’re not top 30 (or top 31) schools.</p>
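<p>Since the two surveys are being treated as interchangeable “reputational” measures, it seemed worth a quick back-of-the-envelope check of how closely they actually track each other. A hedged sketch in plain Python, no outside libraries; the Pearson formula is standard, and the scores are transcribed from the universities table above:</p>

```python
import math

# University scores transcribed from the table above: school -> (PA, GC).
uni_scores = {
    "Harvard": (4.9, 4.9), "Princeton": (4.9, 4.9), "Yale": (4.8, 4.9),
    "Columbia": (4.6, 4.8), "Stanford": (4.9, 4.9), "Penn": (4.5, 4.6),
    "Caltech": (4.6, 4.6), "MIT": (4.9, 4.9), "Dartmouth": (4.3, 4.7),
    "Duke": (4.4, 4.7), "Chicago": (4.6, 4.5), "Northwestern": (4.4, 4.6),
    "Johns Hopkins": (4.5, 4.8), "Wash U": (4.1, 4.4), "Brown": (4.4, 4.8),
    "Cornell": (4.5, 4.8), "Rice": (4.1, 4.4), "Vanderbilt": (4.1, 4.5),
    "Notre Dame": (3.9, 4.6), "Emory": (4.0, 4.4), "Georgetown": (4.1, 4.8),
    "UC Berkeley": (4.7, 4.6), "Carnegie Mellon": (4.2, 4.6),
    "USC": (4.0, 4.4), "UCLA": (4.2, 4.3), "UVA": (4.3, 4.3),
    "Wake Forest": (3.5, 4.3), "Tufts": (3.6, 4.5), "Michigan": (4.4, 4.4),
    "UNC Chapel Hill": (4.1, 4.4), "Boston College": (3.6, 4.4),
    "William & Mary": (3.8, 4.3),
}

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pa, gc = zip(*uni_scores.values())
print(f"PA vs. HS counselor correlation: r = {pearson(pa, gc):.2f}")
```

<p>A high r would mean the counselors mostly echo the PA pecking order; the big individual deltas noted above (Tufts, BC, Georgetown) are the cases that would drag it down.</p>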

<p>“The U.S. News ranking formula gives significant weight to the opinions of those in a position to judge a school’s undergraduate academic excellence.”</p>

<p>Thanks for pointing out that flaw, which results in the nonsense bclintonk noted. As Reed’s president pointed out in <a href="http://www.theatlantic.com/magazine/archive/2005/11/is-there-life-after-rankings/4308/">Is There Life After Rankings? - The Atlantic</a>:

</p>

<p>A new article on the topic: <a href="http://www.theatlanticwire.com/opinions/view/opinion/World-Yawns-as-Harvard-Tops-US-News--World-Report-Rankings-4732">World Yawns as Harvard Tops U.S. News & World Report Rankings | The Atlantic Wire</a></p>


<p>Clinton, I am afraid that we could not disagree more. </p>

<p>Over the years, I have often posted about the differences in PA between Mudd and Smith and used them as the clearest example of geographical and gender cronyism. Even if I believed that the PA was a proxy for faculty scholarship, I do not see how a school such as Smith deserves a PA superior to Mudd or Pomona.</p>

<p>But that is not even the point: the PA represents more than faculty scholarship, and there is no reason to overlook the lower selectivity of the student body at Smith. Frankly, a school that is forced to accept about 50 percent of its applicants and trails all of its peers in standardized testing does not deserve such a PA, unless there is something really wrong with the responders’ integrity.</p>


<p>But xiggi, standardized test scores and admit rates are already counted separately in the US News ranking. Why should they be double-counted by factoring them into PA scores as well? Without PA scores there’s absolutely nothing in the US News ranking going to faculty quality. I acknowledge that faculty scholarship represents only one dimension of faculty quality, but it’s an important dimension, and it’s one that college and university presidents and provosts keep a close eye on. Because it’s really the only dimension of faculty quality that’s transparent to faculty and administrators at rival institutions, it has to be the principal basis for PA ratings. And that’s just fine by me, because it gets at least one key dimension of faculty quality into the equation. Unless you think the faculty just don’t count for anything, as some people on CC seem to think (present company excluded).</p>

<p>Look, I know US News asks presidents and provosts to rate schools based on overall quality. But there’s no way they can do that; US News is asking the impossible of them. Some don’t even bother to try. But of those who do try, what are they going to base their assessment on? Well, the one thing they DO know something about—at least at the research university level—is which faculties are the envy of the academic world, which are making waves with their scholarship, which regularly eat their lunch in faculty recruitment/retention battles, and which have the up-and-coming faculties that they themselves raid for academic talent. They actually know a LOT about this because it’s their bread-and-butter. It’s what they do for a living, for gosh sakes. It’s no different than asking the managing partner of a law firm which law firms have the best lawyers. They know because they NEED to know in order to do their job. Opinions may vary a bit from managing partner to managing partner, but aggregate those opinions and you’ll get a composite that most will agree is a pretty accurate representation of the pecking order. That’s something valuable to know in academia as well. It has nothing to do with “integrity.” It has to do with expert knowledge and professional judgment.</p>


<p>The problem with citing Reed’s president on this question is that he’s unrepresentative: unlike most LAC presidents and provosts, Colin Diver has spent most of his life on law school faculties and as dean of several law schools. He simply hasn’t spent the major part of his career at LACs as most of the others have, in competition with other LACs in faculty recruitment and retention battles, so he knows far less than most about other LACs. Now I grant you, no one can know everything there is to know about all 220 of them, and they should reply “don’t know” to many or most; even US News says they should do that (and Diver should know better than to do what he’s doing, because the instructions are clear). But anyone who’s been in the business any length of time will have a pretty good bead on the 30-50 schools nearest them in the pecking order. If they’ve been paying attention, that is.</p>

<p>“But xiggi, standardized test scores and admit rates are already counted separately in the US News ranking. Why should they be double-counted by factoring them into PA scores as well?”</p>

<p>Exactly correct, bclintonk.</p>

<p>“Diver should know better than to do what he’s doing, because the instructions are clear”</p>

<p>He should know better than not returning the survey? 52% don’t return it. The majority of the Annapolis Group doesn’t participate. Just because the instructions are clear doesn’t seem (to me, at least) to be a reason to return it. He doesn’t seem so unrepresentative in his decision.</p>


<p>Where’s the rest?</p>

<p>Please post the remaining top 50.</p>


<p>Clinton, there is no basis whatsoever to support the notion that the Peer Assessment has to be a proxy for the quality of the faculty. It is supposed to be an assessment of academic quality.</p>

<p>You illustrate the basic problem of the current version of the PA … it has such a broad definition that it allows anyone to make it WHATEVER he or she wants! For some it is the number of distinguished programs, and for others it is the perception of the reputation (or something even sillier).</p>

<p>Fwiw, we disagree and we will have to agree to disagree. I DO believe that the quality of the faculty should play a large role in the PA. That is why I have repeatedly suggested an expansion of the PA with clear categories of tangibles and intangibles. However, I also believe that it is impossible to evaluate the academic quality of a school without considering its student body, notwithstanding that the selectivity index has its own separate category. As a matter of fact, it is pretty easy to establish a correlation between the PA and the selectivity of an institution, except for a number of outliers.</p>
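<p>Since xiggi’s claim of a PA/selectivity correlation “except for a number of outliers” is an empirical one, here is a rough sketch of one way anyone could test it: fit a least-squares line of PA on admit rate, then flag the schools that sit far from the fit. This is purely illustrative; the helper functions are my own, and the admit rates in the demo are hypothetical stand-ins, not US News figures.</p>

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def flag_outliers(schools, threshold=0.3):
    """schools maps name -> (admit_rate, pa_score); returns the schools
    whose PA deviates from the fitted line by more than `threshold`."""
    rates, pas = zip(*schools.values())
    a, b = fit_line(rates, pas)
    return {name: round(pa - (a * rate + b), 2)
            for name, (rate, pa) in schools.items()
            if abs(pa - (a * rate + b)) > threshold}

# Demo with HYPOTHETICAL admit rates; substitute real US News data to
# run the argument for real.
demo = {"School A": (0.15, 4.5), "School B": (0.50, 4.3),
        "School C": (0.30, 4.0), "School D": (0.25, 4.2)}
print(flag_outliers(demo, threshold=0.15))  # -> {'School A': 0.19, 'School C': -0.25}
```

<p>Schools with large positive residuals would be the ones xiggi considers overrated relative to their selectivity (his Smith example), and large negative residuals the underrated ones.</p>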


<p>Clinton, is Diver complaining about the clarity of the instructions? And, given the total absence of clear instructions, he probably should! However, his position is that the respondents are NOT in a position to evaluate the 200 plus schools on the survey. </p>

<p>Fwiw, given the recent publicity surrounding the PA survey, it has become easier to actually see one and read the questionnaire. In the past, most people were idly speculating about the questions on the actual survey, something that allowed those who had actually seen one to realize how little was known about the specific questions.</p>

<p>Have you read one survey?</p>


<p>In support of what I posted earlier regarding the PA including both quality of faculty and quality of graduates, allow me to quote tidbits from the past:</p>

<p>

</p>

<p>thanks for the list bclintonk</p>

<p>i think BC and tufts deserve higher PA scores</p>

<p>Xiggi, Peer assessment is opinion. There is no right or wrong answer. You can form your opinion from any criteria you deem important. But at the end of the day you’re really arguing with the collective opinion of over 2,000 people. Kind of silly to argue about the outcome.</p>

<p>“You can form your opinion from any criteria you deem important.”</p>

<p>Trouble is, many uninformed people think there is high value in what USNWR editors deem important.</p>


<p>Um . . . no. Sorry, I do this in my spare time, of which I have precious little. You’re welcome to do it yourself; the data’s all out there. Or give me some time and I may get around to it eventually. But you can’t just order it up like that.</p>


<p>I never suggested PA “HAS TO BE a proxy for faculty quality.” It doesn’t have to be. You’re right, US News gives the people who fill these things out leeway to consider whatever they want, and probably some do double-count by considering things like SAT scores and class sizes that are already counted independently in the US News ranking. I’m just making the empirical claim that because college & university presidents & provosts DO have a basis for assessing the quality, productivity, and impact of faculty scholarship—and because this is something they do, in fact, track very closely—PA DOES in fact end up being largely, though not exclusively, a proxy for their assessments of faculty scholarship. Bottom line, they just have no way of determining how faculty at other institutions perform in the classroom, or really any other aspect of educational quality at other institutions, apart from the usual CC-type gossip. So, like the rest of us, they’ll naturally tend to rely on what they DO know as a heuristic. In this case what they do know is faculty scholarship. At least, that’s my empirical claim, which obviously I can’t prove (though I think it stands to reason). I do think if you track PA scores you’ll find they correlate very closely with NRC rankings of faculty quality, and US News rankings of graduate program quality (for schools with grad programs), and similar measures—all of which, I submit, are basically measuring, albeit indirectly, the quality, productivity, and impact of faculty scholarship. So I take that as at least loose confirmation of my basic empirical claim.</p>

<p>Is PA a perfect measure of faculty quality? Of course not. It’s one-dimensional as I said, focusing on scholarship and not teaching or service, the other traditional elements in the triad of faculty responsibilities. And as UCB says, ultimately it’s just opinion; there’s no objective right or wrong about it. But we rely on expert opinion all the time in other contexts. Why is that so unusual or disturbing here?</p>

<p>Look, my goal here is not to defend PA ratings. They’re of limited utility—as are ALL the metrics US News uses. They could be improved, and should be. But PA scores, understood as an imperfect but serviceable proxy for college & university administrators’ best assessments of faculty quality, do add something to the US News ranking, namely a crude measure of faculty strength. PA adds far more to the US News ranking, IMO, than the GC rankings that have now supplanted a portion of PA. There are some real howlers in that HS GC rating. About which more soon.</p>

<p>From Columbia and Stanford’s wiki pages, this is what they boast about the respective schools. Columbia’s IS pretty DAMN impressive:

</p>

<p>Stanford’s accomplishments tend to cluster solely in the science/technology area:

</p>

<p>Really, the two schools are different, and depending on who you are, you could perceive Columbia as better or Stanford as better. Like, if I were a diplomat, I would probably think Columbia is a lot more prestigious.</p>

<p>haha, that’s the biggest joke I’ve heard all day. Maybe you should read the whole Wikipedia page next time before jumping to conclusions. Just one factoid you probably missed: for much of the Rehnquist Court, 4 out of the 9 justices received bachelor’s degrees from Stanford.</p>

<p>Agreed. From what I’ve heard, their campus cultures are very different.</p>

<p>“Stanford’s accomplishments tend to cluster solely in the science/technology area”</p>

<p>That goes to show how ignorant you are, since you obviously consider business, law, education, government, history, English, etc. to be sciences/technologies.</p>