US NEWS Rankings: What Would They Look Like Without Peer Assessment Score?


<p>All 200+ of them? All of them? </p>

<p>Let’s accept your take at face value. I will grant that you, as a faculty member, know a LOT more about your competition and some of your peers. But how many institutions does that amount to? A handful? A dozen? Several dozen? </p>

<p>And, even if a faculty member had the appropriate knowledge of the entire realm of his or her programs, does any of that knowledge transfer to the … office that ultimately answers a simplistic question about “distinguished programs” without EVER defining the criteria or even describing the programs? Not to mention that the responders do not have to follow the instructions about confining their responses to undergraduate instruction. </p>

<p>The saddest part is that the PA COULD be a valuable tool, as opposed to the manipulated and worthless element it is today. It could be valuable if expanded to offer a dozen categories AND made entirely public, in a drastic departure from the current use that yielded the abuses at schools such as Wisconsin and Clemson. </p>

<p>Will that ever happen? Nope! And that is because it would not serve the colleges, which still believe in secrecy and that sharing as little valuable information as possible is the way to go. If the schools were forced to disclose their (simplistic) responses, the number of surveys would drop to an even more ridiculously low number.</p>

<p>“It could be valuable if expanded to offer a dozen categories AND made entirely public, in a drastic departure from the current use that yielded the abuses at schools such as Wisconsin and Clemson.”</p>

<p>Two schools out of the hundreds that respond become outliers and do not affect the overall accuracy of the PA. Schools like Emory, which manipulated objective data for years, are the real abusers. That is why, imho, ALL information provided by USNWR is somewhat suspect.</p>

<p>^</p>

<p>Have some Kool-Aid, RJK!</p>


<p>Faculty know who’s who (and where they are) in their own field of study. In fact, most of them are obsessed with it. They know where they got their own graduate degrees, and they didn’t get there by random assignment; they competed to get into the best program they could, and they probably went to the best program they got into. They know which programs were stronger and which weaker. Those things change over time, albeit usually slowly. They pay attention to those changes. And it comes up again every time they need to make an entry-level hire; they’re going to try to hire the strongest candidates coming out of the strongest programs, and that requires them to know which are the strongest programs. They’ll consider the relative strengths of undergraduate programs in deciding whom to admit to their own graduate programs, and they’ll advise their own undergrads as to which are the best graduate programs in the field. They’ll know which schools are beating them in competition for faculty and for top graduate students, and which they’re regularly beating. They know which schools they’d consider a lateral hiring offer from because it would mean a bump up in prestige within the field, and which they wouldn’t ever consider. They read each other’s scholarship all the time; they know whose work is most influential in the field and where the greatest concentrations of influential scholars are. They know whose work they themselves admire, whose is mediocre, and whose they think is overrated. They attend academic conferences at each other’s schools, where they compare notes and gossip about which top scholars are going where, which departments are up and which are down, and who’s doing the most interesting and path-breaking work. It’s their business to know these things. If they don’t, they’re not keeping current with their field, and they’re doing their jobs poorly.</p>

<p>Honestly, this is like saying the Ford dealer can’t possibly know anything about the product the Toyota dealer is selling because the Ford dealer is too busy selling Fords. Nonsense. It’s a competitive business. If you don’t know who the competition is and what they’re doing, you lose.</p>

<p>Now it’s true the US News PA doesn’t rely on faculty peer assessments. I wish it did. But it’s the provost’s job to make sure all of the university’s faculties are strong, and that essentially requires aggregating all that information coming in from various schools and departments and making comparative judgments about strengths and weaknesses for purposes of allocating resources (e.g., which departments need shoring up and need approval to make additional hires; you can’t really decide that without knowing how they’re doing with respect to the competition). The president needs to know these things in a general way as well. I must say, though, that I’ve never understood US News’ decision to include the director of admissions in the PA survey.</p>

<p>College Rankings Statement (2007)</p>

<p>The undersigned presidents agree that prospective students benefit from having as complete information as possible in making their college choices.</p>

<p>At the same time, we are concerned about the inevitable biases in any single ranking formula, about the admissions frenzy, and the way in which rankings can contribute to that frenzy and to a false sense that educational success or fit can be ranked in a single numerical list.</p>

<p>Since colleges and ranking agencies should maintain a degree of distance to ensure objectivity, from now on data we make available to college guides will be made public via our websites rather than be distributed exclusively to a single entity. Doing so is true to our educational mission and will allow interested parties to use this information for their own benefit. If, for example, class size is their focus, they will have that information. If it is the graduation rate, that will be easy to find. We welcome suggestions for other information we might also provide publicly.</p>

<p>While we respect our colleagues who have announced that their schools will no longer respond to a survey on the merits of peer institutions, we believe that such judgments, if well informed, can usefully complement objective markers, since surely some institutions perform above their objective measures, others below. Still, no one should feel obligated to fill out reputational surveys, and anyone who chooses to should only do so for those institutions on which they have grounds for judgment.</p>

<p>We commit not to mention U.S. News or similar rankings in any of our new publications, since such lists mislead the public into thinking that the complexities of American higher education can be reduced to one number.</p>

<p>Finally, we encourage all colleges and universities to participate in an effort to determine how information about our schools might be improved. As for rankings, we recognize that no degree of protest will make them disappear, and hope, therefore, that further discussion will help shape them in ways that will press us to move in ever more socially and educationally useful directions.</p>

<p>Anthony Marx, Amherst College<br>
Elaine Hansen, Bates College<br>
Barry Mills, Bowdoin College<br>
Nancy Vickers, Bryn Mawr College<br>
Robert Oden, Carleton College<br>
William D. Adams, Colby College<br>
Rebecca Chopp, Colgate University<br>
Thomas W. Ross, Davidson College<br>
Russell Osgood, Grinnell College<br>
Joan Hinde Stewart, Hamilton College<br>
Stephen Emerson, Haverford College<br>
Ronald Liebowitz, Middlebury College<br>
David Oxtoby, Pomona College<br>
Alfred Bloom, Swarthmore College<br>
James Jones, Trinity College<br>
Catharine Hill, Vassar College<br>
Kenneth Ruscio, Washington and Lee University<br>
Kim Bottomly, Wellesley College<br>
Michael Roth, Wesleyan University<br>
Morton Schapiro, Williams College</p>


<p>Not true that the so-called “objective” metrics in the US News ranking are “consistently applied among the institutions.” US News accepts self-reported and unverified data from colleges and universities, which have a self-interest in massaging the data to boost their own rankings. And there are lots of ways they can do this. A number of schools have admitted outright falsification of SAT/ACT medians, but short of that, some schools reportedly omit the SAT/ACT scores of recruited athletes from their median calculations. Some superscore, some don’t; some might not superscore for admissions purposes, but superscore for purposes of US News rankings. Some go “test-optional” or “test-flexible,” which is believed to skew SAT/ACT medians higher because only the applicants with top SAT/ACT scores will send in their scores. </p>

<p>Some schools submit “estimates” rather than actual counts of the percentage of enrolled freshmen who were in the top 10% of their HS class.</p>

<p>The “faculty resources” data, accounting for 20% of the total US News ranking, is not comparable from one institution to the next because of differences in who’s counted as faculty. The instructions on the common data set are very clear that faculty in “stand-alone” graduate programs like law schools, medical schools, and MBA-only business schools are to be excluded, but some private schools include them anyway, which both artificially lowers their student-faculty ratio and inflates their average faculty compensation figures insofar as law, medical, and business faculty are generally paid more than people in other disciplines. Many private schools also artificially shrink their student-faculty ratios by excluding all graduate students from their “student” count, even though the instructions clearly state that graduate students taught by faculty who also teach undergrads (which would generally include all the arts and sciences as well as engineering) should be counted as “students” for purposes of calculating the student-faculty ratio. If they followed the instructions, some of these private research universities would have student-faculty ratios that more closely resembled those of the top publics, which generally do follow the instructions.</p>
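<p>To make the arithmetic concrete, here is a minimal sketch with entirely made-up enrollment and faculty figures (no actual school’s data); it only illustrates how dropping graduate students from one side of the ratio while adding professional-school faculty to the other shrinks the reported number:</p>

```python
# Hypothetical illustration only: how the reported student-faculty ratio moves
# when graduate students are dropped and stand-alone professional-school
# faculty are added. All figures below are invented.

undergrads = 6_000
grad_students_shared_faculty = 4_000   # grad students taught by faculty who also teach undergrads
arts_sciences_faculty = 900
law_med_mba_faculty = 400              # stand-alone programs the common data set says to exclude

# Ratio computed per the common data set instructions
students = undergrads + grad_students_shared_faculty
faculty = arts_sciences_faculty
print(f"Per the instructions: {students / faculty:.1f} to 1")     # 11.1 to 1

# Ratio as some schools reportedly compute it
students = undergrads                                   # graduate students dropped
faculty = arts_sciences_faculty + law_med_mba_faculty   # professional-school faculty added
print(f"As sometimes reported: {students / faculty:.1f} to 1")    # 4.6 to 1
```

<p>Same school, same people, very different headline number, which is the point being made above.</p>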

<p>In short, there’s nothing very “objective” about the so-called objective US News metrics. And they’re certainly not calculated in a uniform way across all institutions.</p>

<p>“In short, there’s nothing very “objective” about the so-called objective US News metrics.”</p>

<p>What do you think about the above statement Xiggi?</p>


<p>That is a particularly poor analogy. The Ford dealer is competing against a Toyota dealer that is probably located in his area. He is dealing with customers who come in to compare values, cost, financing, etc. Also, those customers come in armed with all kinds of data from outfits such as Edmunds, and plenty of others. </p>

<p>Now, would a Ford dealer in Michigan know much about the … service provided by a Toyota dealer in El Paso, Texas? Would he know much about how effective the training of the vendors is at that dealer? How effective the advertising is thousands of miles away? </p>

<p>Simply stated, there are NO Consumer Reports that dig deep into education. The closest we have are the USNews (can’t list the WM garbage here) and the IPEDS. In theory, a Provost or President could invest plenty of time and study the IPEDS report to death, and go well beyond the immediate lists of peers and competitors. But how … realistic would that be, especially considering that few are paid to maximize the accuracy of filling out a survey that is known to be imprecise at best! </p>

<p>In the end, people tend to endorse the PA when their favorite schools shine in that metric. Well, except for the few who can recognize how horrible the PA really is, despite seeing how “their” school is ranked … first in that metric! :)</p>


<p>Well said. And I think much of this thread, including what you’ve said, highlights the need to change the “ranking” system to a “rating” system, whereby colleges are given some form of grade rather than a rank. The ratings could even be broken down categorically to rate various elements of the colleges. This system would help to eliminate some of the corruption and falsification of data. It would also provide a constructive way of encouraging lower-performing schools to improve.</p>

<p>Of course none of this will ever happen because a rating system is not as sensational as a ranking system… therefore it doesn’t provide a way for companies like USNWR to make money.</p>


<p>Oh, RJK, I think I have answered that question in the past. I know that it is an oft-repeated argument of our Dubai friend. To keep it simple, I do believe and know that schools have played fast and loose with the reporting requirements. My own UG school was involved in one of the most publicized (and incredibly misunderstood and misrepresented) scandals in misreporting admissions data. </p>

<p>Even if the falsely reported numbers had ZERO impact on the rankings, I found the reporting of false data both appalling and inexcusable. And I should add that the actions and reactions by the Board of my school were entirely misguided and poorly handled. In my eyes, CMC has the opportunity to show true leadership in pushing for schools --starting in the Claremont Consortium-- to clean house, have ALL the upper management sign the darn surveys, and make them public in their entirety. </p>

<p>Sadly, there are no incentives for schools to join in such a movement, as playing games is entirely more rewarding. As far as your question about objective data, it remains that it is hard to cheat ad infinitum if showing INCREASING numbers is the ticket. Ultimately, the numbers DO catch up. Subjective “data” on the other hand can be manipulated forever and a day, as the PA has been and will be!</p>

<p>Probably not the answer you expected!</p>


<p>I’ll go you one better. Here are the top 46 “national universities” with their PA and GC ratings. These generally track the overall US News rankings pretty closely, except: 1) GCs are more generous graders overall; 2) GCs seem to favor smaller private institutions, especially in the Northeast (even top Midwestern schools like Chicago and Northwestern get GC scores notably below otherwise comparably ranked schools in the Northeast); 3) GCs are Ivy-obsessed, giving every Ivy a 4.8 or 4.9; the only other schools receiving those ratings are Stanford, MIT, and Johns Hopkins; 4) PA scores seem to favor high-output research universities with faculty strength in all disciplines, and disfavor smaller schools with less research capacity and/or less across-the-board faculty strength; this is true on both the private side (e.g., Dartmouth, Rice) and on the public side (e.g., William & Mary).</p>

<p>University PA / GC</p>

<p>Princeton 4.9 / 4.9
Harvard 4.9 / 4.9
Yale 4.8 / 4.9
Columbia 4.6 / 4.9
Stanford 4.9 / 4.9
Chicago 4.6 / 4.6
Duke 4.4 / 4.7
MIT 4.9 / 4.9
Penn 4.4 / 4.8
Caltech 4.7 / 4.7
Dartmouth 4.3 / 4.8
Johns Hopkins 4.6 / 4.8
Northwestern 4.4 / 4.6
Brown 4.4 / 4.8
WUSTL 4.1 / 4.5
Cornell 4.5 / 4.9
Vanderbilt 4.1 / 4.7
Rice 4.0 / 4.5
Notre Dame 3.9 / 4.7
Emory 3.9 / 4.5
Georgetown 4.1 / 4.7
UC Berkeley 4.7 / 4.7
Carnegie Mellon 4.2 / 4.7
UCLA 4.2 / 4.5
USC 4.0 / 4.4
UVA 4.3 / 4.5
Wake Forest 3.5 / 4.4
Tufts 3.6 / 4.5
Michigan 4.5 / 4.4
UNC Chapel Hill 4.1 / 4.5
Boston College 3.6 / 4.4
Brandeis 3.6 / 4.2
William & Mary 3.7 / 4.5
NYU 3.8 / 4.4
U Rochester 3.4 / 4.0
Georgia Tech 4.1 / 4.3
Case Western 3.5 / 4.2
Penn State 3.6 / 4.1
UC Davis 3.8 / 4.1
UC San Diego 3.8 / 4.0
Boston U 3.5 / 4.2
Lehigh 3.2 / 4.0
RPI 3.4 / 4.1
UC Santa Barbara 3.5 / 3.9
Illinois 3.9 / 4.1
Wisconsin 4.1 / 4.2</p>

<p>Schools scoring 0.9 higher by GCs than by PA: Wake Forest, Tufts
Schools scoring 0.8 higher by GCs than by PA: Notre Dame, Boston College, William & Mary, Lehigh
Schools scoring 0.7 higher by GCs than by PA: Case Western, Boston U, RPI
Schools scoring 0.6 higher by GCs than by PA: Vanderbilt, Emory, Georgetown, Brandeis, NYU, Rochester
Schools scoring 0.5 higher by GCs than by PA: Dartmouth, Rice, Carnegie Mellon, Penn State
Schools scoring 0.4 higher by GCs than by PA: Penn, Brown, WUSTL, Cornell, USC, UNC Chapel Hill, UCSB</p>
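<p>For anyone who wants to check or extend the groupings above, a minimal sketch of the tally (only a handful of rows from the table are filled in; the rest would need to be added by hand, and the ratings are simply copied from the post above):</p>

```python
# Tally GC-minus-PA gaps from the table above; only a few rows are included.
from collections import defaultdict

pa_gc = {
    "Wake Forest": (3.5, 4.4),
    "Tufts": (3.6, 4.5),
    "Notre Dame": (3.9, 4.7),
    "Dartmouth": (4.3, 4.8),
    "Michigan": (4.5, 4.4),
}

gaps = defaultdict(list)
for school, (pa, gc) in pa_gc.items():
    gaps[round(gc - pa, 1)].append(school)

for gap in sorted(gaps, reverse=True):
    print(f"GC {gap:+.1f} vs PA: {', '.join(gaps[gap])}")
```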


<p>Is There Life After Rankings? - Colin Diver - The Atlantic</p>

<p>Thanks for the listing and analysis, bclintonk.</p>

<p>Don’t $/endowment and $ spent on facilities really drive up rankings as well?
I saw a highly ranked school that doesn’t necessarily have great academics…it’s all in the facilities.</p>

<p>Learned a long time ago from people in the advertising business, etc., that these rankings are a business in and of themselves and truly mean nothing for the avg student trying to sort out which college to attend.</p>

<p>The Atlantic tried and failed at the rankings game. So there is that.</p>

<p>GC stands for Generally Clueless. It is quite an achievement for Morse to have discovered a way to make his Reputational Index even worse than it used to be with 100 percent PA. Of course, looking at how Morse “picked” those GCs is indicative of how the USNews is … improving. Perhaps the USNews should think about hiring Jay Mathews from the ashes of Newsweek and WaPo. Then they’ll truly know what hitting the bottom means.</p>

<p>“I’ll go you one better. Here are the top 46 “national universities” with their PA and GC ratings. These generally track the overall US News rankings pretty closely…”</p>

<p>Which just tells me that HS counselors base their opinions on the overall USNWR rankings. Which just reaffirms my opinion that their opinions are worthless. It also confirms my opinion that the PA scores of hundreds of administrators are, overall, pretty accurate. They aren’t swayed by the USNWR overall rankings when making their opinions known.</p>

<p>^^</p>

<p>What would you expect the GCs to use as “sources?” </p>

<p>A large number of the responding GCs are asked to rank schools to which they hardly ever send a single student. Again, take a look at the schools that are polled, and you will have a better idea of why the results are what they are.</p>

<p>While I have not checked the 2014 results for the LACs, if I remember correctly, the highest-ranked schools were all military academies in 2013. A bit of a contradiction to one assessment posted above.</p>

<p>Yep, I did not misremember, as Dubya might say:</p>

<p><a href="http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-liberal-arts-colleges/high-school-counselor">http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-liberal-arts-colleges/high-school-counselor</a></p>

<p>Rank<br>
#1 United States Military Academy 4.8<br>
#1 United States Naval Academy 4.8<br>
#3 United States Air Force Academy 4.7<br>
#4 Claremont McKenna College 4.6<br>
#4 Harvey Mudd College 4.6<br>
#4 Williams College 4.6<br>
#7 Amherst College 4.5<br>
#7 Bowdoin College 4.5<br>
#7 Pomona College 4.5<br>
#7 Swarthmore College 4.5<br>
#7 Vassar College 4.5<br>
#7 Wellesley College 4.5</p>

<p>"What would you expect the GCs to use as “sources?” </p>

<p>USNWR PA scores. :-)</p>

<p>I am consistently surprised by the large number of universities that are tied in the rankings. It is not uncommon to see a 4-5 way tie at a particular rank. With all the data, factual and subjective, that goes into the rankings, you would expect ties to be rare rather than common.</p>
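<p>One purely illustrative possibility, not claimed anywhere in this thread, is that a weighted composite score rounded to a coarse scale will produce ties even when the underlying data differ. A toy sketch with invented weights and inputs:</p>

```python
# Toy illustration: many distinct raw composite scores collapse into ties once
# rounded to a coarse scale. The weights and inputs are invented, not USNWR's.
import random

random.seed(0)
weights = [0.225, 0.20, 0.125, 0.10, 0.35]   # hypothetical weights summing to 1.0

def composite(metrics):
    return sum(w * m for w, m in zip(weights, metrics))

raw = [composite([random.uniform(60, 100) for _ in weights]) for _ in range(46)]
rounded = [round(score) for score in raw]

print("distinct raw composites:", len(set(raw)))        # 46 -- essentially no ties
print("distinct rounded scores:", len(set(rounded)))    # noticeably fewer -- ties appear
```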