US NEWS Rankings: What Would They Look Like Without Peer Assessment Score?

<p>Hey CC-ers!</p>

<p>It's been a while since I've posted around here, but I'm curious if anyone who has full access to this year's US News college rankings could calculate what the top schools would look like without the peer assessment score. </p>

<p>In other words, I am still perplexed as to what statistically significant data the head of one institution would be able to offer as to the quality of another, particularly when the majority of universities are outside of their intimate familiarity. </p>

<p>If anyone has access to the full data and wouldn't mind crunching some numbers, this would be really fascinating to see!</p>

<p>How much does the “peer assessment score” matter? Just curious.</p>

<p>Faculty members who take part in the peer assessments are well aware of the strengths/weaknesses of competing institutions… they need to be in order for their schools to remain competitive. Therefore, the peer assessments reflect which schools are seen as offering a competitive curriculum, quality teaching/research, facilities, and labs to students. They [the faculty members] take into account a wide array of factors when making peer assessments.</p>

<p>Yeah right! What a crock! In addition to being widely incorrect in its assumptions, it might help to understand who actually fills out the survey and what question is being answered. Faculty members?</p>

<p>Agree with Xiggi. </p>

<p>USNWR should be transparent, publishing their “questionnaire” or whatever it is they use to garner PA, and info on whom exactly (faculty or staff person) is answering it.</p>

<p>Mom- The peer assessment score accounts for 2/3 of the “undergraduate academic reputation” score (the other 1/3 is calculated from high school guidance counselor ratings). The UAR score is 22.5% of the overall score, almost double the 12.5% attributed to student selectivity (SAT scores, acceptance rate, etc).</p>
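<p>For anyone who wants to see how that shakes out arithmetically, here’s a quick back-of-the-envelope sketch using just the weights quoted above (nothing pulled from US News’s own data files):</p>

```python
# Back-of-the-envelope arithmetic using the weights quoted above
# (2/3 of the 22.5% reputation category); not official US News data.
uar_weight = 0.225        # "undergraduate academic reputation" share of the overall score
pa_share_of_uar = 2 / 3   # peer assessment portion of the reputation score
gc_share_of_uar = 1 / 3   # high school guidance counselor portion

pa_weight = uar_weight * pa_share_of_uar   # = 0.15
gc_weight = uar_weight * gc_share_of_uar   # = 0.075
selectivity_weight = 0.125                 # SAT scores, acceptance rate, etc.

print(f"Peer assessment alone: {pa_weight:.1%} of the overall score")
print(f"Guidance counselors:   {gc_weight:.1%}")
print(f"Student selectivity:   {selectivity_weight:.1%}")
```

<p>So the peer survey by itself counts for roughly 15% of the total score, more than the entire student selectivity category.</p>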

<p>Fractal- While I would love to believe that your explanation holds true (in an ideal world, everyone completing the peer assessments would be extremely informed and thoughtful in their scores), that can’t be the case. According to the US news website, “For the Best Colleges 2013 rankings in the National Universities category, 842 top college officials were surveyed in the spring of 2012 and 53 percent of those surveyed responded. Each individual was asked to rate peer schools’ undergraduate academic programs on a scale from 1 (marginal) to 5 (distinguished).”</p>

<p>The fact that only about half of these “top college officials” even responded makes me question the data, but the larger concern is that these selected academics clearly have biases and subjective motives behind their ratings. They also may have an artificially heightened sense of familiarity with aspects of an institution that can only be justly evaluated by the students experiencing the education themselves.</p>

<p>If US News is going to give the greatest weight in their methodology to such a purely subjective measure, they should at least focus more on outcomes that reflect a school’s value (e.g., by surveying executives from top companies who have hired graduates of these institutions) rather than on what resembles a bizarre popularity contest. </p>

<p>John Tierney wrote an excellent piece in the Atlantic today (“Your Annual Reminder to Ignore the U.S. News & World Report College Rankings”), and he summarizes these concerns quite nicely:</p>

<p>"A very substantial chunk (22.5 to 25 percent) of an institution’s ranking comes not from any hard data but from a “reputational” measure, in which U.S. News solicits “peer assessments” from college presidents, provosts, and admissions directors, as well as input from high-school counselors. U.S. News claims that by giving “significant weight to the opinions of those in a position to judge a school’s undergraduate academic excellence,” the rankings allow for the inclusion of “intangibles” such as “faculty dedication to teaching.” Critics say this component turns the rankings into a popularity or beauty contest, and that asking college officials to rate the relative merits of other schools about which they know nothing becomes a particularly empty exercise because a school’s reputation is driven in large part by – you guessed it – the U.S. News rankings. According to Malcolm Gladwell, this reputational measure is simply a collection of “prejudices” that turn the U.S. News rankings into a “self-fulfilling prophecy.”</p>

<p>I think it’s entertaining to have the peer assessment scores, and it clearly creates an element of allure or else US News wouldn’t weight it so heavily in their rankings. What I am interested in is looking at the purely objective components of the rankings, and seeing what the list would look like without this questionable category.</p>
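<p>If anyone does get their hands on the full component scores, the recalculation itself is trivial; something like the sketch below would do it (the component names, weights, and scores are illustrative placeholders only, since I obviously don’t have US News’s internal data):</p>

```python
# Hypothetical sketch of the recalculation asked about above: drop the peer
# assessment component and rescale the remaining weights so they still sum to 1.
# All names, weights, and scores below are illustrative placeholders.
example_weights = {
    "peer_assessment": 0.150,
    "counselor_rating": 0.075,
    "graduation_and_retention": 0.225,
    "faculty_resources": 0.200,
    "selectivity": 0.125,
    "financial_resources": 0.100,
    "graduation_rate_performance": 0.075,
    "alumni_giving": 0.050,
}

def rerank_without(component, school_scores, weights):
    """Recompute each school's weighted score with one component removed."""
    kept = {name: w for name, w in weights.items() if name != component}
    total = sum(kept.values())  # rescale so the remaining weights sum to 1
    return {
        school: sum(scores[name] * w / total for name, w in kept.items())
        for school, scores in school_scores.items()
    }

# Made-up example scores (each component already normalized to 0-100):
schools = {
    "School A": {name: 90 for name in example_weights},
    "School B": {name: 85 for name in example_weights},
}
print(rerank_without("peer_assessment", schools, example_weights))
```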

<p>Here’s an article that shows what the peer assessment ratings were for the top schools last year: “Which Universities Are Ranked Highest by College Officials?” (http://www.usnews.com/education/blogs/college-rankings-blog/2013/02/28/which-universities-are-ranked-highest-by-college-officials). As expected, the HYPSM schools have the highest peer rankings.</p>

<p>I doubt they’ve changed. When USN&WR started their rankings way back when, the list was based solely on peer ratings. But they kept getting the same results for the top schools. So after a few years they added a number of other factors and have been constantly changing them ever since. It’s a lot more exciting when a school like U of Chicago suddenly ends up ranked ahead of Stanford (as was the case last year), or when Princeton suddenly outshines Harvard (as is the case this year), than when the HYPSM schools are ranked the same way year after year.</p>

<p>And by the way, there is no such thing as an “objective” formula. All you are doing is substituting factors the editor has chosen as important for those the schools’ peers consider important. In the end it is all subjective.</p>


<p>Why would you assume they would be so blissfully ignorant about the competition in their own highly competitive industry? Granted, the leaders of major research universities have no reason to pay much attention to LACs (and vice versa), but research universities watch each other like hawks. They’re competing for federal research dollars, for one thing, and they know exactly where they and every other major research university stand in that competition. They’re competing for faculty, both at the entry level and for lateral hires; they know which of their own departments are in good shape relative to the competition, who’s ahead of them, who’s gaining on them, who’s fading, which schools can have them for lunch if they decide to raid their faculty, and which schools they can have for lunch. They’re competing for top graduate students, and they know which of their graduate programs are doing well in that competition and which aren’t, who’s doing better and who’s doing worse. They’re competing for undergraduates and they know which schools regularly beat them in that competition and which don’t. </p>

<p>The leaders of major research universities are frequently asked to serve on accreditation committees for other major research universities. In that capacity they are able to go over the capacities and operations of a major competitor with a fine-tooth comb, examining its strengths and weaknesses in great detail and with brutal candor. Granted, they get to see only a few of their competitors up close like that, but they do so as members of committees whose members have collectively examined dozens of other universities, and so there’s a lot of comparative benchmarking that goes on.</p>

<p>Some of the peer-watching is highly formalized in other ways. For example, our state flagship, the University of Minnesota, regularly compares itself on all sorts of metrics to a self-identified “comparison group” of 10 major public research universities (UC Berkeley, UCLA, Florida, Illinois, Michigan, Ohio State, Penn State, Texas, U Washington, and Wisconsin), all somewhat stronger than Minnesota in some, many, or all aspects of institutional strength. The U engages in this exercise to measure its own progress in achieving its strategic objectives and to gauge its standing relative to an aspirational group of peer institutions. Its leaders know who’s in that group and what their greatest strengths and weaknesses are; they know which schools didn’t make the cut because they’re not strong enough, and which schools were excluded from the comparison group because they have certain strengths that a public research university can’t expect to match or are just sufficiently different in character that the comparisons wouldn’t make sense.</p>

<p>https://www.irr.umn.edu/progress/progress2007/UMN_Metrics_Overview_June_2008.pdf</p>

<p>It’s a common theme on CC that the leaders of universities can’t possibly know enough about other universities to offer informed opinions about them. That’s silly. It’s a highly competitive business. Of course they know. </p>

<p>Granted, the President and Provost of the University of Minnesota may not know very much about the University of Wyoming, but they know enough to know they don’t need to know more, because there’s virtually no area in which the University of Wyoming is a major competitive threat to the University of Minnesota; its faculties are weaker across the board (and if there are any strong ones, I’m confident the President and Provost at Minnesota would have heard about it), its graduate programs are not competitive with Minnesota’s, and it doesn’t draw a strong national student body at the undergrad level. Yet there’s no indication that it’s on the verge of collapse, either. So when the PA survey asks them to grade the University of Wyoming on a scale of 1 (marginal) to 5 (distinguished), it’s neither going to get a 1 (a score reserved for truly troubled institutions), nor is it going to get a 4 or 5 (scores reserved for the strongest institutions). They’re going to give it a 2 or a 3. And guess what? When that exercise is repeated by several hundred research university officials, the University of Wyoming comes out with a PA score of 2.6, pretty much exactly what you’d expect. The truth is, you and I have enough information to engage in that exercise, and people on CC do exactly that on a daily basis. So why would you assume university presidents and provosts are less well informed than we are when it comes to the relative pecking order in the industry they are paid to keep on top of?</p>

<p>Socal- Agreed that it is difficult to achieve objectivity in rankings, but my main point is that within the already subjective algorithm where US News is dictating importance of various data points, there IS a significant distinction between objective data points (e.g. incoming class SAT scores, graduation rate), and subjective ones (namely, peer assessment score).</p>

<p>I think it’s odd to have profs assessing their competition and providing a score for each. </p>

<p>First of all, it assumes that profs really know a lot about their competitors’ programs, and that assumes that these folks spend a good bit of their time actually looking beyond their own backyards. I think many devote their time to their jobs, their research, and their home life, with not much time left for assessing the competition.</p>

<p>Or maybe I’m not understanding this whole thing…which is likely. lol</p>


<p>Actually, there is little, if ANY, faculty involvement in the completion of the Peer Assessment surveys:</p>


<p>“How U.S. News Calculated the 2014 Best Colleges Rankings” (http://www.usnews.com/education/best-colleges/articles/2013/09/09/how-us-news-calculated-the-2014-best-colleges-rankings?page=4)</p>


<p>And you can manipulate objective data points to fit the curve, so to speak. The objective data that USNWR uses for their national and regional rankings allows them to skew the rankings in order to fit the general public’s view of what the top ranked colleges should be… it’s silly.</p>

<p>Most people don’t know anything about 99.9% of colleges out there, except the one(s) they went to or are familiar with. Why then are we trying to match the rankings with public perception?</p>

<p>bclintonk- similar to my thoughts on fractal’s post, I would love to believe that everyone completing those peer assessment ratings is as well informed about relevant peers as you insinuate. However, you took the words out of my mouth with every “granted” counterpoint you make, each of which reinforces that there are significant flaws in this system. </p>

<p>I am as curious as xiggi and gondaline about the need for more transparency, not only on the questions these academics are being asked, but also on things like how many schools the average respondent rates, or whether the schools they rate are even within their “competition”/relevant tiers. I am not discrediting how qualified these individuals are to evaluate certain institutions of which they have intimate knowledge/have served on committees for/compete with for top faculty… but there is still incredible subjectivity, and I still think that an already faulty rating category within the methodology is worsened by all of these “granted” arguments.</p>


<p>Good catch… I guess I incorrectly lumped everyone directly involved in academia under “faculty members”.</p>

<p>True gojumbos. In fact perhaps the greatest value of the USN&WR methodology is that the factors taken into account are both identified and consistently applied among the institutions. So I agree that it somewhat defeats the purpose of the whole exercise to add survey results to the formula, which would involve factors that are neither identified nor consistently applied.</p>

<p>That said, the reasoning behind the surveys might be similar to why human polls are included in the BCS football ranking formula. Strictly objective measures can sometimes produce odd results and you need an eyeball test to kind of smooth things out. And as bclintonk suggests, it can’t be all that hard to rank a school from 1 to 5, at least as it relates to the historically strong institutions.</p>

<p>And one tidbit to consider when looking at the peer rankings is how broad the gap actually is between the schools ranked 4.6 (Columbia, Cal Tech, Johns Hopkins, Cornell and U of Chicago) and those at 4.9 (Stanford, Harvard and MIT). Assuming all these schools got only 4’s or 5’s on their peer ratings, a school with a 4.6 would have had 4 out of every 10 peer institutions rating them as a second tier school (i.e., “4”). For the 4.9 schools, that would occur on only 1 of 10 surveys (and perhaps even less often, depending on rounding).</p>
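<p>That gap is easy to sanity-check with a couple of lines, under the same simplifying assumption that every rater hands out only 4s and 5s (which the real survey of course doesn’t guarantee):</p>

```python
# Quick check of the arithmetic above: if raters give only 4s and 5s,
# what fraction of 4s produces a given average PA score?
def share_of_fours(average, low=4.0, high=5.0):
    """Fraction of raters giving the low score, assuming only two scores are used."""
    return (high - average) / (high - low)

for avg in (4.6, 4.9):
    print(f"average {avg}: about {share_of_fours(avg):.0%} of raters gave a 4")
# average 4.6: about 40% of raters gave a 4
# average 4.9: about 10% of raters gave a 4
```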


<p>And along similar lines, we could start ranking automobiles by objective metrics such as: Level of wealth of car maker, successful completion rate of cars leaving assembly line, purity level of metal entering factory, etc. Metrics which, if chosen, would certainly benefit the rankings of the cars made by big, prestigious, long-standing automakers.</p>

<p>However, all of those metrics are meaningless to the person looking to buy a car! People want to know how safe and reliable the car is, how fuel efficient it is, quality of brakes, etc. If we rank using the wrong objective metrics, the rankings will be misleading and/or meaningless. </p>

<p>And this is arguably what we see with the national and regional rankings - a bunch of largely irrelevant objective metrics that don’t really measure anything taking place within the colleges (the types of things most families are concerned with), let alone measure the overall quality of the colleges.</p>

<p>Could someone post the PA and guidance counselor numbers here of the top forty or so schools?</p>

<p>Don’t want to add to USNWR’s coffers by buying the darned thing.</p>

<p>“And this is arguably what we see with the national and regional rankings - a bunch of largely irrelevant objective factors that really don’t measure anything taking place within the colleges (the things most families are concerned about), let alone measure the overall quality of the colleges.”</p>

<p>Excellent comment!</p>

<p>“And this is arguably what we see with the national and regional rankings - a bunch of largely irrelevant objective metrics that don’t really measure anything taking place within the colleges (the types of things most families are concerned with), let alone measure the overall quality of the colleges.”</p>

<p>I agree. People wrongly think that the rankings mostly reflect the academic quality of each school. So, naturally, they think that instruction is better at school #32 than it is at #52… when that’s not necessarily true.</p>