US NEWS Rankings: What Would They Look Like Without Peer Assessment Score?

<p>Guess the OP’s request for some number-crunching minus the PA scores has not been granted…</p>

<p>can anyone post the selectivity data/rank for top 10 schools? please</p>


<p>Oh, c’mon, xiggi. This is actually a LOT easier than you make it out to be. There are about 250 universities that offer programs in my discipline. I could probably name almost all of them off the top of my head, and I know enough about at least 90% of them–if not 100%–to rate them on a 1-to-5 scale (in fact, I have occasionally been asked to do that for a professional survey that’s done periodically, and I never found it particularly difficult to do). </p>

<p>I’ve studied at 2 of them and taught at 3 of them; those programs I know inside and out. </p>

<p>When I went on the entry-level job market I interviewed with about 30 of them (some overlap with the previous 5), and I didn’t go into those interviews blind; with the aid of a ton of data and the wisdom of my own faculty mentors, I studied the entire market from top to bottom, knew which schools would offer A+ career opportunities and why, which A, which A-, which B, on down to those I wouldn’t even consider an offer from. I’ve had discussions about possible lateral moves with probably 10 to 12 schools over the years, each occasioning a pretty thorough investigation of the school’s strengths and weaknesses. I’ve also helped place several dozen of my own students in faculty positions over the years at institutions at all levels within the pecking order; in the course of doing so I’ve discussed with them the strengths and weaknesses of probably 100 or 150 schools. I still maintain fairly frequent contact with many of those ex-students, and they’re only too happy to spill their guts about their current institutions. </p>

<p>I’ve served on our faculty appointment committee, sometimes chaired it, and in that capacity reviewed probably several thousand C.V.s of entry-level candidates, made judgments about the strengths and weaknesses of the graduate programs they’re coming out of, winnowed those numbers down to interview the strongest candidates, talked to their faculty mentors in great detail about the candidates, their academic training, and their intellectual projects, talked to the candidates in great detail about what they did and who they studied with in graduate school, made offers to some of them, knew which schools were competing for their services and where they ultimately ended up, and followed what they’ve done since. I can tell you in great detail about each of the 15 or 20 schools that are usually or often able to outcompete us for the faculty members and the students we want, and why they are able to do so; I can also tell you in great detail about another 12 or 15 schools that are roughly on our level, such that faculty recruiting contests usually come down to things like geographic preference, spousal employment opportunities, and the like; and I can tell you enough about each of the 215 or 220 schools that we’ll best in almost any competition that you’ll understand why we’re almost universally perceived to be better than they are. </p>

<p>In my time as a faculty member at three different institutions we’ve also made probably at least 50 lateral hires, each involving a national search and reviews of dozens, sometimes hundreds of CVs, leading to detailed interviews with probably 300 to 400 candidates over the years, each eager to explain why he or she is willing to consider relocating to our institution from his/her present one (better pay, more prestige, better colleagues, better students, and geographic preferences are often factors, but in many cases they’re only too happy to air their present institution’s dirty linen). And when we have successfully made those hires, I’ve gotten to know those people as colleagues, each bringing his or her own experience of schools attended and schools taught at, and happy to talk about the strengths and weaknesses of each. (Of course, each of my colleagues who was already here when I arrived has similar stories). I also know who are the people, on which faculties, whom we covet but will never be able to get to come here. </p>

<p>I’ve participated in academic conferences or faculty workshops at probably 60 or 70 schools by this stage of my career, met their faculty, chewed the fat about many things including what’s going on at their school, who’s doing what, what are the exciting new developments, what are the problems and challenges. I’m also a member of professional associations and attend professional meetings in my field. Through all that, I’ve gotten to know hundreds of professional colleagues at probably well over 100 schools, people I see and talk to fairly regularly either in person or by phone, sometimes by e-mail, usually in connection with some work-related thing but often the conversation extends to information exchange about what’s going on at each other’s schools. In fact, looking over the roster of schools that have programs in my field, I’d estimate that I’m on a first name basis with one or more faculty members at more than two-thirds of them, probably closer to three-quarters, some of them deans, a few provosts and presidents. I’ve organized faculty conferences and workshops at my own institution; I know who’s on the A-list in my specialty, who are the solid back-ups, who are the somewhat weaker people who might need to be invited anyway, and so on. I also attend conferences and workshops organized by my colleagues, and get a good sense of who’s who and what’s what in their specialties.</p>

<p>Then there’s the whole business of writing tenure letters, and soliciting tenure letters from the top people in the field, which I’ve had to do at times as a member and for a time chair of our tenure committee. That requires, inter alia, identifying the key people in a particular specialty, which tells you something about their institution. </p>

<p>And finally, I read. I read voluminously, some of it in the course of my own scholarly research (I need to keep abreast of developments in my field), some just purely for informational purposes, e.g., so I’m not missing any important new developments when I teach my classes. I know who’s producing that scholarship, which of it is good and which less good, which influential, which less so. I know where the highest quality work is coming from. Much of the best of it, not surprisingly, comes from the usual suspects, the Harvards, Stanfords, Chicagos, Berkeleys, and Michigans; some from my own institution and others similarly situated; some, but less, comes from less highly regarded institutions, and when it does, it gets noticed, and it’s usually not too long before the person or persons producing it start to get inquiries about possible lateral moves. There are several thousand active academics in my discipline, but more like several hundred who are actively and regularly producing high quality scholarly work at any given time. They tend to be known and coveted, and there is an active competition for their services, sometimes quite low-grade (because they’re giving off signals that they’re not presently interested in moving, or are already at such an exalted level that there’s no chance of moving them), sometimes open and intense.</p>

<p>Through all that, I have a pretty thorough knowledge of my field. All of it. Certainly enough to have informed opinions and to thoughtfully fill out a survey that asks me which are the most distinguished institutions (5’s), which strong but not quite at the level of a 5 (4’s), and at the other extreme which are marginal (1’s). I might have a somewhat harder time distinguishing some of the 2’s from some of the 3’s, but even there I think I know enough about most schools to make a pretty informed judgment call.</p>

<p>It never ceases to amaze me how, on a forum devoted to sharing information and expressing opinions about and dissecting the strengths and weaknesses of a wide variety of colleges and universities, so many people take the position that it’s impossible for college and university presidents and provosts to know very much about other colleges and universities. Apparently we on CC know something about the relative strengths and weaknesses of colleges and universities. But the people running them don’t? What a crock.</p>

<p>I think bclintonk and xiggi should start up their own rankings… TOGETHER (like Rudolph and Hermie!). Certainly both qualified.</p>

<p>How 'bout it? And your healthily divergent approaches would make them all the more valid?</p>


<p>I generally agree with the first part of your comment, rjk, but I’m not so sure that PA scores aren’t also influenced over time by overall US News rankings. I believe someone studied this in the law school context and found that over time, PA ratings of law schools came to more closely resemble the overall US News ranking of law schools. They called it the “echo chamber effect”–the people filling out the PA surveys were influenced by the US News rankings, and in turn reinforced and seemed to “validate” those rankings by echoing what they said.</p>

<p>On the other hand, GCs do have their biases as well. I think there’s a perceptible bias toward Ivies and the Northeast generally. Why do Chicago and Northwestern get 4.6’s while similarly ranked Ivies are getting 4.8’s and 4.9’s? It’s obvious the GCs aren’t just going down last year’s US News ranking and assigning scores strictly on that basis. </p>


<p>I don’t think we have enough data or information on the details of US News’ methodology to crunch the numbers. I certainly don’t. They just describe broad categories and list the weights they assign to them. For factors like faculty compensation, you’d need actual data to work with, and a formula for how to evaluate it, and they don’t provide that. And when they say they assign a weight of 22.5% to graduation and retention rates, what does that mean, exactly? We know they assign Caltech’s 92% 6-year graduation rate a “graduation rate ranking” of 25; but then what’s the formula for how that ranking feeds into the final rating points upon which the ultimate ranking is based? They don’t tell us (at least, I’ve never seen it).</p>

<p>Clinton, even if I do not say it often enough, I have tremendous respect for your opinion, and especially when I do not agree with your conclusions… In this case, I must admit to being impressed by the depth of your reply. I do not doubt for a second that you do know enough about your peers in THEIR and your department to express a well-researched and educated appraisal.</p>

<p>Unfortunately, that is NOT what USNEWS is interested in. They do NOT cull information that has such granularity. The survey is asking a simplistic question that covers the broad spectrum of ALL programs offered at a school. In so many words, while your knowledge might be instrumental in evaluating a couple of related programs, it might get buried and overwhelmed by the many unrelated programs. Of course, one could always believe that a school that is distinguished in a few programs must do so across the board. But this would be quite surprising at my undergraduate school, which is particularly angular and specific about its areas of excellence.</p>

<p>Again, I do respect your divergent opinion in this case, but still have to suggest that the Clemson-like response is the norm versus the exception when it comes to the PA and ancillary dedication to teaching survey. Perhaps without the malice.</p>

<p>Can’t believe no one has commented on Sue’s post (#25). It’s an agreement by a number of LACs to present their data and use the rankings in specific ways. Most notable is the fact that the schools that participated in this cooperative effort are 20 schools within the USNWR top-25 LACs (if you take out the military academies and a couple others, you’ve got the whole list covered). This, to me, reinforces the notion that colleges know who their cross-admit peers are and want to keep the group as predictable as possible. The goal seems to be to ensure that they all look out for each other in the pantheon of “top” colleges. My guess would be that the data would show each of these colleges giving higher-than-average marks to all the others in the group, even though there are many other great schools that deserve equally high peer-assessment rankings. They’re just not in this particular club.</p>


<p>Yeah, we get it. The PA’s are not perfect… but neither are the objective metrics that go into the national and regional rankings. </p>

<p>The answers to those “simplistic” PA survey questions are far more valuable to many families than the largely irrelevant objective metrics that USNWR cherry picks for its National and Regional rankings. Make no mistake about it, these rankings are designed specifically to promote highly selective and wealthy schools.</p>

<p>And there’s nothing particularly wrong with that… The problem enters in when you start marketing these rankings to the entire college bound population as a one-size-fits-all list of “Best Colleges”. </p>

<p>Why do we need a one-size-fits-all rankings list anyway?</p>

<p>Sally, that issue was covered ad nauseam when it was new news in 2007. </p>

<p><a href="http://en.wikipedia.org/wiki/Annapolis_Group">Annapolis Group - Wikipedia, the free encyclopedia</a></p>

<p>Part of not addressing it again is that the idea originated at one of the most corrupt and hypocritical outfits ever created, namely the Education Conservancy led by the snake oil salesman extraordinaire named Lloyd Thacker. </p>

<p>Reducing the transparency is never a good idea.</p>

<p>Fractal, I am afraid you have never read an example of the PA survey.</p>

<p>For the record, I am not defending the merits of the rankings. I support the broad dissemination of the underlying data. Or even the subjective opinions as long as one can see where they originate from.</p>

<p>xiggi, I don’t understand why the fact that the “collaboration” was covered in 2007 (which I didn’t know) makes it less relevant to this discussion. I don’t agree or disagree that it might have come from a disreputable source. The fact is, these colleges have a vested interest in covering for each other–which is exactly what the PA reinforces, either implicitly or outright.</p>

<p>“The fact is, these colleges have a vested interest in covering for each other–which is exactly what the PA reinforces, either implicitly or outright.”</p>

<p>So, we can all agree that top private LACs are manipulating the system to compete better. Gotcha…</p>

<p>Going back to the original question…I think I determined a few years ago that the PA score was almost entirely predictable from “hard data” so the rankings could be almost exactly the same without the PA if the hard data were weighted a certain way.</p>

<p>To respond to bclintonk’s point:</p>


<p>Here is the full methodology breakdown: <a href="http://www.usnews.com/education/best-colleges/articles/2013/09/09/best-colleges-ranking-criteria-and-weights">Best Colleges Ranking Criteria and Weights - US News and World Report</a></p>

<p>You can actually access the majority of data points through the college profiles, such as Caltech’s: <a href="http://colleges.usnews.rankingsandreviews.com/best-colleges/california-institute-of-technology-1131/rankings?int=c6b9e3">California Institute of Technology | Rankings | Best College | US News</a></p>

<p>For those who have full access to the data (through a paid subscription or a physical copy of USNews), it looks like you can get access to just about all the information you would need to calculate the adjusted rankings without academic undergraduate reputation. </p>

<p>In the specific case of Caltech’s graduation rate score: the methodology specifies the aggregate score breakdown as follows: 80% average graduation rate, 20% average freshman retention rate.</p>

<p>These data points are actually available for free in the basic profile, so the score would be: .8*92 + .2*98, or 73.6 + 19.6 = 93.2. </p>

<p>Now, to calculate the rankings without the Undergraduate Academic Reputation score, which is worth 22.5%, you can simply renormalize the remaining categories’ weights, which sum to 77.5, so that they add back up to 100. So, each category would be weighted as follows: </p>

<p>Student Selectivity: 16.13%
Faculty Resources: 25.81%
Graduation/Retention Rates: 29.03%
Financial Resources: 12.9%
Alumni Giving: 6.45%
Graduation Rate/Performance: 9.68%</p>

<p>So, Caltech’s adjusted graduation rate score would be 27.06 (29.03% of 93.2). The rest of the categories can be calculated accordingly to find the aggregate adjusted score.</p>

<p>Once you have all the data points in an excel spreadsheet, these calculations can be done in a collective/expedited fashion. If anyone who has full access to the online data or even a physical copy of USNews can make these calculations, it would be much appreciated!</p>
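<p>For anyone who wants to set this up in a spreadsheet or script, here is a minimal sketch of the arithmetic described above. The weights are the published category weights with Peer Assessment dropped; the example category scores (other than the 92/98 graduation and retention figures quoted earlier) are placeholders, not actual US News data.</p>

<pre><code>
# A rough sketch of the arithmetic described above. The category weights are
# the published USNWR weights with Peer Assessment (22.5%) dropped; the
# example category scores are made-up placeholders, NOT real US News data.

WEIGHTS_WITHOUT_PA = {
    "student_selectivity": 12.5,
    "faculty_resources": 20.0,
    "graduation_retention": 22.5,
    "financial_resources": 10.0,
    "alumni_giving": 5.0,
    "graduation_rate_performance": 7.5,
}

def renormalized_weights(weights):
    """Scale the remaining weights (summing to 77.5) back up to a total of 100."""
    total = sum(weights.values())
    return {k: 100.0 * v / total for k, v in weights.items()}

def graduation_retention_score(grad_rate, retention_rate):
    """80/20 blend of average graduation rate and freshman retention rate."""
    return 0.8 * grad_rate + 0.2 * retention_rate

def adjusted_aggregate(category_scores, weights):
    """Weighted average of 0-100 category scores using the renormalized weights."""
    w = renormalized_weights(weights)
    return sum(w[k] * category_scores[k] for k in w) / 100.0

if __name__ == "__main__":
    print(renormalized_weights(WEIGHTS_WITHOUT_PA))   # grad/retention -> 29.03, etc.
    # Caltech's graduation/retention score from the figures quoted above:
    print(graduation_retention_score(92, 98))         # 93.2
    # Placeholder scores for the other categories, just to show the mechanics:
    example = {
        "student_selectivity": 97,
        "faculty_resources": 94,
        "graduation_retention": graduation_retention_score(92, 98),
        "financial_resources": 99,
        "alumni_giving": 82,
        "graduation_rate_performance": 90,
    }
    print(round(adjusted_aggregate(example, WEIGHTS_WITHOUT_PA), 2))
</code></pre>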

<p>USNWR could replace the PA with faculty reward statistics as a measure of faculty quality. Until they do, I’ll support the PA as I believe it’s the only proxy for faculty quality in the USNWR rankings.</p>

<p>^ Is that to include non-teaching faculty?</p>


<p>Well if it’s that easy, then go ahead and make the calculations.</p>

<p>Look, I know what weights they assign to various categories, and I certainly know how to adjust those weights after dropping out some categories. And yes, some of the raw data are available–but not all of it. But mainly what I’m telling you is I’ve never seen how they go from raw data and weights assigned to various categories to the raw score on which they base the ultimate ranking. So Caltech ranks 25th in 6-year graduation rate with a rate of 92%, and 6-year graduation rate counts for X% of the ranking. Fine, but how many points does Caltech get for ranking 25th in that category? Or for having a graduation rate of 92%, which is about 5 points below the leader in that category (Harvard, at 97%). There are lots of ways that could be sliced. Are the points based on the raw data or on the ordinal ranking, and in either case, what’s the formula? Is it based on some fraction of the leader in that category? It’s easy enough to weight the points for a particular category after you know how the points in that category are awarded, but until you have that, it’s completely non-transparent. </p>

<p>Unless I’m missing something. If so, please enlighten.</p>
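<p>To make that concern concrete, here is a small illustration (neither formula below is claimed to be what US News actually uses) of how two plausible conventions for turning the same 92% graduation rate into category points give noticeably different numbers:</p>

<pre><code>
# Two hypothetical scoring conventions for the same 92% graduation rate.
# Neither is known to be US News's real formula; this only shows how much
# the choice of convention can matter.

def points_vs_leader(value, leader_value):
    """Scale the raw value against the category leader (leader = 100 points)."""
    return 100.0 * value / leader_value

def points_from_rank(rank, n_schools):
    """Convert an ordinal rank into points (1st place = 100, last place = 0)."""
    return 100.0 * (n_schools - rank) / (n_schools - 1)

# Caltech: 92% graduation rate, ranked 25th; Harvard leads the category at 97%.
print(round(points_vs_leader(92, 97), 1))    # 94.8 under a raw-value convention
print(round(points_from_rank(25, 200), 1))   # 87.9 under a rank-based convention
                                             # (assuming ~200 ranked schools)
</code></pre>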


<p>The whole point is that I don’t have access to the full premium data…hence my request for someone who does have access to share. I’m also overseas in a region where I cannot obtain a physical copy. But, thank you for the supportive gesture ;-)</p>


<p>I believe you are missing something: the fact that, while we don’t know exactly how US News goes from each raw category score to the total aggregate score, it doesn’t matter. For the purposes of this experiment, as long as we are comparing apples to apples and have consistency in how we decide to translate the raw data scores to the aggregate weighted score, we should still be able to see the ultimate trends in the rankings.</p>

<p>Another way of looking at it is simply distinguishing the categories where “higher raw data # = better” from those where it isn’t. In the methodology, every number follows this pattern, with the exceptions of acceptance rate (where you can just use 1 minus the acceptance rate), student/faculty ratio (use its reciprocal, i.e., faculty per student), and the percentage of classes with 50+ students (use 1 minus that percentage). </p>
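<p>As a quick sketch of that normalization step (the input values here are placeholders, not real data):</p>

<pre><code>
# Flip the "lower is better" inputs so that a higher number is always better,
# as described above. Input values are placeholders, not real data.

def normalize_reverse_coded(acceptance_rate, students_per_faculty, pct_classes_50_plus):
    return {
        "selectivity": 1 - acceptance_rate,                  # e.g. 0.10 admit rate -> 0.90
        "faculty_availability": 1.0 / students_per_faculty,  # e.g. 3:1 ratio -> 0.33
        "small_class_share": 1 - pct_classes_50_plus,        # e.g. 0.12 -> 0.88
    }

print(normalize_reverse_coded(0.10, 3, 0.12))
</code></pre>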

<p>We don’t need every bit of transparent detail to achieve the desired results.</p>

<p>USNWR is an echo chamber for certain. PA is just another version of congratulating themselves, like the Academy Awards. It is what it is.</p>

<p>Lost in ALL of this elitist nonsense is the fact that, regardless of the category or ranking of any school, the mission of EVERY school is to provide quality higher education to its students. </p>

<p>And I think overall, with very very few exceptions, they ALL succeed at that mission. </p>

<p>There are stellar schools in small markets, third tier for example, with wonderful, endearing faculty, some of whom have amazing credentials. </p>

<p>It’s not like UMinnesota or UWisconsin (name a school) gets all the best professors. They don’t. </p>

<p>What I look for in school is a COMMITMENT TO TEACHING. Then a COMMITMENT to the whole person, not some pompous jerk in a corner office pontificating with statistical analysis that would bury a bureaucrat at the Dept of Labor for a decade. Does the faculty reach out to students? Are they despised or beloved? Class size? Do students clamor to sign up, not because of an easy A, but because the instruction level is superior and the kids who complete the course are brimming with energy and information…a real learning experience? </p>

<p>Is the campus vibe/community a cutthroat/hyper-competitive/snarky environment? Or challenging yet endearing and inclusive? Is the workload sufficiently challenging to cause students to be serious about their work? Or is it simply a place to live while they socialize and go to athletic events and take easy classes to graduate? (I won’t mention any names, but a so-called state flagship that is a top-30 school fits that definition to a T.) </p>

<p>Research is for graduate students and professors. Rankings are really about undergraduate reputation.</p>

<p>Some responses to comments above (don’t have time to quote everyone):</p>

<ul>
<li><p>Of course there are some differences in how schools report the “objective” information, but at least they are starting with common criteria. That’s what I meant by “consistently applied.” With survey respondents you have no idea whatsoever what criteria each survey respondent might consider important. Plus, you have no idea how each respondent will allocate the ratings. One may consider only the top 4 schools to be “5” worthy while another may give 5’s for the top 50.</p></li>
<li><p>Giving grades to schools is done by many publications. It is more fair than a ranking but also boring and not very profitable.</p></li>
<li><p>Any survey, of either PAs or GCs, is surely subject to regional biases, just like Heisman trophy voting. Beauty is in the eye of the beholder. This type of bias was shown in the one survey that I think was the most accurate of its kind ever done. It was a 2003 Gallup poll measuring university brand recognition among the general public. In that survey, for example, only 5% of respondents in the East had Stanford as one of the top 2 schools, while 19% of those in the West had Stanford that high. See “Harvard Number One University in Eyes of Public.”</p></li>
<li><p>There are a lot of ties in the USN&R rankings due to heavy rounding. That itself has a big impact. It’d be interesting to see where the PA rankings would shake out if they added a few more digits.</p></li>
</ul>