<p>We all know that the US News rankings are surveys that are heavily manipulated by the colleges and that weight subjective impressions (such as reputational factors) heavily. Also, the data provided by the schools are not publicly available to us and are rarely referenced.</p>
<p>The Shanghai rankings and the THES and Newsweek rankings have similar flaws. Now I have come across a highly respected (as in respected by the National Research Council) ranking that relies on publicly available and very objective data. It comes from the Center for Measuring University Performance at the University of Florida. Columbia ranks number one for 2006!</p>
<p>The nine characteristics used in these rankings rate research productivity. Some of you will object. Should a university be judged purely on its research? Maybe not. What about undergrad teaching? What about quality of life and other social factors that make it worthwhile to go to Univ X instead of University Y which may be higher ranked? These things are not easily measured. They exist and are important. But since the university's mission is research and since here the data are impeccably objective and verifiable I believe this ranking is a better tool.</p>
<p>Top researchers may not make good teachers. But the opposite is not guaranteed either. In the absence of reliable measures of teaching quality, why not go with schools that are tops in research, hoping that that atmosphere and commitment to intellectual excellence will trickle down to all the other areas that matter? I have an ax to grind: my son goes to Columbia this fall.</p>
<p>yes, columbia performs much better on rankings based on research instead of undergrad education. do you have a link to your referenced data?</p>
<p>Well, this ranking isn't good either, though. I love my college, but we're not number one. A realistic spot would be number 5, after HYPS. Sharing the spot with MIT would probably seem even more credible.</p>
<p>Princeton does not fare well in international rankings, for example in the Shanghai rankings. Nor does Yale. In the rest of the world, it is Harvard, MIT, then Stanford and Columbia as co-equals. This by reputation.</p>
<p>
[quote]
since here the data are impeccably objective and verifiable I believe this ranking is a better tool.
[/quote]
</p>
<p>The underlying data is objective, but there's subjectivity in how to convert a whole bunch of objective raw data into a set of rankings.</p>
<p>columbia2002, human beings will never attain true objectivity. To the extent that humans are doing experiments and doing science, don't you think all science involves objective data converted for publication by humans? </p>
<p>The Center for Measuring University Performance, I believe, completely eliminates the numerous flaws of the various other rankings and focuses on what universities claim to be about, i.e., research. Faculty are attracted to universities for research opportunities, not for teaching. They are rewarded for research, for bringing in grants, for winning academic honors, etc. Teaching, in so far as it is important, is difficult to measure. The NSSE (National Survey of Student Engagement) data would show that, but universities are loath to release that info.</p>
<p>And also, this ranking just shows that Columbia is tied with Harvard, MIT, Stanford, and UPenn. They have those five done in alphabetical order. I mean, Columbia is first, but so are the other four.</p>
<p>
[quote]
I believe, completely eliminates the numerous flaws of the various other rankings
[/quote]
</p>
<p>You believe wrong. Doing a college ranking isn't science/an experiment. There isn't an inherent or universal truth. A human has to make a subjective decision about how to weight the various metrics of research success, etc.</p>
<p>Peugeot, you are correct, I overlooked the alphabetical list. Columbia2002, to the extent that the Center chose those 9 metrics and did not count Nobels and Fields Medals, for example, there was subjectivity involved. My point: the data itself is not subjective, that is, it is not based on a poll. Nor was the data supplied by the schools and unverifiable and manipulable. It did not rely on yield rates and faculty/student ratios etc that are all manipulated. Is this a perfect rating system? Of course not. Is it better than USNews? I believe it is.</p>
<p>It's not close to objective. Someone could have taken the same metrics, weighted them differently, and come up with a totally different ranking.</p>
<p>Washington Monthly did a ranking in which Columbia wasn't even in the top 20: <a href="http://www.washingtonmonthly.com/features/2006/0609.national.html">http://www.washingtonmonthly.com/features/2006/0609.national.html</a></p>
<p>Each ranking is subjective depending on the weights. Columbia2002 is right - there is no universal truth to these things.</p>
<p>Forgive me, perhaps I am not explaining myself clearly. The choice of what to measure and how to weight the measures will always be subjective. Everything humans do will have an element of subjectivity. That is a given. What I am trying to get across is that this sorting (not ranking) eliminates subjective weighting of subjective data (peer assessment, for example), subjective weighting of unverifiable data (SAT interquartile bands as reported by colleges), and subjective weighting of manipulable data (yields), and is left with just the subjective weighting of objective data (for example, the dollar value of federal research grants).</p>
<p>At least the data gathered is objective even if the sorting and weighing of this data will always remain subjective. The Newsweek rankings and the Shanghai Jiao Tong U and the THES rankings all include subjective data.</p>
<p>I still feel that the Revealed Preference rankings are the closest thing you're ever going to get to objective. They rank based on one thing: given options between top schools, what schools do students with those options choose to attend? It's done using the same rating algorithm used for chess players, where in this case if Columbia and Penn admit the same student, and he chooses to go to Penn, it's a "win" for Penn vs Columbia and a "loss" for Columbia vs Penn.</p>
<p>So you can get a pretty damn objective measure of where students want to go, given the option. In some cases the rankings are close, in others there's less of a contest.</p>
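<p>For anyone curious about the mechanics being debated here, the following is a minimal sketch, in Python, of how an Elo-style update could be applied to a cross-admit "match". The K-factor of 32 and the starting rating of 1500 are my own illustrative assumptions borrowed from standard chess Elo; the actual revealed-preference study used a more elaborate statistical model, so treat this only as a toy illustration of the win/loss idea.</p>

```python
# Toy Elo-style update for cross-admit "matches" between colleges.
# Assumptions (not from the actual study): K = 32, starting rating 1500.

def expected_score(r_a, r_b):
    """Predicted probability that college A 'wins' (the student picks A over B)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner, loser, k=32.0):
    """One cross-admit decision: the student chose `winner` over `loser`.
    The winner gains rating and the loser loses the same amount, scaled by
    how surprising the outcome was given the current ratings."""
    e_w = expected_score(ratings[winner], ratings[loser])
    delta = k * (1.0 - e_w)
    ratings[winner] += delta
    ratings[loser] -= delta

# Example: a student admitted to both chooses Penn over Columbia.
ratings = {"Columbia": 1500.0, "Penn": 1500.0}
update(ratings, winner="Penn", loser="Columbia")
print(ratings)  # Penn rises to 1516.0, Columbia falls to 1484.0
```

<p>Note that both ratings move on every match: an upset (a student choosing a much lower-rated school) moves the ratings far more than an expected outcome does.</p>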
<p>Every other ranking system I've seen isn't basing it on the opinions of the general population; they're trying to CONVINCE the general population of new rankings in order to influence those opinions, and doing so with a pseudoscientific set of objective and subjective data along with a pseudoscientific algorithm. Around here, I've only seen such systems used as bragging tools. In my experience, once a student sets foot on Columbia's campus, they no longer feel a need to brag to anyone about where they attend college. I wish that parents could do the same.</p>
<p>Denzera, with all respect, the revealed preference data shows not the preference of the general population but that of impressionable, inchoate 18-yr-olds. How can this be objective? Little Johnnie and Suzie are influenced by the talk around the dinner table, echo parents' and peers' views, and follow rankings. This is not objective, but highly subjective in the manner of beauty pageants. The NSSE data may be of use, and the Center at U of Florida has a highly respected sorting system. I believe LACs offer the best undergrad education. How can you beat a college where you sit around a Harkness table and chat with a prof? As to chess players, they are ranked by the higher-ranked players they have beaten. In the revealed preference studies, Harvard does not come down a notch because a student turned it down to go to Northwestern, nor does NU go up because the student chose NU over Harvard.</p>
<p>objective according to whom? They're the very people that this ranking matters to! They're the ones making the decisions about their lives, they're the ones most assiduously reviewing and considering their options (at least I hope), and they're the ones colleges are competing over. </p>
<p>Your statement about chess players and Harvard/Northwestern reveals an ignorance about how that algorithm works. I'm not going to take the time to explain it here, but I suggest you read how ELO rankings are calculated.</p>
<p>Mr. Denzera, please consider that the population that is most concerned with the rankings, and over whom the universities fight, may not be objective enough to judge the schools. Yes, I used that word objective, but objectivity perhaps demands some distance, some remove from the passions of the day, and parents and kids applying to these schools may lack that objectivity. In fact, it is perhaps more objective to look at all rankings over a period of two decades to get a sense. In that sense, whether Marilee Jones was good for MIT or not may well be judged 50 years from now, if the current crop of MIT admits amount to much. Objectivity demands perspective, and time alone lends it. In the absence of time, measures like % of faculty in the National Academy, research productivity, and research dollars acquired may serve as an accelerated version of time, in the same sense that some isolated regions offer an accelerated view of how evolution works.</p>
<p>I used to be, two decades ago, rated 1400+ in the chess world; perhaps I am mistaken, as I haven't played in a while. I wouldn't call someone ignorant. I might gently say maybe I am mistaken, but I have been thinking it works like this, etc., and above all I would be too embarrassed to say I don't have the time to explain. I am sorry I imposed my ignorance on your valuable time.</p>
<p>well as long as i get to be #1 too i'll support it</p>
<p>ramaswami -</p>
<p>let me rephrase: have you read the explanations of the algorithms, data cleaning and manipulation that the revealed preference ranking study used?</p>
<p>I'll grant that prospective rankings (using acquired data) rather than retrospective rankings (using admission/matriculation "tournaments") have value. I never said they were completely worthless. But they can never be "objective" by your definition, because the scope of the data collected, and the weightings and calculation of the final rankings, are inherently subjective elements. Argue if you will about the biases inherent in new college prefrosh, but I think using only their decisions is a much more objective approach, because no human choices are being used to determine the output (even if human choices are the inputs).</p>
<p>Denzera, no, I do not know about the data cleaning and the algorithms involved in the revealed preference matter, nor am I math-savvy enough to understand it well if presented with it. It seems to me that college prefrosh, in voting with their feet and tuition, will decide on colleges based on factors that may have little to do with the college's strengths in teaching, research, etc. There is wide agreement that undergrad teaching at Harvard is poor. Harry Lewis, the ex-dean of Harvard College, Ross Douthat, etc., have excoriated the college. Yet Harvard trumps all others in revealed preferences. So, if one were to devise a ranking of colleges on teaching quality, I would not look to where the prefrosh go. I would look for objective factors like whether faculty get rewarded for teaching, whether kids get writing assignments throughout the term, how accessible the teachers are, whether full profs teach, etc. Of course, when I decide on these factors some subjectivity is involved, but the data itself (for example, class size, or the number of full profs teaching freshman classes) would be objective in that it would be measurable. To measure research strength, one can look to citations, National Academy memberships, etc. One can also take a few undergrad teaching measures and a few research measures and come up with a ranking. Yes, it would be subjective in the ways I choose to weight and combine measures and in the measures I choose, but my thinking is this would be better than the biases of 18-yr-olds. Please feel free to educate me out of my ignorance.</p>