<p>This thread is pure comedy. </p>
<p>First, you have someone like West Sidee who attempts to show a bit of erudition by posting this gem:</p>
<p>Problems with US News</p>
<ol>
<li>Different methods of counting SAT scores for public and private schools.</li>
<li>Faulty method that overstates endowment for privates and doesn't count all of a public school's endowment, especially items like patent revenue, etc.</li>
<li>Use of statistics that have zero correlation with student education, such as "yield rate," which studies have shown can be managed. </li>
<li>Ignores a vital component of Bayesian econometrics, KISS (Keep It Simple, Stupid).</li>
</ol>
<p>Well, most everyone on CC knows that US News dropped yield as an element of comparison a few years ago. But that is OK; his point number 3 is no worse than the other pseudo-scientific babble he has posted in this thread. So, West Sidee, spare us the histrionics and do some research on the issues. </p>
<p>Then, we have some who cling to the apparent validity of the peer assessment ranking. Yes, that is the famous first column of the rankings, also known as the great equalizer. Whenever one of the "favorite" schools (just check the advertisements and articles in the report to find them) slips a bit too far in the ranking, the "statisticians" wave their magic wand and use this category to maintain the order, with just a bit of organized chaos to keep the summer readers waiting with bated breath. The peer assessment is a huge testament to blatant manipulation, as the surveys are a simple exercise in geographical and historical cronyism. They might as well rank the colleges by year of founding and it would change very little in that part of the rankings. This portion of the survey has been exposed by past participants who have since refused to take part in protest, as well as by the former lead statistician of US News. Further, if that were not enough, US News has also added some dubious categories, such as expected graduation rate, a category that has no other purpose than to mitigate the impact of a strong selectivity ratio. </p>
<p>Speaking of selectivity, this goes to the few luminaries who see ANY validity in the Princeton Review rankings. To add some credibility to their asinine surveys, which fail every integrity test possible (since someone can vote several times), they add a selectivity index to the subjective quality indexes. Let's take a look at the selectivity indexes for a few schools. As expected, Princeton earns a truly respectable 99. Next, to evaluate the University of Chicago, Princeton Review will use a refined scientific model to correctly weigh the higher acceptance numbers, plus some other mumbo-jumbo of their own creation. The result: a stellar 98. So far, so good ... until you look at that bastion of selectivity, that juggernaut of academic excellence, that shining star of the UC system that is none other than UC-Davis. Yes, UC-Davis scores a 99 in selectivity. Now, I am truly impressed by the integrity of PR. No doubt one can find enough ammunition to develop a ranking that propels a school that accepts 80% of its applicants in a decision round to the top of the undergraduate world. Well, of course, 80% acceptance is still worth a 97 in selectivity according to PR. 'Nuff said!</p>
<p>The published rankings are a joke, but the information that matters is at your fingertips. Build a ranking using the elements that are measurable and verifiable, and toss away the products of a bunch of misguided souls who are blinded by ulterior motives.</p>
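<p>For what it's worth, here is a minimal sketch of what "build your own ranking" could look like in Python: min-max normalize a few measurable, verifiable inputs (75th-percentile SAT, graduation rate, acceptance rate) and combine them with weights you pick yourself. Every school name, figure, and weight below is a made-up placeholder for illustration, not real data.</p>
<pre>
# Minimal sketch: rank schools from measurable, verifiable inputs.
# All school names, figures, and weights are hypothetical placeholders.

schools = {
    "School A": {"sat_75": 1540, "grad_rate": 0.97, "accept_rate": 0.06},
    "School B": {"sat_75": 1450, "grad_rate": 0.92, "accept_rate": 0.35},
    "School C": {"sat_75": 1320, "grad_rate": 0.85, "accept_rate": 0.80},
}

# Weights are a personal choice -- pick them yourself instead of letting
# a magazine pick them for you.
weights = {"sat_75": 0.4, "grad_rate": 0.4, "accept_rate": 0.2}

def normalize(values, higher_is_better=True):
    """Min-max scale raw values to the 0..1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

names = list(schools)
norm = {
    "sat_75": normalize([schools[n]["sat_75"] for n in names]),
    "grad_rate": normalize([schools[n]["grad_rate"] for n in names]),
    # A lower acceptance rate counts as "more selective", so invert it.
    "accept_rate": normalize([schools[n]["accept_rate"] for n in names],
                             higher_is_better=False),
}

# Weighted composite score per school, printed highest first.
scores = {name: sum(weights[m] * norm[m][i] for m in weights)
          for i, name in enumerate(names)}
for rank, (name, score) in enumerate(
        sorted(scores.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {name}: {score:.3f}")
</pre>
<p>Swap in whatever inputs you can actually verify (Common Data Set figures, for instance) and whatever weights reflect your own priorities; the point is that the arithmetic is trivial once the magazine's editorial thumb is off the scale.</p>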