US News 2008 to 2009?

<p>Thanks, Hawkette, for once again clearing up the misunderstandings about yield playing a part in the USNWR rankings. As you so clearly show: it doesn't.</p>

<p>Schools like WashU get dinged here on cc when accused of pandering to the rankings by mercilessly waitlisting top students in order, it's alleged, to protect their yield. Say what one will about WashU (a school I admire), their waitlist policy will have no effect on their ranking according to the formula Hawkette provided. </p>

<p>In fact, the formula is designed in part to resist manipulation by schools--it is hard to coerce one's peers into suddenly favoring a rival school (unless research $$, salaries, labs, breakthroughs, an influx of field leaders, and other academic conditions show such dramatic improvement that the academic community gets excited). Schools that spend a lot of money on faculty (about 7% of the overall ranking) are also rewarded in the rankings, a condition that, if schools are trying to improve their ranking, I think would only benefit students.</p>

<p>But I sometimes wonder at all the fandom reactions. All of this ranking stuff is like following a favorite sports team. Some like to see sharp new favorites rise. Some are annoyed to see upstarts with lots of talent overtake the venerable legends.</p>

<p>First of all, the USNWR methodology and weightings are not static; they change over time. The magazine wouldn't sell as many issues if the rankings stayed the same year to year, so it changes the methodology and/or the weightings. </p>

<p>Second, does a one-year change in a number of these factors really translate into a greater/lesser undergraduate experience? Of course not. </p>

<p>Third, is there really a significant difference in the undergraduate experience among the top 20 or 40 national universities (out of 3,500+ schools in the US) that the USNWR methodology can discern? Of course not. Much depends on the individual student and how well they fit into a particular undergraduate program.</p>

<p>Fourth, does the peer assessment really tell us anything about the undergraduate experience? How would the peers know anything about it, aside from evaluating those graduates who were admitted to grad school at their own institutions? Sorry, but a good reputation in research and publications doesn't necessarily translate into a better undergraduate education.</p>

<p>Lastly:</p>

<p>
[quote]
5% Graduation Rate Performance (measures diff in 6-yr graduation rate and the predicted rate)
[/quote]
</p>

<p>This metric produces a score based on the difference between a school's predicted graduation rate and its actual graduation rate. Why is the predicted vs. actual graduation rate important? Shouldn't the six-year graduation rate, already used in the retention metric, be sufficient? What information does the predicted vs. actual graduation rate give a prospective student? With this metric, a school whose graduation rate was underestimated gets rewarded.</p>
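<p>To illustrate with made-up numbers (and assuming the metric is simply the actual rate minus the predicted rate, which is my reading of the description above): if the model predicts a 78% six-year graduation rate for School A and 85% of its students actually graduate, School A scores +7 on this factor. If School B is predicted at 92% and graduates 90%, it scores -2, even though School B actually graduates a larger share of its students. The school facing the lower expectation comes out ahead.</p>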

<p>Unfortunately, schools take these rankings seriously. I was talking with a Columbia professor of applied mathematics, and he noted that the engineering school had moved up this past year, cracking the top 20 in that field. I then asked if anything had changed in the undergraduate engineering program, and he replied "no." He acknowledged that the school was focusing on the data and "marketing" that would move its ranking. It is no different from the marketing of a cola or an antiperspirant.</p>