Forbes or U.S. News?


<p>That sounds like a good approach. College rankings are, in effect, pre-computed results for queries against just such factors (grades and scores, as well as others you might have overlooked, like average faculty salaries or class sizes). So, in my opinion, they can be useful discovery tools (especially if you have no idea where to begin). That’s not to say they are the best decision-making tools. </p>

<p>To me, the most interesting part of the USNWR ranking (and some others) is in the bottom half of the top 50, or the next 50. That’s where students can find good possibilities if they don’t quite have the stats to qualify for the most famous and selective schools. </p>

<p>Should the difference between #1 and #5 matter to you? No. Should the difference between #25 and #125 matter? Yes, probably. If you can qualify for #25 but for some reason like #125 better, then you almost certainly can find an equally good fit somewhere in between. A significantly higher rank will tend to signal strengths such as smaller classes, better aid, or a higher-scoring student body (assuming you care about any of those things).</p>

<p>LOL, anyone who uses something other than US News is someone who doesn’t like how their school is ranked by said ranking.</p>

<p>^ Not necessarily.
Kiplinger has much better financial comparisons.<br>
Washington Monthly’s site allows you to click-sort on “Research” stats, if that’s a priority. If you want state-by-state comparisons, go to stateuniversity.com. If all you want is a quick snapshot of the supposed “best”, in no particular order but with lots of supporting data, go to 50topcolleges.com.</p>

<p>“Five years after you graduate, nobody will care whether you went to Harvard or Northern Michigan.”</p>

<p>Yeah, sure. Maybe if you had said Northern Mich versus Western Michigan it might make some sense.</p>

<p>I could have said UW Madison or UW Whitewater and it would have been just as valid.</p>

<p>Yes, you could have. Either way your claims are unsupported and barely informed opinion.</p>

<p>Prestige and USNWR are not the same. Emory, Rice and Vanderbilt have been ranked higher than Georgetown for years, but they are not more prestigious. WUSTL has been ranked higher than Brown and Cornell, but again, it is not more prestigious. </p>

<p>As far as rankings go, Forbes is useless, while the USNWR ranking is interesting but executed with complete incompetence, by design, to hurt public universities and lift private universities.</p>

<p>The best way of determining prestige is by looking at the size of the university combined with the strength of its major programs (Business, Economics, Engineering, Law, Medicine, etc.). Schools with over 10,000 students (including graduate students) and with top professional programs (Harvard, Stanford, Yale, Columbia, Chicago, Duke, Penn, Cornell, Cal, Michigan, Johns Hopkins, Northwestern, …) will generally be considered the most prestigious. Princeton does not have size or strong professional programs other than Engineering, but it is extremely prestigious thanks to its history, strength in the traditional disciplines, and so on. There are exceptions of course, such as Brown and Dartmouth, which benefit a great deal from their affiliation with the Ivy League, Notre Dame thanks to its Catholic identity and football tradition, Georgetown thanks to its history and location, etc. Rankings will not determine the order of prestige. Princeton can be ranked as high as Harvard for eternity, but it will not be as prestigious. Columbia and Penn can be ranked higher than MIT and Stanford; it will not make them more prestigious.</p>


<p>75% or more of the USNWR ranking is based on rather straightforward statistical data (admit rates, test scores, class sizes, etc.). That data comes from the colleges themselves. What is the evidence that US News is handling it with complete incompetence?</p>

<p>The balance comes from the peer and guidance counselor ratings. These presumably are vulnerable to the same problems that affect many opinion polls (sampling bias, bandwagon/halo effects, etc.). What is the evidence that US News is completely incompetent in addressing these problems? I’ve read about one or two participants shamelessly boosting their own schools. Whether this is a systemic problem that invalidates 22.5% of the scores, I don’t know.</p>

<p>If it is, a simple remedy would be to remove the peer and guidance rankings. The biggest effect of that would be to hurt public universities and lift private schools even more. That’s assuming no change in the set of objective metrics. Maybe an argument can be made for adding metrics that tend to favor public universities, such as research output … but I think that would be more valuable to graduate students than to most undergrads.</p>

<p>The fact that public universities are shut out of the top 20 isn’t prima facie evidence that the ranking is incompetent. What it does suggest, maybe, is that a 10 or 20 position spread doesn’t mean that much of a quality difference.</p>
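<p>To make the weighting arithmetic concrete, here is a toy sketch. Every weight and indicator score below is a made-up placeholder (only loosely echoing the percentages mentioned in this thread), not USNWR’s actual inputs or method. It just shows what happens to a weighted composite if the 22.5% reputation component is dropped and the remaining weights are renormalized:</p>

```python
# Hypothetical weights (placeholders, not USNWR's real formula).
weights = {"reputation": 0.225, "selectivity": 0.15, "faculty": 0.20,
           "graduation": 0.30, "resources": 0.075, "alumni_giving": 0.05}

# Hypothetical 0-100 indicator scores for three invented schools.
schools = {
    "Private U": {"reputation": 90, "selectivity": 95, "faculty": 92,
                  "graduation": 96, "resources": 94, "alumni_giving": 60},
    "Public U":  {"reputation": 88, "selectivity": 80, "faculty": 78,
                  "graduation": 90, "resources": 70, "alumni_giving": 25},
    "Small LAC": {"reputation": 75, "selectivity": 85, "faculty": 88,
                  "graduation": 92, "resources": 82, "alumni_giving": 55},
}

def composite(scores, wts):
    # Weighted average, renormalized so the weights in use always sum to 1.
    total = sum(wts.values())
    return sum(scores[k] * w for k, w in wts.items()) / total

with_peer = {s: composite(v, weights) for s, v in schools.items()}
no_peer_weights = {k: w for k, w in weights.items() if k != "reputation"}
without_peer = {s: composite(v, no_peer_weights) for s, v in schools.items()}

for label, scores in (("with 22.5% peer assessment", with_peer),
                      ("peer assessment removed", without_peer)):
    order = sorted(scores, key=scores.get, reverse=True)
    print(label + ":", [(s, round(scores[s], 1)) for s in order])
# In this toy data the order doesn't change, but the public school's score
# falls while the others rise, because reputation was its strongest indicator.
```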

<p>As skeptical as I am about the value of USNWR rankings, indeed any rankings, to say they are done “by design” to hurt a specific class of colleges is preposterous. Unless, of course, you have evidence of the motivations of the ranking authors? </p>

<p>“By design” they were originally to sell copies of a third-rate newsmagazine, and now “by design” to generate traffic and revenue for a website. </p>

<p>Anyone who places any credence whatsoever in a magazine editor’s opinion about the relative importance of various metrics to rate or rank colleges is very foolish indeed.</p>

<p>tk21769 #28:</p>


<p>USN’s statistical data isn’t more objective than its peer assessment. And USN taking the former at face value from the colleges and universities doesn’t absolve it of passing on garbage. </p>

<p>There is no enforced standard of reporting within a u or c’s CDS, if it files one at all. If it doesn’t like an element of the CDS, the c or u will simply leave it blank. The potential for data manipulation is at its peak. </p>

<p>For instance, all c’s and u’s fudge the percentage of admitted students who graduated in the top 10% (first decile) of their high school classes. It’s become a question of how much they feel free to exaggerate.</p>

<p>There’s far too much variation in how scores get reported within the CDS. Some schools superscore and some don’t. Some, like UCLA, report the SAT and ACT 25th and 75th percentiles across all scores submitted, the mix between the two tests at the U being ~125-135% … which downplays its scores, and some will undoubtedly report only the best score forwarded between the two tests (superscored when applicable) to push both medians as high as possible.</p>
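<p>To illustrate the apples-versus-oranges point, here is a toy sketch with invented scores (not real UCLA or CDS data) showing how the published 25th/75th percentiles move purely as a function of reporting policy:</p>

```python
# Made-up illustration: the same five hypothetical applicants, but two
# different policies for the published 25th/75th percentiles
# (superscored best-section totals vs. every sitting counted separately).
import statistics

# Each applicant: a list of SAT sittings as (math, reading) section scores.
applicants = [
    [(700, 650), (720, 700)],
    [(600, 640)],
    [(650, 600), (640, 660), (700, 650)],
    [(760, 740)],
    [(680, 710), (720, 690)],
]

def superscore(sittings):
    # Best section score across all sittings, summed.
    return max(m for m, r in sittings) + max(r for m, r in sittings)

superscored = [superscore(s) for s in applicants]
all_sittings = [m + r for s in applicants for m, r in s]  # every test counted

for label, data in (("superscored", superscored), ("all sittings", all_sittings)):
    q = statistics.quantiles(data, n=4)        # q[0] = 25th pct, q[2] = 75th pct
    print(f"{label:12s} 25th: {q[0]:.0f}  75th: {q[2]:.0f}")
# Same students, higher published range under superscoring, which is why
# mixing the two policies makes cross-school score comparisons shaky.
```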

<p>Because of the above, there is willful misreporting behind the selectivity ranking USN publishes. “Willful” might be a bit dubious, but if USN reports things like selectivity without reviewing the subcomponents it culls from the various CDS’s (which vary about as much as if they were written in different languages), and USN obviously has no power to review these stats internally, then it’s a willful participant in forwarding very bad info.</p>

<p>Essentially, the colleges and universities have become expert at manipulating statistical data in admissions, etc., and this component has become just as “unreliable” as peer assessment. USN abets them by passing this info on and by presenting its publication as if it were the ultimate standard.</p>

<p>At least some of the other publications actually try to measure what the c’s and u’s produce in their graduates, a truer “metric.”</p>

<p>I would add that those universities that manipulate data to maximize rankings are predominantly private. Public universities do not have nearly as much room to maneuver with data reporting. </p>

<p>Drax has mentioned some of the loopholes used by several universities to manipulate data in an attempt to improve their rankings. Below are other ways that universities use to “game” the rankings:</p>

<ol>
<li><p>Addition of hundreds of meaningless seminars to increase the percentage of classes with fewer than 20 students. Those seminars truly add no value to the undergraduate education and are, if anything, a big waste of time and money.</p></li>
<li><p>Breaking down large lectures into smaller lectures taught by the same professor. This reduces the percentage of classes with over 50 students but actually creates more work for the professor teaching those students.</p></li>
<li><p>Begging alums for tiny donations and breaking those donations into several years in order to increase the percentage of alums who donate annually. In other words, if an alum donates $5 in 2011, that alum will be registered as having donated $1 annually from 2011 to 2015.</p></li>
<li><p>Removing graduate students from faculty to student ratio calculations.</p></li>
</ol>

<p>There are many other tricks that some universities resort to in order to boost their rankings. </p>

<p>And don’t get me started on Financial Resources and Alumni donation ranks. The USNWR ultimately favors universities that cost more and waste more!</p>

<p>There are many other loopholes that universities use (abuse) to help them in the rankings.</p>

<p>Bottom line, as long as the data is not properly audited for consistency and accuracy, those rankings are meaningless.</p>


<p>Can you cite specific examples (school, course name, etc.) for any of your general assertions?</p>


<p>Sure it is. A so-called peer assessment is an opinion poll. An opinion poll reports opinions. Opinions, by definition, are subjective. Average SAT scores, faculty salaries, or class sizes are not subjective. They might be incorrectly calculated or incompletely reported. Nevertheless, by nature they are objective.</p>

<p>What you seem to be suggesting in post #30 is that the data is cooked; that many schools engage in deliberate, wholesale efforts to misrepresent their data. I find this idea rather far-fetched. For this to be true, it would take more than a few over-zealous administrators taking liberties with vague CDS instructions. Entire academic communities would have to be complicit, looking the other way and not asking pesky questions about sudden spikes in the objective data. I don’t think it is too Pollyannaish of me to say that academic communities aren’t like that. Whatever other flaws they might have, people in academic communities do tend to care about data integrity.</p>


<p>A bit dubious? Yes. It’s one thing to say that the CDS data could be manipulated in the ways you describe. Where is the evidence that this is actually happening?</p>

<p>Yet subjective weightings are applied to the objective measures. Therefore, it’s all subjective.</p>

<p>^ The weightings do appear to be rather arbitrary, especially if you focus on small differences such as 15% for selectivity v. 20% for faculty resources. On the other hand, consider the fact that faculty resources (at 20%) is weighted much more heavily than alumni giving (at 5%). This seems to me like a principled, well-motivated spread. Do they get the numbers (not the relative spreads) precisely right? Shucks, I don’t know.</p>

<p>In any case, the arbitrariness would matter only if a plausibly different set of weightings (like flipping the selectivity/faculty weights) produces dramatically different results.</p>

<p>By and large, I believe they don’t.
At least for the top schools, much of the objective data seems to be mutually corroborating. It doesn’t matter much whether you emphasize average class sizes, faculty compensation & awards, selectivity, library holdings, or the age of the bricks in the buildings. Harvard is still Harvard.</p>
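<p>For what it’s worth, that claim is easy to sanity-check with toy numbers. The weights and scores below are illustrative placeholders, not USNWR’s actual inputs: flip the selectivity and faculty-resource weights and see whether the ordering of a weighted composite actually changes.</p>

```python
# Hypothetical weight sets: "flipped" swaps the selectivity and faculty weights.
base = {"selectivity": 0.15, "faculty": 0.20, "graduation": 0.40,
        "resources": 0.15, "alumni_giving": 0.10}
flipped = dict(base, selectivity=0.20, faculty=0.15)

# Invented, mutually corroborating indicator scores for three schools.
schools = {
    "A": {"selectivity": 97, "faculty": 95, "graduation": 98, "resources": 96, "alumni_giving": 70},
    "B": {"selectivity": 93, "faculty": 96, "graduation": 95, "resources": 90, "alumni_giving": 55},
    "C": {"selectivity": 85, "faculty": 80, "graduation": 88, "resources": 75, "alumni_giving": 30},
}

def order(wts):
    # Rank schools by their weighted composite under a given weight set.
    score = {s: sum(v[k] * w for k, w in wts.items()) for s, v in schools.items()}
    return sorted(score, key=score.get, reverse=True)

print("base order:   ", order(base))
print("flipped order:", order(flipped))
# With these made-up numbers the order is identical either way; the conclusion
# would only break if the underlying metrics disagreed with each other.
```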


<p>You know, most selective private universities, and all LACs, tend to be rather small. I really don’t think they have the wherewithal to be adding hundreds of meaningless seminars just for the sake of jiggering 6% of their USNWR scores.</p>

<p>tk21769 #33, I’m not a real fan of selective quoting. Next time, if you would, please quote entire paragraphs of mine so as not to take specific snippets out of context. Thanks. </p>

<p>It’s terribly naive to believe that the CDS invokes some sort of high standard of reporting. It doesn’t; it can’t. </p>

<p>I believe xiggi has posted articles and such stating that the top-10% figure has been wildly misreported by u’s and c’s. There are undoubtedly some that misreport this statistic by 30% or more. </p>

<p>Obviously, you know about superscoring. I told you about UCLA’s practice of including all scores, not just the best forwarded. There are several c’s and u’s that bypass reporting scores for certain sets of admitted students. </p>

<p>All of the above amounts to far too wide a range of reporting standards for scores at the 25th and 75th percentiles. Obviously, those that actively try to game the rankings will report their scores at an artificially high level relative to those that don’t, so the comparison of scores can be an entirely apples-versus-oranges one, at the least.</p>

<p>And I agree with Alexandre that private u’s and c’s are the ones most often trying to game the rankings. Public u’s and c’s have ideals that most often run counter to gaming the rankings and improving their positions within USN’s publication, their priorities being, first and foremost, things like a diversity index.</p>

<p>UCLA, for instance, wants to cast a broad net for a potential applicant pool to include those from poorer economic backgrounds who may not manifest high scores. So it willfully reports its scores on the low side to encourage those students to apply. </p>

<p>USN would have to know all this, the wide variation and even willful misreporting of these things by certain c’s and u’s, yet it presents itself as some kind of ultimate standard of college rankings.</p>

<p>Forget accuracy; that would imply objectivity. The weightings are subjective, chosen by magazine editors, of all people (their job is to make money). Static one-size-fits-none rankings are obsolete; use on-line search engines where an applicant can enter what is important to that individual.</p>

<p>@drax12 #37:</p>

<p>I quoted words of a paragraph I thought were consistent with the rest. I did not willfully distort the meaning of the whole by selecting only an un-representative part.</p>


<p>“High school class standing in top 10%” counts for all of 6% of the USNWR ranking (40% of 15%). </p>

<p>Now, some public universities report what seem to me like astonishingly high GPA averages for their entering students (example: 3.96 by UCSD, <a href="http://www.ucsd.edu/explore/about/facts.html">UCSD Facts & Campus Profile</a>). I’m inclined to attribute this to rampant high school grade inflation, absent clear evidence of fraud by the reporting universities.</p>

<p>No doubt, there are problems with the CDS. Some of the instructions and reporting standards surely need to be improved. I would like to know for example that SAT median ranges have a consistent meaning from school to school.</p>