<p><a href="http://www.insidehighered.com/news/2009/08/19/rankings">http://www.insidehighered.com/news/2009/08/19/rankings</a> </p>
<p>It also helps to fuel my skepticism of the UC school system...</p>
<p>^^^Yeah and a lot of the so called objective numbers are jobbed too.</p>
<p>All statistics can be manipulated, but at least they are more or less massaged by an impartial audience. The college presidents clearly have a vested interest in these surveys.</p>
<p>Any list that has UC Davis, Santa Barbara, and Irvine ranked over UT, UF, PSU, UWash (flagship unis), Tulane, Miami, and GW is a joke. These UC schools are gaming the system on the self-reporting categories (notably top ten percent of class) and have over-inflated PA scores, all the while with sub-standard SAT/ACT achievement. Ridiculous.</p>
<p>I agree. Sure, they have impressive class ranks, but that is not everything, particularly because more and more schools are not ranking their students. Class rank and GPA are not universally equal across the board. The lower UC schools are commuter schools with manipulated stats. Nothing wrong with that, but they are not “distinguished” by a long shot.</p>
<p>Just curious, what are the PA scores of those schools? I say the so called objective numbers used by USNWR are the main culprits in giving these, and other schools, such lofty rankings.</p>
<p>Please don’t disrespect the great state of Pennsylvania.</p>
<p>“All statistics can be manipulated, but at least they are more or less massaged by an impartial audience.” </p>
<p>Actually, Willmingtonwave, the stats used by the USNWR are massaged by the universities themselves, so they are not impartial. If they were indeed impartial, the rankings would look very different.</p>
<p>“The college presidents clearly have a vested interest in these surveys.”</p>
<p>Perhaps in the case of some universities they rate, but not in the case of most universities.</p>
<p>
I more specifically meant that the statistics were assigned value by the USNWR. I do know that statistics are not truly standardized, with issues such as SAT superscores and whatnot.</p>
<p>
I guess also more so than their innate bias, how well do these administrators really know schools outside of their peer schools? </p>
<p>I’m just a skeptic. I do like that they are putting an added emphasis on undergraduate teaching quality, which is the most salient aspect of undergraduate education.</p>
<p>Administrators don’t really care about accuracy when they are filling out the survey…</p>
<p>Then again, it’s either a 4 (strong) or a 5 (distinguished). There is no 4.5 (slightly strong and distinguished). It’s one or the other. (How can you mess up?) Even a CC forum member can label Harvard (5) and Harvard President Drew Faust can name their Ivy counterparts a (4) or a (5)… </p>
<p>The NRC survey is conducted in a similar fashion, right? If conducted correctly, with a large sample size, a high response rate, and people who actually devote the time needed to fill it out with care and precision in the first place, then the PA scores would be more valid in my opinion.</p>
<p>
Ah, but that’s precisely the disturbing thing about these released reports. </p>
<p>As I noted in another thread, what’s worrisome about the results coming out of Berkeley, Florida, Wisconsin, et al is that those are top schools. The administrators and faculty at those schools are far more likely to be familiar with their colleagues and practices at peer institutions. Quite frankly, I would expect an administrator at Wisconsin to be much more conversant with Michigan than someone from, say, Arkansas State – yet it was the former institution that ranked Michigan the lowest.</p>
<p>
Even if a university did work to improve the factors ranked by USNWR, it can only be to the good. Wouldn’t you agree that having higher-ranked students, students with higher test scores, lower faculty-student ratios, smaller classes, higher graduation and retention rates, etc. is a GOOD thing?</p>
<p>If you are implying that colleges are being downright dishonest – proof?</p>
<p>
Most of the universities discussed on CC are in the top 10% of all PhD-granting research universities (there are ~270). I’m surprised any of the top 30 universities would get below a 5.</p>
<p>The fact that not even Harvard or Stanford nets a 5 speaks volumes about the peer assessment reports, in my opinion. Who in their right mind would assign either school a 4? And yet, 10% of respondents must have! </p>
<p>Even more shocking, apparently 50% think Cornell only deserves a 4. This is a school indisputably in the top 10 at the graduate level, in the top 15-20 for undergrad, and has been ranked in the top 15 in the world by several rankings.</p>
<p>How can one be surprised that many people think PA scores are a joke?</p>
<p>“Even if a university did work to improve the factors ranked by USNWR, it can only be to the good. Wouldn’t you agree that having higher-ranked students, students with higher test scores, lower faculty-student ratios, smaller classes, higher graduation and retention rates, etc. is a GOOD thing?”</p>
<p>None of those, taken in an absolute sense, is telling. The USNWR does not work relatively, it works absolutely and it does not dig deep to understand the implications of statistical differences and gaps.</p>
<p>“If you are implying that colleges are being downright dishonest – proof?”</p>
<p>I am implying that and the proof is ample, but I am not going to point fingers.</p>
<p>You have to be pretty gullible to give PA any credibility. This just proves what should have been obvious to anyone who pays attention.</p>
<p>Then again, you have to be pretty gullible to give USNWR a lot of credibility too. After all, they don’t assure any of us that the objective data they present are any more legitimate than the PA scores.</p>
<p>
How can 30 schools be considered “distinguished” when a lot can separate those top 30 schools? e.g., faculty achievement, top 5 academic programs, breadth and depth, etc.</p>
<p>Most schools in the Top 50 are strong…only a select few can be labeled “distinguished”. And when you have academics doing the rating, they’ll look for things that they hold in high regard that help with distinction within their fields (i.e. Nobel prizes, research publications, national academy membership, etc.).</p>
<p>
Well, just because 90% of the survey takers think it’s distinguished doesn’t mean everyone does…they may have their reasons (perhaps they don’t think it provides a strong undergrad education). It’s OPINION…there is no right or wrong answer. But ~2,000 collective opinions can be telling.</p>
<p>
Again, I know I will never see eye to eye with you, UCBChemEGrad :), but how exactly do research publications, Nobel Prizes, and whatnot benefit me as an undergrad? These are supposed to be undergraduate rankings, so it would be logical that schools are evaluated as UG institutions. </p>
<p>I guess the problem most of us have with Peer Assessment is that it is the aspect (a large aspect) of the rankings methodology in which the flawed human element is so transparent. </p>
<p>Have you seen how many schools there are in the national universities rankings? The 1-5 scale is supposed to encompass the ENTIRE list of schools, not just the 50 that we all as elitist CC members deem “adequate.”</p>
<p>I agree with diontech, Pennsylvania is awesome ;)</p>
<p>
Because it is faculty and their achievements that bring distinction to an academic program! Having a collection of high-scoring SATers and a great lecturer is fine, but it’s not distinctive and not visible. Top academic programs have strong undergrads, but it’s the faculty that distinguish a program.</p>
<p>Part of the issue with the survey is that you’re asking academics to complete it…they’ll rate colleges through their eyes. Maybe you can do a survey of thousands of undergrads and ask questions that are important to you. And then complain about the results on CC when they don’t coincide with your views and prejudices.</p>
<p>UCB is wrong again, and here’s where his argument doesn’t hold water: if Nobel Prize winners, national academy members, et al. are so important, why don’t UC Santa Barbara, Davis, and Irvine attract a higher caliber of student? e.g., Davis’s middle 50% SAT is 1050-1300. Heck, with the type of faculty UCB trumpets, one would figure all sorts of top students from all over the country would be flocking to these universities. But the fact is these schools are made up of less-than-stellar students from mediocre/sub-standard California public schools. Which is why their presence in any top 50 list is a joke and the attendant PA assessments are ridiculous.</p>
<p>^ Harvardgator, here is where your argument doesn’t hold water: What does PA score have to do with undergrad student caliber? PA score is a proxy for academic program/faculty strength. There are other metrics measuring student caliber. There is nothing that says they have to correlate. That’s why USNWR combines both metrics into an overall ranking. </p>
<p>The UCs don’t attract as high a caliber of students because there are numerous in-state options (public and private) from which to choose and be admitted…in a sense the system cannibalizes itself for resources (students and faculty)…each campus has its own personality/culture…there isn’t a one-size-fits-all.</p>
<p>Perhaps distinguished faculty prefer the quality of life in Santa Barbara and Orange County over Gainesville and Happy Valley.</p>