Top 50 National Universities, ranked

TomSrOfBoston, the Peer Assessment rating may be “subjective”, but it is also far more difficult to manipulate.

@NickFlynn, when you do your LAC list you may want to add a column for % in the top 10% of HS class. Like SATs it’s a pretty good indicator of applicant quality, and information that’s easily found in the Common Data Set.

Joke of a list!! Too much weight given to retention and graduation rates. Can’t recall the last time I saw UCB ranked this low, and Brandeis over UCLA … ooofff!!

Maybe UCB is overhyped? And I’d assume Brandeis would be higher than UCLA since it is a private institution with a comparable student body.

@Alexandre brings up a very good point. Every single criterion used here (and many used by USN) can be manipulated or flat-out lied about by schools. That’s why I prefer the tiers by alumni achievement that I came up with (which cannot be directly manipulated by schools): http://talk.collegeconfidential.com/college-search-selection/1682986-ivy-equivalents.html

I may come up with a fuller ranking with weights etc. some day as well.

Also prefer Forbes over USN for that reason.

Ok, here is one thing that jumped out at me when I went over the differences between my rankings and the USNWR rankings…

My Rank  My Rating  College                                  Test Scores  Retention  Graduation  USNWR  Diff
27       93.9       University of California-Berkeley        94.2%        96%        91%         20     -7
31       92.5       University of California-Los Angeles     90.9%        96%        92%         23     -8
44       89.3       University of California-San Diego       88.6%        94%        86%         37     -7
52       87.5       University of California-Santa Barbara   86.0%        92%        86%         40     -12
56       86.4       University of California-Davis           84.3%        93%        84%         38     -18
70       83.7       University of California-Irvine          77.5%        94%        86%         42     -28
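As a minimal sketch of how a composite rating like the one in the table might be computed: the function below takes the three percentage inputs and blends them with weights. The weights here are illustrative guesses, not the poster’s actual methodology.

```python
# Hypothetical sketch of a composite college rating. The weights are
# assumptions for illustration, not the methodology behind the table above.
def composite_rating(test_pct, retention_pct, grad_pct,
                     w_test=0.4, w_ret=0.3, w_grad=0.3):
    """Weighted average of three inputs on a 0-100 scale."""
    return w_test * test_pct + w_ret * retention_pct + w_grad * grad_pct

# Example using Berkeley's row: 94.2% test scores, 96% retention, 91% graduation
print(round(composite_rating(94.2, 96, 91), 1))
```

With these assumed weights the result happens to land near the table’s Berkeley rating, but that is a coincidence of the chosen weights, not a recovery of the actual formula.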

That’s 3 of the 5 double-digit ranking differences in my top 75 in that group, and all these UC schools are substantially higher in the USNWR rankings. Any idea what is going on here?

They’re big publics taking in lots of poor kids and lots of kids with lower test scores because they have a mandate to serve all segments of their state’s population.

Agreed, why wouldn’t Brandeis with its higher SAT scores be presumptively more highly ranked than UCLA?

A bunch of high scoring SATers with a crappy faculty is still a crappy school.

^ But you can’t apply that to an assessment without a complex evaluation.

UCs also tend to weight standardized test scores lower relative to high school record (courses/grades/GPA) than many other schools, so a selectivity ranking focused on standardized test scores may list them as less selective than they might actually be.

I agree with Titan. Even though affirmative action is not used in admissions, the UCs have a different mission. If you check out the news lately about the UC head, Janet P. has said she will try to match the ethnic makeup of the California population in UC admissions.

Given that the UCs aren’t allowed to consider race in admissions, it will be interesting to see how she intends to implement this.

Some public universities get much higher peer assessment ranks than overall USNWR ranks.
http://www.usnews.com/education/blogs/college-rankings-blog/2013/02/28/which-universities-are-ranked-highest-by-college-officials
School       Overall Rank   PA Rank
Berkeley     21             6
Michigan     29             13
Virginia     24             18
UCLA         24             19
UNC          30             19
GaTech       36             22
Wisconsin    41             22
Texas        46             22

A couple of alternate explanations come to mind:

  1. The USNWR “hard data” fails to capture certain important qualities in which these schools excel
  2. The “peers” tend to overemphasize those same qualities

I suspect the difference has to do with how much importance the peers (professional academics) v. USNWR editors attach to research productivity. Some of the international rankings (ARWU, etc.) heavily weight faculty publication/citation volume; their results are more in line with the PA ranks for some of these state universities.
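One rough way to quantify the pattern in the table above is to compute, for each school, how many places its peer-assessment rank exceeds its overall rank. A sketch in plain Python, with the numbers copied from the table:

```python
# Quantifying the peer-assessment "bonus" for the public universities above:
# overall USNWR rank minus PA rank (data copied from the table).
schools = {
    "Berkeley": (21, 6), "Michigan": (29, 13), "Virginia": (24, 18),
    "UCLA": (24, 19), "UNC": (30, 19), "GaTech": (36, 22),
    "Wisconsin": (41, 22), "Texas": (46, 22),
}
gaps = {name: overall - pa for name, (overall, pa) in schools.items()}
avg_gap = sum(gaps.values()) / len(gaps)
print(gaps)
print(round(avg_gap, 1))  # average places by which PA rank beats overall rank
```

Of course this only describes the size of the gap for these eight schools; it doesn’t by itself distinguish between the two explanations above.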

Personally, I’m more inclined to trust admission selectivity data than research production data as an indicator of undergraduate quality. I don’t think it would be easy for a college with many “crappy faculty” to maintain high test-score averages and low admit rates for long. I suppose it could do so by manipulating its admission stats, but to make a significant, lasting difference in its US News rank, it would have to be a pretty big lie (one requiring the collusion or willful ignorance of many admission committee members and other stakeholders).

Then again, there are manipulations that don’t require outright lies (for instance, a test-optional policy that skews the school averages toward the higher-scoring students, or the Lake Wobegon effect that seems to be happening with some schools’ average entering GPAs). Has anyone tried to measure how much impact these or other manipulations have on college rankings? The USNWR rankings have been fairly stable over time.
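The test-optional skew mentioned above is easy to illustrate: if only the higher-scoring admits submit scores, the reported average exceeds the true class average. All the numbers in this sketch are made up for illustration.

```python
# Illustration of test-optional score inflation (all numbers hypothetical).
# Suppose these are the SAT scores of an entering class...
all_scores = [1250, 1300, 1350, 1400, 1450, 1500]

# ...but under a test-optional policy, only students at or above 1400 submit.
reported = [s for s in all_scores if s >= 1400]

true_avg = sum(all_scores) / len(all_scores)        # what the class actually looks like
reported_avg = sum(reported) / len(reported)        # what shows up in the rankings data
print(true_avg, reported_avg)
```

The reported average is inflated without any outright lying, which is what makes this kind of manipulation hard to detect from the published stats alone.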

Actually, comparing the quality of education between two schools is not as simple as looking at admission selectivity. Having stronger incoming students allows a faculty to offer more rigorous courses and curricula, but does not require them to. (Note that this is not exactly the same as faculty quality, however that is defined, since it is possible for good faculty to offer a relatively weak curriculum.)

An example would be economics at Florida State University versus University of California - Santa Cruz. Based on incoming frosh stats, FSU has higher high school GPAs and test scores than UCSC. Yet the FSU economics major is non-calculus-based, while the UCSC economics major requires multivariable calculus (used in intermediate microeconomics). A student intending to go on to PhD study in economics would find UCSC to be a much better choice for an economics curriculum (although additional advanced math courses would be indicated regardless of school).

@tk21769, note that the faculty are asked in the survey to rank by undergraduate education quality. They’re not asked about the university as a whole or research.

I think that the discrepancy can be explained by inputs vs. quality of the education (and bigger schools being more well-known). The publics tend to be huge, so in general won’t be as selective as smaller privates. But someone who is smart, motivated, and hard-working may well get as good an education (or better) there compared to a private.

So I understand, but that would not necessarily preclude a preference for institutions with strong research output. Professional academics may tend to believe strong scholarship is a major indicator of high education quality, including at the undergraduate level. If not that, then what else are they seeing that the USNWR and Forbes “hard data” miss? What objective measurement(s), related to undergraduate academic quality, would indicate Berkeley is a stronger institution than every other university but HYPSM?

@tk21769, possibly, they believe that the quality of the faculty and/or the quality of the education (in terms of preparation for grad school) at Cal is better than at any other RU outside HYPSM.

As a professional academic, I do view UCB as a better university for undergraduate education for talented, driven, independent, and “coachable” students than most “top” universities/LACs, private or public. For such students, the possibility and opportunities (not guarantee) to interact and work with some of the best scholars of our generation is a HUGE plus. Honestly, such a level of apprenticeship opportunity simply does not exist in most LACs or “lesser” research universities in most fields.

I can believe that that’s what they believe. Exhibit A: post 39.
If that belief is valid, then objective data should verify it.
For example, if Cal students get better preparation for grad school, the results should show up in the per capita alumni PhD numbers. They don’t. Berkeley ranks 40-something in per capita S&E PhD production. Not only do 6 of the 8 Ivies (plus MIT, Chicago, Stanford, JHU, Duke, etc) have higher numbers, so do less selective LACs such as Lawrence, Hendrix, Whitman, Allegheny and Earlham. I understand there may be confounding factors behind these numbers … but enough to drive Berkeley’s rank down to 43rd, if it’s truly one of the 10 best universities for grad school preparation? It is the #1 PhD producer in absolute numbers, so that may be evidence for what PCHope is suggesting about its best students.

I can see that, according to Washington Monthly numbers, Berkeley is in the top 5 for “Faculty receiving significant awards” and “faculty in national academies”. I don’t think there is any question that Berkeley employs many top scholars. The question is, how much exposure do even above average (talented, driven, independent) students get to them? That’s not a rhetorical question – I really don’t know. Maybe they get as much or more than students do at top privates. If that is the case, why doesn’t Berkeley attract more OOS applications?

If Berkeley really does offer more opportunities and a higher quality education for even the top ~25% of its students, then I think we should allow for the possibility that the major college rankings are emphasizing the wrong factors (possibly including admission selectivity). They should be developing better data to track that instructional quality directly or through its outcomes. As it stands, the peer evaluation in USNWR is NOT almost entirely predictable by the hard data USNWR uses. So, it would appear that either the peer judgements or that data must be a little out of whack.