It’s nigh on impossible for any one of us to comment, with a straight face and any certainty, on the quality of more than a few schools. Why?
Because we haven’t attended them all, and can’t. For all but the schools we personally attended, the info is second-hand and, as such, open to distortion by our own biases.
So this “overrated” and “underrated” business - pardon me, krapp - is a breeding ground for natural bias, sour grapes, and the like.
There are several reputable ranking agencies that take seriously their task of ranking schools according to their own metrics, and they make a point of trying hard to avoid bias.
Certainly those well-known rankings are not exhaustive. Given all the variables, and the weights we could place on them, the variety of possible rankings is nearly infinite. We can all have our own ranking criteria, and as long as we apply them fairly, we can present our own quantified ranking system (see the sketch after the list below). That’s for undergrad, grad/pro schools, PhD, whatever.
I think that, in judging the quality of an undergraduate program, the following should be taken into account:
- accreditation
- % of lecturers with a terminal degree in the field
- % of lecturers with awards
- % of lecturers with publishing credits
- student satisfaction surveys regarding classes and instructors
- average class size
- number of majors available
- study abroad, internship, and research opportunities
- ease of entry into, and exit from, majors
- 4- and 6-year grad rates.
(though it should be noted that some schools are harder to finish than others… Reed, Swat, Chicago, and Hopkins come to mind in the “difficulty” category. And I could see it as an advantage for grading/classes/curricula to be harder: it makes the student work harder and thus better prepares him or her for the real world. I rather think a more difficult education is a better one. So those rates could be applied either way, or disregarded.)
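To make the roll-your-own ranking idea concrete, here’s a minimal sketch, in Python, of one way a weighted scoring system might look. Every criterion name, weight, and number below is a hypothetical placeholder, not real data about any school; the point is just the arithmetic: normalize each metric to a 0–1 scale, multiply by your own weights, and sum.

```python
# A minimal sketch of a do-it-yourself weighted ranking.
# All criteria names, weights, and scores below are hypothetical
# illustrations, not real data about any school.

# Each criterion gets a weight; the weights express your priorities
# and should sum to 1.0 so scores stay on a 0-1 scale.
WEIGHTS = {
    "pct_terminal_degree": 0.25,   # % of lecturers with a terminal degree
    "student_satisfaction": 0.25,  # survey score, normalized to 0-1
    "small_classes": 0.20,         # inverse of average class size, normalized
    "opportunities": 0.15,         # study abroad / internships / research
    "grad_rate": 0.15,             # 4- or 6-year rate (set to 0 to disregard)
}

def score(school: dict) -> float:
    """Weighted sum of a school's normalized (0-1) criterion scores."""
    return sum(WEIGHTS[k] * school["metrics"][k] for k in WEIGHTS)

# Hypothetical example inputs, already scaled to 0-1.
schools = [
    {"name": "School A", "metrics": {"pct_terminal_degree": 0.90,
                                     "student_satisfaction": 0.70,
                                     "small_classes": 0.60,
                                     "opportunities": 0.80,
                                     "grad_rate": 0.85}},
    {"name": "School B", "metrics": {"pct_terminal_degree": 0.75,
                                     "student_satisfaction": 0.85,
                                     "small_classes": 0.80,
                                     "opportunities": 0.60,
                                     "grad_rate": 0.70}},
]

# Rank: apply the same weights to every school, highest score first.
for s in sorted(schools, key=score, reverse=True):
    print(f"{s['name']}: {score(s):.3f}")
```

Disagree with my priorities? Change the weights - zero out grad_rate if you’d rather disregard it, per the note above - and the same machinery produces your ranking. The “apply it fairly” part just means using the same weights for every school.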