Forbes ranking puts a Liberal Arts College at #1 for the second year in a row

@SeattleTW : Remember that the “customers” at each school are different, even if the student bodies are high achieving. Some student bodies may demand more than others in the way of quality or rigor (and I think Pomona is on the high end of this), whereas at others it may be easier to satisfy the customer by providing something solid yet not rigorous enough to distract them from their non-academic activities. For example, that rating can suffer if students genuinely think their instructors suck OR if they think they are not being challenged enough relative to what they want, or are being challenged at higher levels than expected or desired. I bet the latter is more frequent than the former, even among high achieving student bodies. In fact, those students are more likely to be upset in that scenario because they have been told for so long how awesome they are. Struggling will likely lead them to lose confidence or to blame the instructors for not teaching well. Believe it or not, many college students, like high school students, still believe, even in rigorous fields like STEM and even after freshman year, that exams should be easy to score 90+ on. They believe that if no one makes 100, the test is unfair. That would be true if a professor were only aiming for proficiency (which may mean just decent surface-level knowledge of things), but less so if they are aiming for competency or skill in whatever they are assessing.

Interesting perspective, because I’ve never met a high achieving kid who complained that his professors weren’t demanding enough…OTOH, I’ve met many who were more concerned with teaching ability and communication skills than with class rigor, which is a completely different issue.

@SeattleTW : That would be my conventional wisdom too, but I think that no matter the level of the students, we like to be entertained and love good communication. However, sometimes “good communication” can amount to spoonfeeding: the instructor makes sure everything is perfectly organized, provides slides with fill-in notes, goes by the book, and tells us what will be tested and how. They basically teach and communicate in a way that makes everything “surprise free.”

Also, more “engaged” teaching methods sometimes catch a lot of flak from students because we often view them as “teaching ourselves”; we are used to sitting there, being talked at, taking notes, and then going home. We may value an instructor who sounds great when talking to us or has perfectly planned lectures more than one who conveys the material well but meanders and combines topics during lectures. (I had maybe 2-3 instructors like this who taught well but weren’t everyone’s cup of tea because, for one, they were hard, and two, they didn’t teach topics linearly. They would often talk about one topic and connect it to another concept that seemed only distantly related at the time. They also tended to cover lots of things not in the assigned readings or textbook, covered topics with more nuance, and stressed many exceptions to the “rules.” Students don’t like this as much.) Getting students to do activities in class or even using the Socratic method can annoy many students. When I was a UG, this seemed to be more of an issue in STEM than in the social sciences and humanities, where students were more used to being called on or discussing their reasoning publicly.

As for “rigor,” students at different schools seem to have different tolerances for it depending on the institutional and social culture at the school, along with a myriad of other things. An instructor and course materials considered challenging at one selective school may be more in line with a medium or easier section at another, but from the way students at both talk about it, you wouldn’t know there is a stark difference. Furthermore, at some schools, higher than normal levels of challenge may be better received than at others. If something is taught solidly, or at least decently, it may be okay for an instructor at one school to give very challenging exams and assignments, but at another school it may not be. This usually shows up on RMP as an instructor with a low easiness number (as in hard, let us say below 2.5) but still very high quality ratings.

Also, locally, on a per-course basis, multi-section classes are interesting. I remember that students’ reception of intensity was largely based upon what they saw from other sections. If one or two instructors were superlatively challenging and three others were not, students would feel as if they were getting screwed. Even with only two sections, if a noticeable difference is discovered, students in the more challenging one will tend to complain even if they are being taught well. One case was a physiology instructor who used the case method and emphasized research in physiology, vs. the other section that only did traditional book-and-slide learning and gave mostly multiple-choice exams. The case-based instructor was good, but students raised hell when they had lower exam averages than the other section. It is best, especially for more advanced courses, to have only one instructor per semester. For intro courses, either this or some sort of grade-normalization promise can soften the potential backlash toward an instructor who is more challenging than their peers. Either that, or ensure all sections are taught by “heavy hitters.” It’s all about context. I think departments more aware of these things tend to have more uniformity than normal. It’s interesting to observe how we (I still consider myself a student) receive instruction.

Frankly, I find the methodology hilarious.

RateMyProfessors (10%)
Freshman retention rates (15%)
PayScale (10%)
America’s Leaders List (22.5%)
Federal student loan debt load (12.5%)
Student loan default rates (12.5%)
4-year graduation rate (7.5%)
Fellowships such as the Rhodes, National Science Foundation, and Fulbright (7.5%)
Graduates who go on to earn a Ph.D. (2.5%)

The above is a very slightly simplified version of their methodology.

First, RateMyProfessors is a student-review-driven tool and has plenty of problems of its own, one being that professors no longer teaching at a school can still be counted.

Freshman retention rates I agree with.

PayScale has very little data, and what it has is self-reported. Again, not a good measure.

The idea of judging nearly a quarter of a school’s score based on its famous or influential graduates seems a bit silly to me. Not everyone wants to be a huge player in the world; some would rather quietly go about their own thing. Prestige and influence are not a goal of all, or even many, of the educated. Also, those graduates represent classes from at least ten to fifteen years ago.

Judging a school purely on the debt taken on again seems a bit silly; it’s more reflective of a school’s price and ability to meet need. I do like the default-rate part, and I think they could use both of those numbers together to get a better representation of graduates’ ability to pay off debt. So, I will agree with the default-rate 12.5%.

The 4-year graduation rate, as they themselves acknowledge, penalizes schools that do not follow the traditional 4-year pattern.

The number of fellowships is in the same vein as the leaders list, but for academics. Again, a bit silly. The same goes for Ph.D.s. Many people these days have no intention of going on to get a Ph.D., and that has no bearing on how good a school they attended.

So, in summary, I agree with about 27.5% of their methodology. That 72.5% gap is huge.

There are some good ideas here, but the execution is riddled with errors. I also do like that student scores upon entry are not considered. It’s a cool outlook.

PayScale and RateMyProfessors are, in concept, great ideas, but you need better, standardized data to use them for an accurate ranking. In fact, if you could get accurate data for them, I would weight them even higher, probably at least 25% each. Add that to the 27.5% I agreed with, fill the remaining 22.5% with peer review and maybe hiring-manager reviews, and I think you would have an interesting set of rankings.

US News is flawed as well for sure, but Forbes is a joke.

As others have said, nothing against Pomona, which very well may deserve its spot.

Forbes’ rankings are always absurd. Always. Their methodology is iffy at best.

I don’t know why there are rankings; people never like them, and there is always much discussion about faulty methodology. Even if there were one uber ranking that took the top 5 ranking systems and distilled them into one, people would still find fault with it. It would make for an interesting topic, though. :slight_smile:

Most rankings are “faulty” and result in faulty assumptions by those who like or dislike them. I think Forbes may just be trying to produce rankings that emphasize different values. I’ll get over it. Not that I am all that displeased, because many of the schools ranked unexpectedly high do indeed perform really well in important areas that something like USNWR, and perhaps those who “like” it so much, wouldn’t give them credit for. I suppose no one can shake what is now the conventional wisdom, USNWR.

17 of the Forbes top 20 research universities are also USNWR top 20 research universities. There are 3 substitutions (Georgetown, UVa, and Tufts for JHU, WUSTL, and Vanderbilt). There’s some shuffling within the T20 ranks, but that’s to be expected any time you change the criteria.

Forbes Rank … School … (USNWR Rank)
1 Stanford (4)
2 Princeton (1)
3 Yale (3)
4 Harvard (2)
5 Brown (16)
6 MIT (7)
7 Penn (8)
8 Notre Dame (16)
9 Dartmouth (11)
10 Columbia (4)
11 NU (13)
12 UChicago (4)
13 Duke (8)
14 Georgetown (21)
15 Tufts (27)
16 Cornell (15)
17 Rice (19)
18 Caltech (10)
19 Berkeley (20)
20 UVa (23)

47 Vanderbilt (16)
62 JHU (12)
63 WUSTL (14)

The biggest difference is the inclusion of LACs and research universities in a single list.
I don’t see why it should be any surprise to see LACs like Pomona, Williams, Swarthmore, and Amherst ranked among the top 20. If you applied the USNWR admission selectivity standards (which correlate strongly with the overall USNWR rankings), these 4 LACs would be among the USNWR ~T25. If we added some of the other USNWR criteria (such as faculty resources, graduation and retention rates, or financial resources per student), I would expect them to be well within the USNWR T20.

I’m not crazy about some of Forbes’ data sources (RateMyProfessors, PayScale). But that hardly makes their approach “absurd” or “preposterous.” If you’re going to rank colleges at all, it’s reasonable to try to rank them by student satisfaction and outcomes.

Compared with USNWR, Forbes’ data collection is a low-cost operation; hence its ranking is very dubious.

@hzhao2004, huh? The Forbes ranking actually does its own research (such as looking at the boards of all F500 companies, the rosters of the Big 5 orchestras, various award winners, a bunch of politicians, and more, then tying them to their undergrad alma maters to come up with its Greatest Americans List, then weighting by school size at various points).

USNews just asks schools to fill out a form (which schools can game or lie about).

Forbes has an admirable philosophy, in that they are trying to quantify results rather than inputs. USNWR and many other systems are more heavily weighted towards inputs.

The problem is that the outcome metrics available are just not rigorous.

Some CC posters are more creative and relevant regarding outcome metrics than Forbes. @exegesis posted a thread, “Mean LSAT Score by Undergraduate college,” that Forbes should take a look at.

Yes, but how much do LSAT scores correlate with SAT/ACT? The question is whether the group with certain entering scores shows any gains vs. what its “expected output” would be. Otherwise that metric may just tell us, once again, who has the best test takers. I would pay more attention to the anomalies in such a ranking; that is, to the schools that surprise…the ones with lower entering ranges but good LSAT outputs.

As for Forbes, I would have otherwise expected research U’s to have a pecking order based roughly on when they became “research universities,” mainly by joining the AAU. Many places have a major head start. For many schools, that’s when their influence started to increase by quite a bit, especially among non-Ivies.

It was interesting when I found the dubious “most powerful alumni networks” ranking. While I initially dismissed it because it used LinkedIn profiles to measure the percentage of alumni in management positions, I thought about it. It is an extremely crude measure, but it kind of captures more recent success, because LinkedIn will bias the count toward “younger” alumni who feel they can still benefit from making a profile at this stage of their careers. Much older alumni may not do so as much (they likely used this metric because it is becoming one of those “gotta have its”). While it can’t do so accurately, it does hint at how far some schools (especially what are now the research U’s) have come, and it may reflect whether or not they are doing decently at this stage in something beyond admissions metrics.

^ By all means, performance against expectation should be an important factor if the LSAT metric were to be used. I think the concept actually came up on the referenced thread.