Research and graduate program strength matter for PhD applicants, particularly when sorted by planned field of study. They matter far less for typical undergraduate students, who prioritize many criteria besides research and graduate programs, and those criteria are not captured by research + graduate rankings.
Research + graduate rankings are also not inherently any less arbitrary or more meaningful than USNWR. They still choose arbitrary and widely varying weightings to rank “best,” with no verification of whether their formula for “best college” is correct.
For example, the largest component of the Shanghai weighting involves historical Nobel prize winners, with 33% of the total weighting tied to them in some way. Chicago does very well on this metric, particularly in economics. In contrast, many other rankings do not consider Nobel prizes at all, and in many cases do not consider faculty accolades at all beyond research citations. Some control for size; some do not. This can create a very different ordering of “best,” with no validation of whose list is correct or incorrect.
One of the most direct metrics of social mobility is the portion of students who increase significantly in SES. For example, consider the following two colleges (a short sketch of the comparison follows the example). College 1 has the better graduation rate and the larger portion of high-income graduates, but it doesn’t have good social mobility: it admits high-income kids and graduates high-income kids. In contrast, College 2 has a lot of social mobility: it admits a large portion of low-income kids, and a large portion of its kids move up to a higher SES level after college.
College 1: Bad Social Mobility
Input = 5% low income, 10% middle income, 85% high income
Output = 5% do not graduate, 20% middle income, 75% high income
College 2: Good Social Mobility
Input = 50% low income, 30% middle income, 20% high income
Output = 10% do not graduate, 30% middle income, 60% high income
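To make the comparison concrete, here is a minimal sketch (in Python) using the made-up shares from the example above. It computes the net shift into the high-income tier for each hypothetical college, which works only as a rough lower bound on upward mobility since it assumes no students move down a tier.

```python
# Minimal sketch comparing the two hypothetical colleges above.
# The shares are the illustrative numbers from the example, not real data.
# "Mobility" is estimated as the net shift into the high-income tier between
# entry and post-graduation, a rough lower bound that assumes no one moves down.

colleges = {
    "College 1": {
        "input":  {"low": 0.05, "middle": 0.10, "high": 0.85},
        "output": {"no_degree": 0.05, "middle": 0.20, "high": 0.75},
    },
    "College 2": {
        "input":  {"low": 0.50, "middle": 0.30, "high": 0.20},
        "output": {"no_degree": 0.10, "middle": 0.30, "high": 0.60},
    },
}

for name, c in colleges.items():
    high_gain = c["output"]["high"] - c["input"]["high"]  # net movement into high income
    low_at_entry = c["input"]["low"]                      # room for upward movement
    print(f"{name}: high-income share {c['input']['high']:.0%} -> "
          f"{c['output']['high']:.0%} (net change {high_gain:+.0%}); "
          f"{low_at_entry:.0%} of the class entered low income")
```

For College 1 the high-income share actually shrinks (85% in, 75% out), while College 2 shows a +40 point net gain, which is the pattern the example is meant to illustrate.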
This is what the Chetty study I linked to earlier tried to measure. It found the following colleges had the largest portion of students who moved up at least 2 income quintiles after college. HYP didn’t fare as well: Chetty put the #1 USNWR-ranked Princeton in the bottom 2% of colleges for social mobility.
Best Overall Mobility Index: Chetty Study
1. Vaughn College – 57% of students increased 2 income quintiles
2. CUNY – 51% of students increased 2 income quintiles
3. Baruch – 49% of students increased 2 income quintiles
4. Texas A&M – 48% of students increased 2 income quintiles
5. Cal State LA – 47% of students increased 2 income quintiles
…
1991/2139. Harvard – 11% of students increased 2 income quintiles
2040/2139. Yale – 10% of students increased 2 income quintiles
2096/2139. Princeton – 9% of students increased 2 income quintiles
The Chetty study above uses tax-reported income of parents, which USNWR may have trouble getting access to. USNWR could ask some related questions in surveys and default to the Chetty-reported figures plus a penalty for colleges that don’t provide the info. However, this is complicated and there are likely to be further issues. A far simpler metric that is trivial to calculate and uses numbers USNWR already has in their database is simply (% Pell x Pell Grad Rate); a short sketch of the arithmetic follows the lists below. Some colleges that do well on this metric in 2019-20 IPEDS are:
- Baruch – 43%
- Cal State LA – 40%
- CUNY – 39%
- Texas A&M – 37%
Some colleges that do poorly on this metric, as listed in 2019-20 IPEDS, are:
- Duke – 11%
- Caltech – 11%
- Chicago – 12%
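Here is a minimal sketch of that metric. Both inputs are per-college numbers IPEDS already reports (the share of undergrads receiving Pell Grants and the graduation rate of Pell recipients); the example colleges and figures below are hypothetical, not actual IPEDS values for any school listed above.

```python
# Minimal sketch of the (% Pell x Pell grad rate) metric described above.
# Inputs are numbers already reported in IPEDS: the share of undergrads
# receiving Pell Grants and the graduation rate of Pell recipients.
# The example colleges and figures are hypothetical, not actual IPEDS data.

def pell_mobility_score(pell_share: float, pell_grad_rate: float) -> float:
    """Fraction of an entering class that is both Pell-eligible and graduates."""
    return pell_share * pell_grad_rate

examples = {
    "Hypothetical access-oriented public": (0.55, 0.75),     # 55% Pell, 75% of them graduate
    "Hypothetical wealthy selective private": (0.12, 0.95),  # 12% Pell, 95% of them graduate
}

for name, (pell_share, grad_rate) in examples.items():
    print(f"{name}: {pell_mobility_score(pell_share, grad_rate):.0%}")
```

The first hypothetical college scores about 41% and the second about 11%, the same kind of spread seen in the IPEDS-based lists above.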
The above is better than the existing metric, but it still has issues. Some colleges try to increase Pell %, but still admit the bulk of the class from top 5% income. Just looking at Pell % doesn’t distinguish between such colleges and others that admit the bulk of the class from middle income. Ideally, you’d want to consider the portion of the class that is truly low income (not the same as Pell), middle income, and high income… not just the portion Pell. You could get some rough estimates based on things like the portion that did not apply/qualify for FA as listed in the CDS and the sticker price without FA. USNWR has their own CDS-like survey and could ask other questions, without getting too specific about the income of FA applicants.
Graduation rate is also a severely flawed metric that primarily relates to being selective enough to admit kids who are likely to graduate and having good enough FA that they don’t need to leave for financial reasons. Better would be to apply a correction for colleges that admit Pell kids with a lower/higher risk of not graduating. For example, if SAT scores and demographic distribution suggest that admitted Pell kids with those stats typically have a 40% chance of graduating, but 80% graduate, that’s impressive. However, if Pell kids with similar SAT scores and demographics have a 90% chance of graduating and 90% of them graduate, that’s not as impressive. A small sketch of this kind of adjustment follows.
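The sketch below uses the two hypothetical cases from the paragraph above. The “expected” graduation rates stand in for the output of some predictive model built on SAT scores and demographics; that model is assumed here, not implemented.

```python
# Minimal sketch of the graduation-rate adjustment described above: compare each
# college's actual Pell graduation rate to an expected rate predicted from the
# stats/demographics of the Pell students it admits. The expected rates are
# placeholders for a predictive model's output; the cases are the hypothetical
# ones from the paragraph above.

def grad_rate_value_added(actual_rate: float, expected_rate: float) -> float:
    """Percentage-point gap between actual and predicted Pell graduation rates."""
    return actual_rate - expected_rate

cases = {
    "Admits higher-risk Pell kids, graduates most of them": (0.80, 0.40),
    "Admits low-risk Pell kids, graduates the expected share": (0.90, 0.90),
}

for name, (actual, expected) in cases.items():
    delta = grad_rate_value_added(actual, expected)
    print(f"{name}: actual {actual:.0%}, expected {expected:.0%}, value added {delta:+.0%}")
```

The first case shows a +40 point gap over expectation (impressive), while the second shows no gap at all, which is the distinction the adjustment is meant to capture.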
The above calculation adjustments are kind of like treating a bullet wound with a band-aid. They’re better than nothing, but there are far more severe problems with the whole idea of trying to rank “best colleges” by a formula. Even if USNWR were to change the “social mobility” metric to Chetty’s calculation, the USNWR list of best-ranked colleges would still be applying arbitrarily selected weightings and arbitrarily selected metrics to create a meaningless formula for “best college,” with no validation of whether the formula is correct or incorrect. Best of all would be to abandon the idea of pretending the outcome of an arbitrary, scientific-looking formula for “best college” is meaningful and instead list stats/info about colleges and provide a good way for students to search through those stats to find good-match colleges.