US News Rankings have a Seriously Negative Bias

First and foremost, USNews uses the college rankings simply to sell their product. They know that every year their ranking issues will bring in money. That’s why they started ranking colleges back in the 80s. They wanted to increase their sales. So yes, they’re out to make a buck.

The “peer review” portion of the ranking calculation introduces a negative bias. Peer review should not factor into ranking the quality of a university. They are asking people to rank other universities, and then they use those “opinions” to make HUGE statements about the quality and distinction of a university. That’s ridiculous. It’s like asking people in California to rank Whataburger vs. In-N-Out, and Texans to do the same in reverse. People are biased, and they will always rank according to familiarity.

As for College Board, I was just venting. But seriously, it’s only out to make money. It nickels and dimes at every level, from prep books to score send-outs. I could go on and on about how awful College Board is, but that’s not the purpose of this forum.

There is nothing inherently wrong with expert opinion as part of a model. Especially composite expert opinion.

1 Like

Another interesting UT Austin-related observation. Pick an academic category…Engineering, Computer Science, Geology, Accounting, etc. Without fail, UT Austin is ahead of Florida in every single category I looked up. I went into the double digits, and I gave up. However, UF is ranked much higher as a collective institution. I would like to hear/read a coherent justification of this.

2 Likes

That’s it. We must accurately rank schools. For science. To do so, I propose a representative sample of the top 25% of all college-bound HS students across all racial, geographic and economic lines. 500,000 students should do. These students will be given a 4-year, zero-cost education at any school to which they are admitted. This would include free transportation to/from home, housing for internships, etc. This removes all financial hurdles.

  1. They apply to schools and get accepted/rejected. Easy enough to record those decisions to come up with an adjusted acceptance rate on an apples-to-apples basis. To keep things balanced, some apps are randomly sent to schools. Timmy doesn’t need to accept an appointment to the Naval Academy, but we might force him to apply.

  2. We examine the revealed preferences of each student when weighing admissions offers. They’ve gotta pick a school.

I propose that we begin this study for the incoming freshman class that aligns with the age of the oldest child still living year round in my house. Then we repeat the study exactly three years later when my next oldest child will be a HS senior.

In the name of science, I will offer my children as test subjects.

Yeah but you ignored Tom Pettyology. That’s gotta be worth more than all other majors combined.

But seriously, I wouldn’t worry too much about comparing any two schools. And anybody who picks Florida over UT based on a ranking is doing UT a favor by not attending. The same applies to Florida if the rankings and situation were reversed.

There’s nothing inherently wrong with it, but there are limits. Asking a college administrator to rank 100+ schools in a boring survey is like asking a college football coach to rank the entire FBS. Except it’s even worse, because schools don’t compete head-to-head on a schedule where we can assess the score. And we don’t have an ESPN top 2,000 math geek recruiting ranking where we can see where all those kids go.

1 Like

In theory, there is nothing wrong with expert opinion, but it’s important that the “experts” be knowledgeable about the things they are ranking and have well-defined criteria for how they are supposed to rank them.

For example, the Siena presidential rankings “study” at US Presidents Study – Siena College Research Institute has experts rank each of the 45+ presidents in 20 well-defined categories. The study compiles an average ranking in those categories and a composite ranking. The “experts” are historians, political scientists, and others who are familiar with the accomplishments of each of the 45+ presidents and are well qualified to rank them on the well-defined criteria. The experts are knowledgeable about each thing they are ranking and are given well-defined criteria for how they are supposed to rank them.

In contrast, USNWR sends out a survey asking college administrators to rank hundreds of colleges on a scale of 1 = “marginal” to 5 = “distinguished.” The college administrators asked to fill out the survey are by no means experts on all the hundreds of colleges that they rank. They are probably well qualified to rank only a small handful of them. It’s my understanding that USNWR also does not provide a good definition of what “marginal” and “distinguished” mean, so even the administrators who are knowledgeable may not know how they are supposed to evaluate how “distinguished” or “marginal” particular colleges are.

I expect the end result is largely a circular feedback loop. The college administrators who choose to submit their USNWR survey largely base which colleges are “marginal” and “distinguished” on the colleges’ USNWR rank, particularly for colleges with which they are not familiar. If a college is ranked higher in USNWR, it is likely to be identified as more “distinguished.” The high weighting given to the survey in the USNWR rankings then cements both the high- and low-ranked colleges’ respective placings.
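
To make that feedback loop concrete, here is a toy simulation. Only the 20% survey weight comes from figures quoted later in this thread; the 50 schools, the random metric drift, and the way respondents anchor on last year’s rank are all made up purely for illustration. The survey term acts as inertia: year over year, fewer schools tend to move in the composite ranking than would move on the underlying metrics alone.

```python
# Toy simulation of the feedback loop described above (illustrative only).
import random

random.seed(1)
N = 50
metrics = [random.random() for _ in range(N)]  # non-survey portion, scaled 0..1

def to_ranks(scores):
    """Rank 1 = best score."""
    order = sorted(range(N), key=lambda i: scores[i], reverse=True)
    ranks = [0] * N
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

ranks_with_survey = to_ranks(metrics)      # year 0: published ranking
ranks_metrics_only = list(ranks_with_survey)

for year in range(1, 11):
    # Schools genuinely improve or decline a little each year.
    metrics = [min(1.0, max(0.0, m + random.gauss(0, 0.05))) for m in metrics]

    # Survey respondents anchor on last year's published rank (rescaled to 0..1).
    survey = [(N - r) / (N - 1) for r in ranks_with_survey]
    composite = [0.2 * s + 0.8 * m for s, m in zip(survey, metrics)]

    new_with = to_ranks(composite)
    new_without = to_ranks(metrics)
    moved_with = sum(a != b for a, b in zip(ranks_with_survey, new_with))
    moved_without = sum(a != b for a, b in zip(ranks_metrics_only, new_without))
    print(f"year {year}: rank changes with survey={moved_with}, metrics only={moved_without}")

    ranks_with_survey, ranks_metrics_only = new_with, new_without
```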

1 Like

UT and TAMU receive boosts from their peer assessment ratings. That is, they place higher in the rankings than they would if PA ratings were not included in the methodology.

To illustrate the point, UT would place tied for 23rd in its category if its ranking were based entirely on its peer assessment rating.

1 Like

The problem with the USNWR ranking isn’t its bias, or its arbitrariness. All rankings are biased and arbitrary. The problem is the harm it brings to college admissions and, more generally, to higher education itself. The cost of the damage it causes far exceeds any monetary gains USNWR reaps. It boosts the desirability of some colleges relative to others, making admissions to some colleges artificially competitive. It has sold many students and their families the fallacious idea that the college selection process is as simple as a single number: the position in its rankings. It distorts how colleges are run, making them respond to what USNWR considers pertinent rather than what they themselves consider to be important.

2 Likes

Under a fair evaluation, Reed would place 36th in its USN category by one estimation: Reed's USN Rank Estimated at 54 Places Higher Under a Fair Evaluation.

However, Reed needs to take notice of its admission yield, which, at 15.3% last year, may suggest cause for concern.

3 Likes

UF is less expensive than UT for OOS students.
UT’s middle 50% ACT is 26-34. UF’s was 30-33 and is now 30-34.

Maybe there are some graduation rate differences or class sizes?

Most administrators who receive the USNWR peer survey throw it in the garbage…only 36.4% responded in 2020 (the last survey).

For those who do respond, I have heard/read multiple administrators state that they give the survey to staffers to fill out…typically someone in enrollment management, sometimes even an entry-level AO. It’s ridiculous to think, as you point out, that an average enrollment management staffer knows much about many schools, and even if they did, the results would still be questionable due to the poor definitions of each of the scale points.

1 Like

USNWR publishes the weightings used in their ranking formula, and it does not consider the academic categories that you listed, so which college is ranked higher in geology, accounting, … has no direct influence on the respective college’s National USNWR ranking. The highest weighted criteria in the National ranking are as follows:

20% – “Marginal” / “Distinguished” Survey
18% – Graduation Rate
10% – Financial Resources per Student
8% – Class Size
7% – Faculty Compensation

Comparing UT and UF in these categories, UF appears to lead in 3 of the 5 listed categories – graduation rate, class size (according to USNWR… other sources differ), and faculty compensation. If you subscribe to USNWR, I expect they’ll provide a more detailed review with their real numbers.

“Marginal” / “Distinguished” Survey – UT = 4.1, UF = 3.8 (old, don’t know current)
Graduation Rate – UT = 83%, UF = 88% (as listed on USNWR)
Financial Resources per Student – UT = $78k, UF = $37k
Class Size – UT = 37% <20, 24% >50; UF = 53% <20, 10% >50 (as listed on USNWR)
Faculty Compensation – UT = $148k, UF = $159k
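
To make the arithmetic explicit, here’s a quick back-of-the-envelope tally using the weights and figures quoted above. This is not USNWR’s actual scoring (their normalization isn’t public at this level of detail); it simply credits each criterion’s published weight to whichever school leads it.

```python
# Back-of-the-envelope tally of the UT vs. UF figures listed above. This is
# NOT how USNWR actually normalizes and combines metrics; it simply credits
# each criterion's published weight to whichever school leads it.

weights = {                      # percent of the overall National ranking
    "peer survey": 20,
    "graduation rate": 18,
    "financial resources per student": 10,
    "class size": 8,
    "faculty compensation": 7,
}

values = {                       # (UT, UF); higher is better for all five here
    "peer survey": (4.1, 3.8),
    "graduation rate": (83, 88),
    "financial resources per student": (78_000, 37_000),
    "class size": (37, 53),      # percent of classes with fewer than 20 students
    "faculty compensation": (148_000, 159_000),
}

totals = {"UT": 0, "UF": 0}
for criterion, (ut, uf) in values.items():
    leader = "UT" if ut > uf else "UF"
    totals[leader] += weights[criterion]
    print(f"{criterion}: {leader} leads (worth {weights[criterion]}% of the score)")

print(totals)   # UT leads criteria worth 30%, UF leads criteria worth 33%
```

Of course, leading a criterion by a hair counts the same here as leading it by a mile, which is exactly why the real formula’s unpublished normalization matters.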

2 Likes

I definitely see these as potential issues, but a few things that suggest maybe they aren’t as harmful—or at least they shouldn’t be as harmful—as is being suggested:

  1. When we look at who is going where, this doesn’t really apply to 75+% of incoming freshmen. People don’t apply to their local completely average LAC because of a ranking. Or their directional state U. Or even their state flagship. Or their religiously affiliated school. Or the school their friends/family went to. This really only applies to the small proportion of students who happen to be above-average students applying to many, many colleges and doing a lot of comparisons.

  2. I do agree that what US News quantifies is not necessarily perfectly aligned with how a given school would choose to optimize its resources. However, when we consider what broadly makes a school “good”, resources probably do play the biggest role.

  3. What is astonishing to me is that as much as schools complain about these rankings, they have done roughly zero to combat them. I’m not saying they should boycott. But people instinctively love to rank things. No one is going to change that. Best colleges, best cities to visit, top 50 basketball players of all time, presidents. These enormously wealthy schools, with all of their scientific and statistical experts, have basically done zero while a tiny shell of a company (US News) has been allowed to corner the market on “the” rankings.

If you want to reduce the importance of those rankings, the easiest way to do that is to create your own. Not just one, but several different rankings using different metrics. Credible rankings that make sense and muddy the waters, eliminating the reliance on any one ranking. Create more sophisticated tools that allow students to personalize rankings for their situation. So much brain, financial and computing power there, and we get…nothing.

They could do so, so much. Open vs core curriculum requirements depending upon your field of interest across schools. Walk and bike scores, or a series of “pick between two pictures” exercises to identify preferred settings rather than generic urban/rural/suburban designations. They could track graduation rates and outcomes by SES, race and HS GPA/test scores to disclose how students most similar to you fare (grad rate, percent going to grad school, etc). Percent of people taking advantage of internships/research opportunities. Percent doing 1-on-1 work for capstones/honors theses. Income of Pell-eligible students x years out. Club sports, dietary restrictions, nearest synagogue, percent of days that are sunny during the school year, faculty turnover, percent of students who study abroad for at least one semester, percent who won prestigious grad scholarships, percent going to med school within x years of graduating. Similarity scores based upon criteria important to a given applicant. Flags that indicate, “if you aren’t so particular about [most restrictive metric], here are a few other schools very similar to your top choices”.
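
For what it’s worth, the personalization piece isn’t even technically hard. Here’s a rough sketch of the kind of tool described above; the metric names, weights, and school data are all hypothetical, and the point is only that a ranking can be re-weighted per applicant instead of published as one fixed list.

```python
# Sketch of a personalized ranking: same data, different weights per applicant.
# All names and numbers below are hypothetical.

schools = {
    # made-up scores on a 0-100 scale
    "School A": {"grad_rate": 92, "net_cost": 55, "class_size": 80, "outdoors": 40},
    "School B": {"grad_rate": 85, "net_cost": 85, "class_size": 60, "outdoors": 90},
    "School C": {"grad_rate": 88, "net_cost": 70, "class_size": 90, "outdoors": 60},
}

def personalized_ranking(schools, weights):
    """Score each school with the applicant's own weights (which sum to 1)."""
    scored = {
        name: sum(weights[m] * vals[m] for m in weights)
        for name, vals in schools.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Applicant 1 cares mostly about cost; applicant 2 about small classes.
print(personalized_ranking(schools, {"grad_rate": 0.3, "net_cost": 0.5,
                                     "class_size": 0.1, "outdoors": 0.1}))
print(personalized_ranking(schools, {"grad_rate": 0.3, "net_cost": 0.1,
                                     "class_size": 0.5, "outdoors": 0.1}))
```

Two applicants, two different orderings, no single “best” list.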

If these rankings are harmful and trash, then the bar is not that high to produce things that would render them obsolete. They have an incumbent advantage, but it’s not that strong. Some of these things would take time/resources to count, but we’re talking about the collective talent and resources of academia here.

I mean, if they really wanted to, they could buy the college ranking business line from US News. If it’s causing more collective damage than the revenue USNWR gets from monetizing it, that’s the obvious answer. Then let university stats junkies run wild: typology clustering by school and students, bespoke rankings, etc.

Nail squarely on the head!

Gladwell’s thesis (and it’s not new; he wrote about it 10 years ago in the New Yorker) is that ranking methodology has to be individualized. He uses a great example: ranking cars. The order depends completely on the qualities one prioritizes. The Suburban and the 911 are both great vehicles. They do completely different things. How can one be “better” than the other on a single, unified list? It can’t.

Even worse, the USNWR Engineering methodology is based 100% on institutional reputation, typically formed by the research produced. It says nothing about the undergraduate experience.

Caveat Emptor.

4 Likes

The poker player in me would suggest that, rather than complain about the weaknesses of the ranking systems, we exploit those same weaknesses to our advantage - find the schools that fit our criteria and are under-rated by USNWR. They should be easier to get into, possibly provide more merit aid, and provide a better educational experience than many higher ranked schools. Lemonade from lemons. We just have to swallow our pride and value educational quality over prestige.

8 Likes

@RockySoil That was our decision with our oldest daughter, and we haven’t regretted it for one moment.

There are lots of “hidden gems” that are (imo) under-ranked by USNWR and that provide excellent educations with very good to great merit offerings. They’re not even hard to find, since many of those schools are ranked from 75-250 in the USNWR rankings…still excellent placements if you put the rankings in proper perspective (over 4,000 colleges/universities in the US).

4 Likes

The opportunity comes from considering outcomes vs. resources or student quality. I’ve found that many of these lower ranked colleges don’t have good information on outcomes.

Maybe a school that has a relatively low endowment because they’ve been spending it on resources and financial aid? Maybe one that doesn’t have Division I athletics that need $20 million or more per year in institutional support?

A more academic critique of what should be measured is linked below. Regardless of method, any rating/assessment system will run into a circular, biased-sample issue.

Example: assume the statement “student engagement is the most important thing” to be true. The most prepared, academically-oriented students tend to be the most engaged. The most selective universities tend to have the highest proportion of those students, so at the college/university level, those institutions tend to have the highest rates of engagement.

But: this won’t tell you how engaged an Ivy-bound kid would be at a Rutgers or Maryland Baltimore County or Ball State or Marietta compared to their actual institution of attendance.

The same problem applies to value-added metrics. Measuring value for someone x years out of Yale is difficult as it is. What is value? And even if we could agree upon a definition of value, measuring the difference in value between Yale and Miami of Ohio for a particular student is not possible. The best you can really do is to create typological clusters of students by SES/race/field of study to track over time. You run into sample size and data collection issues, especially for small schools. But this would give a rough indication that “students like you tend to get the most value at these institutions when we narrowly define value as X.”

https://www.aacu.org/publications-research/periodicals/assessing-quality-higher-education
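
As a rough illustration of what those typological clusters might look like in practice, here is a minimal sketch assuming (hypothetically) a longitudinal dataset with one row per graduate. The column names, the tiny sample, and the use of earnings five years out as the “value” metric are all invented; any agreed-upon definition of value could be swapped in.

```python
# Minimal sketch of the "students like you" comparison (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "school":       ["Yale", "Yale", "Miami of Ohio", "Miami of Ohio"],
    "ses_band":     ["low", "high", "low", "high"],
    "field":        ["STEM", "STEM", "STEM", "STEM"],
    "earnings_5yr": [95_000, 120_000, 78_000, 82_000],   # made-up numbers
})

# Median outcome for each (student type, school) cell -- small cells will be
# noisy or missing, which is exactly the sample-size problem noted above.
cells = (df.groupby(["ses_band", "field", "school"])["earnings_5yr"]
           .agg(["median", "count"]))

# "Students like you": low-SES STEM applicants, ranked across schools.
print(cells.loc[("low", "STEM")].sort_values("median", ascending=False))
```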

1 Like

It’s already been well established that outcomes are baked in at the time of HS graduation. Strong students do well, for the most part, no matter where they go. The slight exception is FGLI students. The schools with a higher percentage of good outcomes attract a higher percentage of higher-achieving students.

Those metrics are HIGHLY major-dependent too, but the masses want to generalize them to the impact of the institution.

3 Likes