The Everlovin' Undergraduate-Level University Rankings

…but USNews, using the CDS, can get their # of grad and undergrad students and compute the S/F ratio themselves. Were they really that easy to fool back in the day?

And seminars – someone has to agree to show up. It might not foil the admins, but it’s taking time out of some people’s day(s).

As for financial “creativity” – guile, I guess. Bastages.

Regarding manipulation (university side) or misinterpretation (USNews side) this is the way I see it:

  • S/F ratios -- I believe Alexandre is right that graduate/research faculty are often counted even when they do little or no undergraduate instruction
  • The Resources category favors schools with medical schools and lots of research. Lots of things can go into "Instruction" that aren't really instruction. This can give the impression, for instance, that research universities spend much more per student on undergraduate education than top LACs, which I doubt is the case. University finances are a black box, unfortunately, which makes this easier.
  • SAT/ACT can be manipulated via favoring scores over more holistic admissions. If I recall correctly from the article on Northeastern, they also got USNews to omit categories like international students. Private schools have more latitude to do this. State schools would face more of a backlash if they rejected students with better records in favor of high SATs.
  • Class Rank - only about 30% or so of high schools report class rank now. At top high schools (e.g. Thomas Jefferson in Northern Virginia), you can have great students well outside the top 10%. There is no real substitute for actually looking at the difficulty of courses a student takes and the competitiveness of their high school, but USNews really isn't reporting on that.
  • Class size - read the article about Northeastern and see how they worked this. This can be done by capping class size, types of courses, etc.

@Alexandre - “Definitely. That is the point of “manipulation”…it is indeed dishonest. But it happens when existential threats lurk about. How else does a university’s financial resources rank leap dozens of spots in the period of 2-3 years?”

If what you claim is true, namely, there’s been much dishonest “manipulation” going on, then how do you explain the remarkable consistency in the USNW rankings from 2008 to 2017 as someone has already pointed out? If your claim is true, we’d expect all sorts of messy inconsistencies each year, no?

Check out Forbes late April ranking for colleges worth every penny.


And another most recent Forbes ranking

Ten Expensive Colleges Worth Every Penny

  1. Amherst
  2. Dartmouth
  3. Williams
  4. UChicago
  5. Tufts
  6. Colgate
  7. UPenn
  8. Columbia
  9. Hamilton
  10. Vassar

https://m.forbes.com/sites/nataliesportelli/2017/04/26/10-expensive-colleges-worth-every-penny-2017/?c=1&s=OnCampus

“…but USNews, using the CDS, can get their # of grad and undergrad students and compute the S/F ratio themselves. Were they really that easy to fool back in the day?”

The US News was not fooled. They don’t care. They just want to sell magazines.

http://finance.caltech.edu/documents/394-cds2015_final.pdf

Go to page 23. Caltech says it has 983 students. That is their undergraduate student population. It completely omits the 1,200 graduate students. Caltech’s true student to faculty ratio is not 3:1, it is 7:1.

http://www.upenn.edu/ir/Common%20Data%20Set/UPenn%20Common%20Data%20Set%202015-16.pdf

Penn lists 9,500 students in its ratio. Again, that’s just its undergraduate student body. It omits 2,300 Wharton graduate students, 1,500 SEAS graduate students, 700 Nursing graduate students and 2,100 graduate students from the School of Arts and Sciences. That’s a total of 6,600 graduate students left out of the equation that should have been included. Penn’s true student to faculty ratio is not 6:1, it is 10:1.
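The arithmetic behind both corrections is easy to check. A minimal sketch in Python, assuming the faculty count implied by each school's reported undergrad-only ratio (the student and ratio figures are from the posts above; the faculty counts are back-calculated, not from any filing):

```python
def true_ratio(undergrads, grads, reported_ratio):
    """Back out the faculty count implied by the reported undergrad-only
    ratio, then recompute the ratio with graduate students included."""
    faculty = undergrads / reported_ratio
    return (undergrads + grads) / faculty

# Caltech: 983 undergrads, ~1,200 grads, reported 3:1
print(round(true_ratio(983, 1200, 3)))   # → 7

# Penn: 9,500 undergrads, 6,600 grads, reported 6:1
print(round(true_ratio(9500, 6600, 6)))  # → 10
```

Both results match the "true" ratios claimed above, so the claim is at least internally consistent arithmetic, whatever one thinks of whether grad students should be counted.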

“If what you claim is true, namely, there’s been much dishonest “manipulation” going on, then how do you explain the remarkable consistency in the USNW rankings from 2008 to 2017 as someone has already pointed out? If your claim is true, we’d expect all sorts of messy inconsistencies each year, no?”

Not at all, TiggerDad. Universities do not change their data annually. They altered it once, several years ago, and they have not made any drastic changes to their playbook since then.

Manipulation doesn’t have to be “dishonest”. The former Northeastern president speaks forthrightly about it in the article I’d posted. He didn’t do anything illegal or unethical. He just focused the university’s resources and efforts on doing better on USNews as if it was a graded test.

There isn’t a whole lot of movement on USNews because many colleges are now focused on it. But there is some. Some universities have executed better than others. Perhaps this was because they were underperforming to start (Chicago), or put more focus earlier on USNews metrics than others (Northeastern).

I think of USNews as closer to an overall university rating (undergraduate + graduate + research). That is why the Niche data, which is probably more reflective of undergraduate experience, yields some different results. There are also types of schools (tough schools with lower completion rates, schools that are closer to LACs that do not have as much focus on graduate study) that are at a disadvantage.

@Greymeer your ranking looks interesting, but I’m not sure I understand the formula. Can you elaborate? Is it top to bottom in post #137?

I am not sure I agree IzzoOne. Manipulation is a form of dishonesty. Is it illegal in the case of university rankings? Of course not. There are no laws against lying about student to faculty ratios and financial resources. But is it misleading and unethical? I think so.

If it’s an undergrad ranking like they say it is, they could use the CDS to get a true(r) undergrad S/F ratio: undergrads/undergrad-teaching faculty. That info can be found in sections B and I.

Look at the test score ranking in #102 (or the US News, or the Forbes, or the Niche rankings). How many people seriously doubt that the schools near the top of these lists (as a group) offer objectively better quality than schools much lower down? I’m not talking about the difference between #18 and #28, I’m talking about the difference between the top 20 and the ones that don’t crack the top 100. If this isn’t true then why, over time, don’t top students gravitate away from these schools toward cheaper, less selective schools? They certainly have that choice (which less-qualified students generally do not). Tulip mania? I don’t think so.

The fact that individual college performance measurements can be manipulated doesn’t necessarily mean they are manipulated systematically, repeatedly, deliberately, and effectively. Niche, US News, and Forbes use rather different data and methodologies. Yet they all generate a similar set of top colleges (albeit not in exactly the same order). Most of these rankings are repeated annually, so if there is a one-off problem with an individual institution’s numbers, there are opportunities to correct it. A few colleges have in fact been called out. Yes, some schools cheat. Are we going to stop collecting EPA mileage data just because VW jiggered its numbers?

@tk21769 we fix the test so that VW can’t cheat the numbers and the test measures actual performance rather than manipulated performance. (This might also help make actual air quality in large cities closer to air quality predicted from pollution tests so that fewer people actually die or suffer health issues.) Nowhere in that scenario do we get rid of the EPA.

I think the purpose of this thread was simply to figure out if there is a better way of getting at actual quality of undergraduate education. So far, it sounds like some here are convinced USNews does that already. Others think some of the USNews numbers are not valuable or are manipulated, and there are some views (like perhaps Niche) that get more directly at undergraduate education quality.

@Alexandre

While there is some blatant cheating in US News reporting, it’s a pretty rare occurrence. Systemic focus on the metrics, yes, all the time. But your objection is based on the lying, not the optimization for metrics.

Companies do this all the time too. For example, many tech companies optimize for metrics that are seen as reflective of their effectiveness. They aren’t always exactly reflective of it, though. Is optimizing for those metrics immoral? Granted, in most cases those metrics correlate better with effectiveness than the US News metrics do with college quality, but we’re talking about a very vague gradient in severity here, versus the clear line of lying about those metrics entirely.


The core problem is pretty well discussed here: US News (or any ranking) doesn’t use metrics that most believe are fully reflective of school quality. The secondary problem is that collectively there is no agreement on a significantly better one. Reminds me of the saying about democracy being the worst system of government, except for all the other ones.

PengsPhils, the lying is significant enough to alter the rankings significantly, since some universities actually report data accurately and honestly while many do not.

But lying isn’t the only problem with rankings. Another problem is using a one-size-fits-all methodology and formula, and the lack of proper calibration to adjust for important factors such as size (larger universities benefit from economies of scale and will naturally spend much less on a per-student basis), location (urban campuses in expensive cities must pay higher faculty salaries) and affiliation (private universities are far more expensive than public universities and therefore must give out far more financial aid), to name a few. In other words, even if the data were being accurately and consistently reported, looking at them in an absolute sense would still not be telling.

What lying are you exactly talking about that has a significant effect? I am only aware of a few cases, most notably Emory in 2012. The vast majority do not lie in their reporting as far as I am aware; they only optimize for specific numbers and find loopholes in the rules. The same thing that happens with taxes. It sounds like your definition of lying is different in this situation based on what I know. If you’re taking the stance that all of that is lying, that’s fine, but I think many would disagree with that definition, and it’s important given its moral implications. Colleges are working the system they are given, and while places like Reed are ignoring it commendably, I see that as a positive for them, not a negative for most other schools. For better or worse, rankings have an effect on colleges, and they have adopted caring about them because it improves their school, in many cases in ways US News does not directly measure.

As to the rest, completely agreed. However, good luck getting everyone to agree on it, let alone releasing a comprehensive ranking using it.

https://xkcd.com/927/

Rankings are simply never going to be perfect, or even great, unless you quite literally make it yourself. They still serve important purposes successfully though, even through all their flaws. And the diversity of ranking methodologies is a useful tool in itself. I think people like to fuss and jockey for position on CC using ranks, but in the end, they are a general guide to help gauge a school. I don’t know of any school where their ranking differs so much from where they would be if I made my perfect ideal rank that it doesn’t serve that purpose well.

PengsPhils, claiming to have a 6:1 student to faculty ratio when your student to faculty ratio is in fact 10:1 is a lie. There is no way to sugarcoat this. The Common Data Set is clear on this point:

“Report the Fall 2016 ratio of full-time equivalent students (full-time plus 1/3 part time) to full-time equivalent instructional faculty (full time plus 1/3 part time). In the ratio calculations, exclude both faculty and students in stand-alone graduate or professional programs such as medicine, law, veterinary, dentistry, social work, business, or public health in which faculty teach virtually only graduate-level students.”
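The CDS formula quoted above is simple to encode. A sketch of the FTE calculation, using made-up illustrative headcounts (not from any school's actual filing):

```python
def cds_ratio(ft_students, pt_students, ft_faculty, pt_faculty):
    """CDS section I ratio: FTE students (full-time plus 1/3 part-time)
    divided by FTE instructional faculty (full-time plus 1/3 part-time)."""
    fte_students = ft_students + pt_students / 3
    fte_faculty = ft_faculty + pt_faculty / 3
    return fte_students / fte_faculty

# Hypothetical school: 5,000 FT / 600 PT students, 400 FT / 90 PT faculty
print(round(cds_ratio(5000, 600, 400, 90), 1))  # → 12.1
```

The dispute in this thread is not over this arithmetic but over which students and faculty go into the inputs: the instruction says to exclude only "stand-alone graduate or professional programs," not graduate students in programs whose faculty also teach undergraduates.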

@Alexandre I was not aware of that - skimmed the thread and missed those posts.

Am I missing something here? It seems to me that the CDS asks universities to not include graduate students when they say to “exclude both faculty and students in stand-alone graduate or professional programs”.

If all professors teach both undergraduates and graduates (we know this isn’t the case), then the 3:1 would be correct. No one likely has an accurate count of the grad-only professors, but assuming those are minimal, those ratios are pretty accurate. Where is the lie?

Edit:

I see now what they mean by stand-alone, but it’s not hard to read that as “not an undergrad program”.

So what percentage of schools do this?

I believe @Alexandre is correct that some schools are reporting incorrect S:F ratios. Some schools (including at least some of the Ivies) seem to be excluding students in graduate arts & science programs, but not excluding professors in those programs (which are not “stand alone”). So yes, those colleges are inflating their S:F ratio performance … which counts for 1% of the USNWR national university ranking.

Does that poison the well for the rest of the ranking?
What about also addressing the allegation that schools like Princeton are “flooding their course catalogs with seminars”? This presumably benefits their class size distribution numbers. To me, this could be considered (as Bill Gates used to say about Windows issues) “a feature not a bug.” If it drives up expenditures and costs, some people would conclude it’s a feature they’d rather not pay for. It also may not be an efficient use of a distinguished scholar’s time to discuss What Makes for a Meaningful Life? with 15 teenagers for 3h/week. I suspect, though, that Princeton offers these classes from a belief they have pedagogical value (not to inflate its US News numbers).
Arguably this is the kind of feature that distinguishes a few truly “elite” schools from the rest of the pack.
http://www.princeton.edu/pub/frs/ay201617/

In any case, if you want to focus only on the bread-and-butter intro courses that many students care about most (such as pre-med courses) it may be true that Princeton’s class sizes aren’t all that much smaller than Berkeley’s/Michigan’s/UVa’s. Or, even if they are, once you get over a certain number of students in a lecture hall, it doesn’t much matter if the number is 150 or 300.

I think a lot of the manipulation is gaming the numbers rather than misreporting. However, a number of colleges have now gotten caught misreporting. Here is a quote from an article on how to game the numbers:

“U.S. News’ Robert Morse has said there is “no reason to believe that the misreporting is widespread.” But a survey by Inside Higher Ed last fall suggests that even admissions directors are skeptical of the reporting, with 91 percent of those surveyed saying they believe there’s more misreporting than has been identified.”

https://www.propublica.org/article/the-admission-arms-race-six-ways-colleges-can-game-their-numbers