US News 2017 rankings

CaliDad2020 makes a good point when he writes that “most folks reading here know enough to look beyond the USNews rankings anyway.”
For example, USNews has been listing Princeton as the #1 University in America for five years and yet Princeton, to my knowledge, loses most of its cross-admits to Harvard, Stanford and Yale year after year. I like to think that this means that America’s brightest students are smarter than the people who manufacture the annual rankings at USNews.

@marvin100, sure—and what, in many cases, were they reacting to?

I’m not saying that everyone (or even essentially everyone) buys into the idea that the USNWR rankings are objective, and thus reliable. I’m saying that a pretty sizable number of people do.

You, however, seem to be saying that nobody (or essentially nobody) sees them that way. That’s a much harder claim to make. Am I misreading you?

@marvin100 I think more traditional undergrad college consumers than you realize believe USNews is objective.

But who really cares? Morse and USNews aren’t going to change because I think their 23% peer assessment input is a bad metric, and college-goers aren’t going to stop referencing USNews because I think that assessment makes too much of the input a closed loop.

We are all playing inside baseball. With less than 50% of college students now pursuing the “traditional” college model, these rankings will matter less and less. Few continuing-ed teachers getting their master’s so they can stay employed, etc., care where their school lands in the USNews rankings.

I’d say the WSJ has some work to do on their methodology. They don’t rank Harvey Mudd in the top 500 colleges in the country. Gotta say, they take a credibility hit in my opinion.

It is possible that Harvard, Stanford and Yale offer more financial aid than Princeton, or that students are wary of Princeton’s supposed grade deflation. Those do not invalidate the US News rankings and may be why the Tigers lose more cross-admits to those schools.

I doubt it. In 2001 Princeton was the first university in the country to go to a “no loans”/“all grants” financial aid package, and they still lost the cross-admits. And before that, until 1991, when the US Dept. of Justice charged the Ivies with conspiring to fix financial aid packages (and Ivy aid packages were substantially similar), Princeton still lost cross-admits to Harvard and Yale.
Princeton is a top university but a problem with Princeton may be…well, Princeton. When Paul Krugman left in 2014 for NYC (CUNY), he cited “lifestyle” choices, preferring the amenities of a large city to those of a small town. I don’t think he was too concerned about “Princeton’s supposed grade deflation.”

@Prof99 I think students would choose between Princeton and Harvard, Yale, Stanford, etc., mostly on personal fit. Maybe they just like Harvard more because it feels like a better fit. It’s not like they’re choosing between Princeton and UConn.

I have in my hand a list…

I keed, I keed, but seriously: where are they? Who are these people? I haven’t seen any here or elsewhere. I’m sure there are a few, but having read threads about rankings ad nauseam and met thousands of college applicants and their parents, I haven’t found them to be a significant demographic. Why do you think they are? Do you just think so?

Well, both claims are pretty easy to make. I don’t think either of us is having a particularly hard time making claims. It’s supporting them that’s proving difficult (for you :wink: )

(I’m also still waiting for someone to support the claims that Morse or USN&WR has claimed objectivity. I’m not holding my breath, though, and it’s actually immaterial to my point, which is that WSJ’s rankings don’t fix anything and only serve to further muddy the waters, esp. since its “ROI data” claims are untenable.)

Well, there are several, but Morse uses very, well, legalistic phrasing. So, for example, in one interview, talking about the utility of the USNWR rankings, he said:

No, not precisely a claim of objectivity, but certainly a claim of quantitative rigor, which collapses in nonspecialist thought to objectivity.

My favorite, though, is stuff like:

This is, of course, a claim that the rankings are becoming ever more reliable, since their data are becoming more reliable. But everywhere I’ve found it from Morse, it’s an assertion without evidence (even though, in actual fact, good reliable data of the sort he’s said would be good to have is available from the American Institutes for Research’s Education Policy group, and has been for some time).

Among HYPSM, Princeton is the most focused on the undergraduate, based on % of all students who are undergrads – roughly two-thirds. The other four are all under 50%.

Yale is (a distant) second, followed by Stanford, MIT and finally Harvard. Something like 30% of all students at Harvard are undergrads.

This doesn’t mean that the other four schools provide an inferior undergrad education; it simply (to me) means that undergrads are not the top priority. How can they be, when they comprise less than half of the student pop? They have to fight over resources with all those grad students. And top profs are less likely to have time for undergrads if they are busy nurturing grad students. Also, student research spots are more likely to be within reach of undergrads at undergrad-heavy schools.

If that is not a metric in any of the university undergrad rankings, I think it should be: undergrad % of total student population (rough sketch of the idea below).

Just a thought. :slight_smile:
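Just to illustrate what that would look like as a ranking input, here’s a minimal sketch in Python. The headcounts are made-up placeholders (not actual enrollment figures), and the function name is my own invention:

```python
# Illustrative sketch: ordering schools by undergrad share of total enrollment.
# The headcounts below are made-up placeholders, NOT real enrollment data.
enrollment = {
    # school: (undergrads, grad/professional students) -- placeholders only
    "Princeton": (5400, 2900),
    "Yale": (5500, 7500),
    "Stanford": (7000, 9300),
    "MIT": (4500, 6800),
    "Harvard": (6700, 15000),
}

def undergrad_share(undergrads: int, grads: int) -> float:
    """Fraction of the total student body made up of undergraduates."""
    return undergrads / (undergrads + grads)

ranked = sorted(
    ((school, undergrad_share(u, g)) for school, (u, g) in enrollment.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for school, share in ranked:
    print(f"{school:10s} {share:.0%} undergrad")
```

Nothing fancy, but it’s exactly the kind of number that would be trivial to fold in as a weighted subfactor.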

Eh, neither of those is a claim of objectivity at all. It’s unsurprising that he’s boosting his ranking system and spinning it as well, but that’s not what’s at stake here. In this thread there are unequivocal claims that he’s said his rankings are objective. The stuff about cohort graduation rates and the Student Tracker service is interesting, but it has nothing to do with claims of objectivity. Still waiting for the evidence. @dfbdfb

In my everyday work life, I deal with the fact (yes, fact) that non-literal interpretations of meaning are often—probably usually—more important than what’s literally being said.

Basically, Morse is playing [url="https://en.wikipedia.org/wiki/Implicature"]implicature[/url] games, which is handy for him, because it gives him plausible deniability against the charge that he’s making claims that go beyond what he can actually support.

Doesn’t mean he isn’t actually making those claims, however; it just means he isn’t legally liable for them.

Sure, but it certainly doesn’t mean he is making those claims, as has been claimed in this thread.

@marvin100 Whether or not Morse specifically claims to be objective, I believe US News wants to be seen as objective, and I think it is clear that Morse presents his data as if it were. If you look at the US News article from Sept. 12, 2016 on the “Best Colleges Ranking Criteria and Weights” co-written by Morse, the subheading is: “Find out which data are used in our undergraduate rankings and how they are weighted.”

http://www.usnews.com/education/best-colleges/articles/ranking-criteria-and-weights

While Morse mentions peer assessment in the criteria, nowhere in this article does he warn or suggest that it might be a subjective standard. What Morse claims is that “U.S. News uses these measures to capture the various dimensions of academic quality at each college.”

They continue: “The indicators include both input measures, which reflect the quality of students, faculty and other resources used in education, and outcome measures, which capture the results of the education an individual receives.”

I think an honest reading of the language used in the US News articles on their methodology shows that it is intended to suggest that the rankings’ “criteria”, “data”, “ranking indicators”, “subfactors”, “input measures” and “outcome measures” “capture” a scientific assessment of a school’s strength, one that many if not most readers would take as objective.

The only place he uses subjective-suggesting words like “opinion” is when he writes “Opinions of high school guidance counselors are only factored into the rankings of National Universities and National Liberal Arts Colleges,” as if the “only” opinions in the rankings were those of the high school guidance counselors, and only in two categories at that.

Morse might be too clever, and perhaps too precise and honest, to claim objectivity where it does not exist, but Morse and US News certainly highlight the objective over the subjective in this most recent discussion of their methods.

Sure, @CaliDad2020 , I’d never argue that USN&WR wouldn’t prefer to highlight its objective metrics rather than its subjective ones–that would be stupid. Implication and spin, though, are not remotely the same as “claim to be objective,” and if you want to argue that USN&WR is pretending to be objective, then you’ll have to make the opening paragraph of its methodology page go away:

You can stretch and bend and do yoga (do you really think “quantitative” is the same as “objective”? Because in my opinion that is 94.5% wrong :wink: ), but whether “Morse might be too clever” or not, he hasn’t said what you’ve said he’s said. And his site says the exact opposite.

@marvin100 I’m afraid that paragraph is missing from the Sept. 12 doc. Instead they claim:

“U.S. News uses these measures to capture the various dimensions of academic quality at each college.”

And

“The indicators include both input measures, which reflect the quality of students, faculty and other resources used in education, and outcome measures, which capture the results of the education an individual receives.”

I think the language used speaks for itself. A simple count of “objective” words and “subjective” words in that Sept. article would weigh heavily on the “objective” side (a rough sketch of what I mean is below), resulting in the perception I have mentioned.

If you really don’t think “quantitative” suggests objectivity more than subjectivity (it’s exactly 1 cup, I believe, in my opinion) then we just don’t have a common vocabulary to even discuss this. We are simply using the same words to mean very different things.
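As an aside on the “simple count” idea above, here’s roughly what that tally would look like in Python. The word lists are my own guesses at what reads as “objective” vs. “subjective” language, not anything taken from the article:

```python
# Rough sketch of the "simple count" idea: tally objectivity-flavored vs.
# subjectivity-flavored vocabulary in a block of text. The word lists below
# are my own illustrative guesses, not an established lexicon.
import re
from collections import Counter

OBJECTIVE_WORDS = {"data", "measure", "measures", "indicator", "indicators",
                   "quantitative", "weights", "criteria", "outcome", "input"}
SUBJECTIVE_WORDS = {"opinion", "opinions", "assessment", "reputation",
                    "survey", "perception"}

def tally(text: str) -> Counter:
    counts = Counter()
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in OBJECTIVE_WORDS:
            counts["objective"] += 1
        elif word in SUBJECTIVE_WORDS:
            counts["subjective"] += 1
    return counts

# Stand-in sentence; paste the methodology article text here instead.
print(tally("U.S. News uses these measures to capture academic quality; "
            "opinions of counselors are only factored into two categories."))
```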

The ratings provided by university administrators and HS guidance counselors could probably best be described as an objective assessment of a subjective measure (i.e., their opinions). Those data are objective in the same way a good opinion survey is objective. The more important question should be: do we think a college ranking should be based on the opinions of those populations?

I personally think that reputation has a role to play in rankings, but they should only report reputation as a measure if they design their sampling very well. For example, they should RANDOMLY select the high schools they survey, rather than choosing which ones they ask, and achieve an extraordinarily high response rate (because 40%, in my opinion, is way too low). That said, I also think that 22.5% is way too heavy a weight given the faulty sampling design. (A sketch of what I mean by better sampling is below.)
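To make the sampling point concrete, here’s a small sketch: draw the surveyed high schools at random instead of hand-picking them, and only report a result if the response rate clears some floor. The sample size, the 75% floor, and the simulated response behavior are all my own illustrative assumptions, not anything US News actually does:

```python
# Sketch of a randomized reputation survey with a response-rate floor.
# All thresholds and counts are illustrative assumptions, not USNWR methodology.
import random

def run_reputation_survey(all_high_schools, sample_size=500,
                          min_response_rate=0.75, rng=None):
    """Randomly sample schools; return the mean rating only if enough respond."""
    rng = rng or random.Random(0)
    sampled = rng.sample(all_high_schools, k=min(sample_size, len(all_high_schools)))
    # Simulate whether each sampled counselor responds (here ~80% do) and,
    # if so, what 1-5 rating they give.
    ratings = [rng.uniform(1, 5) for _ in sampled if rng.random() < 0.8]
    response_rate = len(ratings) / len(sampled)
    if response_rate < min_response_rate:
        return None  # too much nonresponse risk; don't report a score
    return sum(ratings) / len(ratings)

high_schools = [f"HS-{i}" for i in range(20000)]
print(run_reputation_survey(high_schools))
```

The point is just that randomization plus a response-rate floor is what makes averaged opinions defensible as a measurement; without those, the “objective assessment of a subjective measure” framing falls apart.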

@Gator88NE,

My point was that, based on my experience of a couple of classes at UCLA, there’s nothing unique about UCLA compared to UC Berkeley. I didn’t see fancy classrooms or interiors. In fact, they were kinda shabby compared to the privates I went to. Also, the average salary of their faculty is about the same as UC Berkeley’s, despite the fact that UCLA has a medical school and UCB doesn’t. If they didn’t misrepresent, then I don’t know what the heck they spent that extra $20,000 per student on.