WSJ College Rankings

But I don’t think most people look at schools in the context of 2,800 four-year colleges. One positive of the USNWR rankings is that they did have buckets (National Universities, National Liberal Arts Colleges, Regional Universities, etc.), which I think worked reasonably well, and I suspect that is a first order of selection for many people. When viewing a “bucket,” 40 spots could be significant, though much less significant than people like to believe.

Not the posters here who say “top 20” or “top 50.” To me, there’s no difference between #49 OSU and #67 UCONN or #89 SUNY Buffalo or UCR, but to others it means everything.

And how about the LACs, or the non-PhD-granting schools like CPSLO?

To most, 40 spots is like going from a Ritz Carlton on the beach to a Days Inn three miles away.

1 Like

So it is tricky because there are both national universities and national liberal arts colleges, and people often “cross-shop”.

But I think US News identifies something like 645 “national” schools, counting both universities and colleges.

Is a difference in 40 spots significant?

I think that very much depends. The point I was making before is that between ranking compression, different individual costs of attendance, different individual educational and career goals, different non-academic preferences, and so on, it is plausible there will be many people who individually prefer the lower-ranked school, perhaps so much so that they don’t even apply to the higher-ranked school because they don’t see it as a good fit for them.

A difference of 40 spots out of 645 isn’t going to rule that out, any more than when it was 2800.

1 Like

After spending some time reading through this list and looking at the new WSJ rankings (which are nonsensical), the cynic in me took hold, and my thinking started to coalesce around the idea that this methodology was purposely designed to create noise and attention: “Let’s spice it up a bit with a new and fresh way of evaluating schools.”

They had to be careful: if the top schools in the rankings weren’t the usual suspects (because those are widely recognized as the tippy top), the rankings would be quickly dismissed and people would ignore them as junk.

However, once past the usual suspects, anything goes, and here we have rankings that are a complete hash, with some schools skyrocketing, some falling, and some staying relatively stable compared to USNWR and other lists.

Johns Hopkins at 99 (9), Middlebury at 131 (40), Case Western at 238 (51), and Tufts at 287 (30) mean you are not being serious (even if it is presented as serious), because you are so far from the widely understood consensus that you just aren’t credible; and this analysis isn’t credible. It is so radically different everywhere except the T10, the one place where they need to be aligned in order to avoid being dismissed outright. They are trying to say, “Everyone understands that the T10 are special, but everything else was wrong.”

Absolute nonsense; the T10 aren’t that different from the T20, which really aren’t different from the T50. The “new” WSJ rankings say that everything the world knew is wrong, and that just a short time ago they themselves were wrong as well. The number in parentheses above is the school’s previous WSJ/THE ranking, which (surprise!) is directionally aligned with the views of pretty much everyone else.

However, if the WSJ wants to sell more subscriptions and drive attention (and here we are), directionally similar rankings are just lost in the crowd and not super interesting given the dominance of USNWR. How many people really pay any attention to the WSJ/THE rankings, the Money rankings, etc.? But if the WSJ puts out some bat-s!*t crazy piece of work, they will get attention and clicks every year, a gift that keeps on giving.

They put their disclaimer up front because they know that this is trash, but the National Enquirer showed that there is good money in peddling trash, and it’s always about the money.

5 Likes

My family is a case in point. I think that most of us on here are generally on the same page. As we went through this process, we put schools with directionally similar stats into “buckets” and looked for best fit within a bucket, not at the absolute rankings. Wesleyan and Skidmore were in the same LAC bucket, and Rochester and WashU were in the same smaller-research-school bucket.

I agree, but some might find a bigger difference between 60 and 100 than my kid did.

I will admit that rankings sway me a LITTLE bit. Not in the “Top 25 or bust” sense.

But I do take some comfort from choosing a university that according to most lists is in the top, say, 200ish, or a SLAC in the top 75ish (if in a separate bucket) out of 2800+ schools. I don’t think there is magic in the ranking specifics — does not matter if it is 40 or 140 — but I guess being in the top quarter/third suggests to me that a school is a “known” entity. And that maybe that will mean something in the job/grad school market.

Not that I would never look at a lower-ranked school. And some of them are perfect fits for specific students.

But, other factors being equal, as long as the cost of a college education is coming out of my pocket and not gifted to me, then I will favor a school with a good track record. One measure of track record is a university being in the top 200ish out of over 2,800 schools on multiple ranking lists using different methodologies.

4 Likes

Yeah, and rather than me saying that matters or doesn’t matter, what I personally wish is for each kid to take control of all this. They can ask why there is that difference. They can ask whether that matters to them. It might. It might not. But it will be on their own terms.

In that sense I am not blanket anti-rankings, but I do think every single ranking is really just a collection of information that the kid can use or not as they see fit.

And I think if kids approach it that way, they maximize their chances of ending up really happy with their college.

2 Likes

This is why I like to think of schools as belonging to tiers. Within a tier, it’s really all about individual fit and preferences.
It’s silly IMO to think of a school rated #56 as being much less desirable than one at #46 because the latter is a “T50” and the former is not. Rankings are not a scientific measurement of “greatness,” and anyway, who said 10, 20, and 50 are magical cutoffs that separate the great/good schools from the also-rans?

To be clear: I’m not suggesting all schools are the same. Of course there are qualitative differences! My point is, there are tiers and individual ranks within a tier shouldn’t be taken all that seriously.

6 Likes

I agree sometimes college reputation matters.

I also think that while this is not always well-measured by a generic ranking, something that can matter a lot is that a college will remain a good choice even if you change intended paths in college. That doesn’t mean I would never recommend a specialty school, but for many kids I think a generally good school, the kind that offers a wide range and is pretty good at everything it offers, is a good idea in case their plans change.

Unfortunately, the way most publications do this, they just lump specialty schools and more generalist schools and all sorts of variations into one big category. But still, I do think if you are looking at a selective generalist school which has a pretty good ranking, it is a very good bet they consciously make sure they are at least pretty good at everything they offer.

This is all just supporting your idea of very big tiers. But knowing a college is within a tier where it is very likely trying to be at least pretty good at everything it offers is in fact useful information, I would think, to many people who are aware plans can change.

2 Likes

Here’s the raw data from the US Department of Education College Scorecard: median earnings 10 years after college matriculation, for individuals receiving federal student aid.

1 Princeton $110,433
2 MIT $124,213
3 Yale $95,961
4 Stanford $106,987
5 Columbia $97,540
6 Harvard $95,114
7 Penn $112,761
8 Amherst $81,855
9 Claremont $97,174
10 Babson $111,604
11 Swarthmore $80,398
12 Georgetown $101,797
13 Vanderbilt $84,415
14 Lehigh $100,559
15 Florida $69,468
16 Duke $97,418
17 Rose-Hulman $97,688
18 Cal Tech $104,209
19 NJIT $84,495
20 BYU $74,630
21 Dartmouth $95,540
22 USC $89,884
23 Illinois Institute of Technology $82,793
24 Cornell $98,321
25 Northwestern $85,796

30 Davidson $77,379
31 Williams $74,473

45 Boston College $96,325

49 Pomona $74,305

137 University of Massachusetts-Lowell $65,324

157 Stonehill College $75,976

200 Boston University $80,582
223 Brandeis University $73,676
282 Mt Holyoke College $54,415
287 Tufts University $74,430
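Worth noting: the list above is not ordered by the raw dollar figures (MIT’s median is higher than Princeton’s, for example), which is easy to confirm by re-sorting. A minimal sketch in Python, with a few of the values above transcribed by hand:

```python
# A few values transcribed from the list above; the full files are published
# at collegescorecard.ed.gov/data. Re-sorting by raw dollars shows the
# posted order is not an earnings order.
median_earnings_10yr = {
    "Princeton": 110_433, "MIT": 124_213, "Yale": 95_961,
    "Stanford": 106_987, "Columbia": 97_540, "Harvard": 95_114,
    "Penn": 112_761, "Amherst": 81_855,
}

for rank, (school, dollars) in enumerate(
        sorted(median_earnings_10yr.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank:2d}. {school:<9} ${dollars:,}")
```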

1 Like

Yes, but the elite schools are not going to have much of a graduation “bump,” as the high-quality students are very likely to graduate anyway…

Doesn’t CMC have a relatively preprofessional leaning, with (business-flavored) economics being the most popular major, while Pomona is a more typical LAC in this respect?

2 Likes

The outcomes measured by the WSJ seem consistent, then, with that perception.

Again, the WSJ rankings use metrics that include so many generalizations and assumptions that they are useless. Start with the idea that the “benefit” a student gets from attending, say, MIT is what they make versus what they would make, on average, attending another college.

Except that a student at MIT is not attending MIT instead of attending an average of all colleges. They are attending MIT instead of attending a top private or a top flagship.
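To put toy numbers on that objection (all of the figures below are made up for illustration; they are not WSJ or Scorecard numbers), here is how much the apparent “boost” shrinks when the baseline is a student’s realistic alternatives rather than the average of all colleges:

```python
# Toy numbers only: how the apparent earnings "boost" depends on the baseline.
mit_median = 124_000              # hypothetical median for MIT grads
all_colleges_median = 55_000      # hypothetical median across all colleges
realistic_alternatives = 105_000  # hypothetical median at the top privates /
                                  # flagships an MIT admit would actually attend

naive_boost = mit_median - all_colleges_median         # vs. the average college
realistic_boost = mit_median - realistic_alternatives  # vs. realistic choices

print(f"vs. all colleges:           ${naive_boost:,}")     # $69,000
print(f"vs. realistic alternatives: ${realistic_boost:,}")  # $19,000
```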

Because I refuse to pay money for the WSJ, I do not know whether they did it by major, but I’m pretty sure that they didn’t.

I also understand that they are using the College Scorecard which, I’m sorry to say, is BS for this type of ranking. To begin with, those data only cover some 15%-20% of all students, and only students who receive Pell grants. That creates a biased data set, and also means that these rankings are useless for students who do not qualify for Pell grants.

Since the vast majority of WSJ readers are not of the SES that gets Pell grants, these rankings are not very useful for the readers of the WSJ.

"Yes reader of WSJ in the top 20% by income, these are the best colleges financially for people who make less than 10% of your income, so I’m sure that they will be useful for you!

Then there is the dependence on voluntarily filled-out online surveys for that data. That data is useless at best, and heavily biased at worst.

Moreover, their median-income-after-graduation data comes from the roughly 20% of students with the lowest incomes, but they are using survey data from students of all incomes.

So they are taking data from a small set of students, then creating a metric based on comparing the income data to income data from an alternate reality for those students. Then they add that data to a metric based on an extremely biased set of data from another, mostly different, set of students, and that Frankenstein’s monster of a ranking is their New And Improved Wall Street Journal Ranking.

Oh, I forgot to add the fact that they didn’t include majors, and they had different sample sizes for everything across different colleges.

They also used data from colleges if there were more than 50 responses. Any statistician will tell you that 100 responses is the minimum required to have meaningful results from a survey, and that 500 is much better.

They claim that “a majority” had more than 100, but a majority is anywhere between 50% plus one college and all colleges but one. It is highly unlikely that they would have said “a majority” if, say, more than 75% had crossed 100 responses. “A majority” implies that there is a very large minority of colleges whose rankings are doubly useless.
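For a sense of scale on those response counts, here is the textbook 95% margin of error for a surveyed proportion, assuming a simple random sample (which a voluntary online survey certainly is not, so the real uncertainty is worse):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion; worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 100, 500):
    print(f"n = {n:3d}: +/- {margin_of_error(n):.1%}")
# n =  50: +/- 13.9%
# n = 100: +/- 9.8%
# n = 500: +/- 4.4%
```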

Toss this in the trash heap of attempts at creating yet another ranking of colleges based on the colleges that the creators of the ranking system like.

What a waste of time and energy.

It is clickbait, no more.

PS.

So this is how they calculated the income of a student in an alternative universe in which they attended a different college.

I’m curious, which demographics? Also, because high school GPA is a great predictor of later earnings, it is free extra points for colleges that admit only students with high GPAs. They are getting “awarded” extra points for work that the student did.

The median salary of a student who had a 4.0 GPA in high school will be higher than that of a student who had a 3.5 GPA, regardless of parental income. Yet this ranking assumes that a student who attends, say, Princeton and had a 4.0 high school GPA is getting the same boost as would a student who had a 3.0 GPA. This is not true, and it means that Princeton, which accepts only students with close to 4.0 GPAs, is getting credit for the work that these students did in high school.

For this ranking to calculate the value added by a college, they need to see what the earnings of the student would have been, based on the student’s application profile.
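Something like the following sketch, with fabricated student records and plain least squares standing in for whatever model they should actually use: regress earnings on high school GPA, then credit each college only with its students’ average residual.

```python
# Fabricated illustration: regress earnings on high school GPA at the
# student level, then measure each college by its average residual, i.e.,
# earnings above what GPA alone would predict. Not the WSJ's actual method.
import numpy as np

# (hs_gpa, earnings, college) for a handful of hypothetical students
students = [
    (4.0, 110_000, "A"), (3.9, 105_000, "A"), (4.0, 118_000, "A"),
    (3.2,  62_000, "B"), (3.4,  70_000, "B"), (3.0,  64_000, "B"),
]

gpa = np.array([s[0] for s in students])
earn = np.array([s[1] for s in students], dtype=float)

# Ordinary least squares: earnings ~ intercept + slope * GPA
X = np.column_stack([np.ones_like(gpa), gpa])
coef, *_ = np.linalg.lstsq(X, earn, rcond=None)
predicted = X @ coef

# A college's "value added" is its students' mean earnings above prediction.
for college in ("A", "B"):
    idx = [i for i, s in enumerate(students) if s[2] == college]
    print(f"College {college}: {np.mean(earn[idx] - predicted[idx]):+,.0f}")
```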

Your high school GPA could affect your income | ScienceDaily.

Finally, I am extremely curious how the WSJ took the Brookings ranking, based on value added by a college, and got their own rankings, which look very different (with wealthy and popular colleges on top, of course). The comparison is very appropriate, since the WSJ invokes Brookings in describing their methodology.

7 Likes

People just like rankings, and the WSJ is basically hopping on that bandwagon to get some buzz (170+ posts here on CC). It’s just another list. In the end everyone has their own list using the criteria they value most.

1 Like

I have found these various rankings to be entertaining reading, although there is interesting data behind them that is useful, since it is in a consolidated format and what is measured, and how, is meant to be consistent. It is silly to say Princeton is better than Harvard or UF is better than Berkeley, or to rank a Michigan against a NE SLAC. However, I am glad some rankings now are focusing more on outcomes. While money and financial ROI are not the end-all, it is still a significant investment, and knowing the historical outcomes of past graduates needs to be a consideration for just about every family. Your kid may do better or worse, but it sets a range of probabilities.

The WSJ, in trying to measure relative outcomes, is taking an interesting approach. Whether this is a case of garbage in, garbage out, I can’t comment, but the theory is legit: comparing actual salary outcomes against expected outcomes based on the student demographics in the data set.

The College Scorecard collects data from students who receive some type of federally backed financial aid, not just Pell Grants; it includes students in the various federal student loan programs. While still somewhat biased toward lower SES, it covers a wider data set of post-graduation earnings than other data sets and is not based on self-reported survey data.

First of all, the survey data only represents a 20% overall weighting. I agree that the sample size is weak, but since they are trying to measure indicia of student satisfaction, I don’t think you necessarily have to survey the same students as the earnings-data group. In fact, even if you could, we would be talking about surveying alums who graduated more than 10 years ago versus more recent grads. I’m not sure how valid a 10-year-plus lookback is.

Yes, the tricky question of causation vs. correlation. I could not find exactly what type of demographic information was used to set the hurdle. But at the very least, there is control for certain other demographic factors that tend to set the starting line. The Brookings study cites academic preparation, age, racial or ethnic background, family income, type of college, location of college, and qualities of the college. The click-through to the Methodological Appendix did not work, so I cannot see the details of those components. The Brookings study factored in certain qualities of the colleges themselves, while the WSJ methodology looks like it does not. That could be the source of the discrepancy you noted. So for the Brookings study, besides applying “to whom much is given, much is expected” on the student side, they also apply it to the schools.
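For what it’s worth, the mechanics of such a blend are simple. Here is a toy version using the 20% survey weighting mentioned above; the component scores and the 80/20 split are my own assumptions for illustration, not the WSJ’s actual formula:

```python
# Toy composite: blend a salary-outcomes score with a survey score.
def composite(salary_score: float, survey_score: float,
              survey_weight: float = 0.2) -> float:
    """Weighted blend of two already-normalized component scores (0-100)."""
    return (1 - survey_weight) * salary_score + survey_weight * survey_score

# One school strong on measured outcomes, one strong on surveys:
print(composite(salary_score=90, survey_score=60))  # 84.0
print(composite(salary_score=70, survey_score=95))  # 75.0
```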

3 Likes

In theory, if a school were concerned, couldn’t/wouldn’t they just drop the majors that seem to pull down salaries, i.e., create a more major-diverse but still somewhat focused, Babson- or RHIT-like campus?

Given its focus on undergraduates, Princeton may well actually be better than Harvard for undergrads.

P’ton is the most undergrad-focused of the HYPSMCCP group.

I am basing that on the fact that Princeton is something like 67% undergrad, a far higher percentage than at the other hyperelite universities. So the kids compete for professor resources/time with fewer grad students.

2 Likes

Ok, I’ll bite. Which are the “CCP” schools?

?