WSJ College Rankings

I personally have no problem believing they are a high value-added college in a financial sense.

1 Like

I agree that if you are from the state of Florida, any of the public schools is a financially smart decision. However, out of state, your cost is astronomically higher, and the return on investment is not nearly as good.
Again, this points to a misleading ranking.

2 Likes

I think you might find this interesting:

This study ranked colleges by student test scores, and then found high correlations with a variety of different rankings, including US News, reputational rankings, and revealed preference rankings. That was true even when those other rankings gave little or no direct weight to student test scores. What this implies is that a lot of seemingly different factors are mostly different facets of the same core thing.

You couldn’t quite do the same sort of study in the test optional era, but I bet if you could create an index of normalized GPAs in the way colleges do internally as part of their admissions process, it would look very similar.

Which makes sense, of course. Potential demand drives up application volume and yield. That drives up selectivity and enrolled student numbers. That drives up perceived quality/reputation/prestige. That drives up potential demand. And that’s a feedback loop. Measure any particular part of the cycle, or some blend, and you will likely get similar results.

The correlations aren’t perfect, though, and my two cents is that the other important factor is value-added. It still fits into the feedback loop in the sense that value-added will drive up potential demand. But by definition it is independent of enrolled student numbers (meaning value-added is definitionally a measure of relative outcomes controlled for student numbers, among other things).

And that explains why explicitly value-added measures freak out so many people. Big picture, they end up way more similar than different statistically. But people intuitively notice that residual difference as applied to various cases that are in the tails of the difference. And some then resist the idea that any sort of variation from a pure selectivity/numbers/prestige ranking is allowable. Because also by definition, prestige is tracking common perceptions of quality.

Anyway, point being there is in fact some science behind all that.
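For what it’s worth, the basic exercise those studies are doing is easy to picture: rank the same set of colleges two different ways and check how correlated the orderings are. Here is a minimal sketch of that check in Python, with the colleges and ranks entirely made up for illustration (and scipy’s spearmanr standing in for whatever correlation measure a given study actually used):

```python
# Toy version of the correlation check: rank the same colleges two ways
# and measure how similar the orderings are. All names and ranks are invented.
from scipy.stats import spearmanr

# hypothetical rank by median test score vs. rank in some composite ranking
test_score_rank = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
composite_rank  = {"A": 2, "B": 1, "C": 3, "D": 5, "E": 4}

colleges = sorted(test_score_rank)
rho, _ = spearmanr(
    [test_score_rank[c] for c in colleges],
    [composite_rank[c] for c in colleges],
)
print(f"Spearman rho = {rho:.2f}")  # near 1.0 means the two lists mostly agree
```

A rho near 1 just means the two lists are mostly putting colleges in the same order, which is the pattern those studies report.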

2 Likes

I don’t think it is “almost” any college ranking. It is every single one of them. All generic rankings of colleges fundamentally make no sense.

That is because each individual making a college choice decision is presented with their own unique set of value propositions by different colleges depending on their own individual goals, interests, preferences, net cost of attendance, and so on. Any generic college ranking must necessarily be based on some implicit “typical” profile of goals, interests, preferences, net cost of attendance, and so on, but the vast majority of people will materially vary from that profile in one or more ways.

The actual utility of generic college rankings for real world college choice is therefore always only in the underlying data, and it is entirely up to each individual to determine what data actually matters to them and to what degree. The ranking itself is almost never going to be something they would rationally use in making college choices.

And of course it is fine to point that out, but it should be pointed out consistently. There is no better or worse generic ranking. They all have the same fundamental and unsolvable conceptual flaw. So they are all really just relevant to the extent the underlying data is actually relevant to you as an individual, and never more than that.

Here is my solution for the ultimate college ranking system. Allow every college to have 5 to 10 “development” spots. The spots are awarded based on an eBay style auction system. The colleges are ranked based on the money raised. We can always subdivide by type and size of college.

1 Like

Correct.

It looks like Florida is about 88.5% in-state:

OK, so if you are talking about a financial value-added measure which looks at the median student, at Florida that is an in-state student. And even if you are looking at the mean student, it won’t be far off from an in-state student.

So I can easily believe any sort of financial value-added measure like that is going to rank Florida well.

But of course you are absolutely right that doesn’t mean any person in the country will experience the same financial value-added effect. And that is indeed an example of why generic college rankings really make no sense (see other post).

Still, if you view this as a conditional proposition–Florida is doing a pretty good job in terms of financial value-added for the typical in-state Florida student–then that could start becoming useful information . . . for someone who might be reasonably close to the typical in-state Florida student, and who is looking for a good financial value-added.

2 Likes

I would type this as a species of revealed preference ranking.

That is a fun subspecies of rankings. This is a pretty old article at this point (there is a 2013 version that to my knowledge is not available for free), but it sort of walks through how that can work:

One nice thing about those is they more easily fold in LACs. They also have some “surprises” but in ways that make sense. Like, Chicago had a bit of a revealed preference issue relative to its stellar academic reputation. This is not really a surprise, of course. Although Chicago has been working to address that, so it would be interesting to see if it had improved in an update of this same methodology.

Anyway, the other study I posted found a simple student numbers ranking had a high correlation to revealed preference rankings. Which makes sense for the reasons I mentioned in that post. But as usual, things like the Chicago anomaly can creep in when you have that sort of specific focus.
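If anyone is curious what a revealed preference ranking looks like under the hood, the basic idea is to treat cross-admit matriculation choices as head-to-head “wins” and fit a strength score to each college. Here is a toy Bradley-Terry sketch in that spirit; the colleges and win counts are invented, and this is my simplification rather than the actual model in that article:

```python
# Toy Bradley-Terry fit for a revealed-preference ranking.
# wins[(a, b)] = number of cross-admits to both a and b who enrolled at a.
# Colleges and counts are made up for illustration.
wins = {
    ("CollegeA", "CollegeB"): 70, ("CollegeB", "CollegeA"): 30,
    ("CollegeB", "CollegeC"): 60, ("CollegeC", "CollegeB"): 40,
    ("CollegeA", "CollegeC"): 80, ("CollegeC", "CollegeA"): 20,
}

colleges = sorted({c for pair in wins for c in pair})
strength = {c: 1.0 for c in colleges}

for _ in range(200):  # standard minorization-maximization updates; converges fast here
    new = {}
    for i in colleges:
        total_wins = sum(wins.get((i, j), 0) for j in colleges if j != i)
        denom = sum(
            (wins.get((i, j), 0) + wins.get((j, i), 0)) / (strength[i] + strength[j])
            for j in colleges if j != i
        )
        new[i] = total_wins / denom if denom else strength[i]
    norm = sum(new.values())
    strength = {c: v / norm for c, v in new.items()}  # rescale so scores sum to 1

for c in sorted(colleges, key=lambda c: -strength[c]):
    print(c, round(strength[c], 3))
```

One nice property of this kind of model is that a college’s score depends on whom it wins cross-admits against, not just how often it wins, which is part of why LACs fold in more easily.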

This ranking caused drama and made people even more eager for the US News release next week.

1 Like

And I’m not a Florida homer - but their schools are great value OOS (forget ranking) relative to most other schools OOS - even full price.

I think Florida in-state percentages are so high because if you’re from Florida or Georgia and a somewhat top student, why would you leave? The state does its best financially to keep top kids from leaving home - i.e. they go for dirt cheap relative to kids in most other states - and while we may not see this all the time on CC, my hypothesis is that most kids choose lower-cost schools.

Now many today think the state of Florida comes with a lot of baggage and I can respect that too.

Not homering for UF - just noticing they did well regardless of methodology. Personally, I prefer FSU - I go to both cities for work and walk the campuses - plus my last kid applied to both and my first just to UF.

2 Likes

I am still confused. Is WSJ comparing like schools to get individual scores, and then lumping all schools together? (For example: is Johns Hopkins’ graduation rate being compared to Princeton’s?) Johns Hopkins received low salary and graduation scores, even though its salary and graduation rate are very high compared to all but the most prestigious schools.

1 Like

Not exactly, as I understand it.

What they are doing is comparing observed outcomes to a statistical model of predicted outcomes based on student demographics.

They are not selecting out a specific list of colleges that are similar demographically for comparison purposes.

So the implication is this would be because Hopkins had high predicted outcomes given its student demographics.

For a quick validity check, you might look at this article someone just linked in another thread, particularly the chart “Attendance rates at selective private colleges”:

Hopkins is actually not an outlier when you look at just the top 1%. However, in the 60th-to-99th-percentile range, it is quite high. Given the WSJ way of doing things, it is easy to imagine the resulting demographic profile making it hard for such a college to do well in this methodology.

To do a reverse validity check, CMU had a similar pattern. And CMU also did poorly by this methodology.

So it looks like the WSJ is using the approach they said they were using. What to make of that is up to you.

However, they are only using the salary from the kids that have federal loans, which is the same demographic. From my understanding, WSJ is using information from the College Scorecard, which only looks at students with federal loans. Johns Hopkins’ salary and graduation rate are both very high, yet it scored low.
In WSJ’s article, they even note, “salary impact vs similar colleges”

Assuming you mean demographic, this is not correct. There is a very high-income demographic that is less likely to have federal loans, but federal loans are spread out over many other demographics.

Right, but here is the actual methodology:

Salary impact versus similar colleges (33%): This measures the extent to which a college boosts its graduates’ salaries beyond what they would be expected to earn regardless of which college they attended. We used statistical modeling to estimate what we would expect the median earnings of a college’s graduates to be on the basis of their demographic profile, taking into account the factors that best predict salary performance. We then scored the college on its performance against that estimate. These scores were then combined with scores for raw graduate salaries to factor in absolute performance alongside performance relative to our estimates. Our analysis for this metric used research on this topic by the Brookings Institution policy-research think tank as a guide.

Their label for this factor is a bit unfortunate, because unless you dig into that methodology statement, you might think they were selecting out a list of peer schools for comparison. But it is quite clear they are using a statistical model, and the only sense in which that reflects similar colleges is if you define similar colleges as those with relevantly similar student demographics.
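Read literally, that methodology statement describes a regression-style adjustment rather than a peer-group comparison. Here is a minimal sketch of how I read it, with invented colleges and demographic features, and a guessed 50/50 blend between the value-added score and raw salary (the WSJ does not publish its exact model or weights):

```python
# Sketch of "salary impact vs. similar colleges" as I read the methodology:
# predict median salary from demographics, score the gap vs. that prediction,
# then blend with raw salary. All numbers are invented; the 50/50 weight is a guess.
import numpy as np

def zscore(v):
    return (v - v.mean()) / v.std()

# One row per college: Pell share, first-gen share, median parent income ($k)
X = np.array([
    [0.15, 0.10, 180.0],
    [0.45, 0.35,  60.0],
    [0.30, 0.25,  95.0],
    [0.20, 0.15, 140.0],
    [0.50, 0.40,  55.0],
    [0.25, 0.20, 110.0],
])
observed = np.array([85.0, 55.0, 62.0, 78.0, 60.0, 70.0])  # median grad salary, $k

A = np.column_stack([np.ones(len(X)), X])             # add an intercept column
coef, *_ = np.linalg.lstsq(A, observed, rcond=None)   # expected salary given demographics
predicted = A @ coef

value_added = observed - predicted                    # performance vs. the estimate
score = 0.5 * zscore(value_added) + 0.5 * zscore(observed)  # combine with raw salary
print(np.round(score, 2))
```

On this reading, a school like Hopkins can post very high raw salaries and still score poorly, because its demographic profile produces a very high predicted salary to beat.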

1 Like

I get why they use kids with student loans - that’s data they can get.

But with many top schools being no-loan, or full of filthy rich full-pay parents whose kids are more likely to succeed anyway, it seems they’re not getting a true representation.

5 Likes

Agree

The state of Florida university system is required to have an aggregate of 90% state residents

1 Like

I cannot believe that this has to be repeated.

None of the methodologies truly measures “value added”, because not a single one of the methodologies takes the student’s admissions profile into consideration. Studies that take the student’s qualifications into consideration have generally demonstrated that students with the same qualifications end up with similar earnings, regardless of the college that they attended.

The so-called “value added” of any college is nothing more than a function of three factors:
A. The number of applicants that a college has,
B. The applicant pool that the college has, and,
C. The most popular majors that the college has.

Colleges which get many applicants can select those with the highest potential for high income
Colleges whose applicant pool has a very large proportion of such applicants have this magnified, and
Colleges whose most popular majors are those which have the highest salaries are more likely to have graduates with high income.

These three factors are the most important factors in determining the median salaries of their graduates, yet the “value added” metrics in any of the “college rankings” only marginally take these into consideration.

While looking only at Pell Grant recipients mitigates the effects of having many students from very wealthy families whose success is assured, it fails to look at the qualifications of the federal aid recipients.

In fact, the correlation between family income and income as an adult is almost certainly the result of the proportion of people from each income percentile who attend college. The fact that the Chetty study demonstrated that the correlation between family income and adult income becomes extremely weak for graduates of colleges at a given selectivity level is pretty conclusive evidence of this, even without other studies which demonstrate this.

The differences in the constants of the family income-adult income regressions are easily explained by the linear correlation between high school GPA and adult income (the slopes are practically the same).

The statistical model that would actually provide a much better estimate of “added value” is one that includes the high school GPA of the Federal aid students, or, better yet, their GPA percentile for their high school (or rank).
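As a toy illustration of why that would matter (simulated data, with arbitrary coefficients of my own choosing): if salary is partly driven by high school GPA percentile, a demographics-only model leaves that variation in the residual, where it gets mislabeled as college “value added,” while a model that includes GPA percentile explains much more of the outcome directly.

```python
# Simulated check: how much more of the salary outcome a model explains
# once HS GPA percentile is included alongside a demographic feature.
# The data-generating coefficients here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 500
pell_share = rng.uniform(0.1, 0.6, n)    # stand-in demographic feature
gpa_pctile = rng.uniform(0, 100, n)      # HS GPA percentile within the high school
salary = 40 + 0.3 * gpa_pctile - 10.0 * pell_share + rng.normal(0, 5, n)

def r_squared(features):
    A = np.column_stack([np.ones(n)] + features)
    coef, *_ = np.linalg.lstsq(A, salary, rcond=None)
    resid = salary - A @ coef
    return 1 - resid.var() / salary.var()

print("demographics only:         ", round(r_squared([pell_share]), 3))
print("demographics + GPA pctile: ", round(r_squared([pell_share, gpa_pctile]), 3))
```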

Studies which include the applicant profiles demonstrate that ANY college provides a good amount of added value to low income students, but that the selectivity of the college is not nearly as important as the raw data would suggest.

PS. The great irony of the WSJ rankings is that the most important benefit of colleges with low acceptance rates is not for the readers of WSJ. Academic support and having full need met are the biggest benefits of “elite” colleges, but these are not really beneficial for the affluent families of the readers of WSJ - they generally have no financial need, and can generally avail themselves of academic support privately.

1 Like

But even vague “tiers” or groupings involve assumptions about the validity of ordering criteria. When a school in one of my tiers is thrown way out of my grouping, I immediately disagree with the placement. I understand, and myself agree, that the discrete ordering of schools in 1, 2, 3 rank seems silly insofar as the pretense of pinpoint accuracy is ostensibly laughable. But it’s just a tiger with different stripes. If at the end of the day you don’t think people believe their internal ranking, whether by tier/group or otherwise, is right or correct, then we’ll have to agree to disagree. Hanging around this place is, to me, prima facie evidence that there are many who think they know objectively which schools are best, even when accompanied by the ubiquitous “fit and affordability” qualifiers. If you don’t believe me, start a thread here and title it “Is the University of Idaho generally better than MIT?” and watch what happens.

As to your point about what ranking services claim to be, my reaction, in the most polite sense, is ‘who cares?’ But I’m also not the guy who thinks commercials and marketing make us fat and overspend. Btw, I haven’t personally seen any ranking lay claim to be the best quite as bombastically as you describe. But of course they’re pushing their methodology so you’ll read and validate it. Nobody is going to publish a ranking and say, “Take our work with a grain of salt.” That’s for the reader to decide.

Does that extend even further? Middle school, grade school, preschool, committed parents reading to their children every night before bed? How about parents who plan their family, and save to support their kids? What is the root?

1 Like

I’ve asked myself this question, too. I’ve always thought UF to be a fine university, so I’m not trying to trash the school. But their movement is undeniable. Take my alma mater, the University of Washington: a tippy-top research powerhouse with individual Top 10 / 20 departments up the wazoo, which is perpetually stuck in the mid to high 50s in the US News rankings of national universities. It does better in the QS World Rankings for obvious reasons (and 100+ ranking spots higher than UF). In my day, nobody talked about UF being in the same academic range as UW (Washington or Wisconsin). But US News places it significantly higher.

What did they do? Did their research take off? I don’t ask the question rhetorically. I honestly don’t know but am curious.

9 Likes