New USNWR rankings live now

At a high level, to measure this you need a combination of a standardized measure of entering abilities/qualifications, and a standardized measure of exiting abilities/qualifications, such that you can evaluate the difference.

Both more or less exist in the UK, for example, which is why the Guardian can publish meaningful value-added measures down to the course. But neither exists in the US because of our complete lack of standardization on both ends. And I am not confident that problem has a solution, absent something like an AI figuring out how to turn the available non-standardized information into such entrance and exit measures.

So if all this was really transparent, one would think next-step gatekeepers (postgraduate schools, employers, and so on) would care most about exit measures. It may be nice for the individuals if the college adds more value to get there, but ultimately one would think next-step gatekeepers really just care about what they are getting.

In turn, this would mean a lot of high-entry applicants would rationally choose a college that was not particularly high on value-added as long as people like them still exited with the highest measure available. Like, if College A took 80s and turned them into 90s, and College B took 95s and turned them into 97s, an 80 might really benefit from choosing College A, but a 95 might benefit from choosing College B.
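To make that trade-off concrete, here is a minimal sketch with entirely made-up numbers: the relevant comparison for an applicant is the exit score people like them end up with, not the college's average value added.

```python
# Hypothetical colleges: (typical entry score, typical exit score).
# All numbers are invented for illustration.
colleges = {
    "College A": (80, 90),  # big value added (+10), lower exit score
    "College B": (95, 97),  # small value added (+2), higher exit score
}

def value_added(entry, exit_score):
    """Average gain the college produces for its typical student."""
    return exit_score - entry

for name, (entry, exit_score) in colleges.items():
    print(f"{name}: value added = {value_added(entry, exit_score)}, exit score = {exit_score}")

# An applicant entering around 95 cares about where people like them exit,
# so College B (exit 97) can beat College A even though A adds more value.
```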

As it turns out, we can actually see this using the Guardian’s analysis of UK courses. Like, suppose we look up Economics. The Guardian reports average entry tariff, their standardized measure of entry qualifications, and value added, which compares degree results to those entry qualifications on a 10-point scale. The top 10 by average entry tariff looks like this (average entry tariff/value-added):

Cambridge 224/4
St Andrews 221/6
Oxford 211/5
LSE 204/6
Glasgow 197/7
Edinburgh 194/4
Warwick 193/3
Durham 192/6
Strathclyde 192/2
UCL 188/6
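As a quick illustration (my own sketch, not anything the Guardian publishes), here is that same top 10 re-sorted by value added; the highest-tariff courses generally do not top the value-added column.

```python
# Guardian Economics top 10 by average entry tariff, stored as
# (course, entry_tariff, value_added_out_of_10) using the figures above.
rows = [
    ("Cambridge", 224, 4), ("St Andrews", 221, 6), ("Oxford", 211, 5),
    ("LSE", 204, 6), ("Glasgow", 197, 7), ("Edinburgh", 194, 4),
    ("Warwick", 193, 3), ("Durham", 192, 6), ("Strathclyde", 192, 2),
    ("UCL", 188, 6),
]

# Re-sort by value added (descending) to see who "adds" the most
# among the highest-tariff courses.
for course, tariff, va in sorted(rows, key=lambda r: r[2], reverse=True):
    print(f"{course:12s} tariff={tariff} value_added={va}")
```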

None of those value-added scores are great. Some are quite low. But Cambridge, say, still enrolls the highest-scoring students.

Which is undoubtedly precisely because next-step gatekeepers value the exiting characteristics of Cambridge students, and UK undergraduate Econ course applicants know that. And so highly-qualified UK undergraduate Econ course applicants are rationally choosing Cambridge’s Econ course (if they can get in), despite its relatively meh value-added.

Indeed, in a way this is CAUSING Cambridge’s relatively meh value-added. Because with such a high entering score, it really cannot possibly add that much value.

Like if you look up 10s for Econ, there are two: Essex with a 113 entry tariff, and Brighton with a 105. These are your UK equivalents of taking 80s and turning them into 90s, and Cambridge is your UK equivalent of taking 95s and turning them into 97s. And it was likely literally impossible for Cambridge to add as much value, at least given the Guardian’s methodology, because there was not enough room to do that.

Again, none of this is possible in the US, at least without AI, because of our complete lack of standardization on both ends. But conceptually, this model likely still applies, meaning highly-qualified applicants in the US will undoubtedly rationally choose our versions of Cambridge over our versions of Essex.

But others will then benefit a lot from going to our version of Essex. They can both be great at their missions, but different applicants will rationally find different missions more relevant to them.

2 Likes

Think of it as a beauty competition. Some judges decided this lady was a beauty queen. OK, do you agree? Not necessarily. Some prefer brunettes, others blondes. Can the queen be a bit shorter or weigh a bit more? Does smartness matter, and how about her gifts? What weighs more, her smartness or her kindness and work in the community…etc. This is the same. Do you always find this year’s Miss Universe appealing? Do you even care?
Or think about cars. Do you buy a Lexus or not? Does it make sense to buy it? Any car, like most colleges, can bring you to your destination. Do you like comfortable cars? Are you willing to pay the premium?
There is no objective set of criteria for valuing something. People value different things.

1 Like

For purposes of salary, I think a better input would be a comparison between the average salary of all college graduates in a state and that of graduates of that particular institution. This would compensate for cost-of-living (COL) differences.
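A tiny sketch of that adjustment, with made-up numbers (none of these are real figures):

```python
# Hypothetical figures for illustration only.
state_avg_grad_salary = 62_000   # average salary of all college grads in the state
school_avg_grad_salary = 71_000  # average salary of this institution's grads

# A ratio above 1.0 means the school's grads out-earn the typical grad
# in the same (cost-of-living-comparable) labor market.
relative_earnings = school_avg_grad_salary / state_avg_grad_salary
print(f"Relative earnings index: {relative_earnings:.2f}")
```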

Are you saying that either Cal Tech or Harvard doesn’t have high peer quality? Or that one of them isn’t doing so great at getting their students to grow academically?

I don’t think it is either. If thinking of outcomes, I’d include things like:

• Percentage of students accepted to graduate school
• Percentage of students accepted to their top choice graduate school
• Median grad test scores as compared to expected (based on college entrance tests from incoming students)
• Percentage of grads passing licensing certifications (whether nursing, engineering, nutrition, etc)
• Percentage of grads employed in a field related to their major that requires a college degree
• Percentage of grads employed in a job that requires a college degree
• Graduation rate
• Graduation rate compared to expected graduation rate (based on profile of incoming students)
• Survey from HR departments at Fortune 1000 companies
• Percentage of loan principal remaining after 5/10 years
• Percentage of graduates who default on student loans
• NPV at 20 and/or 40 years (see A First Try at ROI: Ranking 4,500 Colleges - CEW Georgetown), particularly if available by major area (humanities, engineering, social sciences, business, etc), as each area is not expected to have the same results

• The Collegiate Learning Assessment (CLA) is a test some colleges give to their incoming freshmen and graduating seniors. It “measures critical thinking, reasoning, writing, and problem solving” rather than particular subjects like history, biology, etc (164). Unfortunately, although many colleges have students take the CLA, there aren’t many that share the results. In a study from 2005 to 2009, “36% of the students managed to graduate from college without learning much of anything” (164).

(the above are taken from my posts 2 & 4 over on this thread: Create Your Dream College Ranking Methodology)
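As a rough illustration of the mechanics only (not anyone's actual methodology), here is how a few of the metrics listed above could be normalized and combined into a single outcomes index, with purely hypothetical weights and data:

```python
# Purely hypothetical weights and data, just to show how several
# outcome metrics could be rolled into one index.
weights = {
    "grad_rate_vs_expected": 0.30,   # percentage points above/below prediction
    "licensing_pass_rate":   0.25,   # fraction passing licensing exams
    "degree_required_jobs":  0.25,   # fraction employed in degree-requiring jobs
    "loan_default_rate":     0.20,   # fraction of grads defaulting (lower is better)
}

def outcomes_index(school):
    score = 0.0
    # Squash the over/underperformance (in points) into roughly 0-1 range.
    score += weights["grad_rate_vs_expected"] * school["grad_rate_vs_expected"] / 10.0
    score += weights["licensing_pass_rate"] * school["licensing_pass_rate"]
    score += weights["degree_required_jobs"] * school["degree_required_jobs"]
    # Invert default rate so that lower defaults mean a higher score.
    score += weights["loan_default_rate"] * (1.0 - school["loan_default_rate"])
    return round(score, 3)

example = {
    "grad_rate_vs_expected": 4.0,   # +4 points over predicted graduation rate
    "licensing_pass_rate": 0.92,
    "degree_required_jobs": 0.81,
    "loan_default_rate": 0.03,
}
print(outcomes_index(example))
```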

Should I infer that you would rank U. of Washington and Rutgers lower than BC and Tufts because they had lower graduation rates?

According to the data provided at Washington Monthly, these are the 8-year graduation rates as compared to the predicted graduation rates based on percentage of Pell recipients, incoming SATs, etc. I’ve sorted them by who had the best improvement according to what the predicted outcome was.

  • U. of Washington-Seattle: 84% actual, 80% predicted, +4% difference
  • Boston College: 92% actual, 90% predicted, +2% difference
  • Rutgers: 82% actual, 81% predicted, +1% difference
  • Tufts: 94% actual, 94% predicted, 0% difference
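The sorting above is just actual minus predicted; here is a quick sketch of the same calculation using the Washington Monthly figures quoted above:

```python
# 8-year graduation rates: (school, actual %, predicted %).
schools = [
    ("U. of Washington-Seattle", 84, 80),
    ("Boston College", 92, 90),
    ("Rutgers", 82, 81),
    ("Tufts", 94, 94),
]

# Over/underperformance is simply actual minus predicted, sorted descending.
for name, actual, predicted in sorted(schools, key=lambda s: s[1] - s[2], reverse=True):
    print(f"{name}: {actual - predicted:+d} points vs. prediction")
```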

Frankly, I’d be more impressed with the schools who are outperforming the expectations. And this is not necessarily a public/private college thing. There are public colleges mentioned in this thread that have negative percentages (like a -8%) and privates, too (-7% or even a -22%! difference). To me, over or underperforming tells a story about what is happening at an institution, and I don’t think that USNWR captures that at all. In fact, that’s what I’m getting to in my question below:

To clarify…would people rather have their students attending schools that are overperforming expectations and really pushing their students, but do not have the “peer quality” that some are concerned with, or would they rather go to a school that has the desired “peer quality” but that is really underperforming and not providing as much as desired in terms of academic growth (see the CLA example above).

ETA: Changed default rates language from “students” to “graduates” as I have recently learned that cohort default rate data is currently shared on all students who attended, including those who dropped out, and not just on those who received a diploma. For this measure, I only want those who’ve earned a diploma to count.

If inputs (the quality of admitted students) are important but we have less reliable data on them these days, we can put the “correlation is not causation” factor to good use by measuring outcomes instead, such as employment rate and starting salary by major, and grad and professional school admit rates weighted by the selectivity of those programs.

MCAT/LSAT/GMAT/GRE scores of the graduates!

1 Like

Indeed. This is where all the Big Data, machine-learning stuff comes in. Colleges also have other information from applications, possibly their own internal tracking data, AO experience, and so on. You can throw all that stuff in the hopper and see if you can develop an AI which actually gets close to the same results as your human AOs, at least most of the time. But you are right it might take time to train this AI to that level, even if it is ultimately possible.

But once you get it there, it doesn’t have to be perfect. Like, say it does a good job 97% of the time predicting what a human AO would do when fast-tracking. OK, so then part of that fast-track process would be the human at the last step just making sure the AI hadn’t missed something important. If it looked off, the human could re-elevate to more human review.

But if this could save you a substantial number of human minutes of review in 97% of fast-track cases, that could add up into an enormous amount of conserved resources.
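A minimal sketch of how that kind of triage could work, with an entirely hypothetical model and threshold: the AI only fast-tracks (or screens out) when it is confident, and anything else, or anything that looks off to the human checker, goes back to full human review.

```python
from dataclasses import dataclass

@dataclass
class Application:
    id: str
    features: dict  # GPA, rigor, scores, essay-derived signals, etc.

def ai_fast_track_score(app: Application) -> float:
    """Hypothetical model: probability the human AOs would fast-track this file."""
    # Stand-in for a trained model; not a real implementation.
    return app.features.get("model_score", 0.5)

def triage(app: Application, threshold: float = 0.97) -> str:
    """Route a file based on model confidence; humans spot-check the extremes."""
    p = ai_fast_track_score(app)
    if p >= threshold:
        return "fast-track (human spot-check before final decision)"
    if p <= 1 - threshold:
        return "unlikely pile (human spot-check before final decision)"
    return "full human committee review"
```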

So we’ll see where this ends, but I can understand why colleges like Dartmouth are trying to make it work.

As an aside, this makes sense to me because in some ways the quality of raw information improves in those selectivity ranges. Like, it is basically impossible to use raw HS GPAs to make these decisions when so many applicants have unweighted GPAs in the 3.9 to 4.0 range. But if you are looking at, say, the 3.0 to 3.5 range, that is still somewhat of a problem, but much less of one.

So it makes sense some colleges like that could do this with procedures that did not require sophisticated AIs. For that matter, some public colleges actually had state regulations specifying how they would do this. So those were indeed the logical starting points.

But now these highly-selective private colleges are being flooded with applications, and simultaneously these technologies are rapidly developing, so . . . both the incentives and the tools may be converging for them too now.

I agree with you. I do think that the federal government could play a hand in this by requiring a universal type of pretest/posttest for any schools that receive federal funding (Pell Grants, research grants, etc) and then requiring the schools to share the data via IPEDS. The CLA would be one possibility, and I’m sure experts could suggest others. But I don’t think it’s an impossible problem, or one without a possible solution.

That’s a reflection of the quality of the students though, at least as much as it is (if not more so) the quality of faculty/teaching. So I would suspect Harvard grads who take the GMAT will have a higher overall average score than Podunk State grads who take it.

2 Likes

LOL. Exhibit A for why USNWR sucks: they did not include liberal arts colleges in the “Best undergraduate teaching” rankings, which are known exactly for that! (I did not find a separate ranking even for LACs.)
“It’s well known that there are many other colleges where students are much more satisfied with their academic experience,” said Paul Buttenwieser, a psychiatrist and author who is a member of the Harvard Board of Overseers, and who favors the report. “Amherst is always pointed to. Harvard should be as great at teaching as Amherst.”

2 Likes

Sure. Things like science ACT scores correlate well with MCAT scores. So of course the students are part of it, but what they learn at college (particularly for the MCAT) would impact the results.

1 Like

LACs probably should have had their own ranking.

I cannot find it for LACs under “best undergraduate teaching” on USNWR, which, if true, says a lot about USNWR’s rankings.

I like your ideas for evaluating outcomes.

With respect to graduation rates, I do think we have to consider how successful the schools are in graduating their students in a reasonable amount of time as college isn’t free.

US News already separately evaluates the schools for Pell Grant and first-gen performance over expected performance. If my kid isn’t in one of those groups, I care about how likely they are to graduate on time (I used six-year graduation rates because they are easier to find, but the disparities between these schools increase over a shorter time frame). Not every reason for delay is student circumstances; often, perhaps more often, it is a function of class availability.

1 Like

I agree. In particular, I agree public colleges should be forced to engage in some sort of reliable value-added tracking, because that is essential to their mission.

With private colleges, I certainly think it is legally possible for the federal government to do that, but I think it would make the most sense for them to require that for students actually receiving federal aid. For others–eh, I am not sure it makes sense to force that, even though I would love to see the data.

Isn’t it this measure?

https://www.usnews.com/best-colleges/rankings/national-liberal-arts-colleges/undergraduate-teaching

5 Likes

And fewer than half of CA high schools rank. (If I recall from an article in the LA Times several years ago.)

1 Like

thank you, I forgot to look for “national” LACs… I didn’t know that existed as its own category.

It is an impossible problem because it would create a political problem. Once you start ranking schools in this manner and regional state schools in certain regions start consistently scoring low, you have created a political problem and all that comes with it. I am not saying that this would be bad; it would draw a harsh light on the inconsistencies of our K-12 system, which is where so many of our challenges originate. I do believe that this is also what makes a federally mandated “exit test” an impossibility.

thanks again.

It would be nice if there were a way to compare “national” LACs with the universities in the USNWR scoring…