At a high level, to measure this you need a combination of a standardized measure of entering abilities/qualifications, and a standardized measure of exiting abilities/qualifications, such that you can evaluate the difference.
Both more or less exist in the UK, for example, which is why the Guardian can publish meaningful value-added measures down to the course. But neither exists in the US because of our complete lack of standardization on both ends. And I am not confident that problem has a solution, absent something like an AI figuring out how to turn the available non-standardized information into such entrance and exit measures.
So if all this were really transparent, one would think next-step gatekeepers (postgraduate schools, employers, and so on) would care most about exit measures. It may be nice for the individuals if the college adds more value to get there, but ultimately those gatekeepers really just care about what they are getting.
In turn, this would mean a lot of high-entry applicants would rationally choose a college that was not particularly high on value-added as long as people like them still exited with the highest measure available. Like, if College A took 80s and turned them into 90s, and College B took 95s and turned them into 97s, an 80 might really benefit from choosing College A, but a 95 might benefit from choosing College B.
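To make the arithmetic concrete, here is a minimal sketch using the hypothetical colleges above (these numbers are the invented example from the text, not real data):

```python
# Hypothetical colleges from the example above, as (typical entry, typical exit) pairs.
colleges = {"A": (80, 90), "B": (95, 97)}

for name, (entry, exit_score) in colleges.items():
    value_added = exit_score - entry
    print(f"College {name}: exits at {exit_score}, value added {value_added}")
```

College A adds far more value (10 points versus 2), but a gatekeeper ranking purely on exit scores still prefers B's graduates (97 beats 90), which is exactly why a 95 can rationally pick B.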
As it turns out, we can actually see this using the Guardian’s analysis of UK courses. Like, suppose we look up Economics. The Guardian reports average entry tariff, its standardized measure of entry qualifications, alongside value added, which compares degree results to those entry qualifications on a 10-point scale. The top 10 courses by average entry tariff look like this (average entry tariff/value-added):
Cambridge 224/4
St Andrews 221/6
Oxford 211/5
LSE 204/6
Glasgow 197/7
Edinburgh 194/4
Warwick 193/3
Durham 192/6
Strathclyde 192/2
UCL 188/6
None of those value-added scores are great. Some are quite low. But Cambridge, say, still enrolls the highest-scoring students.
Which is undoubtedly because next-step gatekeepers value the exiting characteristics of Cambridge students, and UK undergraduate Econ course applicants know that. And so highly qualified applicants are rationally choosing Cambridge’s Econ course (if they can get in), despite its relatively meh value-added.
Indeed, in a way this is CAUSING Cambridge’s relatively meh value-added. Because with such a high entering score, it really cannot possibly add that much value.
Like, if you look for value-added scores of 10 in Econ, there are two: Essex, with a 113 entry tariff, and Brighton, with a 105. These are your UK equivalents of taking 80s and turning them into 90s, and Cambridge is your UK equivalent of taking 95s and turning them into 97s. And it was likely literally impossible for Cambridge to add as much value, at least given the Guardian’s methodology, because there was not enough room to do that.
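The ceiling effect can be sketched numerically. This is a toy model on the 0-to-100 scale of the earlier 80s-into-90s example, not the Guardian’s actual methodology (which scores value added on a 10-point scale):

```python
EXIT_CAP = 100  # assumed maximum attainable exit score in this toy model

def max_possible_gain(entry_score):
    """The most value any college could add for a student entering at entry_score."""
    return EXIT_CAP - entry_score

print(max_possible_gain(80))  # an 80 leaves room for 20 points of gain
print(max_possible_gain(95))  # a 95 leaves room for only 5
```

Whatever the exact formula, a bounded exit measure mechanically compresses the measurable value-added of any school that enrolls near-ceiling entrants.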
Again, none of this is possible in the US, at least without AI, because of our complete lack of standardization on both ends. But conceptually, this model likely still applies, meaning highly-qualified applicants in the US will undoubtedly rationally choose our versions of Cambridge over our versions of Essex.
But others will then benefit a lot from going to our version of Essex. They can both be great at their missions, but different applicants will rationally find different missions more relevant to them.