@timetodecide12, I prefer alumni results over revealed preferences because revealed preferences assume that high schoolers are perfectly rational processors of information (and heck, we know that even sophisticated stock investors aren’t, overvaluing and undervaluing stocks all the time). Also, you need a good unbiased data set for revealed preferences to be meaningful. For instance, if a data set is skewed towards a certain region (which I’m pretty certain the Parchment user base is), the revealed preferences would be skewed as well.
“When a ranking doesn’t have Harvard, MIT, or Stanford in top 5 it’s hard to take it seriously.”
MIT isn’t in the top five at USNWR. Therefore, USNWR should not be taken seriously.
Admission rates are in fact a measure of popularity whether you accept it or not. The way to get lower admission rates is with higher numbers of applicants, i.e. more popularity.
You may believe that low admission rates are good indicators of other fine qualities, but popularity is what they measure.
How has U of Chicago shot ahead of peer institutions in admission rate in recent years? They went to the common app, dropped an anti-advertising policy, hired a new hotshot marketing director, and then went to the UCA. How did these steps improve education at U of Chicago?
And why did Chicago take these steps? Did they do it to improve learning or for the betterment of their students? No, they did it to chase USNWR rankings. They gave in to the pervasive stupidity that elevates this balderdash to something meaningful.
“Since 2001, the median marketing spending at small, medium and large colleges and universities has increased between 60 and 100 percent (adjusted for inflation).” - from 2013 CASE report on college marketing trends
“Private colleges spent the most to bring in new undergraduates in 2012-2013, spending $2,433 per new student at the median vs. $457 per new student … at four-year public institutions … Private colleges staffed their admissions and recruitment offices at the highest levels. For example, at four-year public institutions, the median ratio of new student enrollees to full-time-equivalent (FTE) staff was 111:1, but at private institutions, the ratio was 31:1.” Noel Levitz, 2013 Cost of Recruiting an Undergraduate Student Report
Harvard spent about $25,000,000 on advertising last year to enroll 1,665 new students, equaling a rather impressive $15,000 per enrollee.
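A quick back-of-the-envelope check of that per-enrollee figure (the spend and enrollment numbers are as reported in the post above, not independently verified):

```python
# Sanity check: reported ad spend divided by reported new enrollees.
ad_spend = 25_000_000   # reported marketing spend (USD) -- figure from the post
new_students = 1_665    # reported newly enrolled students -- figure from the post

cost_per_enrollee = ad_spend / new_students
print(f"${cost_per_enrollee:,.0f} per enrollee")  # ≈ $15,015
```

So "$15,000 per enrollee" rounds the actual quotient of about $15,015.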
The reason your silly party school scenario doesn’t play out is that the party school can’t keep up with the rich marketing spending of the fat cats.
At some point perhaps the advertising budgets of the richest schools will get so high that every high school senior in America will be inspired to apply to each of the elite colleges you treasure. Then they can all have admission rates in the neighborhood of 0.001%, and they can congratulate themselves on what a fine job they have done of improving higher education in the U.S.A.
Just like companies, schools can be over- and undervalued, driven by hype and marketing, as @BobWallace pointed out. So UChicago, despite offering the same education and opportunities as a decade ago, has seen its admit rate drop precipitously.
Which is a shame if you’re applying to colleges, actually, as there are now no Ivies/equivalents (http://talk.qa.collegeconfidential.com/discussion/1682986) that are not a reach for any unhooked applicant. (A decade ago, UChicago was very undervalued by high schoolers and so relatively easy for a top-1-percentile student to get into, while offering as good an education and opportunities, then and now, as the Ivies.)
The market does correct away inefficiencies over time, which means that UMich, which is/was the last near-Ivy able to serve as a safety for a top-1-percentile (OOS) student, is on the brink of no longer being one, with its latest admit rate at 26% (and lower than that for OOS applicants).
In 5 years, none of the Ivies/equivalents and near-Ivies will be safeties for any unhooked applicant, and a top-1-percentile student would have to look in the next tier down (UW-Madison, McGill, the ancient Scottish unis, Case, Rochester, RPI, USC, NYU, and some LACs) for safeties, though it’s possible that none of the American ones (other than maybe UW-Madison) would be by then. Or they could go to a CC and transfer, which may actually be an easier way to get into unis like Cal/Columbia (GS)/UMich/UVa/UNC/USC/UCLA/NYU/UT-Austin than applying straight out of HS.
Blame that on USNews (rather, the people that give it significance and power), as it provides an incentive for schools to game their fall freshmen profile and admit rate.
Not exactly. Their President decided that to improve the long-term financial health of the university, they needed to increase the size of the College.
http://www.bayarea.net/~kins/AboutMe/Hutchins_items/CampusReportOnline_UofC.html
Once a growth strategy was set in motion, chasing better rankings (for better or worse) may well have become an important element in the marketing plan.
To believe that a market is rational, you don’t have to believe that every individual participant in it is perfectly rational. Admission selectivity (not just admit rates but average stats) reflects the collective choices of the best students (and everyone else involved in their decisions). It is a fairly good indicator of college quality. The US News rankings reflect that (even though it only counts for 12.5% of the score). Certainly one should take with a big grain of salt the distinction between, say, #25 USC and #29 Michigan. The resolution is way less precise than the one-up rankings suggest … but the overall outcomes aren’t random, and they tend to be corroborated across different rankings.
Malarkey. Is a random opinion piece from 1999 all you can produce to back up your nonsense?
Here’s the real story:
http://www.personalcollegeadmissions.com/getting-in/the-great-success-of-the-university-of-chicago
I don’t disagree with the sentiment expressed in the last paragraph you cited. There are excellent schools that “admit thirty or forty percent or more”. Reed College, for example … though it also gets average test scores comparable to Middlebury’s or Berkeley’s. It is more selective than its admit rate alone suggests.
This is an exchange of opinions. My opinion is that admission selectivity is a fairly good indicator of college quality. That isn’t exactly a wildly controversial idea. Apparently you don’t share that opinion … which is fine. What do you think are metrics that better reflect undergraduate college quality, and that US News is missing?
“I prefer revealed preference rankings. However, US News is a lot better than it gets credit for - people take it seriously because it reflects reality. When a ranking doesn’t have Harvard, MIT, or Stanford in top 5 it’s hard to take it seriously.”
Revealed preference is about as meaningless as trying to decide whether I prefer chocolate or vanilla by seeing whether other people prefer chocolate over vanilla. How insecure one must be to base one’s own preferences not on what one likes, but on what other people like.
As for taking USN seriously - no one obligates you to take them as anything other than a general guide.
And how do you “know” Harvard, MIT, Stanford belong in the top 5 or else a ranking isn’t valid? That’s just saying that you think they always should be, and not being open to new data. Perhaps on certain measures they aren’t in the top 5. So?
No one asked me, but I come down on the “admit rates are too much about popularity” side of the argument. Personally, I think that incoming class test scores (ideally combined with GPA if available) are a better measure of “selectivity” or “excellence” or whatever you want to call it than admit rates.
Admit rates are (in my opinion) hopelessly contaminated by yield rates, which are both subject to artificial manipulation by schools (ED) and otherwise more a reflection of reputation than anything else (USNWR rankings, etc.). You can make the case that reputation is an important factor, I suppose, but I’d rather measure something more concrete. If a school consistently gets higher-stats kids to attend, that’s a measure of something truly useful.
When you look at the admit rates of the Ivy League schools, especially the “lower Ivies” - you can see that reputation drives yield rates higher than comparable schools, and that drives admit rates down into the single digits. (And then, the cycle of self-reinforcement kicks in, as the college goes up in the rankings, and draws even more applicants, driving admission rates down farther, rinse and repeat…)
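To make the yield contamination concrete, here’s a toy calculation (all numbers hypothetical, chosen only to illustrate the mechanism): two schools with the same class size and the same applicant pool end up with very different admit rates purely because of yield.

```python
# Toy model: admit rate as a function of yield, holding class size
# and applicant pool fixed. All figures below are hypothetical.
def admit_rate(target_class_size, yield_rate, applicants):
    """Fraction of applicants a school must admit to fill its class."""
    admits_needed = target_class_size / yield_rate
    return admits_needed / applicants

# Same class size (2,000) and applicant pool (40,000); only yield differs
# (e.g. boosted by reputation or heavy ED use).
low_yield  = admit_rate(2000, 0.35, 40_000)   # ~14.3% admit rate
high_yield = admit_rate(2000, 0.80, 40_000)   # ~6.2% admit rate
print(f"{low_yield:.1%} vs {high_yield:.1%}")
```

Nothing about educational quality changed between the two scenarios; the single-digit admit rate falls out of yield alone, which is the contamination described above.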
“Certainly one should take with a big grain of salt the distinction between, say, #25 USC and #29 Michigan. The resolution is way less precise than the one-up rankings suggest … but the overall outcomes aren’t random, and they tend to be corroborated across different rankings.”
What other rankings besides USNWR, have USC over Michigan? Or at least a dozen or so other schools over Michigan?
@tk21769, and I don’t believe that either the stock market or college admissions market are perfectly rational. Do you?
If you do, then you have to explain the excess volatility of stock prices (and why prices don’t reflect long-term returns for all companies; that is, why some are over- or undervalued). For college admissions, you’d have to explain why the admit rate for UChicago has dropped like a rock over the past decade when the education and opportunities there haven’t changed. Mind you, there’s even less reason to think that the college admissions market is rational considering that, unlike the stock market, there is no way for participants (high schoolers) to short an overvalued school, nor any incentive for them to do so.
Forbes ranks 23 research universities (and 44 colleges including RUs and LACs) higher than Michigan
(although USC is not among the schools it ranks higher).
Parchment ranks 23 research universities (and 37 colleges) higher than Michigan.
USC is among the schools it ranks higher.
So these 3 rankings (USNWR, Forbes, and Parchment) are quite close in their assessments of the University of Michigan.
(~23rd among RUs, or ~40th overall, is a very high ranking. We have 50 states and over 2,000 four-year degree-granting institutions in this country.)
Note, once again, that Parchment isn’t a scientific sample and is very likely skewed geographically.
And take into account that it’s a revealed-preferences poll of high schoolers, with the issues associated with that mentioned earlier.
Perfectly rational? No, I don’t believe they are.
Tulip Manias happen. However, I don’t think they are easy to sustain, year after year and generation after generation, without some real underlying value.
I’m open to persuasion that generations of HS students, GCs and parents have overestimated the quality of the 8 Ivy League colleges. But what’s the evidence for that? What measurements are USNWR, Forbes, Parchment, College Factual, etc., all missing? What common factor is confounding their results? Why are different measurements pointing to rather similar sets of top ~20 schools?
@tk21769, you still haven’t explained: What major change in the quality of the UChicago education caused its selectivity to rise so sharply over such a short period of time?
The Ivies may have been (mostly) fairly valued in recent decades, but that doesn’t mean that all schools are fairly valued. Some may be over- or undervalued (sometimes severely), judging by admit rates, some ranking, etc.
To use an analogy, someone says “analyst Bob isn’t valuing company A correctly”, and your retort is “well, he’s valuing companies T-Z correctly”. Even if I agree with that, that doesn’t mean that I believe that analyst Bob is correct about company A or is using the best valuation tools for valuing a company.
You propose a false dichotomy. Ranking colleges is an asinine activity, and I do not accept that it should be done at all.
Evaluating - not ranking - colleges should be done on the basis of outcomes rather than inputs. I would not evaluate a toaster based on the quality of bread put into it, I would evaluate it based on the quality of toast it produces, the reliability of the toaster over time, etc. (Bonus points, in fact, if it can do a good job of toasting all different qualities of bread.) The result of the evaluation should never be a single number or ranking, but a spectrum of results an individual can apply to his own needs/wants. Maybe I want to toast bagels only, in which case the toaster that is best for bread may be the wrong one for me. A single composite score or ranking loses important distinctions of this nature.
If I have no way of measuring the quality of the toast produced, that does not justify falling back on the quality of bread inserted simply because it is something I know how to measure.
Certainly if one takes a brief look at Consumer Reports toaster evaluations, there are always expensive fancy toasters that don’t make good toast and/or break frequently, and there are affordable no-frills toasters that can make good toast with high reliability. An USNWR style input-based evaluation would not predict these results, it would tell you to always buy the most expensive toaster with the fanciest features.
My question is: how come we debate this every year? Almost without fail.
“Forbes ranks 23 research universities (and 44 colleges including RUs and LACs) higher than Michigan
(although USC is not among the schools it ranks higher. )”
Thanks for reaffirming my point.
^ Let’s repeat your question:
“What other rankings besides USNWR, have USC over Michigan? Or at least a dozen or so other schools over Michigan?”
We’ve cited 3 rankings (USNWR, Forbes, and Parchment). Two of the 3 have USC over Michigan. Three of the three show at least a dozen schools over (better ranked than) Michigan. College Factual’s “Overall Best Colleges” is yet another ranking that places USC (at #21) over Michigan (at #60). So whether US News represents Ground Truth or not, I don’t think it is a wild outlier in how it handles top public universities.
I agree, it would be desirable to measure outcomes. Attempts have been made to do that. They have their own issues.
Furthermore, HS students are not “toast”. Not unless we’re talking about a world in which generations of toast have been choosing their preferred toasters. Admission selectivity reflects student choices. A combination of low admit rates + high average stats means a college is successful in attracting many strong students (who after all are the students with the greatest freedom to choose other colleges). That is prima facie evidence that we’re looking at a pretty good college.
Prima facie evidence isn’t always perfect evidence. That isn’t a good reason for rejecting the evidence (https://en.wikipedia.org/wiki/Nirvana_fallacy). The University of Chicago’s jump of ~10 positions in ~10 years does call for an asterisk or two on my claim:
Admission selectivity* is a fairly good** indicator of college quality.
* there is more than one way to define and measure “admission selectivity”;
** however you define/measure it, it’s still not a perfect indicator
(like other indicators, it can be confounded by factors that have nothing to do with what you want to measure)