Ten Reasons to Ignore the U.S. News Rankings


<p>O.K., fair enough. But you know, I repeatedly asked you to describe your own approach. You did not comply (other than venturing some speculation about a hypothetical Elo rating for colleges), so I tried to read between the lines. Meanwhile, you repeatedly asked me to clarify my own positions, which I tried to do, even after I had conceded that the specific examples you asked me to address were beyond my experience. When I tried to describe the limits of that experience (experience with both highly ranked and low-ranked colleges), you not only responded then and there with mildly mocking remarks, but also continued in subsequent posts to characterize me as an elitist snob. No terrible harm done, but I think those remarks missed the point of what I was trying to say. The point was not that I am some deep expert in this game, only that my limited experience leads me to believe that wide ranking differences seem to correspond to meaningful differences in academic quality (with the qualification that those differences may not be equally relevant to everyone, and that my experience with some kinds of colleges may not apply very well to others). I also admit I don’t know precisely what “wide” means (there is no “magic number,” as you characterized it). So, given the limits of my own knowledge, I like to consult the available tools and data. Those tools and data often have weaknesses, but that’s the game we’re in on the Internet as it exists today. Most of us don’t have time to do a journal search to buttress every debating point; we hazard an opinion based on whatever information is available, knowing it is open to challenge on a public forum.</p>

<p>Regarding #139: as far as I know, my criticizing the advice you gave on another thread did not violate the TOS. I still think the example was germane, but if it came off as mocking, I apologize. I think the example illustrates what can happen when we try to pull college recommendations out of an unconstrained search space. For good students, I generally limit the space to schools that fall within the USNWR top 100 or 200. That runs the risk of missing schools that are cheaper and closer to a student’s home, but chances are those schools are relatively familiar to the student or are easily visited. If you think there is a better approach (a practical one that can work today), by all means let’s hear it.</p>

<p>Anyone have the link to the old research article on cross admits? I can’t recall who wrote it or when.</p>

<p>^ Do you mean this one:<br>
<a href="http://www.nber.org/papers/w10803">A Revealed Preference Ranking of U.S. Colleges and Universities | NBER</a></p>

<p>It assigns “ELO points” to colleges. Note the rough similarity of the resulting set to the USNWR top 100 (not just in the top 10).</p>

<p>The difference between this article’s notion of “ELO points” and the Elo rating system used in chess is duly noted. This system measures enrollment choices, which are not the same as victories in a game of skill.</p>
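<p>For anyone unfamiliar with the mechanics being borrowed here, below is a minimal sketch of a chess-style Elo update applied to cross-admit “matchups” (a student admitted to both schools picks one, and the chosen school “wins”). The starting ratings and the K-factor are illustrative only; the NBER paper’s actual model is considerably more involved.</p>

```python
# Minimal Elo-style update applied to a cross-admit "matchup".
# Ratings, K-factor, and the example numbers are hypothetical.

def expected_score(r_a, r_b):
    """Probability, under the Elo model, that A is chosen over B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, a_won, k=32):
    """Return updated ratings for A and B after one matchup."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    new_a = r_a + k * (s_a - e_a)
    new_b = r_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

# Example: two colleges start even at 1500; College A wins a cross-admit.
ra, rb = elo_update(1500, 1500, a_won=True)  # ra rises, rb falls by the same amount
```

<p>Run over thousands of recorded choices, updates like this converge toward ratings where a higher number means students tend to pick that school in head-to-head matchups, which is all the “ELO points” in the article claim to measure.</p>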

<p>Thanks. That’s the one. Has there been an update?</p>

<p>I don’t know for sure, but I doubt it.<br>
It was based on “hand-collected” data (so it may be hard to repeat annually until someone sets up a more efficient process). 50topcolleges.com cites this study among its aggregated rankings; the link I gave you is the same one they post; presumably, they’d be motivated to post the latest version available.</p>

<p>There’s an article with a table that includes the HYPSM cross admits from the classes of 2012-14, but it’s in a blog so I can’t link it. The blogspot is called mathacle, if anyone wants to look at it.</p>

<p>All this arguing.
Some are transferring authority to USNews; I’d guess because its ranking exists and is backed up by the “fact” that it has a purported methodology. A few suggest the methodology is arbitrary. Think about it. Why do they weight factor x at 5% and factor y at a larger percentage? How do they control the Peer Assessment survey to assure the input is well-considered? How do they determine the roll-down effect of huge STEM grants onto non-STEM kids? How is what they do juried? Etc. </p>

<p>We’re all bewitched by the claim that something that defies universal definition can be defined with certainty. And then ranked. And then, that there really are shifts from year to year. And, in the end, who’s driving the Mercedes? </p>

<p>The ranking is only relevant when buyers/users ascribe that status to it. It’s not inherently a valuable or valid tool. Then we fool ourselves further by assuming our potential employers buy into it (or even that they know it exists). </p>

<p>Before the USNWR rankings (and still), the college guides grouped schools by selectivity: H was harder to get into than, say, NU. Ooooh, we all said, there must be some reason; they must run kids’ credentials through a tougher wringer; all those kids must be tippy top. Hence… And, since both were more selective than that school down the street, Local U must not be as great.</p>

<p>Now, it’s all muddled by the fact that so many great PhDs find jobs at smaller or less well-known schools, that majors have divided and subdivided and not all schools specialize in what you want to study, and, among other problems, that dastardly little thing called holistic review, which means not all top-performing hs kids will get into the most selective schools - but their presence IS felt wherever they land.</p>

<p>I’m just musing, here. But, I feel a twinge for kids who only look for some measure of prestige and miss finding the right fit for them.</p>

<p>A crude product like the USNWR college rankings is likely to be of value only to low-information prospective college students or their low-information parents. Why would those who are sophisticated in analyzing and evaluating institutions of higher education spend much time on it?</p>

<p>@lookingforward: EXACTLY. Not to mention the issues of cost for many families in the middle. When people sit down and do an ROI analysis, it takes a leap of faith to decide that it’s “worth” spending more for a higher-ranked school. Who’s driving the Mercedes? Lots of people with a lot of different educational backgrounds.</p>

<p>Here’s your latest cross-admit rankings, folks:</p>

<p><a href="http://www.parchment.com/c/college/college-rankings.php?page=1&amp;perPage=25&amp;thisYear=2012">Parchment Student Choice College Rankings 2013 | Parchment - College admissions predictions</a></p>


<p>Parchment had already been presented above. One large problem with Parchment is that it is a for-fee service, which I presume collects data only from those who pay for the service. This presents a huge threat to generalizability. Do you think the families who pay SAT/ACT fees, college application fees, and college visit costs AND subscribe to Parchment are representative of all college-bound families? You might try to categorize some of the differences for yourself.</p>

<p>lookingforward,</p>

<p>You summarize things very well; an excellent “big picture” critique. An even more distant overview might ask why the close scrutiny and seeming displeasure with USNWR and the “reputation” factor are arising at this time. There seems to be much more questioning of the alleged superiority of the “top” schools. </p>

<p>Perhaps this is paralleling societal changes. Whereas “elite” was once a revered status, it is now tinged with a darker side… elite banking (think GS/JPM), elite SES status (think of the ethics of how some got there). It would seem it is no longer sufficient to simply carry the reputation of an elite or top university; the need is present to demonstrate superiority of day-to-day educational activities and a proportionally greater impact on measures of student learning. The other question you address is: if some school is “better,” better for which subgroup of students (STEM/humanities/aspiring 1%ers/scholars)?</p>


<p>I actually think this is the biggest threat to the perception of the Ivies, especially those that routinely send a quarter to a half of their graduates to Wall Street. The current national dialogue is definitely drawing attention to issues of business ethics and responsibility that were formerly unfamiliar to most Americans. This is NOT to say that students who aspire to Ivy League schools are inherently unethical. But when families are looking at potential outcomes for their kids based on their college choices, presumably the “what do you want to do with your degree?” question is an important consideration–especially when they are looking at spending $200K or more on their children’s undergraduate education.</p>

<p>I have a vague idea of how one might do the 5%X, 15%Y voodoo … but have never seen any documentation on it. I suspect they start by identifying a subset of schools they think are “best”. They then hypothesize a first-pass set of features (class size, salaries, etc.) they think measurably describe the goodness of those schools. They experiment with many combinations of weightings until they arrive at one that appears to generate a plausible ranking, one that roughly corresponds to their concept of Ground Truth. They add/subtract features (graduation rates, etc) and re-adjust weightings, in a long game of whack-a-mole, until they arrive at a stable set of features and weights (one that continues to output a plausible ranking as you introduce new schools into the input set). Something like that … but maybe somebody here has better insights into the process.</p>
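<p>To make the hypothesized process concrete, here is a minimal sketch of the inner step of that tuning loop. All feature names, values, and weights below are invented purely for illustration; nothing here reflects USNWR’s actual inputs, weights, or methodology.</p>

```python
# Hypothetical sketch of the "whack-a-mole" weight tuning described above.
# Feature names, values, and weights are invented for illustration.

def composite_score(school, weights):
    """Weighted sum of normalized features (all assumed scaled 0-1)."""
    return sum(weights[f] * school[f] for f in weights)

# Invented feature values for two made-up schools.
schools = {
    "Alpha U": {"grad_rate": 0.95, "peer_assessment": 0.90, "class_size": 0.80},
    "Beta U":  {"grad_rate": 0.85, "peer_assessment": 0.70, "class_size": 0.90},
}

# One candidate weighting (weights sum to 1.0).
weights = {"grad_rate": 0.35, "peer_assessment": 0.25, "class_size": 0.40}

# Rank schools by composite score, highest first.
ranking = sorted(schools,
                 key=lambda s: composite_score(schools[s], weights),
                 reverse=True)

# If this ordering doesn't match the presumed "Ground Truth" list,
# the tuner nudges the weights (or adds/drops features) and tries again.
```

<p>The point of the sketch is that once you allow yourself to adjust both the feature set and the weights until the output “looks right,” the resulting ranking is largely a restatement of the Ground Truth assumptions you started with.</p>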


<p>I realize I might have been too subtle when describing Parchment as unreliable. So, allow me to correct that error. Parchment is combining data that are unadulterated horse manure with a glossy presentation. Actually, they have no comparative data; they just pretend to have them through the “collective” power of their database. </p>

<p>Here’s what anyone can do. There is a school that has lifted the veil about THEIR cross-admits, namely Stanford. Dean Shaw presented the results of Stanford through presentations to the Stanford Senate. We also can find the actual yield numbers through the published Common Data Set of Stanford. Now, let’s look at what those Parchment boys come up with:</p>

<p>Stanford (Parchment numbers):<br>
Reported results: 2,151<br>
Accepted: 387 (18%)<br>
Rejected: 1,764<br>
Attending: 150<br>
Yield rate: 39% <<<< Compare that to the real yield!</p>
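<p>For what it’s worth, the 39% figure follows directly from Parchment’s own numbers (150 attending out of 387 accepted); a quick sanity check using only the figures quoted above:</p>

```python
# Implied yield from Parchment's reported Stanford numbers:
# 150 attending out of 387 accepted.
accepted = 387
attending = 150
implied_yield = attending / accepted  # about 0.388, i.e. roughly 39%
```

<p>The actual yield published in Stanford’s Common Data Set is far higher, which is exactly the discrepancy being pointed out.</p>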

<p>In their college matchups, Parchment reports numbers such as Stanford versus UC Santa Barbara at 77/23 and UC Berkeley 86/14! </p>

<p>From the reports by Dean Shaw, we know that in the list of schools “grabbing” cross-admits, UC Berkeley does not come in very high (below 20th) and that a calculation of the yield losses intimates that between 6 and 12 students enroll at Cal after rejecting Stanford’s offer. </p>

<p>The conclusion is that the numbers used by Parchment are based on biased and partial samples, and are no more valid than similar exercises presented on College Confidential. In other words, the same garbage!</p>


<p>While true, one might also want to balance that with the numbers of Ivy League graduates who join non-profit organizations and apply to Teach for America or the Peace Corps. Of course, a cynic might opine it is due to the deteriorating conditions of the past years! </p>

<p>When Ivy Grads Pick Teaching Over Wall Street: Cohan - Bloomberg</p>

<p>One might also note that the Ivy League offers considerably fewer “business” degrees than most schools, in absolute numbers and percentages. Are elite schools more guilty than others in responding to the DESIRES of their students? Are students in general (and their parents) less inclined to consider an elite education the best path to a “rich” lifestyle? </p>

<p>How many students are there who would gladly forego the chance to rub elbows with the Wall Street gurus or other financial outfits … if it were available?</p>

<p>It can’t be garbage if they can get people to pay a subscription fee for their service, can it? :)</p>

<p>When Ivy Grads Pick Teaching Over Wall Street: Cohan - Bloomberg</p>

<p>The next article, appearing in 5-10 years, will read, “Why _______ (fill in any TechStartupU) Grads Pick Teaching Over Tech Startups.” This will be due both to widget saturation and to a culture emerging at the established flagship tech companies that in many ways parallels Wall Street (slick tax-avoidance maneuvers, privacy violations, the product Peter Principle).</p>

<p>Just my opinion…surely not to please some.</p>


<p>I could answer that plenty of people pay for garbage as long as the “information” espouses their own views or addresses their insecurities, but I prefer to say that I was weighing in on the cross-admit “data” they show on their website. And that part responds perfectly to the adage: Garbage In, Garbage Out.</p>

<p>PS I did not notice the smiley!</p>

<p>xiggi,</p>

<p>“It can’t be garbage if they can get people to pay a subscription fee for their service, can it?”</p>

<p>Yes, the post was in jest, but it supports the point about a non-representative sample. The cross-admit data would have to be derived from Parchment subscribers, who likely differ from the entire universe of college-bound students - a point you made as well.</p>