<p>Certainly there’s that, but sometimes you get schools with similar selectivity but sharply differing graduation rates. Just to pick on a school I know very little about, most top 40-ish LACs have 6-year graduation rates in the vicinity of 85 to 90 percent, some higher, but Bard’s rate is 77%. According to US News, that’s 8 points lower than its “expected” rate based on its selectivity. To me, that’s a red flag. Maybe there’s a perfectly good reason for it, but I’d want to know what that reason is before one of my kids applied there.</p>
<p>Or to use an example a little closer to home, the University of Wisconsin and the University of Minnesota have very similar selectivity–both right around 50% admit rates, Wisconsin with middle 50% ACT scores of 26-30 and Minnesota at 25-30. Yet Wisconsin’s 6-year grad rate of 83%, while not stellar, is a full 13 points above Minnesota’s. Now I happen to know something about the situation at Minnesota. The administration there acknowledges it’s a problem and is working hard to fix it, and things are slowly moving in the right direction, but they’re nowhere near where they need to be. And for many people in Minnesota, that’s a reason for caution–especially when under tuition reciprocity we can send our kids to Wisconsin for the same price.</p>
<p>He also singles out graduation rate as a metric (so it’s certainly not just “lower-tier” colleges that have an issue here, as collegehelp implies). I’d argue that it’s USNWR, not colleges, that’s in favor of “smoke and mirrors.” Colleges want students to draw their own conclusions from raw data made available either by the college or by third-party sources. USNWR wants to throw all the data into a blender and hit puree - that’s pretty much the definition of “smoke and mirrors.” On top of that, they obscure the raw data unless you pay for it; of course this is part of the business model, but most of that data is available from other sources anyway, so they’re intentionally being unhelpful to students. Most importantly, they fake transparency by detailing the weights of their criteria, but this gives only a bird’s-eye view of the methodology; as always, the devil’s in the details (for example, they don’t say exactly how “expected graduation rate” is calculated). It’s absolutely absurd to assert that USNWR is looking out for consumers while colleges aren’t. This quote sums up the difference well:</p>
<p>However, different colleges offer different types of data to the general public. Colleges may also change what data they offer over time, for their own reasons.</p>
<p>bclintonk- One reason to consider the US News ranking rather than graduation rate alone is that graduation rate accounts for 81% of the “meaning” of the US News ranking, while the other 19% is accounted for by other factors. The ranking tries to capture those other factors in addition to grad rate. As you pointed out, grad rates are affected by extraneous factors like engineering programs and co-op. Sometimes schools publish suspect grad rates that may or may not adhere to USDOE guidelines, like Penn State’s, which exceeds its expected rate by 17 points.</p>
<p>phantasmagoric- The US News “expected” graduation rate is produced by a mathematical procedure known as multiple regression analysis, which is done separately for public and private schools and is re-done every year. The regression takes several factors into account at once to predict graduation rate - SATs, the percentage of freshmen in the top 10% of their HS class, academic expenditures per student, and so on. US News uses highly skilled statisticians for this.</p>
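<p>US News doesn’t publish the exact equation, but a minimal sketch of the idea in Python - with made-up numbers and predictors, not US News’s actual data or model - might look like this:</p>
<pre>
import numpy as np

# Hypothetical inputs for a handful of schools (not real data):
# columns = [median SAT, fraction of freshmen in top 10% of HS class,
#            log academic expenditures per student]
X = np.array([
    [1450, 0.90, 4.8],
    [1300, 0.55, 4.2],
    [1200, 0.35, 4.0],
    [1350, 0.70, 4.5],
    [1100, 0.20, 3.9],
])
actual_grad_rate = np.array([0.93, 0.85, 0.70, 0.88, 0.62])

# Ordinary least squares: grad_rate ~ intercept + inputs
A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, actual_grad_rate, rcond=None)

expected = A @ coef                            # the "expected" graduation rate
over_under = actual_grad_rate - expected       # + = overperforming, - = under
print(np.round(over_under * 100, 1))           # in percentage points
</pre>
<p>The actual-minus-expected difference is the over/underperformance figure US News reports, like the +17 for Penn State mentioned above.</p>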
<p>I also want to point out that US News allows for tied ranks so they actually have “tiers” of a sort in their ranking.</p>
<p>Is conventional wisdom necessarily wrong?
Data modeling uses a manageable number of measurable features to replicate human judgments about various things (the grammaticality of a sentence, the quality of life of a city, the excellence of a college, etc.). If the model can consistently replicate expert judgments (or public opinion) based only on a small set of features, that isn’t “trimming the data” in a bad sense. It would be “trimming the data” in a bad way if you put a thumb on the scale to get a particular outcome, one that only some experts (or a subset of the whole population) favor. If USNWR is doing that, then that would be a flaw in its implementation (but not necessarily a flaw in the data modeling concept).</p>
<p>Why bother with a “model” if you can just look up the specific measurements you care about? If you think you have good insights into which measurements matter most and what implications to draw from them, fine. Many students and parents don’t (not when they’re just starting the process, anyway).</p>
<p>Using multiple regression, I was able to account for 96% of the “meaning” of the US News ranking with only two factors: graduation rate and SAT CR 75th percentile. The Multiple R was .98. This was based on 94 universities and LACs ranked 75 or better, with SAT math 25th percentile scores above 600. I excluded 4 outliers.</p>
<p>The US News ranking can be said to primarily capture SATs and grad rate. That’s what the ranking “means”.</p>
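<p>For anyone curious what that exercise looks like mechanically, here’s a minimal sketch in Python, with invented numbers standing in for the actual 94-school dataset:</p>
<pre>
import numpy as np

# Invented stand-ins for (grad rate, SAT CR 75th percentile, US News score).
grad_rate = np.array([0.95, 0.90, 0.88, 0.84, 0.80, 0.75])
sat_cr_75 = np.array([770, 750, 730, 700, 680, 650])
usnews_score = np.array([99, 92, 88, 80, 74, 65])

# Two-predictor least squares fit of the ranking score.
A = np.column_stack([np.ones(len(grad_rate)), grad_rate, sat_cr_75])
coef, *_ = np.linalg.lstsq(A, usnews_score, rcond=None)
fitted = A @ coef

# Multiple R = correlation between fitted and actual scores;
# R squared is the share of the ranking's variance the two factors explain.
r = np.corrcoef(fitted, usnews_score)[0, 1]
print(f"Multiple R = {r:.2f}, R^2 = {r**2:.2f}")
</pre>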
<p>^^^ One interpretation: SAT scores and graduation rates indicate student ability and motivation; the most capable & motivated students have the most freedom of choice (everybody wants them); they tend to gravitate toward colleges perceived (rightly or wrongly) to be the “best”. The burden of proof is on the critics to demonstrate their choices are not well-informed and rational.</p>
<p>Another interpretation: high-ranking schools are the ones that cherry-pick the best students then don’t do anything to hold them back. So the list of top schools could include some that have many big classes, indifferent professors, strong grade-inflation, and lax requirements.</p>
<p>The USNWR rankings are problematic and difficult to interpret because of the mechanical properties of the ranking formula (estimation equation), as the OP indicated. This kind of problem becomes even more serious in world university rankings (and graduate school rankings) than in college rankings. Leaving the reputation index out of the discussion, slight changes in the weights (the estimates of the determining factors), or the addition or subtraction of a single variable, can produce different rankings. Thus, we need to resist the tyranny of numbers.</p>
<p>However, it is also true that the rankings are very useful when interpreted with critical thinking and problem-solving about your own college choice. For example, you can construct your own ranking by combining the USNWR one (with some weight) with other factors reflecting your fit and preferences (with other weights). Or you can build a preference ranking of your own simply by changing the weights and/or adding or subtracting factors in the ranking formula, as sketched below.</p>
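<p>Here is a minimal sketch of that reweighting idea in Python; the schools, factors, and weights are all hypothetical, purely for illustration:</p>
<pre>
# Build a personal ranking by reweighting factors.
schools = {
    # name: (USNWR score, grad rate, cost fit 0-1, location fit 0-1)
    "School A": (90, 0.92, 0.6, 0.9),
    "School B": (80, 0.85, 0.9, 0.7),
    "School C": (70, 0.80, 1.0, 1.0),
}

# Your weights: how much you personally care about each factor.
w_usnews, w_grad, w_cost, w_loc = 0.3, 0.2, 0.3, 0.2

def my_score(usnews, grad, cost, loc):
    # Normalize the USNWR score to 0-1 so all factors share a scale.
    return w_usnews * (usnews / 100) + w_grad * grad + w_cost * cost + w_loc * loc

for name, factors in sorted(schools.items(), key=lambda kv: -my_score(*kv[1])):
    print(f"{name}: {my_score(*factors):.2f}")
</pre>
<p>Changing the weights, or swapping factors in and out, gives you a different ranking - which is exactly the point.</p>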
<p>My point is that your own critical thinking and problem-solving determine how effective and useful the rankings are.</p>
<p>Sounds plausible. But if that’s the case, then why bother with all the other claptrap of the US News ranking? It’s pseudo-science, essentially dressing up a measure of selectivity, as modified by grad rate, into a complicated multi-factor formula that isn’t doing any real work.</p>
<p>And if ucbalumnus is right, grad rate should track selectivity anyway. If it doesn’t, it suggests the school may be doing something unusually right or unusually wrong. A simple ranking based on your two factors, selectivity and grad rate, coupled with parallel rankings of each factor, would be truly informative, indicating which schools were overperforming and which were underperforming in graduating their students relative to their selectivity. </p>
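<p>A rough sketch of what that over/underperformance ranking could look like, with hypothetical numbers and admit rate standing in for selectivity (that proxy is my assumption, not bclintonk’s):</p>
<pre>
import numpy as np

# Hypothetical (admit rate, 6-year grad rate) pairs;
# a lower admit rate means a more selective school.
schools = {
    "School A": (0.10, 0.95),
    "School B": (0.25, 0.90),
    "School C": (0.50, 0.83),
    "School D": (0.50, 0.70),  # same selectivity as C, far lower grad rate
    "School E": (0.75, 0.60),
}
admit = np.array([v[0] for v in schools.values()])
grad = np.array([v[1] for v in schools.values()])

# Simple linear fit: grad rate as a function of selectivity.
slope, intercept = np.polyfit(admit, grad, 1)
residual = grad - (slope * admit + intercept)  # + = overperforming, - = under

for name, r in sorted(zip(schools, residual), key=lambda kv: -kv[1]):
    print(f"{name}: {r * 100:+.1f} points vs. expectation")
</pre>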
<p>Who cares what their alumni giving rate is? There can be all sorts of reasons school A has a higher alumni giving rate than school B, such as that school A has a more efficient system for tracking its alumni and more effective telemarketing and direct mailing schemes, or simply that school A spends more time and money on soliciting small alumni donations so as to jack up its alumni giving rate and boost its standing in US News. None of which has anything to do with the quality of education or levels of alumni satisfaction. And why create a metric that gives schools an incentive to spend lavishly so as to jack up their “financial resources” score? This is all just extraneous nonsense–garbage that gets in the way of meaningful numbers.</p>
<p>It also disfavors public universities where students feel like they’ve already contributed to the university through taxes (and will continue doing so if they remain in their state) and hence don’t owe their university any more money.</p>
<p>Yeah, it is. It’s inherently conservative; it tends to penalize innovation. No school wants to be the outlier when being one might hurt its US News ranking.</p>
<p>I’ve been told by more than one trusted source the same anecdote involving a top-ten LAC that, back in the late ’80s or early ’90s, complained to Mel Elfin, the rankings’ first chief, that it was being penalized in the rankings for its robust recruitment of inner-city kids, who ostensibly had lower scores than their upper-middle-class counterparts. The suggestion was that the magazine introduce a new index for diversity. Elfin’s reply, which does not sound apocryphal, was to the effect that “Nobody cares about diversity.”</p>
<p>I think an argument could be made that in 1990, the conventional wisdom was that “Nobody cares about diversity.” It was probably another ten years before the magazine introduced the concept of expected graduation rates as a kind of sop to the social engineers among them. But imagine the chilling effect Elfin’s decision must have had in the meantime - and may still have - as the nation debates how best to capture the tens of thousands of gifted low-income students who spend the first two years of university in community college because that’s all they can afford.</p>
<p>It also can be, and has been, successfully gamed. Colleges seeking to boost their giving rate put a full court press on all their alumni to give something, even if it’s just one dollar, to the big campaign. When measuring giving rates a donation of one dollar counts the same as a donation of a million dollars. Princeton did this some years ago. I don’t know whether they still do.</p>