Preview: Methodology Changes for 2014 Best Colleges Rankings


<p>Preview: Methodology Changes for 2014 Best Colleges Rankings - Morse Code: Inside the College Rankings (usnews.com)</p>

<p>Well, US News is a publication trying to sell copies, so of course they will mix up their methodology from time to time. </p>

<p>As always, take the US News rankings with a grain of salt. They’re a useful tool in the college search process, but they’re not the Bible when it comes to colleges.</p>

<p>Do you think this could drastically change the top 20 national universities?</p>

<p>For personal relevance, one ought to be able to select and weight the factors that are important to them, and rank the schools accordingly. Also, USNWR’s methodology does not mention career placement effectiveness. Perhaps it is not measured and reported in IPEDS?</p>
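
<p>For anyone curious what a “pick your own factors” ranking might look like in practice, here’s a minimal sketch. The factor names, weights, and scores below are all invented for illustration; they are not USNWR’s or Maclean’s actual data or methodology.</p>

```python
# Hypothetical do-it-yourself ranking: the student picks the factors and weights
# they care about. Every school name and number here is invented for illustration.

# Per-school factor scores, assumed to be already normalized to a 0-100 scale.
schools = {
    "School A": {"graduation_rate": 92, "small_classes": 80, "career_placement": 70},
    "School B": {"graduation_rate": 78, "small_classes": 95, "career_placement": 88},
    "School C": {"graduation_rate": 85, "small_classes": 60, "career_placement": 95},
}

# Weights chosen by the student; they should sum to 1.0.
my_weights = {"graduation_rate": 0.5, "small_classes": 0.2, "career_placement": 0.3}

def personal_score(factors, weights):
    """Weighted average over only the factors this student cares about."""
    return sum(weights[name] * factors[name] for name in weights)

ranked = sorted(schools, key=lambda s: personal_score(schools[s], my_weights), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {personal_score(schools[name], my_weights):.1f}")
```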

<p>If Macleans magazine (Canada) allows personalization of selection of Canadian universities, wonder why can’t USNWR provide the same for US schools. </p>

<p><a href="http://tools.macleans.ca/ranking2013/selectindicators.aspx">University Rankings - Maclean’s On Campus</a></p>

<p>Yay, more broken rankings that don’t measure the performance of the actual schools!</p>

<p>The graduation rate ranking is meaningless without knowing the context. Some schools accept having lower graduation rates by offering rigorous programs to relatively underprepared students (Purdue is a classic example of this). </p>

<p>These rankings need to be completely overhauled IMO. We need rankings that measure the quality of the schools, not of the incoming students. We need a system that is more dynamic and able to recognize lesser-known schools for performance gains since oftentimes these schools can’t control the caliber of students they admit.</p>

<p>I don’t understand US News’ “projected graduation rate” statistic and comparing the actual with the projected. How can you project graduation rates? Why not just use the actual stats that are given?</p>


<p>It’s well established that students from high-income households graduate at higher rates than students from lower-income households, in large part because lower-income students often need to interrupt or stretch out their education for financial reasons (not only inability to pay, but often the need to work to support other family members). It’s also well established that students with high SAT/ACT scores graduate at higher rates than students with lower SAT/ACT scores, because the latter are statistically more likely to have difficulty keeping up academically. </p>

<p>The idea behind US News’ “predicted graduation rate” is that a school whose entering class is disproportionately high-income and high-test-score students should be expected to have a higher graduation rate than a school whose entering class includes more low-income and/or lower-scoring kids. It’s an attempt to create some context in which to understand each school’s actual graduation rate: the school with the wealthier and more talented student body should be judged against its peer institutions, while the school that takes on more low-income and/or academically marginal kids should be judged in comparison with other schools similarly situated. And in particular, the latter school might actually be doing quite a good job in producing positive educational outcomes despite a lower absolute graduation rate, if it’s exceeding expectations based on the incomes and admissions stats of its students. Conversely, the high-income, high-stats school may be resting on the laurels of its student body rather than adding positive value if its nominally high absolute graduation rate is lower than might be expected based on who its students are.</p>
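
<p>For anyone who wants to see that logic concretely, here’s a rough sketch: fit a simple model of graduation rate from entering-class characteristics, then compare each school’s actual rate with the model’s prediction. The data and the one-variable model below are invented purely for illustration; US News’ actual model uses several inputs (Pell share, test scores, class rank, spending, public/private) whose exact coefficients it doesn’t publish.</p>

```python
# Toy version of the "predicted graduation rate" idea (requires Python 3.10+ for
# statistics.linear_regression). All schools and numbers below are made up.
from statistics import linear_regression

# (median SAT of entering class, actual 6-year graduation rate) for hypothetical schools
data = [(1050, 55), (1150, 63), (1250, 74), (1350, 83), (1450, 93)]
sats = [sat for sat, _ in data]
rates = [rate for _, rate in data]

# One-variable stand-in for US News' multi-factor model.
slope, intercept = linear_regression(sats, rates)

def predicted(sat):
    return intercept + slope * sat

for sat, actual in data:
    residual = actual - predicted(sat)   # positive = graduating more students than "expected"
    verdict = "over-performing" if residual > 0 else "under-performing"
    print(f"SAT {sat}: actual {actual}%, predicted {predicted(sat):.1f}%, "
          f"{residual:+.1f} points ({verdict})")
```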

<p>That’s the theory. There are serious questions about how well US News actually measures that context. For example, its only indicator for income is the percentage of students receiving Pell grants–an extremely crude measure. And to measure the academic qualifications of entering students it combines SAT/ACT scores with the percentage of students in the top 25% of their HS class, the latter another crude measure for top 50-ish schools where virtually everyone was in the top 25% of their HS class (if the school ranks). Then they also throw in some other questionable factors like expenditures per student and whether the school is public or private–factors that could indeed affect graduation rates, but it’s not clear to me why a wealthy private school should be punished (by having a higher predicted graduation rate) for its ability to spend a lot per student if that enhances educational outcomes. One would think that’s part of what makes such a school more attractive and more successful. (Of course, elsewhere in its ranking methodology US News heavily rewards wealthy private schools for high expenditures per student independent of education outcomes, so this error probably only partially erases the much larger error of assuming lavish spending is a proxy for educational excellence). </p>

<p>After all is said and done, most top 50-ish schools come out with USNews-predicted graduation rates that are pretty close to their actual graduation rates, but there are outliers. For example, Penn State’s actual graduation rate of 87% in 2013 exceeded its predicted graduation rate by 17 points. On the other side of the coin, Caltech underperformed its predicted graduation rate by 7 points, probably due to its extreme academic rigor. Georgia Tech underperformed its predicted graduation rate by 5 points, and Case Western underperformed its predicted graduation rate by 8 points. Notice that these underperformers are all STEM-heavy schools with high percentages of students in demanding majors like engineering, which traditionally have had higher washout rates than many other fields. This suggests an additional problem with the US News predicted graduation rate: insofar as it doesn’t control for field of study, it can produce highly misleading results.</p>

<p>Sounds like they are trying to elevate certain schools so that the ranking will more closely approximate the conventional wisdom. Any ideas which one or ones they are thinking of?</p>

<p>Sounds like there has been a bit of a change in the ranking methodology, but it doesn’t change the overall idea of trying to determine for a student what their “best college” would be - essentially measuring what can’t really be measured.</p>


<p>I honestly believe this is what’s happening with the current rankings. </p>

<p>It is much easier for the “top” schools to maintain their spots in the rankings when they consistently attract top talent. Top talent looks to the rankings to decide which schools to attend. However, if we have rankings based heavily on incoming student stats (and other prestige factors), the rankings stagnate (which is what we’re seeing - very little movement): top talent ends up going to top schools, further promoting those schools, making them more powerful and higher ranked. Meanwhile, the lower-ranked schools end up attracting lower-caliber students, thus lowering their spots in the rankings. </p>

<p>This is why we need rankings that are more dynamic, and allow for schools to improve and stay competitive. After all, the rankings are meant to rank the schools, not the backgrounds of the students attending the schools.</p>

<p>Check out the Washington Monthly rankings if you want to see predicted v. actual graduation rate comparisons in action. Click-sort on the “social mobility” section.<br>
<a href="http://www.washingtonmonthly.com/college_guide/rankings_2013/national_university_rank.php">National University Rankings 2013 | Washington Monthly</a></p>

<p>Funny that NOW they’re deciding to make their rankings more output-based, just as Forbes and others purport to do.</p>

<p>And why is the measure of peer assessment being reduced only for regional schools and not national ones?</p>

<p>One thing about output-based measures is that they are hard to quantify. And depending on what % you assign to those factors, the rankings can change vastly.</p>


<p>Some of us believe that the PA has value (at least with those national, research Unis).</p>

<p>More emphasis on SAT/ACT will likely hurt those colleges, such as UC campuses, which place more admissions weight on GPA.</p>

<p>I agree. </p>

<p>Peer assessments are quite valuable, IMO. They allow a school that has little to no control over the students it admits to improve its ranking by improving the quality of the school itself. When utilized correctly, peer assessments prevent the “tenured” ranking effect that often occurs with objective ranking metrics (e.g., GPA, SAT scores, giving rate), whereby elite schools will generally always admit high-achieving students, forever securing their place in the rankings.</p>


<p>Because they’ve already reduced the weight of peer assessment from 25% to 15% for national universities and national liberal arts colleges. I don’t recall exactly when that change was made, but it was sometime in the last two or three years. They accomplished this by cutting the “academic reputation” category from 25% to 22.5% for national schools, and then reducing the peer assessment portion of “academic reputation” from 100% to 66.7% for those schools. The other 33.3% of “academic reputation” is now “high school counselors ratings.”</p>

<p>For regional schools they’re now cutting “academic reputation” from 25% to 22.5%, but that category is still 100% peer assessment. So even after this change, peer assessment still carries 50% more weight in the regional school rankings than in the national LAC and university rankings.</p>
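
<p>Spelling out that arithmetic, using only the weights quoted above:</p>

```python
# Peer-assessment weight, national universities / national LACs:
# 22.5% "academic reputation" category, of which peer assessment is ~2/3.
national_peer_weight = 0.225 * (2 / 3)
print(f"National: {national_peer_weight:.3f}")        # 0.150 -> 15%

# Peer-assessment weight, regional schools: same 22.5% category, still 100% peer assessment.
regional_peer_weight = 0.225 * 1.0
print(f"Regional: {regional_peer_weight:.3f}")        # 0.225 -> 22.5%

# 0.225 / 0.15 = 1.5, i.e. peer assessment carries 50% more weight for regional schools.
print(f"Ratio: {regional_peer_weight / national_peer_weight:.2f}")
```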

<p>Ironically, the magazine’s very first attempt at a poll was just that - a poll of different presidents and deans from around the country. And, of course, the criticism back then was that it favored the elite schools. It also didn’t help that the eastern schools tended to be clustered near the top. There’s certainly a lot of evidence to suggest that USNews, which was based in the Midwest, developed all of these bells and whistles to mollify their readership.</p>


<p>Really? I had always heard just the opposite. The very first US News college ranking in 1983 was as follows:</p>

<p>National universities

  1. Stanford
  2. Harvard
  3. Yale
  4. Princeton
  5. UC Berkeley
  6. U Chicago
  7. Michigan
  8. Cornell
  9. U Illinois
  10. Dartmouth
  11. MIT
  12. Caltech
  13. Carnegie-Mellon
  14. Wisconsin</p>

<p>Honorable mention:
15. Brown
15. Columbia
15. Indiana U
15. UNC Chapel Hill
15. Rice</p>

<p>That’s 3 publics in the top 10 and 6 publics in the top 19. Also Midwestern schools were well represented at the top, with 3 Midwestern schools in the top 10. I had always heard that it was elite schools in the Northeast–and their fan base–that didn’t like the original ranking which was 100% peer assessment, and it was in response to their complaints that US News started adding input metrics like spending per student, faculty compensation, etc.</p>


<p>I don’t know where you get that US News was “based in the Midwest.” It was founded by journalist and publisher David Lawrence, a native Philadelphian and Princeton grad who spent most of his professional career in Washington, DC, and as far as I know US News’ editorial offices have always been in DC. Unless you mean their readership base was in the Midwest; that’s possible as it positioned itself as a more conservative alternative to Time and Newsweek, and that editorial posture may have played better in some parts of the Midwest. But probably less so in those population segments in the Midwest most inclined to care about such things as college rankings. And since subsequent changes to the ranking have tended to favor elite private schools, it seems more likely to me that the readership they were pandering to was largely based in the Northeast.</p>

<p>I read the comment below from a NYT article and it made me LOL… best analogy ever for the term “Best Colleges” as promoted by USNWR:</p>

<p>"Saying that there is such a thing as a ‘Best College’ is like saying there’s a ‘Best Surgical Procedure.’ It’s a ridiculous contest; there are plenty of great ones but an otherwise great one might not be ideal for you."</p>


<p>And the cynic in me says such a result won’t sell many magazines. Thus, the significant changes to the formula – which coincidentally favor the NE privates – and all is right with the world.</p>