<blockquote>
<p>New research raises additional questions about the "reputational" survey that is worth 25 percent (more than any other factor) in the U.S. News & World Report rankings of colleges.</p>
<p>What the research found is that the reputational scores don't correlate with changes in factors such as resources or graduation rates, but do correlate with the previous year's rankings. In other words, the way you get a good reputational score -- and in turn a good ranking -- is to already have a good ranking.</p>
</blockquote>
<p>^ Proven? Perhaps not, but it is indeed a big “duh.” I prefer to think of it as a negative feedback system that helps keep rankings from jumping too much from year to year.</p>
<p>Are you talking about the Peer Assessment survey? That is one of the most useful factors in the US News rankings. About 90% of the Peer Assessment rating can be accounted for by hard data. There is a sound basis for the Peer Assessment ratings.</p>
<p>Not true. A variable is superfluous only if the same information enters the model twice. So this would prove PA is superfluous only if you included both the previous year’s ranking AND PA. What it says instead is that PA measures something which, correlated as it may be with other elements of the ranking, is mostly correlated with itself. This is a good thing -- it means PA is capturing something not found in the other numbers. It also means PA is somewhat static, which is perfectly reasonable and expected, since reputation is slow to change and is largely a lagging indicator relative to changing quality. </p>
<p>Seriously, this is not news, and it can quite easily and reasonably be spun to sound positive for PA -- and I’m no big fan of the PA or its methodology.</p>
<p>Certainly you would expect PA to remain relatively constant. Why would you expect it to change much? It’s not as though colleges change dramatically year to year. They barely change decade to decade.</p>
<p>The correlation between PA and the SAT 25th percentile is very high, about +.74 among the top 100 universities in US News that report SAT. You can calculate it yourself using an Excel spreadsheet. Just enter the data from US News and do a Pearson correlation between the two columns of data. Seeing is believing. You don’t have to rely on anyone else’s research. Another way to say this is that selectivity accounts for roughly half the variance in the PA rating (.74 squared is about .55).</p>
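<p>The same check can be done outside a spreadsheet. A minimal sketch in Python, using the standard library's <code>statistics.correlation</code> (Python 3.10+) and made-up illustrative numbers, NOT actual US News data:</p>
<pre><code>
# Sketch of the DIY correlation check described above.
# The numbers below are fabricated for illustration only.
from statistics import correlation

# Hypothetical SAT 25th-percentile scores and PA ratings for six schools.
sat_25th = [1490, 1450, 1400, 1350, 1300, 1250]
pa_rating = [4.9, 4.7, 4.4, 4.1, 4.0, 3.6]

r = correlation(sat_25th, pa_rating)  # Pearson's r
print(f"r = {r:.2f}, shared variance (r squared) = {r * r:.2f}")
</code></pre>
<p>Squaring r gives the fraction of variance in one column that the other accounts for, which is the ".74 squared" step above.</p>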
<p>Correlation is not causation. It’s somewhat obvious that better students tend to select what are thought to be better colleges, which you would expect to have a higher PA. But there are exceptions too.</p>
<p>It is a good thing that PA does not change dramatically with random fluctuations in a few statistics. A good analogy is a college student’s overall cumulative gpa. It doesn’t change much as a result of one grade or one semester. It captures the big picture over time. The more history, the more it resists change. This is the way it should be. But a long stretch of poor performance WILL cause it to change.</p>
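<p>The GPA analogy is easy to put in numbers. A minimal sketch with hypothetical figures (a 3.5 average, one 2.0 grade) showing that the more history behind a cumulative average, the less one new value can move it:</p>
<pre><code>
# Tiny illustration of the cumulative-GPA analogy above,
# with made-up grades.
def updated_average(old_avg, n_old, new_value):
    """Cumulative average after folding one new value into n_old old ones."""
    return (old_avg * n_old + new_value) / (n_old + 1)

# One bad grade (2.0) against a 3.5 average:
early = updated_average(3.5, 4, 2.0)   # after 4 courses -> 3.2, a big drop
late = updated_average(3.5, 40, 2.0)   # after 40 courses -> ~3.46, barely moves
</code></pre>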
<p>So what is causing the identified changes in PA? It is not changes in key measures of quality, since there is no correlation with those. Since the correlation is with the previous year’s ranking, the reader is left to decide whether that is a plausible cause. The subject of the next research? ;)</p>