<p>Harvard, Princeton, and MIT are 4.9, Caltech is 4.7, Duke is 4.6, and I think Dartmouth is 4.4 (not sure though)</p>
<p>I'm wondering: most people say peer assessment is usually stable from year to year and is the most accurate number in the college rankings... yet the scores all changed this year</p>
<p>Check the peer assessment for the Top 120. Michigan, Brown, and Rice are not alone.</p>
<p>Let's just look at the top 40. 10 of them went down in reputation from one year to the next: Penn, Brown, Rice, Vandy, CMU, Michigan, Lehigh, Brandeis, Case Western, and Boston College. That's 25% of them. 6 of them experienced a decline in overall ranking (I am sure the new peer rating played a role).</p>
<p>In contrast, of those top 40 only 1 (UC San Diego) went UP in peer assessment.</p>
<p>Peer assessment has often been associated with grad school reputation, and for good reason: grad schools make headlines on a regular basis, especially in education journals. </p>
<p>Rice, Brown, and Dartmouth are not known for, and don't even emphasize, grad students or grad schools, so they are less likely to benefit from splashy coverage of grad programs they don't emphasize or don't have.</p>
<p>Harvard, Penn, Stanford, and Duke are known for their grad schools as well as their undergrad schools; some are even thought of as grad school destinations to a greater degree than they are undergrad destinations. </p>
<p>Michigan is another matter; hard to know what that's all about.</p>
<p>That's normal. Brown has always hovered between 4.4 and 4.5. Michigan has always hovered between 4.5 and 4.6. Penn has hovered between 4.4 and 4.6. Duke between 4.5 and 4.6. Cal and Caltech between 4.7 and 4.8. Columbia and Chicago between 4.6 and 4.7. It all depends on the year. Besides, the peer assessment score is not an exact measure, it is more of a rough estimate. There is almost no difference between a 4.0 and a 4.3 or between a 4.4 and a 4.7.</p>
<p>
[quote]
Besides, the peer assessment score is not an exact measure, it is more of a rough estimate. There is almost no difference between a 4.0 and a 4.3 or between a 4.4 and a 4.7.
[/quote]
</p>
<p>That may be the property this measure is supposed to have (I'm not sure I agree, but for the sake of argument I'll accept that assertion). However, if this rough-estimate model, where colleges "hover" around a band of scores, were working properly, we'd see movement both up and down from year to year. This year, most movement has been down. In other words, even if that's the way peer ratings work, something is broken this year.</p>
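<p>To put a rough number on how lopsided that movement is (purely an illustrative sketch, using only the "10 down, 1 up" counts quoted earlier in the thread, and assuming that if scores really just "hover," a school that moves is as likely to move up as down), here is a quick sign-test calculation in Python:</p>
<pre><code>from math import comb

# Illustrative sign-test sketch. The only inputs are the counts quoted
# earlier in the thread; the 50/50 "coin flip" null model is an assumption.
movers = 11          # top-40 schools whose peer score changed
moved_down = 10      # of those, how many dropped

# Probability of at least this many drops if each move were equally
# likely to go up or down
p_one_sided = sum(comb(movers, k) for k in range(moved_down, movers + 1)) / 2**movers
print(f"P(>= {moved_down} of {movers} moves are drops | 50/50): {p_one_sided:.4f}")
# ~0.006
</code></pre>
<p>Under that coin-flip assumption, ten or more drops out of eleven movers would happen well under 1% of the time, which is the sense in which this year's pattern looks like more than ordinary hovering.</p>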
<p>If it is such a "rough estimate," is it in any way appropriate to weight it as 25% of a ranking that is not specifically meant to measure rough popular estimates?</p>
<p>It is easy to criticize the other criteria that USN uses (its methodology is Byzantine to me), but this one does not even have the pretense or veneer of objectivity or usefulness.</p>
<p>But FountainSiren, it is the only criterion that attempts to measure academic quality. Graduation rate measures the intensity of the education, the alumni giving rate measures a university's ability to approach alums and get them to give money, etc. At least the peer assessment score, which admittedly has its own flaws, targets the all-important question of academic reputation and quality.</p>
<p>Not including presidential elections, when would you find a survey in which half of those surveyed do not even respond to be valid, or even useful? Many, it seems, are now refusing to respond for ethical reasons: it's a mutiny!</p>
<p>Fountain, in fact many, many surveys get low response rates, and researchers have to live with it. Many studies have been published with data from "less than half" of those surveyed. Of course, one has to do reasonable checks to give confidence that the respondents are reasonably representative of the group surveyed.</p>
<p>I think response rates are a concern, particularly when they've continued to drop and when the response data seems to be unstable. However, it isn't that "less than 50%" is itself a problem. Also, I thought the response rate was 57%.</p>
<p>While the survey return rate is pretty pathetic, the real issue is who does the actual replying and why the person polled is supposed to be an "expert".</p>
<ol>
<li><p>Knowledge
Are we kidding ourselves to believe that the Dean or Provost of Grinnell knows enough about both Swarthmore and Sewanee to fill out the survey with recent knowledge? </p></li>
<li><p>Source of information
Can we not assume that the best source of information for the "experts" is simply to read last year's issue of the report? The peer assessment becomes a self-fulfilling prophecy. Nobody wants to be a fool, so why not err on the safe side? </p></li>
<li><p>Integrity
It is a known fact that the survey has been manipulated and is marred by the most abject geographical cronyism. Would it surprise anyone if all Seven Sisters schools gave one another a full five and made sure to give low grades to competing schools? How else could you explain some of the ridiculously high peer rankings? </p></li>
<li><p>Identity of the person filling out the form
How many deans or provosts delegate this exercise to an obscure secretary or an intern? Considering how valuable the time of academics truly is, one has to wonder how important the survey really is. </p></li>
</ol>
<p>There IS an easy solution to all of this: make the survey public on a website. This way the information would be easily verifiable by ALL. After all, schools should not be afraid of having their "opinions" scrutinized and verified for accuracy and integrity ... unless the data is better kept secret for reasons that are not hard to guess.</p>