What does the US News peer-assessment score really measure?

<p>I often use the Peer Assessment score to tell which colleges are overrated or underrated. I know it's not really the best method, but the general trend <em>should</em> run from highest PA score to lowest. When a lot of colleges don't fit the trend (and I mean a lot), I think that says something. Like Berkeley, which I believe should be at the top, and WUSTL, which I think should be lower. Yeah, it's the wrong way to interpret rankings, but I think the PA score reveals something about a school's reputation.</p>

<p>The Peer Assessment survey is explicitly for undergraduate programs only. There are several different surveys; each college receives one that lists its peer institutions according to its US News category. The survey explicitly asks for a rating of academic quality based on quality of faculty, record of scholarship, quality of curriculum, and quality of graduates. If the President, Provost, or Director of Admissions is unfamiliar with a school, they are asked to mark "don't know." Ratings run from 1 (marginal) to 5 (distinguished). There is an option to nominate exemplary undergraduate programs in specialty areas (such as first-year experience). It appears that colleges are able to rate themselves. The list may include roughly 100-300 colleges.</p>

<p>It measures name recognition from back when the folks filling out the form went to school. It's no more up to date than a Bond movie.</p>

<p>"The name recognization at the time when those folks filling form went to school. It is not even more updated than a bond movie."</p>

<p>Even if this is true, it also points us to something else that we can probably take as fact: colleges don't really change year to year, and change at each college takes a lot of time. That is the exact opposite of the US News overall rankings, in which colleges can jump literally 10 spots one year and then fall 10 spots two or three years later. Hence, a lot of the peer assessment scores have stayed roughly the same over time.</p>

<p>And remember, a lot of job recruiters also went to college around the same time (or perhaps a little later) as the people doing the peer assessment scores.</p>

<p>I do think that small schools that are not close to major cities suffer in the peer assessment ratings. Mini has flagged one... Whitman, and there are others. (I almost always add a couple of tenths for Midwest LACs and western unis.)</p>

<p>However, the folks who do these ratings have very strong professional networks. They pay attention to what is going on at peer schools, particularly in their areas of expertise, and often visit other campuses for conferences, benchmarking, departmental evaluations, and (grin) job interviews. Alexandre makes an excellent point about not differentiating between schools' strength based on two or three tenths, but I think the peer assessment is a hell of a lot more meaningful than the overall USNWR ranking and a useful data point.</p>

<p>I agree with you that the PA score changes slowly. Ever heard the saying that reputation lags behind reality?</p>

<p>But perception is not, and should not be, the whole story of a ranking. Colleges do change year to year. One year of bad recruitment, poor placement, weak faculty hiring, or stalled facility improvements, and you should fall or climb accordingly. As for the drastic movement you mentioned, that should be the exception and won't happen often. But you never know: because schools are sometimes close in almost everything, a little drop can put you behind many of your peers.</p>

<p>The points are: don't take the rankings literally, as everyone has agreed, and don't blame the rankings for being inconsistent with the peer assessment.</p>

<p>The folks you mentioned don't have enough knowledge to rank even 20 colleges, let alone the top 50 or 100.</p>

<p>But they are not required to know that much; the pollster can sample a large enough pool to correct for errors in individual perception. And there comes the problem: whom you sample, how many you sample, how you sample... You know what I mean. This peer assessment is just a partial reflection of a school's reputation; it is regional, scoped, and non-scientific, but useful in some ways depending on what you are looking for.</p>
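<p>To illustrate the averaging argument (a toy Python sketch with made-up numbers; nothing here reflects the actual US News methodology), here is how the mean of many noisy 1-5 ratings settles toward an assumed "true" quality as the sample grows:</p>

<pre><code>import random

random.seed(0)

TRUE_QUALITY = 3.8   # hypothetical "true" academic quality on the 1-5 scale
RATER_NOISE = 0.8    # assumed spread of individual raters' perceptions

def survey_mean(n_raters):
    """Average n noisy 1-to-5 ratings of the same school."""
    ratings = [min(5.0, max(1.0, random.gauss(TRUE_QUALITY, RATER_NOISE)))
               for _ in range(n_raters)]
    return sum(ratings) / len(ratings)

for n in (5, 25, 100):
    print(f"{n:>3} raters -> mean score {survey_mean(n):.2f}")
</code></pre>

<p>With 5 raters the mean wanders noticeably; with 100 it sits close to the assumed true value, which is the whole case for large samples. Of course, that only corrects random error, not the shared biases discussed above.</p>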

<p>Alwaysthere, universities do not change year to year. They barely change over the course of a decade. How can a university with 1,000+ professors and 10,000+ students change year to year? Schools like Harvard, Penn, Cornell, Columbia, and Johns Hopkins have faculties of 2,000+ and over 20,000 students. Schools like Michigan and Cal have over 3,000 profs and over 30,000 students. Those schools do not change overnight.</p>

<p>I know that SAT scores and GPAs are going up nationwide,
acceptances to the very top schools are getting much more competitive, and students who formerly may have been going to Pomona or Northwestern are now going to Washington State or Missoula.
I wonder how much this affects day-to-day life?
Do profs still grade on a curve, as they do in my daughter's college?
Is the bar raised, or are more people stepping over it?</p>

<p>We may define change differently.</p>

<p>But let's see. First, a school may see 2-5% faculty turnover in a year (retirements, new recruitment, leaves, faculty not teaching), and quite often upwards of 10%.</p>

<p>Second, the student body changes by 20% a year, and if you count the graduate schools, at least 30%.</p>

<p>Third, research grants change, sometimes drastically. Courses change. The job market changes, so placement changes.</p>

<p>The change is there, sometimes not measurable, sometimes not perceptible, but the data can show it one way or another. Considering that schools in the same class are REALLY close in quality, a shift of several spots in the rankings is not unusual.</p>
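<p>To make that last point concrete (a toy simulation with invented scores, not real US News data): if ten schools sit only a few tenths apart and each year's underlying data wobbles slightly, the ranking reshuffles on its own:</p>

<pre><code>import random

random.seed(1)

# Ten hypothetical schools whose "true" composite scores sit 0.3 points apart.
schools = {f"School {chr(65 + i)}": 90.0 - 0.3 * i for i in range(10)}

def yearly_ranking(noise=0.5):
    """Rank the schools after adding small year-to-year measurement noise."""
    noisy = {name: score + random.gauss(0, noise) for name, score in schools.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

for year in (1, 2, 3):
    print(f"Year {year}: " + ", ".join(yearly_ranking()))
</code></pre>

<p>Nothing about the schools changed between years; only the noise did, yet several of them swap places each run.</p>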

<p>I think things are more volatile within the top 10 and within the top 25, but it's rare that a school goes from the top 10 to the bottom 15 of the top 25, etc. I think the peer assessment scores are a pretty good measure of where a school really sits because, unlike the rankings, they don't change drastically for similar institutions (e.g., Columbia and Penn have similar PA ratings even though they sit farther apart in the rankings).</p>

<p>It is just a reflection of what the academic deans of other schools think about a particular school. Clearly...a subjective component!</p>

<p>Alexandre, do you agree that the Peer Assessments are given too much weight in proportion to the other criteria in the USN&WR rankings? IMHO, it should make up about 15% of the score, since personal prejudice and other intangibles can be strong influences. Because some of the other numbers can be manipulated by the colleges and universities, more concrete categories such as Average Freshman Retention Rate and Acceptance Rate should be given greater credence or heft. Your thoughts?</p>

<p>Actually, I think that the Peer Assessment score should count for 50% of the total ranking, with 25% going to resources (class size, faculty availability, spending, endowment per student, etc.) and 25% to the quality of the student body.</p>
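<p>For what it's worth, here is what the two proposed weightings would do to a single school's composite score (a hypothetical sketch: the component scores are invented, and the split of the remaining 85% in the 15%-PA proposal is my own guess, since it wasn't specified):</p>

<pre><code># Hypothetical component scores for one school, each on a 0-100 scale.
components = {
    "peer_assessment": 88.0,
    "resources": 75.0,     # class size, faculty availability, spending, etc.
    "student_body": 92.0,  # quality of the incoming class
}

# The 50/25/25 proposal vs. the 15%-PA proposal above.
proposals = {
    "PA at 50%": {"peer_assessment": 0.50, "resources": 0.25, "student_body": 0.25},
    "PA at 15%": {"peer_assessment": 0.15, "resources": 0.425, "student_body": 0.425},
}

for name, weights in proposals.items():
    composite = sum(weights[key] * value for key, value in components.items())
    print(f"{name}: composite score {composite:.1f}")
</code></pre>

<p>The point of the exercise: with components this close, even a big swing in the PA weight moves the composite by only a point or two, so the argument is really about which component you trust, not about dramatic rank changes.</p>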

<p>People do not understand this, but the peer assessment score is actually quite accurate and very telling. I do believe that the peer assessment score should be more seriously regulated, but it is the best indicator of academic excellence. </p>

<p>Retention rates and alumni donation rates are telling indicators, but they do not measure quality of education.</p>

<p>Don't you think it's a question of who judges the judges, with regard to Peer Assessment?</p>

<p>Who judges the judges?
You mean the college board of trustees?</p>

<p>Good point Emeraldkitty!</p>

<p>Whatever its merits, peer assessment is the most biased/subjective calculation in any of the ratings systems. Moreover, I think it is the most subject to regurgitation: a fashion show/beauty contest judged by the contestants themselves. I wonder how many administrators judge their own offices to be inferior?</p>

<p>Perhaps they are all somewhat self-hating and therefore unbiased, but my guess is that they are, for the most part, egotists incapable of any sustained self-critique, or critique as such; or do they tend toward humility and self-abnegation?</p>

<p>Put down the thesaurus.</p>

<p>But Barron, without it, how will I ever rise to the level of witty quips, poignant observations, and grammatical dexterity so eloquently displayed in your curt and droll remark, "put down the thesaurus"?</p>

<p>Certainly I should take advantage of all the help I can get, simply to be able to keep up. Yikes.</p>

<p>Still, thanks for the advice... or decree, depending on how you meant it (you must be the BIG-STRONG type... it's so cute, and domineering!)</p>

<p>I agree that the peer assessment must be well regulated. But most universities are accurately rated; I can only think of a handful of instances where a university is truly underrated. If you consider that a university probably gets rated by over 100 deans and scholars, then even if 10 or 15 are biased one way or the other, the 100+ remaining participants are probably giving fair and accurate ratings.</p>
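<p>A quick back-of-the-envelope check on that claim (a toy simulation, all numbers assumed, one hypothetical school): 15 raters marking a school a full point low among 115 total raters move the average by only about 0.13 on the 5-point scale:</p>

<pre><code>import random

random.seed(2)

TRUE_QUALITY = 4.0          # assumed "true" quality of one hypothetical school
N_FAIR, N_BIASED = 100, 15  # the 100+ fair raters vs. the 10-15 biased ones
BIAS = -1.0                 # assume each biased rater marks it a full point low

def clamp(x):
    """Keep a rating inside the survey's 1-5 range."""
    return min(5.0, max(1.0, x))

fair = [clamp(random.gauss(TRUE_QUALITY, 0.5)) for _ in range(N_FAIR)]
biased = [clamp(random.gauss(TRUE_QUALITY + BIAS, 0.5)) for _ in range(N_BIASED)]

mean = sum(fair + biased) / (N_FAIR + N_BIASED)
print(f"Mean with {N_BIASED} biased raters: {mean:.2f} (true quality {TRUE_QUALITY})")
</code></pre>

<p>That said, this only holds if the bias is confined to a minority; if most raters share the same outdated impression, as argued earlier in the thread, no amount of sample size fixes it.</p>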