Peer Assessment Rank

<p>
[quote]
Do you have any factual proof to the contrary or just those unfounded opinions that top researchers cannot be good teachers?

[/quote]
</p>

<p>No, but there is always the fact that at most top schools the researchers aren't teaching, and thus it matters little to undergrads whether or not their school developed a new kind of Velcro.</p>

<p><a href="http://www.marco-learningsystems.com/pages/kline/prof/profchap5.html%5B/url%5D"&gt;http://www.marco-learningsystems.com/pages/kline/prof/profchap5.html&lt;/a>
<a href="http://www.getuponline.org/casualization/casualization_chronicle.htm%5B/url%5D"&gt;http://www.getuponline.org/casualization/casualization_chronicle.htm&lt;/a>
<a href="http://www.yalealumnimagazine.com/issues/99_07/GESO.html%5B/url%5D"&gt;http://www.yalealumnimagazine.com/issues/99_07/GESO.html&lt;/a&gt;&lt;/p>

<p>Penn: 40% of classes taught by full-time professors<br>
Yale: 30% taught by full-time professors</p>

<p>barrons,
You already know that I think that the PA is of very little value to most students and has no place in the college rankings system, but I want to give you a chance to defend your position. How would you suggest that a student looking for a college use the PA scores if he/she is not interested in the technical fields and/or performing research while in college?</p>

<p>The top schools on the peer assessment list look pretty good (this is probably how the world would rank the US schools):</p>

<ol>
<li>Harvard University 4.9</li>
<li>Massachusetts Institute of Technology 4.9</li>
<li>Princeton University 4.9</li>
<li>Stanford University 4.9</li>
<li>Yale University 4.9</li>
<li>California Institute of Technology 4.7</li>
<li>University of California-Berkeley 4.7</li>
<li>University of Chicago 4.7</li>
<li>Columbia University 4.6</li>
<li>Cornell University 4.6</li>
<li>Johns Hopkins University 4.6</li>
<li>Duke University 4.5</li>
<li>University of Michigan-Ann Arbor 4.5</li>
<li>University of Pennsylvania 4.5</li>
</ol>

<p>Norcalguy,</p>

<p>Who is "the world?"</p>

<p>
[quote]
Absolutely. You bet on the wrong horse.

[/quote]
</p>

<p>Nope, Barrons! One needs a modicum of Critical Reading aptitude to be able to recognize the "right" horse or the ... use of sarcasm.</p>

<p>
[quote]
Who is "the world?"

[/quote]
</p>

<p>Norcalguy.</p>

<p>"this is probably how the world would rank the US schools"</p>

<p>and being unable to locate half of them on a map of the US. Asking the "world" to rank the best US undergraduate business schools would probably yield the exact same list. :)</p>

<p>
[quote]
peer assessment is really reflective more of a schools grad prestige than undergrad. Rice,Georgetown, Tufts, William & Mary, Wake Forest are all underrated compared to some of the larger State U's that focus mainly on Grad students.

[/quote]
</p>

<p>
[quote]
peer assessment is a manipulable, arbitrary conferral of generalized impression based on the vagaries of hearsay and speculation within the industry</p>

<p>and, as already noted, has everything to do with research output and prominent graduate faculty, factors which would be hard-pressed to have less to do with undergraduate experience.

[/quote]
</p>

<p>It's much more complicated than to say the PA score is really only a measure of grad school prestige or faculty quality. I agree it's subjective, but it appears to be a subjective meshing of research/faculty prestige PLUS undergraduate quality. The proof? The scores themselves don't quite make sense if it were truly only a faculty quality measure.</p>

<p>The two obvious standouts are Berkeley and Michigan. Berkeley, especially, would be in the 4.9 group if it were truly only a grad school ranking. As it is, it appears to get an arbitrary "punishment" ding for undergrad to account for the 4.7. Another example is Wisconsin and UT-Austin. Wisconsin is only rated at 4.2, and UT-Austin is in the same 4.1 group with Georgetown, Rice, Vanderbilt, etc., schools they HANDILY beat in terms of graduate programs and faculty strength. Comparing Rice vs. UT-Austin, UT is ranked higher in just about EVERY academic program they both share, and by quite a high margin in most cases. Not to mention, it has many more highly ranked programs to begin with, and across a much broader academic spectrum. So this is an example of Wisconsin and UT-Austin getting some sort of subjective nudge down due to their admittedly less selective undergraduate programs (in the case of UT, it's required by state law to be at least 90+% in-state at the undergrad level).</p>

<p>There are also examples of schools getting a boost BECAUSE of their undergrad strength: UVA, Brown, and Dartmouth should not be ranked where they are by this measure (or certainly not over UT, Wisconsin, and UCLA!!) if it is truly a "research" measure only. So, while it may be true the PA is indeed subjectively biased toward strong research/grad schools, there are clearly corrections made for the quality of the undergraduate college.</p>

<p>Xiggi, perhaps I did not explain myself properly. I never said the Peer Assessment score measures quality of undergraduate education. I have in fact always said that such a thing cannot be measured. Education, particularly at the university level, is a highly personal undertaking and it varies from individual to individual. I do, however, believe that the Peer Assessment score measures perceived quality of undergraduate institutions (not education) based on the strengths of their academic departments, the quality of their faculties and facilities, ties to academe, research and industry, and the wealth of resources. How good an education one gets, on the other hand, depends almost entirely on that person and how much effort they put into their education.</p>

<p>Xiggi, I think you've gone overboard in defending your position. What the "peer assessment" measures is really quite clear: it's "reputation." Simple as that. Dedication to teaching, offered at one point as an example of what might factor into a school's reputation, is not part of the definition of that term; it is merely (and expressly) an example of one factor which might affect a school's reputation.</p>

<p>Like it or not, reputation is important to people. And I agree with Norcalguy (being a "norcal" guy myself): the list of top PA schools strikes me as a pretty accurate read of how the generally knowledgeable (but not CC-obsessed) public would rank these schools. Those probably are the schools with the top academic reputations in the country, or near enough.</p>

<p>^^^Right. The peer assessment itself is not a measure of undergraduate quality or quality of teaching but simply reputation, one measure that might go into one's consideration when choosing an undergrad just like SAT scores or alumni giving rate or graduation rate.</p>

<p>
[quote]
Xiggi, I think you've gone overboard in defending your position. What the "peer assessment" measures is really quite clear: it's "reputation." Simple as that.

[/quote]
</p>

<p>Kluge, rather than worrying about the definition of the peer assessment, why not spend some time making up your mind! Why did you bother writing an entire paragraph about the elusive "quality of education" if the key to understanding the value of the peer assessment was confined to its measurement of "reputation?" Funny how the "key" word was not even mentioned!</p>

<p>Lapsus linguae or lapsus calami? </p>

<p>
[quote]
The thing is, most of the factors included in the ranking aren't directly related to the quality of education at the respective colleges anyway. At least peer assessment, while subjective, is directly related to the purpose of the rankings. I'd argue that student selectivity is a valid predictor of academic quality (although I'd quibble with acceptance rate). But "alumni giving?" "Class sizes?" Just how directly related to academic quality are those things? And are things like graduation rate and retention even remotely valid distinctions between top 20, or even top 50, schools? "Financial resources" includes money spent on research, which may or may not be relevant to undergraduate education, etc. A lot of the factors included are present based on the assumption that they will - indirectly - affect overall academic quality. But peer assessment is a direct opinion of the success of the school in actually achieving academic quality. Imperfect, but valuable in its own right.

[/quote]
</p>

<p>
[quote]
Xiggi, perhaps I did not explain myself properly. I never said the Peer Assessment score measures quality of undergraduate education. I have in fact always said that such a thing cannot be measured. Education, particularly at the university level, is a highly personal undertaking and it varries from individual to individual. I do, however, believe that the Peer Assessment score measures perceived quality of undergraduate institutions (not education) based on the strengths of their academic departments, the quality of their faculties and facilities, ties to academe, research and industry and the wealth of resources. How good an education one gets, on the other hand, depends almost entirely on that person and how much effort they put into their education.

[/quote]
</p>

<p>Alexandre, I can't disagree with your points, especially about the term "perceived quality." This allows for the perception and the reality of the strengths of their academic departments, the quality of their faculties and facilities, ties to academe, research and industry, and the wealth of resources not to have to be the same thing!</p>

<p>The results of the PA are an unmistakable sign that perception is in the eye of the beholder. A perception that could be corrected with a bit of attention to the data that starts after the second column of the rankings. But that is obviously not the objective of the surveyor or the surveyees. Didn't Morse recognize that the objective of the PA is simply ... to level the playing field and boost the rankings of the large public research schools?</p>

<p>But what do I know? Since presidents of schools who are asked to complete the PA survey do seem to know better, why not listen to this voice: "Moravian College, founded in 1742, one of America's oldest and most respected liberal arts colleges, feels the use of this highly subjective and highly manipulated instrument undermines the college selection process and does not contribute to the common good," said Christopher M. Thomforde, president. "We agree with the criticisms that this survey provides inaccurate information and distorts perceptions of the quality of instruction found at America's colleges and universities."</p>

<p>Joshua, </p>

<p>Only 58% of these presidents and deans respond to the survey. And they themselves have said they have no idea about other schools, especially their undergrads (at least a few, as posted on this thread). And of course, they are from over 100 schools, not just the top schools in America. A dean from the 100th-ranked school has as much weight as a dean from a top-10 school.</p>

<p>Don't make boisterous claims without following the thread and paying attention to the facts.</p>

<p>Xiggi, what kind of "reputation" did you think I was talking about? A reputation for fine architecture? Water quality? Football prowess? Their students' good looks? The "reputation" of these academic institutions in the context of "peer assessment" is their reputation for academic excellence. You can argue methodology, significance, accuracy, even deep dark conspiracies if you like, but the intended focus of this factor - a university's reputation for academic excellence - isn't really a tough question to figure out. This is simple - so simple I shouldn't even have to write this. You're really stretching to make your point, for no reason I can understand.</p>

<p>Though I've posted this on previous threads discussing the Peer Assessment, I think it bears repeating. In sum, the problems with the Peer Assessment are at least twofold:</p>

<p>1) An inherent bias with such a survey<br>
2) The impossibility of accurately "grading" every university out there</p>

<p>The closest analogy I proposed in the past was citing similar weaknesses with the NCAA College Football Coaches' Poll (here are some of my previous posts):</p>

<p>
[quote]
I don't think that anyone is questioning the level of intelligence (or resumes) of those who vote for the peer score survey.</p>

<p>I think a relevant analogy is the NCAA Football Coaches' poll for the BCS Championship. Each coach puts in his vote, and this poll is a critical part of the BCS rankings (it isn't the entire BCS ranking, but it is a critical component, much like the peer score is a critical component in the USNWR ranking).</p>

<p>Now, there is an inherent bias in this poll. Coaches have their own agendas when they vote (whether it be to boost their own strength of schedule or boost their own conference members). That is why at the end of this year's football season, the OSU football coach declined to vote in the last Coaches' poll: he felt there was a direct "conflict of interest" (i.e., voting between Michigan vs. Florida), which obviously had direct National Championship implications (it's a lose-lose for him: if he votes for Michigan, he gets criticized; if he votes for Florida, he gets criticized). Further, one of the other criticisms of this poll is that no active D1-A head coach is going to find the time to watch and analyze every Top 25 team in the country --> in point of fact, they are rarely looking at anything but film on the upcoming opponent (e.g., even Michigan's coach declined to comment on Florida's team because he just "hasn't seen them play") --> and yet, these coaches are asked to rank the Top 25 every week.</p>

<p>The point? No one will argue that these coaches understand the game inside and out, better than the average person ever will. Hundreds of hours of experience and film. But so what? That doesn't mean that these coaches won't be affected by personal/professional bias --> they are rational people and will vote in a manner that best benefits them. Period.</p>

<p>So in much the same way, the folks who vote in the peer score will vote with their own personal bias. There is no escaping the inherent bias embedded in such "polls" (be it the BCS ranking or the peer score ranking) --> each person will vote in a manner that best benefits them. Who cares if the person voting has a resume a mile long? It still doesn't give me any comfort on why their opinion matters on the relative merits of a Dartmouth vs. a University of Wisconsin. What makes the PA even worse is that there is ABSOLUTELY NO transparency.

[/quote]
</p>

<p>
[quote]
And this is the fundamental problem I have with the peer assessment, it's not a knock against the intelligence or experience of those participating, simply speaking:</p>

<p>1) It's not their job to know the differences between hundreds of colleges<br>
2) Even if it was their job, there would be an inherent bias anyway</p>

<p>Furthermore, as mentioned before, the other fundamental problem with the Peer Score is lack of transparency:</p>

<p>3) Who are the people actually voting? Why don't they disclose who they are? And more importantly,<br>
4) How did they vote?<br>
5) Why don't they make these peer rankings public, i.e., who ranked whom and how they ranked the colleges? (I have a strong suspicion that if these votes/rankings were made public and each vote had a name attached to it, the voters would either decline to be involved or the outcome would be different.)<br>
6) Since there is no transparency, this is the ultimate "X"/"fudge" factor --> adding/subtracting a couple of tenths of a point here and there until you get the list you like (i.e., ensuring not only some variance year-over-year, but that you are effectively in control of that variance).

[/quote]
</p>

<p>joshua007,<br>
It appears to this reader that, somewhere along the way, your education did not teach you original, critical thinking. I suggest you stop deferring to everyone else's posts and the opaque opinions of unnamed academic responders in the PA survey. It appears that you are not even attempting to understand the problems associated with this measure, but blindly accept it because it reinforces your personal view about the research-intensive colleges that you and your UK science/engineering colleagues know about.</p>

<p>Forget the names of the colleges being ranked and think about what the PA is supposed to measure, e.g.:<br>
1. Can Peer Assessment be defined in a way that all agree on?<br>
2. How is PA supposed to be compiled? Is there any standard that different responders are to use in assigning their grades?<br>
3. Who is doing the grading? Over 1,300 colleges get the survey and only 58% respond, which means that over 546 colleges did NOT respond. In addition, we don't know who the responders are.<br>
4. What is the legitimacy of the relative grades, e.g., is there any potentially nefarious grading going on? Is the University of Maryland marking down the University of Miami in an attempt to end the statistical tie between these two schools?</p>

<p>Frankly, it is hard to find the good in Peer Assessment scoring unless you are someone who is interested in a career in academia and want to know the colleges that have the highest profiles in the technical research areas. In your opinion, what else is useful about PA?</p>

<p>I suggest you read "The Wisdom of Crowds". Even the individually partially informed make good estimates of the facts when in a moderately large, diverse group.</p>

<p><a href="http://www.randomhouse.com/features/wisdomofcrowds/">http://www.randomhouse.com/features/wisdomofcrowds/</a></p>

<p>
[quote]
Over 1300 colleges get the survey and only 58% respond which means that over 546 colleges did NOT respond. In addition, we don't know who the responders are.

[/quote]
</p>

<p>Response rate is measured at the individual level, not the institutional level. The response rate does not reveal how many colleges responded. USNews said that over 4,000 INDIVIDUALS were surveyed. See page 78 in the 2007 ranking volume, or post #6 on this thread.</p>

<p>hoedown,<br>
Thanks for the clarification. So if 4,000+ individuals were surveyed, that means that at least 2,320 responded and at least 1,680 did not. It sure would be nice to know which people (and schools) are in each group. Not to mention how nice it would be to know what they said and how they graded.</p>
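<p>The response-rate arithmetic in the last couple of posts can be sketched quickly. This is just a back-of-the-envelope check, assuming the figures quoted above (roughly 4,000 individuals surveyed and a 58% response rate, attributed in the thread to the 2007 USNWR ranking volume); the function name is purely illustrative:</p>

```python
# Back-of-the-envelope check of the PA survey arithmetic discussed above.
# Assumed inputs: ~4,000 individuals surveyed, 58% response rate.

def survey_counts(surveyed: int, response_rate: float) -> tuple[int, int]:
    """Return (responded, did_not_respond) for a given response rate."""
    responded = round(surveyed * response_rate)
    return responded, surveyed - responded

responded, non_responders = survey_counts(4000, 0.58)
print(responded, non_responders)  # 2320 1680
```

<p>Applying the same 58% rate to the roughly 1,300 institutions mentioned earlier gives about 546 non-responders, which is where that figure in the thread comes from, though as noted above the published rate is measured at the individual level, not the institutional level.</p>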