<p>Peer assessment is worthless and biased towards public/research schools. In-between schools get screwed by it.</p>
<p>"In the end, I think the PAs are probably about right, within a few tenths, for most schools, if measuring sheer reputation."</p>
<p>The margin of error for a sample size of 100 is about 10%, for 200 it's about 7%, and for 400 it's about 5%. These are pretty large to be saying that school A is better than school B even if there are 4 or 5 tenths difference. But this is only one kind of error. Is asking college administrators for their opinion of their peers flawed in itself? Is this like hanging around a basketball court and sampling height to determine the mean height of the general population?</p>
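The back-of-the-envelope margins above can be checked with a quick sketch. This assumes the standard 95% confidence formula for a sample proportion, using the worst case p = 0.5; the survey sample sizes are the hypothetical ones from the paragraph, not actual US News figures:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """95% margin of error for a sample proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 200, 400):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 100 gives roughly 10%, n = 200 roughly 7%, n = 400 roughly 5%,
# matching the figures quoted above.
```

Note this only quantifies sampling error; it says nothing about the selection bias the basketball-court analogy raises.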
<p>tarhunt,
When I referenced the "vast majority" in my earlier post, I meant those folks who would be looking for a job in the for-profit world after graduation. I was trying to use language that would indicate the universe of students who were not looking for academic careers, and I would consider that the vast majority of students. I definitely do not limit my view to disciplines like accounting or engineering. A religious studies major, a French lit major, a speech communications major: these are all examples of liberal arts programs where many, many, many of the graduates don't go into their field of study but end up in things like banking, real estate, advertising, etc. What matters to the employer is not that they are experts in religion, French or speech, but that their experiences in college, including in their field of study, helped them build critical thinking skills that can be applied in their jobs. It is this evaluation by employers of how well students think critically and adapt to the working world that is missing in the Peer Assessment. And as PA is currently oriented (toward research grants, # of papers published, awards won, etc.), I'm not sure that it even attempts to make this vital connection.</p>
<p>
[quote]
And as PA is currently oriented (toward research grants, # of papers published, awards won, etc.),
[/quote]
</p>
<p>I'm not sure this is quite the case. </p>
<p>Here's where I'll take up the side of interesteddad, to an extent. I don't think presidents are clueless dolts, but neither do I think that presidents and provosts and admissions officers know who is publishing the most papers on a campuswide basis. Academic publishing is pretty discipline-specific. I also suspect they can and do, at least to some extent, separate research funding from an assessment of undergraduate education. If it were all about research funding and papers published, then I think we'd see liberal arts colleges taking a heck of a beating in these polls. I think it's more a loose sense of who has good resources and good faculty, who attracts smart students, who has a history of churning out impressive grads, who is being recognized in academic circles (and nationally) for excellence and innovation in undergraduate education. Of course a great research reputation serves as a "halo" over some universities (perhaps not wholly justifiably). But I think it's not correct to say publishing and research are the basis for PA ratings.</p>
<p>I think the real question is whether these presidents and such are able to distinguish between the undergraduate and graduate schools of the universities they are evaluating. I doubt they do, because you cannot tell me the undergraduate experience at UCB is 1.2 PA points better than Wake Forest's, or that the undergraduate experience at Wake equals the undergrad schooling at Colorado. They have to get rid of the peer assessment scores, because they depend upon the quality of the graduate programs. And yes, this is a general correlation; you cannot deny it. And yes, I take offense at this, because my school is getting screwed in these rankings. Many of the presidents and whatnot don't even return the forms; someone once posted an article about that.</p>
<p>If you took out PA, this is what the rankings would <em>roughly</em> look like; since I didn't know the exact weighting details, I had to remove it crudely. I like these rankings. Furthermore, they highlight UVA as the cream of the crop among publics for UG, which I 100% agree with.</p>
<p>Harvard University 1
Princeton University 1
University of Pennsylvania 3
Yale University 3
Duke University 3
Stanford University 6
Massachusetts Inst. of Technology 6
Washington University in St. Louis 6
Dartmouth College 6
Northwestern University 10
Brown University 10
California Institute of Technology 10
Columbia University 10
University of Notre Dame 10
Rice University 15
Cornell University 15
University of Chicago 17
Johns Hopkins University 17
Emory University 17
Vanderbilt University 17
Tufts University 21
Georgetown University 21
Wake Forest University 23
Carnegie Mellon University 23
University of Virginia 23
Lehigh University 23
Univ. of Southern California 27
University of California-Los Angeles 28
University of Rochester 28
U of North Carolina-Chapel Hill 28
Brandeis University 28
University of California-Berkeley 28
Case Western Reserve Univ. 33
University of Michigan-Ann Arbor 34
College of William and Mary 34
Boston College 34
Yeshiva University 34
New York University 38
Tulane University 38
Univ. of California-San Diego 38
Rensselaer Polytechnic Inst. 41
Univ. of Wisconsin-Madison 42
Georgia Institute of Technology 42
Univ. of California-Santa Barbara 42
Syracuse University 42
University of California-Irvine 46
U of Illinois-Urbana Champaign 47
University of Florida 48
University of Washington 49
Pennsylvania State University 50
University of California-Davis 51</p>
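The "crude removal" described above can be sketched as follows. This is only an illustration of the general idea of dropping one component from a weighted score and renormalizing the remaining weights; the component names, weights, and scores below are made up for the example and are not the actual US News formula:

```python
# Hypothetical illustration: if an overall ranking score is a weighted
# sum of components, one component (e.g. peer assessment) can be
# removed by dropping its term and renormalizing the other weights.

def score_without(components, weights, drop):
    """Recompute a weighted score after removing one component."""
    kept = {k: w for k, w in weights.items() if k != drop}
    total = sum(kept.values())  # renormalize so weights sum to 1
    return sum(components[k] * w / total for k, w in kept.items())

# Invented numbers, purely for demonstration.
school = {"peer_assessment": 70.0, "selectivity": 90.0, "resources": 85.0}
weights = {"peer_assessment": 0.25, "selectivity": 0.40, "resources": 0.35}

print(round(score_without(school, weights, "peer_assessment"), 2))
```

Sorting schools by the renormalized score would then produce a PA-free ordering like the list above, though any such list inherits whatever errors the crude removal introduces.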
<p>hoedown,
I accept your comments fully-I'm not trying to say that this is how PA is determined-I don't know nor has anyone yet shown much knowledge about its calculation. I am basing part of my comments on what I have learned on CC and from conversations with some acquaintances in the academic world. But your words highlight part of the problem-there is no actual, measurable, or consistent basis for how Presidents, Provosts, & Deans of Admissions grade a school or compare one school to another. Everybody kind of has their own internal standard for making these judgments. Furthermore, their views may be based on information that is either outdated, incomplete, or just plain wrong. </p>
<p>An expression that I often use is that some (most?) of these evaluators are "marching into the future using the rear view mirror." Now, granted some reputations (whether established by a school, a company or even an individual) are established over long periods of time and don't deserve to be swept away in a year or two, but things do change in the world. But, IMO, the PA, with a flawed methodology and inconsistent interpretation, promotes a status quo that may not accurately reflect what is going on in American college education and is almost certainly not responsive to the interests of many important stakeholders.</p>
<p>hawkette:</p>
<p>
[quote]
When I referenced the "vast majority" in my earlier post, I meant those folks who would be looking for a job in the for-profit world after graduation. I was trying to use language that would indicate that universe of students who were not looking for academic careers and I would consider that as the vast majority of students.
[/quote]
</p>
<p>And so would I. But implicit in this issue is whether students choose colleges looking at some sort of long-term financial gain. I know that none of my children have done this. So whether PA is some sort of measurement of value to businesses requires an examination both of its value and of its motive utility.</p>
<p>I think I've made it clear that I do not think PA measures some sort of value-added for the business world. But I would contend that there is a relationship, and perhaps a strong one. I can't prove that, of course. Nor can anyone prove much of anything here. We are talking only in rational probabilities. But if PA tends to measure some combination of student quality, faculty quality, and devotion to undergrad education (which is why I think Dartmouth has such a high PA), then it probably has some relationship, and perhaps a strong one, to the value of grads for various careers.</p>
<p>As there is no national measure of teaching effectiveness, the PA does a pretty good job of at least ranking the overall reputation of the faculty. It is pretty consistent with such numerical measures as major awards won, NAS memberships, and research quality and quantity. For large schools the following has all the important numbers.</p>
<p><a href="http://thecenter.ufl.edu/research_data.html">http://thecenter.ufl.edu/research_data.html</a></p>
<p>hawkette:</p>
<p>About looking into the rearview mirror. As you know, this is life. Practically all current decisions are made, at least in part, on past experience. It's the human condition.</p>
<p>When it comes to schools, I think you'll find a few exceptions. Olin is quite a well-respected engineering school, despite being brand new. I think any school with its characteristics would earn great respect very quickly.</p>
<p>But I will agree wholeheartedly that exceptions are few. Having said that, institutions of higher education change at a glacial pace. It doesn't happen that a school with few faculty members in prestigious academies suddenly has hundreds of them. Schools with average students don't suddenly turn into schools with brilliant students. Class sizes don't suddenly go from an average of 40 to an average of 12.</p>
<p>"For large schools the following has all the important numbers."</p>
<p>Really? Here's all I need to know from your link regarding the methodology behind the top American research universities.</p>
<p>"For the graduate and research instructional dimension, TheCenter provides the number of doctorates awarded and the number of postdoctoral appointments supported; for the undergraduate quality, TheCenter offers median SAT scores as indicators of student competitiveness."</p>
<p>This adds nothing to peer assessment for undergraduate education. In fact, it highlights the problem when mixing grad and ugrad in peer assessment.</p>
<p>tarhunt,
Interesting points. Re your suspicion about a relationship between PA and value-added for the employer, I agree that there may be one. But is this attributable to the inherent qualities of the student and his student peers (my suspicion) or to the faculty and their ability to teach and prepare the students? In my experience, faculty, of course, want their students to learn and to achieve, but their own careers are not heavily judged on this basis (unlike a coach on an athletic team or a typical sales manager working in the for-profit world). Should they be? I'm not sure, but I do know that students and graduates must care deeply about the financial benefits that their college experiences may bring to them. If they don’t, they can't fund their lives because as we both know, even a good PA doesn't pay any bills. </p>
<p>As for changes in reputation, they do move very slowly in the academic world. An analogy for this in the business world might be a company like Eastman Kodak. For decades, Kodak had the dominant franchise in the photography industry and amassed great engineering and manufacturing resources. But the world changed. Fuji came on the scene and showed that someone else could do it too. Then the digital revolution came and new technologies were introduced that replaced many of the traditional products sold by Kodak. If this were the academic world, I suspect that Kodak would still have had a high PA throughout these periods of challenge because it continued to have many strong engineers and manufacturing facilities and still had great internal resources. But that was ultimately irrelevant to the real world which understood that Kodak no longer had a monopoly on great intelligence and engineering and that there were multiple other sources that could supply their needs. Kodak survives today, but in a much different and far weaker position than a decade or two ago. Stories like the decline of Kodak are plentiful in the business world as things are always changing and the world is always evolving. Why should the academic world be any different?</p>
<p>Please read my post first--number of faculty awards, NAS members, research. Those were the numerical items I was looking at.</p>
<p>Hawk- the academic world moves at a pace best described as glacial. Most faculty at better schools stay 20-30 years.</p>
<p>barrons,
I'm not sure that I accept the idea that faculty staying at a school for 20 years is necessarily good or a sign of excellence (if that's what you meant). There are many examples in the business world, in the athletic world, even in the non-profit world, where longevity of service actually leads to lower levels of achievement. For example, in my analogy, I'm sure that the engineers at Kodak were all considered first-rate and among the best in their field, but in the end, it didn't really matter. Others came along with extensions to their work or entirely new ideas and the Kodak film franchise was put under severe pressure. Stability is good, but excellence and innovation are better and my suspicions are that the PA in the academic world reflects more of the former and less of the latter.</p>
<p>If you look at the schools winning the Guggenheims and Fulbrights, there is not much change over the years. Also in reality most academic subjects at the UG level don't change that much. Basic physics, chemistry, english, math, french, german etc are not that changed from 20 years ago despite some exciting discoveries.</p>
<p>hawkette:</p>
<p>I firmly believe that there are three, very important components of "good" education. One is faculty quality, and I believe that has to do with how well the faculty actually structure the learning activities. As you have pointed out and as I have agreed, research excellence does not translate directly to instructional excellence. However, I would be hard pressed to prove that distinguished research faculty are any worse at the job than those who don't research very well.</p>
<p>The second and third elements are linked. Very smart kids who get to interact and debate with other very smart kids leave college with better skills than those who are either not as smart or who don't get to interact in class as often. Class debate is an opportunity to actually practice thinking skills in an environment where one can be wrong or right based on rules of logic. Very smart students sharpen their minds against each other in an environment like this.</p>
<p>The Eastman Kodak example doesn't work for me. Business reputations tend to hinge on financial performance, most of which is publicly stated for the very largest companies. When businesses fail to adapt to changing conditions, their reputations tend to plunge precipitously, and quickly. No such environment exists in the academic world. A don from 16th century Cambridge would have no trouble recognizing the modern university and most of its methods.</p>
<p>If I may paraphrase (and as you have stated elsewhere), good education is best determined by the quality of students, the quality of faculty (teaching or research?), and class size. I very strongly agree. One could use SAT scores and other scholastic data to assess the quality of the students, and could use faculty/student ratio and CDS data on the % of class sizes above 50 and below 20 to evaluate class size. However, as this thread was originally created to debate the usefulness of the PA, I struggle with the interpretation of the data we have available to us on faculty. In terms that academics appreciate, as Barrons points out, the PA is probably pretty accurate. I'm not sure that teaching quality translates through a PA, but I am not concluding that highly esteemed research professors can't also be good teachers. It is just that they are not being measured on this basis (as best I can tell), and thus I don't automatically assume that they are the best teaching faculty for the average undergraduate. </p>
<p>As for your comments about smart students and classroom debates, they are off the topic of this thread, but I get excited just thinking about it. This is what I am referencing constantly in other threads when the discussion turns to the importance of a consistently strong student body and the importance of class size and how those are important factors in differentiating colleges and the experiences that they can offer. </p>
<p>Re your comments about change in the academic world, the nature of what is being taught may not have changed, but where it is being taught definitely has. Twenty-five years ago, there was a much stronger concentration of academic resources in the Northeast (the people and resources like libraries played a large role in this), but today the education world is much more devolved. As a result, great students have many more options today than then and, while the world has certainly not flipped, there is greater parity than ever before among the top colleges (and I include many schools in the South, the Midwest, the Southwest and the Northwest in saying that). With a more mobile society and sophisticated technology and the inestimable impact of the internet, information (and thus power, intellectual and otherwise) is now spread around the country, if not around the globe. I would be interested to see the PAs from 25 years ago as I suspect that the changes to today's numbers don't reflect the enormous changes that have taken place elsewhere in American society.</p>
<p>hawkette:</p>
<p>Thanks for the thoughtful conversation. I will disagree (mildly) one last time. Twenty-five years ago (1982), the schools in this country that tended to attract the best students and have the smallest class sizes were pretty much the same as today. I think WUSTL is a notable exception but, even back then, the quality of their student body was quite high. As for members of national academies as a percentage of the entire faculty, I believe this has changed little. The most profound change in higher ed in the US in recent times was actually post-WWII, when the modern research university was, essentially, born, and when the GI bill made a college education so common that almost everyone began to feel the need to have one.</p>
<p>There certainly was a shakeup in rank order during that time, and I believe you'll see that reflected in a relatively high PA for schools like Wisconsin, where the student body is just a bit better than average.</p>
<p>
[quote]
Twenty-five years ago, there was a much stronger concentration of academic resources in the Northeast (the people and resources like libraries played a large role in this)
[/quote]
</p>
<p>I'm with Tarhunt--what was so greatly different in 1982? It is true that the digitization of research source material has increased access to great holdings at universities (and elsewhere), but it's not clear to me that the share of resources (be they faculty, students, talent, what have you) amongst colleges and universities has been redistributed to any great degree. Can you provide some examples?</p>
<p>I'm surprised American's PA is so low.</p>