Whoops. Sorry prezbucky, I left WUSTL out by mistake. I have edited my post above. You are quite correct, WUSTL’s PA rating is 4.0.
@Alexandre - there have been reports of how UW Madison, for example, had to rank 262 “peers.” The Provost gave 260 of them an “adequate” rating (the second-lowest score), and that group included Harvard, Yale, Cal, Stanford, etc. The only two schools he gave the highest rating were his own school, UW Madison, and the New School (where his kid attended).
There are also instances of UF and Clemson ranking themselves well above their peers. (Sounds like gaming to me.)
Malcolm Gladwell of the New Yorker made the point that these peer assessments are less a measure of a school’s reputation than a collection of prejudices, partly based on the self-fulfilling prophecy of U.S. News’ own rankings. Surveys are often completed by consulting the previous year’s rankings and simply filling in the blanks.
I guess you could say that the overall ranking loosely follows the peer assessment, but there is at least one notable difference:
- The public schools listed above are ranked lower overall than their peer assessment rankings, due obviously to lower scores in other categories.
Still, if you remove the public schools and concentrate solely on the private schools listed above, their peer assessment scores match up fairly well with their overall scores, which means that they rank relatively the same overall in the other categories. That is to say, these schools are good in areas other than rep; there’s some meat there…
Further musings: If a private school had an academic rep (PA) rank of, say, 20… and an overall ranking of, say, 60… then I think we could refer to that school as a Paper Tiger, Rep Warrior, etc. (hehe) Or we could say it excelled in areas that USNews doesn’t deem important. On the flip side, if a school has a much lower PA rank than its overall rank, we could perhaps say that the school is underrated by the academic community.
“Instances of UF and Clemson ranking themselves well above their peers. (Sounds like gaming to me)”
Which is why outliers are omitted from the score. It is impossible for a university to game the PA rating. If you want to see “gaming”, you should see how private universities report data to the USNWR. They exaggerate a great deal. At least universities cannot manipulate their own PA rating. It is the average opinion of hundreds of university presidents and deans of admissions.
“there have been reports of how UW Madison, for example, had to rank 262 ‘peers.’ The Provost gave 260 of them an ‘adequate’ rating (the second-lowest score), and that group included Harvard, Yale, Cal, Stanford, etc. The only two schools he gave the highest rating were his own school, UW Madison, and the New School (where his kid attended).”
Again, outliers are deleted, so some of those irregularities are filtered out. Furthermore, even if some irregularities, inconsistencies and manipulations get past the outlier net, they will probably not be significant enough to alter the average rating of a university. Clearly, Harvard, Yale, Stanford etc…'s PA did not take a hit as a result of Wisconsin’s rating. When you have hundreds of participants, the average rating will likely be a fairly accurate representation of reality.
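A minimal sketch of the point being made here, with all numbers invented for illustration: once the extreme ratings are dropped before averaging, a single strategic low-ball barely registers against hundreds of honest responses.

```python
def trimmed_mean(ratings, k=1):
    """Average after dropping the k lowest and k highest ratings."""
    ordered = sorted(ratings)
    kept = ordered[k:len(ordered) - k]
    return sum(kept) / len(kept)

honest = [4.5] * 200           # hundreds of raters scoring a school 4.5
strategic = honest + [1.0]     # one rival low-balls it with a 1.0

print(round(sum(strategic) / len(strategic), 3))  # plain mean dips slightly
print(round(trimmed_mean(strategic), 3))          # trimmed mean stays at 4.5
```

Even without trimming, the lone 1.0 only moves the average of 201 ratings by about 0.02; with the outlier dropped, it has no effect at all.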
“Malcolm Gladwell of the New Yorker made the point that these peer assessments are less a measure of a school’s reputation than a collection of prejudices, partly based on the self-fulfilling prophecy of U.S. News’ own rankings. Surveys are often completed by consulting the previous year’s rankings and simply filling in the blanks.”
I am not sure I agree with Malcolm Gladwell. I think most presidents and deans of admissions take the rating seriously, especially when rating tier 1 universities (say the top 100 or so universities).
@Alexandre
I am not sold on why Peer Evaluation is useful.
If I was to choose a doctor, buy a house, lease a car, or pick a college …
I don’t think Peer Evaluation would be a major driver in my decision making.
Not sure how it pinpoints the quality, experience or outcome of the professional or organization.
College Provosts have not visited, let alone experienced, most of the schools they are supposed to rank. They know perhaps a few data points - where the school is currently ranked, any possible interaction with a counterpart at that school, and maybe any major research or grants the school is conducting.
Whereas Tony voters need to personally see every nominated show, college peer evaluators might have hundreds of schools to rank, with no controls in place to verify whether they actually know each school.
In the end it seems to be just another data point of many that has some value and significant limitations.
I am not sure about that - Peer Evaluation is probably one of the most valuable metrics. It is actually very valuable for things like finding great doctors.
@yikesyikesyikes why is “peer evaluation one of the most valuable metrics”?
“- The public schools listed above are ranked lower overall than their peer assessment rankings, due obviously to lower scores in other categories.”
Sadly prezbucky, public universities’ lower scores in other categories are usually a result of irrelevant factors (hello, alumni donation rates), differences in financial realities, and dishonest data reporting by private universities…none of which are an indicator of quality. For example, the US News rewards universities that provide a great deal of financial aid, but does not reward universities that have lower tuition rates to begin with and therefore do not need to provide as much financial aid. In other words, whether by accident or by design, the US News methodology is biased against public universities and rewards private universities for financial realities that neither can control, and that have no bearing on the quality of the institution.
Private universities also exclude thousands of graduate students from their student-to-faculty ratios, dropping their ratios from 10:1 or higher to 8:1 or lower. Class sizes are diluted as a result of hundreds, and in some cases thousands, of seminars designed entirely to hit the 70% of classes under 20 students and 10% of classes over 50 figures, etc…
Of all the people behind these metrics, peer evaluators are among the most informed.
This may be a rough analogy, but it is similar to the fact that the most reputable research is peer-reviewed.
@yikesyikesyikes - would you say the Provost at the University of Washington is the “most informed” about the educational quality at SUNY Buffalo? They are both AAU schools thousands of miles apart. Likely he/she isn’t traveling from Seattle to Buffalo to be engaged with what is happening on campus. Or many of the few hundred schools he/she is tasked to peer review.
That being said, what would the value of that peer review be?
Are there others who are more informed about SUNY Buffalo to review it?
Individuals who don’t know enough to rate a school fairly are asked to mark “don’t know,” per the methodology. That doesn’t mean they all do, but that is the direction given.
By that logic, employers/grad schools should only judge a college’s quality if they have personally experienced it.
ClarinetDad, you do not think medical patients should take referrals from their family doctors when it comes to choosing specialists? You do not believe expert opinions for cars, hospitals and housing markets are valid? To me, experts’ opinions do matter, if the source is reliable. Of course, there are many other metrics, although those tend to be very easily manipulated by universities since they are reported by the universities themselves and they are not audited for consistency or accuracy.
@Alexandre I totally understand the value of referrals. I can personally vouch for the work of Dr smith you should use them too she is excellent at x,y,z.
But would I ask a doctor in Seattle for a peer review for a specialist in a town within the Buffalo area? A place she has never been? Doctors she doesn’t work with or have patients in common? Never worked together or seen their facilities? Probably not…
But college peer reviews get a fairly high response rate. (Not 90% “I don’t know,” with ratings only for the 15 schools they know well…) Seemingly, many are reviewing peers about whom they have only cursory information. The survey has been manipulated, and there are no controls to verify anything beyond the fact that they checked a few boxes and submitted.
Clarinetdad, first of all, part of what makes the PA interesting is the fact that it is a reputational score. There is no claim that it is scientific, or 100% accurate. A university’s reputation may not matter to some of us, but for those interested in graduate school, attending a university with a strong reputation in academe certainly cannot hurt when application time comes around.
Second, it is the job of a university president to know what is happening on the campuses of their peer universities. They work with several other universities on shared goals and initiatives, they lose and gain faculty from each other, they benchmark each other for best practices etc…Unlike doctors, that are mainly regional, universities collaborate across state lines.
Third, presidents and deans are instructed to rate only universities they know, not all universities. That does not mean that all presidents follow instructions to the letter, but I would assume the majority will take a few minutes to rate the universities they know well enough rather than waste their time rating all of them. That being said, an established university president who has studied and worked at half a dozen universities across the nation, with 30 years of experience in academe, should have intimate knowledge of dozens, if not hundreds, of universities. It is their job. Those men and women spend thousands of hours examining peer universities in detail over the course of their careers.
Fourth, most presidents have worked at more than one university. For example, take the previous UDub president, Michael Young. He completed his graduate studies at Harvard, worked as faculty at Columbia and George Washington, and served as president at the University of Utah, the University of Washington and now Texas A&M. It is safe to say that he has intimate knowledge of how things work at all the universities he has been associated with, as well as with all of their close peers. That includes all the Ivy League, Georgetown, American, all Pac 12 schools and all SEC/Big 12 schools. Michigan’s president, Schlissel, spent a decade at JHU, first as a doctoral and medical student, then as a medical resident. He also spent many years at MIT, Berkeley and Brown. Again, it is safe to say he is very familiar with all those universities, as well as with their peers, such as the Ivy League, the Big 10, the Pac 12, Caltech etc…
I think university presidents are very qualified to rate universities. Certainly more so than some magazine where none of its employees have an ounce of education, and where the data they compile is mostly faulty.
At any rate, you seem to think there is a more accurate way of determining a university’s reputation. What do you propose?
I would like the raw numbers that go into many of these rankings and assign my own weights to them. This would yield a more personalized more insightful ranking. Peer evaluation could be part.
It should be more about placing School A in a tier of peers, not comparing a huge public to a small religious university to see which has more points…
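The do-it-yourself ranking idea above can be sketched in a few lines of Python. The metric names, values, and weights below are all invented for illustration; the point is only that with raw numbers in hand, each reader could apply their own weights instead of US News’ fixed ones.

```python
# Hypothetical raw metrics on a 0-100 scale (higher is better);
# every number here is made up for the sake of the example.
schools = {
    "School A": {"peer_assessment": 90, "graduation_rate": 80, "class_size": 70},
    "School B": {"peer_assessment": 70, "graduation_rate": 95, "class_size": 85},
}

# A reader who cares more about outcomes than reputation
# might pick weights like these (they must sum to 1.0):
my_weights = {"peer_assessment": 0.2, "graduation_rate": 0.5, "class_size": 0.3}

def personal_score(metrics, weights):
    """Weighted sum of a school's metrics under the reader's own weights."""
    return sum(metrics[m] * w for m, w in weights.items())

# Rank schools by the personalized score, best first.
ranking = sorted(schools,
                 key=lambda s: personal_score(schools[s], my_weights),
                 reverse=True)
print(ranking)
```

With these particular weights, School B’s stronger graduation rate outweighs School A’s reputation edge; a reputation-focused reader would get the opposite order.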
clarinetdad, I agree. Allowing each individual to mix and match raw numbers would be nice. Sadly, the raw numbers you speak of do not exist, and even if they did, they would not be telling unless one is comparing very similar universities.
To clarify:
- Many universities, particularly private universities, take liberties with reporting data, such as inflating financial figures on the expenditure stream. And even if the financial figures were not subject to "fuzzy math", how does one compare the impact of expenses on the students of a large public university, which benefits from incredible economies of scale and hundreds of millions of dollars in state funding, to a tiny liberal arts college, which does not benefit from economies of scale and does not receive state funding?
- Many universities, again, mostly private, omit thousands of graduate students from their student to faculty ratios. Public universities almost never resort to such underhanded tactics.
- Most private universities also resort to imaginative tricks to increase the percentage of classes with fewer than 20 students while decreasing the percentage of classes with more than 50. Again, public universities do not resort to such tricks.
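The ratio claim above is quick arithmetic to check. Using hypothetical headcounts chosen only to mirror the "10:1 or higher to 8:1 or lower" figures already mentioned:

```python
faculty = 1600
undergrads = 12000
grads = 4000   # hypothetical counts, invented for illustration

# Ratio counting every student the faculty actually teaches:
print((undergrads + grads) / faculty)   # 10.0 -> reported as 10:1

# Ratio after omitting graduate students from the numerator:
print(undergrads / faculty)             # 7.5  -> reported as 8:1 or lower
```

Dropping a quarter of the student body from the numerator shaves a full 2.5 students per faculty member off the reported figure, with no change at all in anyone’s actual experience.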
If we could see the raw data the rankers use in the methodology for their ratings we could compensate for any inconsistency or simply not weight that data.
I agree, ClarinetDad16. The best way would be for a non-profit agency to collect and seriously audit the data before releasing it. Sadly, the raw data would be so inconclusive and random that it would not be of interest to most of us, and therefore not marketable.