<p>on a complete side note, it shows how far ahead UVa is vs. the rest of the publics in tangible items ;-).</p>
<p>had to throw it in there...heh.</p>
<p>"I think standrews has this mixed up. Obviously, if the response from 200+ schools is 100%, there can be no margin of error, since there is a 100% sampling."</p>
<p>tarhunt, jags:
You're right. I am mixed up. Let me explain why. I struggled to determine whether the peer assessment amounted to a census, where there is no sampling error, or to a sampling of opinion (president, provost, admissions dean) at peer institutions. I decided, perhaps wrongly, that it was a sampling of opinion. Just because all of the institutions were surveyed does not make it a census. If it did, even asking one individual at each school would be a census under this definition. Why bother with three? </p>
<p>Suppose USNWR ranked the 50 states by including a peer assessment consisting of the opinions of the Governor, Lt. Governor, and Attorney General of each state. Surely this is a sampling of opinion within the state. The titles of the individuals, or the fact that they are highly likely, but not certain, to have an informed opinion, do not change that. There are many more individuals with an informed opinion within each state. This is also true of the institutions involved in USNWR's PA, even at the smaller schools. </p>
<p>Take a look at USNWR's methodology for determining the PA of law schools. Even though law schools are smaller in size, the sampling of opinion is expanded: law school deans, deans of academic affairs, the chair of faculty appointments, and the most recently tenured faculty members. The percentage of individuals being sampled goes way up. Again, why would USNWR spend the time and money to do this if a census could be performed by asking one individual?</p>
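<p>One way to see this distinction is a quick simulation. This is only a minimal sketch with made-up numbers (the 200-rater pool, the 3.8 mean, and the 0.5 spread are assumptions, not USNWR data): if each school's pool of informed opinion is larger than the three officials polled, the reported mean behaves like a sample statistic and shifts from draw to draw.</p>
<p>
[code]
# Hypothetical sketch: 3 polled officials vs. a census of all informed raters.
# All numbers are invented for illustration.
import random

random.seed(1)

N_INFORMED = 200  # assumed pool of informed raters (presidents, deans, faculty, ...)
TRUE_OPINIONS = [min(5.0, max(1.0, random.gauss(3.8, 0.5)))
                 for _ in range(N_INFORMED)]

def mean(xs):
    return sum(xs) / len(xs)

census_pa = mean(TRUE_OPINIONS)  # what a true census would report
for trial in range(3):
    officials = random.sample(TRUE_OPINIONS, 3)  # the three polled officials
    print(f"trial {trial}: 3-rater PA = {mean(officials):.2f} "
          f"vs census PA = {census_pa:.2f}")
[/code]
</p>
<p>Each run of the loop gives a different 3-rater mean scattered around the census value, which is exactly the behavior of a sample, not a census.</p>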
<p>I am not a statistician and would welcome any criticism.</p>
<p>standrews:</p>
<p>I see what you're saying, but I think it's shaky ground. Understand that the methods are published and, within the limits of those methods, the margin of error will vary with the voluntary response rate. As you know, it's not possible to nail down confidence or margin of error on voluntary-response surveys.</p>
<p>Looking at it the way I think you're trying to, you'd need ~400 responses from a random sample to get 95% confidence with a +/- 5% margin of error. But the sampling isn't random, or even nearly random.</p>
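<p>For what it's worth, the ~400 figure matches the textbook simple-random-sample size formula n = z^2 * p(1-p) / e^2 at the worst-case proportion p = 0.5. A quick check (this is only the standard formula; as noted above, it does not really apply to a voluntary-response, non-random survey):</p>
<p>
[code]
# Standard sample-size formula for a proportion; illustrative only.
import math

z = 1.96  # z-score for 95% confidence
e = 0.05  # +/- 5% margin of error
p = 0.5   # worst-case proportion (maximizes variance)

n = (z ** 2 * p * (1 - p)) / e ** 2
print(math.ceil(n))  # 385, commonly rounded to "about 400"
[/code]
</p>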
<p>I think we have to take the numbers for what they are as described by US News. We'll never get an accurate idea of how many tenths are significant. When I guess at that, it's just a guess. I think .5 is probably significant in the eyes of the respondents because it would mean that half of them rated school A an entire point higher on the 5-point scale than school B. I think a single tenth is certainly not significant, and I would also (personally) pay little attention to two tenths.</p>
<p>But I have no sound basis on which to do that.</p>
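<p>To make the arithmetic behind that 0.5 reading concrete (invented ratings, not real survey data): if half the respondents rate school A a full point above school B on the 5-point scale and the other half rate them identically, the mean PA scores differ by exactly 0.5.</p>
<p>
[code]
# Hypothetical 8-respondent example of a 0.5 mean gap.
school_b = [4, 4, 4, 4, 3, 3, 3, 3]  # invented ratings for school B
school_a = [5, 5, 5, 5, 3, 3, 3, 3]  # half rate A one point higher, half the same

mean_a = sum(school_a) / len(school_a)  # 4.0
mean_b = sum(school_b) / len(school_b)  # 3.5
print(mean_a - mean_b)  # 0.5
[/code]
</p>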
<p>While the methodology is explained by USNWR (opinions are given by Presidents, Provosts & Deans of Admissions), there is no standard measurement made by these voting individuals, but rather their personal opinions based on what they hear and what they read. Granted, you would expect these individuals to be in a position to make SOME informed judgments on SOME schools, but highly informed judgments are likely limited, and it is not at all clear how these judgments are made. Thus, any decision based on a statistical analysis of these opinions would itself be inherently flawed. Garbage in, garbage out. I say this not to characterize their individual opinions as garbage, but to highlight the uneven, inconsistent quality of the data being input, which most certainly undercuts the quality of the statistical result. And this does not even speak to the relevance of these opinions to the non-academic stakeholders (high schoolers, college students, families, alumni, corporate recruiters).</p>
<p>Trying to separate the peer assessment variable from USN&WR is a little like trying to be "a little bit pregnant". You either accept the whole concept of trying to rank extremely similar educational institutions through intricate weighted variables, or you don't. I don't happen to, and have been saying so for close to ten years, now. But, in for a penny, in for a pound, I say. ;)</p>
<p>standrews,</p>
<p>here's a common-sense reason why your argument is flawed: if there were that big a margin of error, the PA wouldn't be useful for determining a ranking based on it. If 3.9 and 4.1 can separate two schools by 10 places, then there's a problem if they could potentially be reversed in positions.</p>
<p>Also, thinking logically, assuming that no one's opinion changes within the current year, no matter how many times you do the survey, the results are always going to be the same. So 100% of the population is responding with the same answer 100% of the time. There is no margin of error if the same survey ALWAYS produces the same result.</p>
<p>jags:</p>
<p>I don't follow that argument. Just because a reversal of results would produce changes in position doesn't mean the survey isn't flawed. Did I miss something?</p>
<p>
[quote]
Peer assessment is the best way we have of gauging a university's undergraduate academic prowess. USNWR sends surveys to the country's best in academia. If they don't know where a university lies in academics, then who does?</p>
<p>If you think you've got a better idea on how to measure a university's undergraduate PA, USNWR will probably be thrilled to read it. Anyway, I think PA gives accurate levels for the universities it lists.
[/quote]
</p>
<p>Regardless of whether the forms are completed by a secretary on behalf of a provost or a president, what is undeniable is that very few individuals would be able to HONESTLY and COMPETENTLY answer the questions of the surveys. In an often-quoted example, what does the person who fills out Pomona's forms REALLY know about Reed or Rhodes College, and what "tools" does he have to correctly evaluate Swarthmore versus Pitzer or Oxy? </p>
<p>The PA does have "some value," but it is its use as a major quantitative measure that is so flawed. A closer look at some of the PA's perennial favorites (such as non-coed schools in the LAC tables and very large public schools) clearly shows a high degree of cronyism and geographical preference. In the end, the exercise is one of utter futility, if not downright gamesmanship.</p>
<p>There is NOTHING scientific in the PA, and the fact that USNews relies on it for the LARGEST element of its rankings speaks volumes about their integrity and objectives. The PA and the USNews conclusions represent a self-fulfilling prophecy: the same "great" schools will stay at the top, with only enough movement to induce people to buy the latest editions. The leading source of information used by the responders is probably last year's edition of the USNews report!</p>
<p>The PA is THE tool that USNews uses to equate and manipulate the results at will. While a few schools invest considerable time, effort, and money to complete the surveys, many simply decline, ignore, or ... use the system to further an agenda of their own, which may include boosting the value of "sister colleges" and sinking others with impunity and glee.</p>
<p>I am sure you have proof to support those claims. In fact, the PA generally tracks other factual criteria such as faculty awards earned, research awards, publishing records, NAS memberships, etc. Nobody has actually figured out how to measure teaching effectiveness.</p>
<p>johnwesley,
Your point is well taken about the wisdom of accepting USNWR, but in the absence of anything else, such rankings will naturally fill the void. Simplistic though they may be, the general public is nonetheless influenced by them. One may certainly choose to ignore USNWR, but the use of their rankings and their data is widespread, so my reaction is that it is better to understand what and how they are evaluating and weighting than it is to dismiss it entirely. </p>
<p>My consistent suggestion has been for each person to look at all of the objective data and the subjective data and then make their own individual judgments on the importance of each individual factor. For example, some public school partisans complain that the alumni giving factor is unfair or irrelevant to them because of their school size and the fact that these schools receive state funding. Or some private school fans complain that Peer Assessment scores favor the public universities and their large research budgets which often have little relevance to one's undergraduate experience. (BTW, I consider both complaints valid.)</p>
<p>I am a bit surprised by your statement that these are "extremely similar educational institutions." From my perspective, I see far more differences than similarities. Comparing a large state university to a Wesleyan or a Brown is going to bring out some pretty stark differences in the objective data. Moreover, more than most people, you can probably appreciate the folly of comparing the PA of a UCLA (4.3) to that of a Wesleyan (4.3) or a Brown (4.4). The objective data at least give you some clues of what the undergraduate experience is likely to be. It is far less clear what the Peer Assessment tells us and, IMO, its inclusion in rankings like USNWR is subversive at worst and inconclusive at best. </p>
<p>to supporters of the use of Peer Assessment scoring,
Can you make any counter arguments or is there generally a consensus here that PA scores should either be scrapped or heavily revamped?</p>
<p>Tarhunt,</p>
<p>IF there were a significant margin of error, then the PA would not be a reliable source of information for ranking universities. For example, if Washington University's 4.1 PA score were actually a 3.9 within some margin of error, its ranking would drop significantly, while a school like Notre Dame, which has a 3.9, bumped up to a 4.1, would see its ranking rise drastically. If this were the case, then we would have to assume that the USNews rankings use an incredibly flawed piece of information to account for the largest weighting in their methodology - and that would be stupid to do.</p>
<p>Now the actual point I was making was that margin of error assumes you only survey a "sample" of a larger population. There is no margin of error in this case because we have defined the population we wish to survey (university presidents, provosts, and admissions deans) and we are able to reach every single member of that population and get a response from everyone. If there is a margin of error, we can assume that a given survey taken multiple times will produce different results within that margin of error. In this case, however, no matter how many times the survey is given, it will ALWAYS produce the same results - assuming that those surveyed give the same response every time. Therefore, if there are no possible "other" results, there is no margin of error.</p>
<p>We cannot assume that there is a margin of error because "other people besides university presidents have opinions on these schools." The PA is a measurement of what peer university presidents think of other schools, not what the general academic populace thinks of every school. </p>
<p>Look at this example. Say a similar survey went like this: "What do hospital doctors think of patient care in hospitals? Rate it from 1, the worst, to 5, the best." And every single doctor who works in a hospital is surveyed and responds. There is no margin of error just because nurses weren't surveyed, even though they work more closely in patient care than doctors do.</p>
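<p>This census point lines up with the textbook finite population correction: for a simple random sample of n people from a population of N, the standard error is scaled by sqrt((N - n) / (N - 1)), which is exactly zero when n = N. Below is a minimal sketch with invented numbers (the 600-person population and the 0.5 spread are assumptions); whether the three officials per school really are the whole relevant population is, of course, what standrews disputes.</p>
<p>
[code]
# Margin of error with the finite population correction; numbers are invented.
import math

N = 600      # e.g., ~3 officials at each of ~200 schools
SIGMA = 0.5  # assumed spread of individual ratings

def margin_of_error(n, N=N, sigma=SIGMA, z=1.96):
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * (sigma / math.sqrt(n)) * fpc

for n in (100, 300, 600):
    print(f"n = {n}: +/- {margin_of_error(n):.3f}")
# n = 600 (a full census) gives a margin of exactly 0.0
[/code]
</p>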
<p>The research U's and LAC's data are not directly comparable as they are in separate survey groups. You can't say UCLA=Wes from those numbers.</p>
<p>"I think we have to take the numbers for what they are as described by US News. We'll never get an accurate idea of how many tenths are significant. When I guess at that, it's just a guess." - tarhunt</p>
<p>"It is far less clear what the Peer Assessment tells us and, IMO, its inclusion in rankings like USNWR is subversive at worst and inconclusive at best." - hawkette</p>
<p>I would agree with the observations above. However, there is really no reason why we should have to guess about what USNWR is doing with PA. USNWR could make it all clear, but I can only assume that they choose not to. While they explain the methodology, you really can't get behind the numbers they publish, at least not that I know of. Perhaps USNWR itself doesn't really know, but as long as they have numbers, they move forward. What is the response rate? Who really fills out the responses? Which schools have a high/low number of "don't know" responses? What is the statistical error involved? Is there a difference between what presidents as a group think versus provosts or admissions deans? What percentage of individual respondents carry over from year to year? The devil is in the details, and the details of this sausage-making process are unavailable. I suspect revealing such aspects of PA would undermine the numbers, the rankings, and sales of USNWR's swimsuit issue.</p>
<p>Hawkette said:</p>
<p>
[quote]
I am a bit surprised by your statement that these are "extremely similar educational institutions." From my perspective, I see far more differences than similarities. Comparing a large state university to a Wesleyan or a Brown is going to bring out some pretty stark differences in the objective data.
[/quote]
</p>
<p>Would that it were so. I have no objection to the tier system, which, frankly, USNews pretty much invented. The folly is that they don't stop there. The bulk of what fuels these endless threads are the first twenty to thirty colleges in the so-called "national" categories, between which USNews spends an inordinate amount of ingenuity and whimsy trying to force daylight. I stand by my statement that there's an extreme degree of similarity between Stanford and Harvard (and between Stanford and Georgia Tech, for that matter) and that any real differences between them are so far beyond USNews' ability to capture that tinkering with the PA portion is a little like putting snow tires on a toy car: it makes sense only in a very small, make-believe universe. ;)</p>
<p>Indeed, it's all about factual knowledge, and not about Christmas cards, golf balls, and glossy brochures!</p>
<p>
[quote]
The reputation survey has drawn an increasing torrent of criticism from college leaders, who find it unscientific and unfair. </p>
<p>"It's a beauty contest," scoffed Patricia A. McGuire, president of Trinity College in the District, who said she ripped up the survey U.S. News sent her this year. A recent poll of presidents by the Association of Governing Boards of Universities and Colleges, a national organization for trustees, found 70 percent who believed reputation was emphasized too heavily in the rankings, and 38 percent who demanded an end to reputational ratings altogether. </p>
<p>*Seven percent admitted that they had intentionally downgraded the score of a rival school to make their own look better. *</p>
<p>"It's part of a marketing mania that's taken hold in higher education," said Howard University President H. Patrick Swygert, who acknowledged sending an annual letter to "400 or 500 of my closest friends" to note the school's latest achievements. </p>
<p>"We all object to treating higher education as a commodity," he said. "And most of us do it."
Myron Roomkin, dean of American University's Kogod School of Business, received three packages in three weeks -- from a rival business school he won't identify -- on the eve of a recent reputation survey. Included were a box of golf balls, a five-pound Hershey chocolate bar and a jar of chili peppers with a reminder that "when you think of something hot, think of us." </p>
<p>Another school sent an elaborate brochure that Roomkin said his own marketers estimated had cost $20 apiece to print. The brochure arrived by costly overnight mail. </p>
<p>"People are genuinely concerned about the rising cost of education, so you have to ask yourself, are we spending the money on the right thing?" Roomkin said. </p>
<p>U.S. News officials say they conduct the reputation survey to help gauge intangible virtues, including the quality of teaching and learning, that are not captured by more objective measures. This year, they changed the category's name to "peer assessment," acknowledging presidents' discomfort with the ambiguities of the word "reputation."
[/quote]
</p>
<p>** "Every single president I worked for here, ... they'd say, 'Larry, fill these out if you want,' " Virginia Tech spokesman Larry Hincker said. **</p>
<p>
[quote]
The data that receives the most criticism -- and the most weight in U.S. News -- is peer assessment.</p>
<p>Twenty-five percent of a school's score is dependent on this 1-to-5 ranking. Beyond the conflict of interest of officials evaluating colleges near them in the rankings, there is the problem of asking people to accurately assess hundreds of schools.</p>
<p>Officials can choose to leave scores for some schools blank and sometimes don't fill them out at all. The assessments get about a 57 percent response rate, though they're not always filled out by the people they're sent to.</p>
<p>"Every single president I worked for here, ... they'd say, 'Larry, fill these out if you want,' " Virginia Tech spokesman Larry Hincker said.
[/quote]
</p>
<p>All the schools mentioned were certainly top tier *scoffs*. Maybe there is a bit of gaming among the schools of a certain level, but even some minor (7% being relatively minor) mis-scoring will not greatly impact the other 93% who try to be fair. I would bet they have a way of detecting the obviously biased votes and tossing those out. But thanks for providing some evidence, however limited. Sorry, but I have never heard of Trinity College in DC.</p>
<p>I would be happy if they switched to a numerical measure that included NAS members, major awards won by faculty, research awards, etc.
Shanghai Jiao Tong University (<a href="http://ed.sjtu.edu.cn/rank/2006/ARWU2006TOP500list.htm">http://ed.sjtu.edu.cn/rank/2006/ARWU2006TOP500list.htm</a>) and the Center (<a href="http://mup.asu.edu/">http://mup.asu.edu/</a>) have both attempted quant-based rankings, with similar results for the truly similar large research schools (it does not work well for Brown and the like).</p>
<p>The NAS uses essentially the same method (but surveys all faculty) to rank most grad departments every 10 years or so. If you aggregate their findings into an overall ranking, it comes out pretty close to the US News scoring for the major schools. The usual suspects are always near the top, and some of the state schools do better than many elite worshipers would like to admit.</p>
<p>"But thanks for providing some evidence no matter how limited."</p>
<p>Barrons, I've posted more "anecdotal" evidence in the past, including assessments by people who were very close to USNews (Mrs. Graham) and a law professor who criticized the survey he USED to complete. See <a href="http://www.piercelaw.edu/tfield/usnwr.htm">http://www.piercelaw.edu/tfield/usnwr.htm</a> </p>
<p>While you are correct that only 7% ADMIT to manipulating the data, it would be a stretch to conclude that the other 93% are filling out the form ... honestly. As far as the cronyism --especially along geographical lines-- why don't you take a look at the schools that were called the Seven Sisters? How do the remaining schools from that august group compare to, say, colleges in the South or the West? </p>
<p>However, the most suspect factor that blemishes the USNews rankings is their absolute lack of transparency. Why not make the famous survey public and post the answers from all the schools involved? Outsiders now have access to a growing set of numbers via the released CDS reports. Why not extend the release of data and let the world see how Wellesley and Berkeley DO vote, and WHO filled out and signed the darn forms? Let's see how many blanks are left and see which schools exercise good judgment over gamesmanship. We, as a society, are expecting more accountability from public companies and are no longer permitting the people who sign and authorize public data not to know what is in them. Why can't we expect the same from schools, especially since it is only about their "opinions" of their peers? </p>
<p>Why does it have to be secretive? Can't USNEws afford to produce a searchable database? Wait, that is exactly what they are selling!</p>
<p>PS <a href="http://www.trinitydc.edu/schedule/president/index.html">http://www.trinitydc.edu/schedule/president/index.html</a> and <a href="http://www.trinitydc.edu/about/president/">http://www.trinitydc.edu/about/president/</a></p>
<p>Pierce Law, Trinity in DC -- a regular who's-nobody lineup of schools. I'd be quite happy if three US News editors pored over all the available data, did their own rankings, and made that the final word. My guess is the results would not differ markedly. What exactly about the rankings bothers you? I can accept that most would place UVa and UCLA slightly ahead of UW on the undergrad PA score, and I could certainly make an argument that UW >> Indiana or Iowa. Actually, the state school PA order is quite reasonable overall. I think I'd say the same for the top privates in the university class.</p>