Peer Assessment: Is this a useful tool in the college selection process?

<p>I think that hawkette is dead on with the comment that peer assessment is not related to the quality of the education. Peer assessment doesn't measure education quality; it measures the reputation of the school and the value of the degree. </p>

<p>People make way too big a deal out of differences in peer assessment. I see arguments where people use a peer assessment difference of .1 to their advantage, which is pretty ridiculous.</p>

<p>Honestly, if a school is within .4 of another, it's not going to help/hurt you. A 4.3 is not significantly better than a 3.9 in terms of reputation, a 3.5 is not significantly worse than a 3.9.</p>

<p>azwolves,
Maybe I'm splitting hairs here, but I would amend your statement about reputation and degree value. IMO PA is "the reputation of the school and the value of the degree" AMONG ACADEMICS. I don't believe that PA is a fair reflection of how business people judge the product that comes out of a school. </p>

<p>Let's consider an example for your school (Indiana U). Suppose you are an employer in Fort Wayne (e.g., Lincoln Financial or trucking company Triple Crown Services). Do you really think that a graduate of Indiana U (PA of 3.8) would be marked down in comparison to a graduate of Johns Hopkins (4.6) or UC Berkeley (4.7) or even Michigan (4.5) and Northwestern (4.4) because the school has a much lower PA? From a business person’s perspective, it is pretty close to irrelevant. Plenty of talented kids come out of Indiana every year (Wall Street knows this and hires from there in large numbers), but it is very doubtful that this talent gets the recognition and respect that it deserves from rankings like USNWR and others. </p>

<p>Beyond the PA, other quantitative measures command much greater regard in the business community, e.g., standardized test scores or college GPA. Ask a business person in North Carolina about a student who earns a 97 average at Wake Forest and they will likely understand that this student must be really outstanding, as Wake is renowned as a tough place to earn great grades. Conversely, compare this achievement to some of the Ivies where grade inflation (and possibly academic reputation inflation) is rampant. The same North Carolina employer may know the name of the Ivy college because of its historical prestige (which may or may not be currently justified), but the employer’s true knowledge of the school is limited. The employer is likely to fall back onto other evaluation methods (e.g., standardized test scores), but almost certainly will not consider a school’s PA (and likely they have never even heard of PA) in their evaluation. Thus, for most students, what matters most (getting a job after graduation) is not impacted by PA. As a result, and in addition to questions about the relationship between the PA and actual teaching in the classroom, I question its use in college rankings, particularly when so many in the status quo proclaim the PA as the holy grail for judging academic excellence.</p>

<p>
[quote]
the college presidents who are surveyed, probably their secretaries filling out the questionairre. (sic)

[/quote]
</p>

<p>Is that so? Now, it's possible that these "old geezers" may hand off the task to someone else, but honestly, what proportion do you believe entrusts such a survey to someone in the position of secretary or administrative assistant? </p>

<p>If a President has a problem with US News (and some of them do) the more likely outcome is that they won't fill it out. Period. It is highly unlikely to me that Presidents would deliberately sabotage a research effort (even one they didn't care about) by enlisting the aid of someone without adequate information to fill it out.</p>

<p>
[quote]
particularly when so many in the status quo proclaim the PA as the holy grail for judging academic excellence.

[/quote]
</p>

<p>This hasn't been my experience. Do you have some examples of people in the status quo who laud this measurement so highly?</p>

<p>I have to wonder how academics even know how good other schools are overall, or whether they're just guessing based on stereotypes, reputation, etc. - i.e., Ivies get high ranks because they always have, and it's just this big circle that keeps going and going and going. I'm kind of suspicious of that part of the rankings. Actually, I don't trust the USNWR rankings altogether, but that part especially :-D</p>

<p>hoedown,
I appreciate that you "get it" when it comes to the interpretation and use of PA scores. If only that were the case across CC. I use the term "status quo" broadly to include those partisans here on CC who posit that X college is superior because of its PA score. They may be right that this score of an individual college is high, but they fail to put this into the context of what this means (or should mean) to the average high schooler trying to decide among various schools. PA likely has a proper application in the academic world, but in most of the for-profit world into which most students will ultimately enter, I would argue that it has little to no influence.</p>

<p>
[quote]
Is that so? Now, it's possible that these "old geezers" may hand off the task to someone else, but honestly, what proportion do you believe entrusts such a survey to someone in the position of secretary or administrative assistant?

[/quote]
</p>

<p>Look at some of the presidents. Some of them are practically used-car salesmen, picked for their alumni glad-handing abilities. </p>

<p>Others are, shall we say, less than fully in touch with the world around them. Can you really imagine Larry Summers having a knowledgeable opinion about a "peer assessment" for Mississippi State and the University of Southern Mississippi? How about a knowledgeable opinion about science programs at Mt. Holyoke and Bryn Mawr?</p>

<p>It's been reported (for what that's worth) that people filling out the questionnaires mark down their rival peers, much in the way a Red Sox fan would rank the Yankees.</p>

<p>The survey is nothing more than a "conventional wisdom" ranking of prestige. Just like the ESPN guys ranking the NFL teams in preseason. It's fun. It sells magazines. It's useful when you don't already know "the conventional wisdom". It's even useful for plotting graphs -- for example, plotting peer assessment versus selectivity, looking for admissions "values". But it should never serve as more than an initial shopping list for places to go kick tires.</p>

<p>
[quote]
Look at some of the presidents. Some of them are practically used-car salesmen, picked for their alumni glad-handing abilities . . .
Others are, shall we say, less than fully in touch with the world around them. Can you really imagine Larry Summers having a knowledgeable opinion about a "peer assessment" for Mississippi State and the University of Southern Mississippi? How about a knowledgeable opinion about science programs at Mt. Holyoke and Bryn Mawr?....It's been reported (for what that's worth) that people filling out the questionnaires mark down their rival peers, much in the way a Red Sox fan would rank the Yankees.

[/quote]
</p>

<p>These are all issues worth discussing, but not a one of them has to do with your assertion that these things are probably filled out by secretaries.</p>

<p>The new concerns you bring up are legitimate concerns about the respondent pool. </p>

<p>Some of your complaints about the old geezers/used car salesmen are counterbalanced, one hopes, by the input of provosts (who are academics) and admissions officers (who may be just as unworthy in your eyes as some of these presidents, but who have a knowledge of school reputations among students, parents, and guidance counselors if nothing else).</p>

<p>As for your Summers example, I would point out that Summers isn't being asked to rate science programs; nor is he asked for his input on regional state institutions.</p>

<p>I have no doubt that sometimes respondents can't help dinging their rivals. But this bias has an equal chance of hurting any and every school; it does not disadvantage one over others. In other words, if this is widespread, reputational rankings are depressed across the board.</p>

<p>PA has much more to do with what academics think (which is highly tied to research) - which is quite different from what the "real world" perceives with regard to the overall quality of the student body.</p>

<p>Granted, top state schools like Cal have high PA scores, and deservedly so - but at the same time, people in general aren't going to consider the student bodies at UNC, Wisc., or UT better than (much less equal to) those at Rice, Vanderbilt, or Emory simply b/c the latter schools have lower PA scores.</p>

<p>It would also be ludicrous for people to think that students at universities which aren't as research-oriented (Dartmouth, Brown, etc.) aren't "taught" as well as students at a research-oriented university with a higher PA score (say, UoM).</p>

<p>Hawkette:</p>

<p>I would agree completely that the PA is not well tied to the actual educational experience that undergrads have in the national universities (I'm not nearly as sure of that for the LACs). Personally, I think large state schools with high PAs probably don't provide as good an experience as some smaller schools with lower PAs, if those smaller schools are focused on undergrad education.</p>

<p>Now, about reputation and business. I've done much research in large organizations, and I don't think it matters where you go to school if you want to join a trucking company. Trucking companies are generally not looking for grads from elite schools. They can't afford them, and don't need them for their relatively simple, cost-sensitive business. Same with the railroads. I had a railroad manager tell me, once, that they looked for C+ students. They felt they couldn't get the A students, and the B students would get bored quickly and look for jobs elsewhere.</p>

<p>On the other hand, businesses that absolutely thrive on top talent don't see it that way. Their product/service generally depends on hiring the smartest, most motivated people they can find. If that's your business, you go looking for those people where you are most likely to find them. One thing you can be pretty sure of with a Harvard grad is that she is smart and hard-working or, if not hard-working, so incredibly smart that you can put up with that. That's not to say that very, very intelligent, very hard-working people can't be found elsewhere. They can be. But trying to find them at Yale is like mining a rich vein of ore. Trying to find them at the University of Indiana is like panning for gold. Both can be profitable, but one tends to pay off better.</p>

<p>It's Indiana University, not University of Indiana.</p>

<p>Hoedown,</p>

<p>I'm being a little facetious. Jeez, how 'bout a little literary license, here!</p>

<p>But, I think my point is valid. Let's say that you are a well-intentioned university president being asked to take time out of your busy fundraising schedule to fill out the questionnaire. You are looking at a list of 1000 schools. Are you really going to do the due diligence for all 1000 schools to accurately rank schools you only know by name? Do you really care enough to figure out whether UMontana should be rated higher than UWyoming? Are you going to look up the PhD production rates? The med school placement rates? The student-faculty ratios? The ESS survey results? And all of the other things that contribute to quality undergrad education? Or are you just going to wing it based on "reputation"?</p>

<p>Remember, the President filling out the forms at UMass for many years was Billy Bulger. Do a Google search.</p>

<p>As I say, you could probably get the same peer assessment scores by listing per-student endowment with a little bump for "old money" and northeast location.</p>

<p>My advice: use ANY of these lists as a rough starting point to focus in on an appropriate range for an individual student. Then, start researching schools. A 4.1 school may or may not be "better" than a 3.9 school. And, one 4.1 school may be great for a given student, while another 4.1 school would be terrible. It all depends on how the given student prioritizes the strengths and weaknesses that all colleges and universities have.</p>

<p>I used the lists to build a graph plotting "conventional wisdom" (i.e., peer assessment) versus admissions selectivity. That plot is a great way to identify particularly good (or bad) admissions values. In theory, as peer assessment goes down, odds of admission should go up. But it doesn't always work that way.</p>
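<p>For what it's worth, here is a minimal sketch of that kind of "admissions value" analysis. All school names and numbers below are invented for illustration, and the simple least-squares fit is just one way to do it, not necessarily how the poster built their graph. A school sitting above the fitted line admits more students than its PA would predict, which is what the poster calls a "value".</p>

```python
# Hypothetical illustration: flag "admissions values" by comparing each
# school's acceptance rate to what its peer-assessment (PA) score predicts.
# All names and figures are made up for the example.

schools = {
    # name: (PA score on the 1-5 scale, acceptance rate in percent)
    "School A": (4.5, 12.0),
    "School B": (4.1, 35.0),
    "School C": (4.1, 22.0),
    "School D": (3.7, 45.0),
    "School E": (3.3, 60.0),
}

# Fit a simple least-squares line: acceptance_rate ≈ m * PA + b.
xs = [pa for pa, _ in schools.values()]
ys = [rate for _, rate in schools.values()]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - m * mean_x

# A school admitting noticeably more students than its PA predicts is a
# potential "value"; one admitting fewer is tougher than its PA suggests.
for name, (pa, rate) in schools.items():
    predicted = m * pa + b
    label = "value" if rate > predicted else "tougher than PA suggests"
    print(f"{name}: PA {pa}, actual {rate:.0f}%, predicted {predicted:.0f}% ({label})")
```

<p>With these invented numbers the fitted slope is negative (higher PA, lower acceptance rate), and a school like "School B", with a higher acceptance rate than "School C" at the same PA, stands out as the better admissions value.</p>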

<p>interesteddad,
Are you able to post your graph here? I'd be interested to see how it looks.</p>

<p>No. It was a really personalized list of schools my daughter was considering, so it wouldn't really be of much use for anyone else.</p>

<p>
[quote]
I'm being a little facetious. Jeez, how 'bout a little literary license, here!

[/quote]
</p>

<p>From what I have observed on this forum, readers are almost as likely to put faith in an off-the-cuff remark as in one posted with a concern for accuracy. I like this place best when it's squelching the craziest of the rumors. Saying presidents ask their secretaries to fill them out sounds (to me) like starting a new one.</p>

<p>
[quote]
You are looking at a list of 1000 schools.

[/quote]
</p>

<p>This contradicts the statements that administrators are asked to rank their peers. Are you sure anyone is asked to rank 1000 institutions? Or even 500? I don't know how much license you are using here, but these numbers imply you don't believe that the assessment lists are targeted towards institutions of a like focus and/or region. </p>

<p>
[quote]
Are you really going to do the due diligence for all 1000 schools to accurately rank schools you only know by name? Do you really care enough to figure out whether UMontana should be rated higher than UWyoming? Are you going to look up the PhD production rates? The med school placement rates? The student-faculty ratios? The ESS survey results? And all of the other things that contribute to quality undergrad education? Or are you just going to wing it based on "reputation"?

[/quote]
</p>

<p>I think "reputation" is exactly what they are getting at, and that is why people who look at that ranking should take it with a grain of salt as you and many others have already asserted. </p>

<p>The model you are describing here sounds to me like you think presidents and provosts should gather staggering amounts of quantitative information, put it into a formula, and give a rating that way. Isn't that what the rest of the USNews ranking aims to do? If that's the appropriate method, why ask presidents to do it at all? Have a computer do it. </p>

<p>You seem to have a very low opinion of presidents. While juggling that busy fundraising schedule, some of them also read the Chronicle and Currents. They hear from deans and faculty. They attend meetings attended by people from other institutions. They hear which schools are known for innovative programs, which are luring their faculty away, which just nailed a big grant for undergraduate education, and which was just featured in the latest tome about campus leadership. This kind of knowledge isn't perfect and it isn't complete, but it gives presidents a more thorough view of their peer institutions than you credit them for having. </p>

<p>I am sure there are some very bad apples in the presidential bunch, and there are institutions that do not get the ratings they deserve. But you paint a pretty insulting picture of presidential participation in the survey. I am not sure that is fair, nor do I think it has quite the deleterious effect on the PA ranking that you're suggesting.</p>

<p>I'd like to comment on interesteddad's NFL comparison.</p>

<p>"Conceptually, the "peer assessment" is no different than sportswriters ranking the teams in the NFL during preseason. It's a bunch of old geezers just making it up"</p>

<p>It may seem that way, but there are some profound differences that I think, along the lines of hawkette's reasoning, make PA a self-reinforcing measure.</p>

<p>The old geezers in the NFL are predicting based upon past results updated with new information: draft picks, trades, retirements, injuries. While their predictions may be drastically wrong, they are quickly corrected from year to year to reflect the results of the most recent season. The task of predicting pigskin performance is far narrower, though fraught with peril, than that of assessing peer performance among universities and colleges. It is a manageable task to follow 32 teams that play a total of 267 games during a season. OTOH, in the words of hawkette, "My sense is that Presidents, Provosts & Deans of Admissions must have a very difficult time keeping current on the hundreds of schools nationwide and their thousands of departments and what those departments mean individually and collectively for each school."</p>

<p>The PA old geezers are not predicting; they're looking strictly at the past. The assumption is that the near future will be little different from the past. The PA old geezers don't have good objective results (win-loss record) to look at, unless they are looking at the same stats that USNWR is already factoring in. So basically, USNWR is circumscribing the subjective measure of a university through peer assessment and assigning it a quantitative value. Once an objective measure exists, it becomes easier to rely upon the existing measure than to go through the whole process of making a subjective assessment from square one each iteration. In this sense, weekly college basketball polls more closely mimic the behavior of peer assessment. Once initial standing is determined, everything becomes relative to that list and its updated successor. Seeing as how there are no wins and losses to move schools up or down, what events warrant a school's promotion or demotion on the list, and would those events be universally regarded as positive or negative? Was Summers' removal at Harvard a good thing or a bad thing for the operation and reputation of that institution? A case could be made for either view. (Is who heads up Harvard even consequential to its robust reputation?)</p>

<p>Universities evolve at such a rate that reputation, for better or worse, lags behind, often far behind, what is at work on a particular campus. Unlike sports, where the game is played on a weekly basis, institutions of higher education are measured a year at a time: one admissions event, one graduation event (for the most part), and one USNWR ranking. Yet there is a better way, and it too occurs on an annual basis: hundreds of thousands of high school kids researching schools to find the best fit for themselves.</p>

<p>I find 2 assumptions that some of you "vet cc'ers" are making very surprising.</p>

<p>1) every university president is filling out an individual form about 1000 separate schools.</p>

<p>If a president is filling out forms - he's most likely limited at least to his group (i.e. national universities, lacs, regional school, etc.). Even if he isn't limited to these things, president's aren't going to just making up numbers for 1000 schools. They fill out the ones they know something about.</p>

<p>2) presidents aren't well informed about other schools. I don't know how many of you have ever had a conversation with a university president - or had a Q&A session with one in a classroom setting, but I have. A former UVa president had a Q&A session with one of my classes last semester. From what I can tell, university presidents are concerned with two things: 1) getting adequate funding and 2) keeping up with "peer universities." </p>

<p>That's all a president really does: find ways to get money, and make sure that his school is at least keeping up with its "peer universities." For him to "keep up" with peer universities, he needs to know about them.</p>

<p>
[quote]
This contradicts the statements that administrators are asked to rank their peers. Are you sure anyone is asked to rank 1000 institutions? Or even 500?

[/quote]
</p>

<p>I'll look it up sometime, but I believe that LAC presidents "rank" all 400 LACs (or whatever the number is). All "national university" presidents rank all "national universities".</p>

<p>This process is not without merit, but it's simply a coarse compilation of "conventional wisdom". It's quite circular with the historic USNEWS rankings and the per student endowment with a fudge factor applied for "old money" versus "new money" and northeast location over midwest, south, or west coast.</p>

<p>I'm not knocking the university presidents (except for Billy Bulger -- did you Google him?) It's just that the assigned task would be too immense to approach systematically. How would you do it with a list of 500 schools? I know what I'd do. Grab last year's USNEWS.</p>

<p>interesteddad:</p>

<p>Actually, I think the nationals rank a bit over 200 schools, but your point is still well taken that they cannot possibly do research on each. On the other hand, I don't think it's necessary to do research when the US News rankings already sort of do that. If all presidents did all that research, then the reputation rankings would just mirror the other factors used in the rankings, which would make them fairly useless.</p>

<p>I think one thing that's missing here is a discussion of the scaling technique involved. Remember that people are not giving rankings like "4.3" or "3.7." They are giving rankings on a five-point scale with 5 being "distinguished" and 1 being "marginal." I don't think it's so very difficult to make a five point Likert scale distinction, especially if you leave blank ratings of those schools for which you have absolutely no "feel" at all.</p>

<p>I'm going to guess that the vast majority of schools get 3s, since that is the midpoint that would suggest "average." Only those schools that are known to the rater as being particularly good or particularly bad would generally be assigned higher or lower numbers.</p>

<p>So, if I am sitting down to rate, I probably give 5s to HYP, MIT, CalTech, and Stanford pretty much by default. Those schools have sterling reputations and I cannot imagine not designating them as "distinguished." After that, I need to make some judgment calls and will probably make up some sort of internal criteria for when to pick a 5 and when to pick a 4. For instance, I'd probably give Chicago, Columbia, Cornell, and Duke 5s because they are well known for having extraordinary faculty and either an instructional system or a reputation for focus on undergrads. Based on the actual assigned PA numbers, others would disagree with me.</p>

<p>In the end, I think the PAs are probably about right, within a few tenths, for most schools, if measuring sheer reputation.</p>

<p>tarhunt,
But are these PA scores even relevant to most of the college stakeholders (students, families, alumni, employers)? A very large majority of the aforementioned stakeholders definitely care enormously about post-graduate financial opportunities and/or achievements. While I concur that PA is useful among academics, my impression is that faculty are not really interested in, motivated by, or compensated based on the things that are most important to the vast majority of the students, families, alumni, and employers. I think understanding how well a school prepares its students for work in the real world is critical, but IMO the current PA does nothing to address that. Do you see this being reflected in a PA score? Should it?</p>

<p>hawkette:</p>

<p>I think you may be jumping towards some unsupported conclusions when you use words like "vast majority." If you have some studies supporting that assertion, then I will concur but, until then, I'll have to take it with a large grain of salt.</p>

<p>Everyone cares about eventual careers, but if the "vast majority" care only about that, one would expect that there would be very few liberal arts majors left in the country, and that is clearly not the case. It may well be the case with you, and that is fine. </p>

<p>My experience with employers (to whom I have consulted on organizational behavior issues for many years) is that they are looking for people who are both hard working and very, very good thinkers. Add high creativity to that, and you have the perfect employee. There are certain discrete skills that are very much in demand, as well, such as accounting and engineering skills, but most other things can be taught on the job.</p>

<p>In that sense, the PA may reflect, at least in part, what employers are looking for. If we assume (and it has to be an assumption since we lack data) that those filling out the PA have some sort of internal ranking system that includes such things as the quality of the faculty, the student body, focus on undergrad education, and the like, then it's quite possible that the PA has a strong relationship to the sort of employee that employers want.</p>