Peer Assessment: Is this a useful tool in the College Selection process?

<p>
[quote]
What exactly about the rankings bothers you?

[/quote]
</p>

<p>Oh, that is pretty simple:</p>

<p>The rankings do not bother me at all. It is the inclusion of the PA that bothers me, and the mixing of hard numbers with entirely subjective and probably false data.</p>

<p>The PA should be in a COMPLETELY separate category. This way, all the numbers that have some validity would represent 100% of the ranking.</p>

<p>USNews could present a different ranking for the peer assessment to shine in all its baseless, unscientific, and manipulated splendor.</p>

<p>This way, everyone could pick their poison. For instance, Alexandre and all the U-something die-hard fans could dismiss the "other" ranking and wave the PA banner with abandon. Heck, the PA ranking could then be based even more on elements that have little to do with UG than it is now! </p>

<p>Does that sum it up nicely? :)</p>

<p>So you have no interest in the overall faculty quality and educational program? Just tell me the average SAT and alumni donations and I'm done? The fact is that many of the U-somethings represent some of the best collections of faculty and facilities in the world. They raid Cambridge and the like, as well as the top LACs, to come to the fields of the Midwest and teach because the pay is better and the research support outstanding.</p>

<p>barrons,
While I empathize with your desire to have faculty be part of the evaluation of a school, you have so far failed to present arguments for why this consideration, as reflected by the Peer Assessment scoring system used by USNWR, is useful. I think nearly everyone would agree that the role and quality of faculty is critical to an undergraduate experience. To me, the PA just seems disconnected from trying to measure that. I appreciate that the measurements among academics of research grants, papers published, awards won, etc. have some value, but I would strongly contend that the value to the average undergrad is decidedly less. </p>

<p>As it now works, PA tells us nothing about the quality of teaching in the classroom. PA tells us nothing about how well a school's faculty prepares its students for work after college (and this is particularly true in non-technical fields). If a high schooler or a parent or a corporate recruiter wants to evaluate the quality of a faculty, do you really think that all they should look at is how many awards or grants were won by a school or by a certain department? If that is your conclusion, then I respectfully suggest that you expand your considerations to include these important stakeholders who attend the classes, pay the bills and hire the graduates. </p>

<p>I hope that you and other defenders/promoters of the value of Peer Assessment scores will step up and explain why the current PA is relevant and should be used. I'd really like to understand why you and others feel this way.</p>

<p>If it is as important as many assume to have top students to associate with, then it is obviously just as important to have top professors to teach and associate with. Just as there is no certainty that someone with a high SAT score will turn out to be an interesting person to be around, there is no certainty that someone at the top of their field will be a great person--but I'd prefer that over the alternative. It is a sure bet that a top person will know, and be known by, others at the top of the field, which matters when considering grad school and those all-important recommendations. Even in business and other professional majors, the top people are often known to people at hiring firms. How you measure that is open to debate, but the PA rankings closely track all other, more numerical attempts at measuring faculty quality/effectiveness. A JC prof might be a great teacher, but a rec from him/her will not help you get to the next level.</p>

<p>barrons,
I appreciate your comments and your comparison of judging student body quality vs. judging faculty quality, but I think you underrate the difference in the worth of the information that independent observers have available in reaching their conclusions.</p>

<p>In building an undergraduate student body, the college admissions staff considers many variables within a total application, e.g., high school transcript, standardized test scores, EC activities, recommendations, etc. Obviously, we only have access to part of this judgment--standardized test scores and high school GPA (although the latter is certainly on an irregular scale)--but these numbers are objective and at least reveal some important and defining pieces of an applicant's scholastic record. </p>

<p>By contrast, the faculty evaluations, as represented by Peer Assessments, tell us...we don't know. Academics may interpret them as useful in judging who is successful at things like winning research grants or publishing papers (and even that connection is not explicitly claimed by the people at USNWR or by those who complete the surveys). As for potential grad school applicants, I suspect your points have merit, but they are not a large percentage of the undergraduate population, so why should that consideration be the variable that stamps a school's reputation or rank (positive or negative) among the general public? </p>

<p>As a parent and recruiter/employer, I see very little value or relevance in the Peer Assessment scores unless my child wants to pursue a career in academia. As a business person, I would much prefer a well-prepared student taught by an effective JC professor over a student coming from a "prestigious" university who was taught by a "prestigious" professor whose real interest was research. And this says nothing at all about the size of the class that the "prestigious" professor teaches or how many TAs he/she or the school uses to assist in the presentation. </p>

<p>Students take the classes, (most) parents pay the bills, alumni (mostly) make the charitable contributions, and corporations (mostly) do the hiring of the students. IMO, if you want a true Faculty Assessment, students, parents, alumni and corporations should also have a hand in evaluating the faculty. Do you agree or disagree with this and why?</p>

<p>
[quote]
IMO, if you want a true Faculty Assessment, students, parents, alumni and corporations should also have a hand in evaluating the faculty. Do you agree or disagree with this and why?

[/quote]
</p>

<p>I'm not barrons, but my own reaction to the idea of parents evaluating faculty is negative. I don't know on what basis they would rate them.</p>

<p>The NSSE covers students. You can look at the job placement data for most publics online. As a student, I most often found that the prestige profs were also darn good in the classroom. They had better anecdotes and knew the subject like you know your own home. I had two profs who were so good that, although both are now dead, their lectures have been made available on CD.</p>

<p>hoedown,
On parental judgment of faculty quality, many might come to the same conclusion as you (perhaps even me). But parents are the best placed observers to judge the changes that they see in their child (intellectually, socially, emotionally, etc) over the normal four-year period of an undergraduate experience. While parents would not have direct knowledge of the professors, they will have direct knowledge of the final product (their child), from which they may be in a position to make an applied judgment about the quality of the education that the student received. Perhaps a reach, but IMO, and given their financial investment, parents' interests are too often ignored in these discussions. </p>

<p>How about thoughts on students, alumni and corporations from you or anyone?</p>

<p>and barrons, do you believe it is appropriate to consider the opinions of these other stakeholders in evaluating the faculty of a school?</p>

<p>
[quote]

But parents are the best placed observers to judge the changes that they see in their child (intellectually, socially, emotionally, etc) over the normal four-year period of an undergraduate experience.

[/quote]
</p>

<p>Well, yes, parents do observe outcomes. However, I think some parents would have a difficult time separating the effects of a good faculty from simple maturation effects. My own parents are educators, but when I look back on our interactions during and just after college, they weren't generally on a level that I think is conducive to drawing those kinds of conclusions. What they knew about faculty wasn't outcome-based but rather based on evaluative comments from me about my classroom experience, and that information was secondhand. </p>

<p>Students do evaluate faculty. I can't speak to why USNews hasn't incorporated that into a PA rating, although perhaps they would have a valid concern with undergraduates' ability to give comparative information. The same goes for alumni, although I do think alumni are in a better position to judge how their on-campus experience translated into career, personal, civic, and educational attainments after graduation. Corporate employers can also provide some feedback on outcomes that suggests something about faculty, but again I'm not privy to USNews' outlook on it. It's not clear to me that corporations are the ones who hire most graduates, but I defer to you because that's not really my area. </p>

<p>The thing is, the aspect of PA that seems to bother you most (i.e., the University of Michigan's big boost from its peer assessment, or the way the Ivy League's reputations are perpetuated) probably wouldn't change if you asked alums about the faculty at their alma mater, or asked corporate recruiters what they thought. I think the latter vote with their feet, and whether they can accurately attribute positive outcomes to faculty or not, they appear to like what the "top" schools are producing.</p>

<p>
[quote]
parents' interests are too often ignored in these discussions.

[/quote]
It's true that parents aren't asked to be at the table by colleges when it comes to evaluating faculty, or by USNews when it comes to PA. Did you mean their interests are being ignored, or their input? What is the concern here--that parents (as the check-writers) should get to influence aspects of the institution? Or that parents should get to influence what others think of the school? I want to make sure I understand your meaning.</p>

<p>I doubt many parents have a detailed knowledge of their kid's college life--or even an accurate one. What if the college encourages students to break free of their parents' control and pursue a different life than their parents did--is that good or bad? My parents set foot on my campus twice: once the first year to drop me off, and again at graduation.</p>

<p>I like some of the outcome measures, such as the numbers of grads going on to run companies, get PhDs, win Pulitzers, make Who's Who lists, and the like. While the students a school gets to work with have some impact on these outcomes, you can draw some conclusions for comparable schools.</p>

<p>My thinking on parental interests is quite simple. Many parents will spend $100,000 or even much more over a four-year period to send their offspring to college. Was it worth it? Did their kids (and the parents indirectly) get their money's worth? Did the school teach the kid anything and prepare him/her for post-graduate life, whether in the corporate world, further academic work, etc.? Now, obviously, it's hard to make a silk purse out of a sow's ear, but right now what responsibility do the faculty (or the school) have for the success of their graduates? Should they have any? You may be right that parents really can't make a very informed comment on faculty, but it is a provocative thought: it introduces the idea of more direct accountability for the college and its agents (the faculty), saying that they have a job to do and some skin in the game. </p>

<p>As for my frustrations with PA, they extend well past Michigan (although its shameless partisans seem to trumpet these results most loudly here on CC, and isn't it interesting, and quite illuminating, that none have appeared here yet to debate the merits of PA?) and Ivy League reputations. I am a big believer in objective data and transparency; PA provides neither, and that is my objection. It would not surprise me at all (at least in the short run) if new measurement methodologies were used to incorporate the views of other stakeholders and the results ended up being nearly the same. But at least that result would be based on quantifiable, verifiable, relevant data. Furthermore, such a result would be much easier for all of us (students, parents, alumni, recruiters AND faculty) to interpret, use, and apply more effectively for our own purposes in judging the quality of various colleges. </p>

<p>Along these lines, I liked xiggi's suggestion to break the PA off and do a separate ranking for it. This would give schools with high PAs the opportunity to point to their high scores while not polluting the objective data that makes up the rest of the analysis done by USNWR (although that would not solve the debates about how to weight those variables). A rough sketch of how that split could work is below.</p>
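<p>To make the proposal concrete, here is a minimal sketch of the split, in Python: drop the PA weight, re-normalize the remaining weights to 100%, and report PA as a standalone score. The category weights and subscores below are entirely hypothetical illustrations, not USNWR's actual formula.</p>

[code]
# Hypothetical category weights (NOT USNWR's actual methodology).
weights = {
    "peer_assessment": 0.25,
    "graduation_retention": 0.20,
    "faculty_resources": 0.20,
    "selectivity": 0.15,
    "financial_resources": 0.10,
    "alumni_giving": 0.10,
}

# Hypothetical 0-100 subscores for one school.
school = {
    "peer_assessment": 90,
    "graduation_retention": 85,
    "faculty_resources": 70,
    "selectivity": 80,
    "financial_resources": 65,
    "alumni_giving": 55,
}

def composite(scores, weights, exclude=()):
    """Weighted average of subscores, re-normalizing weights after exclusions."""
    kept = {k: w for k, w in weights.items() if k not in exclude}
    total = sum(kept.values())
    return sum(scores[k] * w / total for k, w in kept.items())

print(composite(school, weights))                                # blended score: 77.5
print(composite(school, weights, exclude=("peer_assessment",)))  # objective-only: ~73.3
print(school["peer_assessment"])                                 # PA reported on its own: 90
[/code]

<p>The point of the sketch is simply that, after the split, the objective factors account for 100% of the main ranking, while the PA ranking stands or falls on its own.</p>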

<p>
[quote]
But at least that result would be based on quantifiable, verifiable, relevant data.

[/quote]
</p>

<p>It seems to me that information would be no more quantifiable or verifiable. It's still asking a specific population of respondents to voluntarily and comparatively rate institutions according to their opinions and experience. </p>

<p>What it adds is opinions of more people from some different perspectives, and I can see value in that. But I don't see how it solves these other specific concerns related to PA as it currently stands.</p>

<p>You are right: these are more opinions. I am not anti-opinion; I am anti-PA as it is currently constructed. The PA is opinions, but we don't know what those opinions are, who holds them, what standards of measurement were used in forming them, etc.</p>

<p>If you were to ask how this might work as applied to these other stakeholders, I envision a process where the questions are made known and summary data is provided to the public, disclosing things like sample size, response rate, etc. Then it is up to the consumer to make his/her own judgment about their importance and/or relevance to their own situation. </p>

<p>For example, you might ask the following questions in a student survey:</p>

<p>Rate your experiences on a scale of 1-10, with 1 being very poor and 10 being excellent:
1. How would you rate the quality of the faculty at your school based on their contributions to your experience in the classroom?
2. How would you rate the quality of the faculty at your school based on their contributions to your experience in a research setting?
3. How well did the faculty prepare you for your post-graduate career choice, i.e., work or further academic study?</p>

<p>These are completely off the top of my head and obviously merit much greater thought and care in the wording, but I think you get the idea: students are being asked to quantify their perception of, and satisfaction with, the faculty interactions they experience at a school. The results are still opinions, but the data would be transparent and the ultimate judgments would be quantifiable, verifiable and (hopefully) relevant.</p>
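<p>For illustration, here is a minimal sketch, in Python, of the kind of transparent summary being proposed: per-question sample size and mean rating published alongside the response rate, so the consumer can judge the numbers for themselves. All question keys and figures are hypothetical.</p>

[code]
from statistics import mean

enrolled = 5000  # hypothetical number of students invited to take the survey

# Each response maps question -> rating on the 1-10 scale; None = skipped.
responses = [
    {"classroom": 8, "research": 6, "preparation": 7},
    {"classroom": 9, "research": None, "preparation": 8},
    {"classroom": 5, "research": 4, "preparation": 6},
]

def summarize(responses, enrolled):
    """Report response rate plus per-question sample size and mean rating."""
    questions = {q for r in responses for q in r}
    report = {"response_rate": len(responses) / enrolled}
    for q in sorted(questions):
        ratings = [r[q] for r in responses if r.get(q) is not None]
        report[q] = {"n": len(ratings), "mean": round(mean(ratings), 2)}
    return report

print(summarize(responses, enrolled))
# {'response_rate': 0.0006, 'classroom': {'n': 3, 'mean': 7.33},
#  'preparation': {'n': 3, 'mean': 7.0}, 'research': {'n': 2, 'mean': 5.0}}
[/code]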

<p>Like this?</p>

<p><a href="http://apa.wisc.edu/NSSE/2004_NSSE_report.pdf%5B/url%5D"&gt;http://apa.wisc.edu/NSSE/2004_NSSE_report.pdf&lt;/a&gt;&lt;/p>

<p>Or this?</p>

<p><a href="http://apa.wisc.edu/Surveys/2006SurveySummary_Nov27.pdf%5B/url%5D"&gt;http://apa.wisc.edu/Surveys/2006SurveySummary_Nov27.pdf&lt;/a&gt;&lt;/p>

<p>barrons is right--the kinds of things you're talking about are the very things the Pew Charitable Trusts had in mind with NSSE. The project was, in part, an answer to the obsession with USNews and other rankings, and a sense that some important things were not being measured when it comes to institutional quality. </p>

<p>I know folks at NSSE strongly encourage prospective students to inquire about NSSE results when they are looking at schools.</p>

<p>USNWR is the McDonald's of college info: fast, a bit greasy, cheap, and readily available everywhere. As for quality ...</p>

<p>For a view on how USNews views the NSSE:</p>

<p><a href="http://csue.msu.edu/conf2005/presentations/panel_2_Morse.pdf%5B/url%5D"&gt;http://csue.msu.edu/conf2005/presentations/panel_2_Morse.pdf&lt;/a&gt;&lt;/p>

<p>barrons,
Thanks for those links. That is exactly the kind of data that I would like to see if I were an intrepid high schooler who really wanted to know what college students thought about various topics, including their perception of the faculty. However, as xiggi's link explains, the difficulty lies in accumulating the data and getting the cooperation of the 1,362 schools that USNWR surveys. Only 8.8% voluntary participation (roughly 120 schools) makes for a pretty useless sample size. Are schools just scared to collect this data and make it publicly available because they're not sure of it, or afraid of what it might show?</p>

<p>standrews,
I like your analogy on USNWR/McDonald's and see it the same way. But, to be a little kinder to USNWR, the product is pretty comprehensive, easy to use, and delivers readily accessible objective data that can help with some initial college screening. Over the grumblings of the academic elites, such publication will lead to greater availability of this kind of data among high school students and families and will make higher education more competitive and more accountable. </p>

<p>But do you have any better ideas, particularly as they relate to Peer Assessment scoring?</p>

<p>hawkette,</p>

<p>"But do you have any better ideas, particularly as they relate to Peer Assessment scoring?" Yes, I have a few ideas.</p>

<p>1) Get a broader range of opinions. But that would be expensive, likely too expensive for USNWR to conduct on a yearly basis with their existing scope. USNWR is a for-profit news operation, not a research institution, so I don't expect them to undertake projects that aren't cost-justified. They do widen the circle of opinion for their law school PA (only 180 schools are involved) and include recently tenured faculty and opinion outside of academia. However, there seems to be even more criticism of USNWR's law school rankings than of its ugrad rankings.</p>

<p>2) The annual nature of the PA survey baffles me. As this thread has pointed out, the institutions change quite slowly and the results vary imperceptibly from year to year. USNWR would get better PA data by surveying once every 3 years and broadening the opinion solicited. The people they ask for input might take the survey more seriously knowing that it comes every 3 years and is not just a perfunctory annual task to be handed off to an assistant. The other data--admissions, annual giving, etc.--could still be updated annually. Ah, but would the 3-year approach hurt sales? Probably, which is why the annual survey is undoubtedly preferred.</p>

<p>3) It would also help, as suggested in this thread, to separate the PA findings from the objective data. Consumer Reports takes this approach by separating its ratings for various models of refrigerators from the manufacturer reliability ratings it gathers by surveying its subscribers. The reader can get the product ratings and then factor in reliability however they want; reliability isn't baked into an overall rating.</p>

<p>It is far easier to criticize USNWR's PA than to come up with an acceptable alternative. Nevertheless, I'll blast away, having given some token suggestions for improvement above. Xiggi's link on "how USNews views the NSSE" shows the disingenuousness of USNWR's PA. Page 6, which provides insight into USNWR's threshold for including "something" in their rankings, reveals that their PA rankings fail to meet their own standard:</p>

<p>"Public data availability. It should be mandatory that results be reported and
available publicly; not on a voluntary basis."
USNWR doesn't make their PA data public. They make their murky findings public, but not the data itself.</p>

<p>The bottom line for me is that just because PA is all USNWR offers doesn't mean it has value. I don't think it does, and I will stand on that assessment until they make their data public. As such, improvements to USNWR's PA may not make it useful. Putting new tires on a car with a blown engine adds some value, but the car is still unsuitable for the task for which it is intended.</p>

<p>
[quote]
Only 8.8% voluntary participation (roughly 120 schools) makes for a pretty useless sample size. Are schools just scared to collect this data and make it publicly available because they're not sure of it, or afraid of what it might show?

[/quote]
</p>

<p>Well, how much do you know about NSSE? I think that's an overly cynical view.</p>

<p>First of all, NSSE itself encourages colleges to participate on a 3-year cycle. The total is now closer to 600 schools participating each year, but each year it's a different set of schools. So the math USNews did is true for an individual year, but in any 3-year period the total number of schools is closer to 3 times that.</p>

<p>I don't think "fear" is the primary motivating factor behind not releasing the data to US News. Colleges pay to participate in NSSE and its primary value is for them to use it to make internal improvements to programs and to the student experience, to know where they're doing well and where there is room for improvement. What's the incentive to turn this data over to USNews?</p>

<p>Another concern with the data--and I think a disincentive to just turn it over to be published in someone else's rating system--is the problem of using the data out of context. Some of the concepts involved in "engaged" learning (as NSSE defines it) are good measures of social science and humanities teaching and learning, but aren't very good fits for the hard sciences and engineering. In some of those fields, there simply aren't going to be a lot of papers written or books read, and there is going to be a lot of memorization--all things on which NSSE's scoring takes a somewhat normative stance. So a school like RPI, for example, is going to have an academic benchmark score that could be poorly interpreted. NSSE made some recent changes that help with this issue, but it's still the case that some results need context to be understood, and I don't think institutions have any reason to believe USNews will provide it. </p>

<p>As I noted before, NSSE believes that study results have an informative role to play in helping prospective students. They have a small booklet they hand out to students encouraging them to ask these kinds of questions, and a number of colleges do report their benchmark scores. Some report a lot more.</p>