Peer Assessment Rank

<p>
[quote]
The thing is, most of the factors included in the ranking aren't directly related to the quality of education at the respective colleges anyway. At least peer assessment, while subjective, is directly related to the purpose of the rankings. I'd argue that student selectivity is a valid predictor of academic quality (although I'd quibble with "acceptance rate.") But "Alumni giving?" "Class sizes?" Just how directly related to academic quality are those things? And are things like graduation rate and retention even remotely valid distinctions between top 20, or even top 50 schools? "Financial resources" includes money spent on research, which may or may not be relevant to undergraduate education, etc. A lot of the factors included are present based on the assumption that they will - indirectly - affect overall academic quality. But peer assessment is a direct opinion of the success of the school in actually . Imperfect, but valuable in its own right.

[/quote]
</p>

<p>The benefit of USNews has little to do with the final rankings the magazine provides. Almost everyone with a modicum of familiarity with the methodology knows that the USNews weights are prone to be, rightfully, criticized. That is why people buy the online version that allows one to sort the rankings according to his or her own set of criteria, hopefully based on a correct understanding of those criteria. </p>

<p>However, when it comes to the final ranking, it is absolutely uncalled for to have the peer assessment dwarf ALL other statistics, introducing abject manipulation and cronyism into an otherwise data-driven survey. </p>

<p>I have repeated this before, but the ONLY way to infuse a bit of credibility into the Peer Assessment would be to make it COMPLETELY viewable by the public. Let the world see who is responsible for UMich and UC-Berkeley scores, and let everyone see how Wellesley and Mount Holyoke evaluate their "peers." And for what it is worth, a conclusive statement by the school official that HE or SHE has completed the survey and stands behind the numbers would not hurt for accountability. </p>

<p>Since there is a movement by the schools to steal USNews' thunder, one could hope the colleges will seize the opportunity to (re)gain the trust of students by erring on the side of full and verifiable disclosure. While the public at large likes the People Magazine "Most [fill the blank]" type of reporting and checks the rankings, students are, or should be, more interested in the specific details.</p>

<p>
[quote]
I always maintained that the PA and the statistics measure two very different things. The former measures reputation and pure academic strengths; the latter answers questions more pertinent to individual preference.

[/quote]
</p>

<p>That is why the two should not be merged into one single measurement, especially since the qualifications and integrity of the "measurers" of "reputation and pure academic strengths" are not clearly established ... to say the least.</p>

<p>If you don't know by now why UCB and UM do pretty well in PA, there is no hope for you actually understanding how higher education works.
Virtually everyone actually working in higher education holds both of these schools in high regard as do many top employers.
If that puts your nose out of joint--too bad.</p>

<p>Xiggi, a handful of individuals may be faulted for lack of integrity, ignorance and insignificance. I am not sure we can come to such a conclusion with the hundreds, if not thousands of "measurers" who make up the Peer Assessment Score. </p>

<p>"I have repeated this before, but the ONLY way to infuse a bit of credibility to the Peer Assessment would be to make it COMPLETELY viewable by the public. Let the world see who is responsible for UMich and UC-Berkeley scores"</p>

<p>Pray tell, what's wrong with Michigan and Cal? Do you feel they are unworthy of their PA? Care to elaborate as to why you feel Michigan and Cal should be rated differently? Are you insinuating that somehow, magically, Michigan and Cal are unfairly favored by the majority of Peer Assessment voters?</p>

<p>
[quote]
I like the idea of having a separate list. Sort of like saying: "here are the numbers" (even if many of the numbers are unnecessary) and "here's the PA."

[/quote]
</p>

<p>Exactly. If the PA is such an important number, make it available to the public but exclude it from the formula or greatly nullify its importance.</p>

<p>
[quote]
Xiggi, a handful of individuals may be faulted for lack of integrity, ignorance and insignificance. I am not sure we can come to such a conclusion with the hundreds, if not thousands of "measurers" who make up the Peer Assessment Score. </p>

<p>"I have repeated this before, but the ONLY way to infuse a bit of credibility to the Peer Assessment would be to make it COMPLETELY viewable by the public. Let the world see who is responsible for UMich and UC-Berkeley scores"</p>

<p>Pray tell, what's wrong with Michigan and Cal? Do you feel they are unworthy of their PA? Care to elaborate as to why you feel Michigan and Cal should be rated differently?

[/quote]
</p>

<p>Alexandre, inasmuch as I WILL answer about the PA for Cal and Michigan, let me point out that my point was NOT to belittle those schools in an indirect way. The issue relates to the possible elimination of the anonymous and undisclosed characteristics of the survey that currently permit manipulation. As for a handful of rogues among THOUSANDS of measurers, would it not be worth checking the methodology behind the definition of "peers"? After all, I doubt that somebody at Cal is asked to evaluate Lane College or Hillsdale. A few voices are not lost in a sea of thousands! Again, let everyone see how those PAs are answered and by ... whom!</p>

<p>Now, about the PA, here's my answer, which does not require much more elaboration than reproducing your list partially:</p>

<p>
[quote]

  1. Massachusetts Institute of Technology 4.9
  2. California Institute of Technology 4.7
  3. University of California-Berkeley 4.7
  4. University of Michigan-Ann Arbor 4.5
  5. Rice University 4.1
  6. Washington University-St Louis 4.1

[/quote]
</p>

<p>Based on the fact that the PA is supposed to address the undergraduate body EXCLUSIVELY and NOT compare graduate schools, could you please tell me why Cal is lodged squarely between MIT and Michigan, but WELL ahead of, say, Rice and WUSTL as far as the quality of undergraduate education goes?</p>

<p>PS According to your quote, "The peer assessment survey allows the top academics we consult—presidents, provosts, and deans of admissions—to account for intangibles such as faculty dedication to teaching." Yes, teaching, not research or sabbaticals!</p>

<p>Whatever the PA is "worth" - I don't think there is any doubt that it is more about a school's graduate research ranking than anything else.</p>

<p>Research is the lifeblood of higher education. Without new discoveries there is no new knowledge, and you might as well be using textbooks from the 1900s. Dedication to research goes hand in glove with up-to-date teaching of the latest body of knowledge, which is often out of date by the time textbooks are published. </p>

<p>The actual definition of PA does not mention "dedication to teaching":</p>

<p>"Peer Assessment. How the school is regarded by administrators at peer institutions. A school's peer assessment score is determined by surveying the presidents, provosts, and deans of admissions (or equivalent positions) at institutions in the school's category. Each individual was asked to rate peer schools' undergraduate academic programs on a scale from 1 (marginal) to 5 (distinguished). Those individuals who did not know enough about a school to evaluate it fairly were asked to mark "don't know." A school's score is the average score of all the respondents who rated it. Responses of "don't know" counted neither for nor against a school. The survey was conducted in the spring of 2006, and about 58 percent of those surveyed responded."</p>

<p>
[quote]
Research is the lifeblood of higher education. Without new discoveries there is no new knowledge, and you might as well be using textbooks from the 1900s. Dedication to research goes hand in glove with up-to-date teaching of the latest body of knowledge, which is often out of date by the time textbooks are published.

[/quote]
</p>

<p>True - but that "cutting edge" usually doesn't trickle down to undergrad students until later on.</p>

<p>Besides, if that were the case, we wouldn't be considering LACs like Amherst as highly as we do.</p>

<p>Barrons, are you accusing Alexandre of doctoring his quotations, or are you intimating that I should not rely on your PA-defending compadre and should check his own sources?</p>


<p>
[quote]
I actually agree with KK on this one. I always maintained that the PA and the statistics measure two very different things. The former measures reputation and pure academic strengths; the latter answers questions more pertinent to individual preference.

[/quote]
</p>

<p>this is one of those debates where no one is really going to listen to what anyone else says, but I strongly, strongly disagree with the assertion that peer assessment measures "pure academic strengths"</p>

<p>peer assessment is a manipulable, arbitrary conferral of generalized impression based on the vagaries of hearsay and speculation within the industry</p>

<p>and, as already noted, has everything to do with research output and prominent graduate faculty, factors which would be hard-pressed to have less to do with undergraduate experience.</p>

<p>Oh well, this is a link for Barrons:</p>

<p><a href="http://www.usnews.com/usnews/edu/college/rankings/about/07rank_brief.php">http://www.usnews.com/usnews/edu/college/rankings/about/07rank_brief.php</a></p>

<p>
[quote]
How We Do the Rankings
By Robert J. Morse and Samuel Flanigan</p>

<p>Just how can rankings help you identify colleges and universities that are right for you? Certainly, the college experience consists of a host of intangibles that cannot be reduced to mere numbers. But for families, the U.S. News rankings provide an excellent starting point because they offer the opportunity to judge the relative quality of institutions based on widely accepted indicators of excellence. You can compare different schools' numbers at a glance, and looking at unfamiliar schools that are ranked near schools you know can be a good way to broaden your search.</p>

<p>The U.S. News rankings system rests on two pillars. It relies on quantitative measures that education experts have proposed as reliable indicators of academic quality, and it's based on our nonpartisan view of what matters in education. First, schools are categorized by mission and, in some cases, by region. </p>

<p>The national universities offer a full range of undergraduate majors, plus master's and Ph.D. programs, and emphasize faculty research. The liberal arts colleges focus almost exclusively on undergraduate education. </p>

<p>Next, we gather data from each college for up to 15 indicators of academic excellence. Each factor is assigned a weight that reflects our judgment about how much a measure matters. Finally, the colleges in each category are ranked against their peers, based on their composite weighted score. </p>

<p>Following are detailed descriptions of the indicators used to measure academic quality:</p>

<p>Peer assessment (weighting: 25 percent). The U.S. News ranking formula gives greatest weight to the opinions of those in a position to judge a school's undergraduate academic excellence. The peer assessment survey allows the top academics we consult—presidents, provosts, and deans of admissions—to account for intangibles such as faculty dedication to teaching. Each individual is asked to rate peer schools' academic programs on a scale from 1 (marginal) to 5 (distinguished). Those who don't know enough about a school to evaluate it fairly are asked to mark "don't know." Synovate, an opinion-research firm based near Chicago, collected the data; of the 4,089 people who were sent questionnaires, 58 percent responded.</p>

<p>Retention (20 percent in national universities and liberal arts colleges and 25 percent in master's and comprehensive colleges). The higher the proportion of freshmen who return to campus the following year and eventually graduate, the better a school is apt to be at offering the classes and services students need to succeed. This measure has two components: six-year graduation rate (80 percent of the retention score) and freshman retention rate (20 percent). The graduation rate indicates the average proportion of a graduating class who earn a degree in six years or less; we consider freshman classes that started from 1996 through 1999. Freshman retention indicates the average proportion of freshmen entering from 2001 through 2004 who returned the following fall.</p>

<p>Faculty resources (20 percent). Research shows that the more satisfied students are about their contact with professors, the more they will learn and the more likely it is they will graduate. We use six factors from the 2005-06 academic year to assess a school's commitment to instruction. Class size has two components: the proportion of classes with fewer than 20 students (30 percent of the faculty resources score) and the proportion with 50 or more students (10 percent of the score). </p>

<p>In our model, a school benefits more for having a large proportion of classes with fewer than 20 students and a small proportion of large classes. Faculty salary (35 percent) is the average faculty pay, plus benefits, during the 2004-05 and 2005-06 academic years, adjusted for regional differences in the cost of living (using indexes from the consulting firm Runzheimer International). We also weigh the proportion of professors with the highest degree in their fields (15 percent), the student-faculty ratio (5 percent), and the proportion of faculty who are full time (5 percent).</p>

<p>Student selectivity (15 percent). A school's academic atmosphere is determined in part by the abilities and ambitions of the student body. We therefore factor in test scores of enrollees on the SAT or ACT tests (50 percent of the selectivity score); the proportion of enrolled freshmen (for all national universities and liberal arts colleges) who graduated in the top 10 percent of their high school classes and (for institutions in the universities-master's and comprehensive colleges-bachelor's categories) the top 25 percent (40 percent); and the acceptance rate, or the ratio of students admitted to applicants (10 percent). The data are for the fall 2005 entering class.</p>

<p>Financial resources (10 percent). Generous per-student spending indicates that a college can offer a wide variety of programs and services. U.S. News measures the average spending per student on instruction, research, student services, and related educational expenditures in the 2004 and 2005 fiscal years. </p>

<p>Graduation rate performance (5 percent; only in national universities and liberal arts colleges). This indicator of "added value" shows the effect of the college's programs and policies on the graduation rate of students after controlling for spending and student aptitude. We measure the difference between a school's six-year graduation rate for the class that entered in 1999 and the rate we predicted for the class. If the actual graduation rate is higher than the predicted rate, the college is enhancing achievement.</p>

<p>Alumni giving rate (5 percent). The average percentage of alumni who gave to their school during 2003-04 and 2004-05 is an indirect measure of student satisfaction. To arrive at a school's rank, we first calculated the weighted sum of its scores. The final scores were rescaled: The top school in each category was assigned a value of 100, and the other schools' weighted scores were calculated as a proportion of that top score. Final scores for each ranked school were rounded to the nearest whole number and ranked in descending order. Schools that receive the same rank are listed in alphabetical order. Our rankings of accredited undergraduate business programs and engineering programs are based exclusively on peer assessment data gathered from the programs' deans and senior faculty members.</p>

<p>How can you best use our rankings?</p>

<p>Mining the data for the information you need can definitely inform your thinking. The hard work is up to you.

[/quote]
:)</p>
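<p>For readers who want to see the mechanics, the scoring procedure the quoted methodology describes (a weighted sum of standardized indicators, rescaled so the top school in each category scores 100) can be sketched in a few lines. The category weights below are the published ones; the per-school indicator values are invented for illustration:</p>

```python
# Sketch of the U.S. News scoring procedure described in the quote:
# weighted sum of (already standardized) indicator scores, then rescale
# so the top school gets 100. School values are made up.

WEIGHTS = {
    "peer_assessment": 0.25,
    "retention": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "graduation_performance": 0.05,
    "alumni_giving": 0.05,
}

def composite(indicator_scores):
    """Weighted sum of a school's standardized indicator scores (0-1 each)."""
    return sum(WEIGHTS[k] * indicator_scores[k] for k in WEIGHTS)

def rescale(raw_scores):
    """Top school becomes 100; others are a rounded proportion of it."""
    top = max(raw_scores.values())
    return {school: round(100 * s / top) for school, s in raw_scores.items()}

schools = {
    "School A": {k: 0.9 for k in WEIGHTS},
    "School B": {k: 0.7 for k in WEIGHTS},
}
raw = {name: composite(vals) for name, vals in schools.items()}
print(rescale(raw))  # School A -> 100, School B -> 78
```

<p>Note how the rescaling step makes every published number relative: a school's score can move simply because the top school's inputs moved, even if its own did not.</p>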

<p>If you really think that research is the key to education and should determine how a college is ranked/graded, you might want to check with some non-academics to see if this view stands up. </p>

<p>IMO, academics often display a highly insulated (and inflated) view of the role that research plays in the educational experience and the world beyond. The reality is more likely that college research for a few technical areas can be quite important but, broadly speaking, its importance to the large majority of students is close to zero. And for alumni and employers, particularly in non-technical fields, the usefulness is pretty darn low. </p>

<p>As for the PA measure itself, who the heck knows what it measures. It means different things to different people, including those doing the grading, and almost certainly contains many uninformed, if not even malicious, opinions. PA is, without a doubt, the most corrupt and deleterious aspect of the USNWR survey.</p>

<p>I love how 5 is "distinguished", and yet no school has such a score. I really want to know what's on those surveys.</p>

<p>I guess different links for different folks.</p>

<p><a href="http://www.usnews.com/usnews/edu/college/rankings/about/weight_brief.php">http://www.usnews.com/usnews/edu/college/rankings/about/weight_brief.php</a></p>

<p>Undergraduate ranking criteria and weights
The U.S. News college rankings, published on usnews.com Aug. 18, 2006, are based on several key measures of quality, described below. U.S. News uses these measures to capture the various dimensions of academic quality at each college. These measures fall into seven broad categories: peer assessment; graduation and retention rate; faculty resources (for example, class size); student selectivity (for example, average admission test scores of incoming students); financial resources; alumni giving; and, only for national universities and liberal arts colleges, graduation rate performance. The indicators include both input measures, which reflect the quality of students, faculty, and other resources used in education, and outcome measures, which capture the results of the education an individual receives.</p>

<p>Scores for each measure are weighted as shown to arrive at a final overall score. For a more detailed explanation of the ranking indicators and methods, please read our methodology and our definitions of ranking criteria, below.</p>

<table border="1" cellpadding="4">
<tr>
<th rowspan="2">Ranking category</th>
<th colspan="2">Category weight</th>
<th rowspan="2">Subfactor</th>
<th colspan="2">Subfactor weight</th>
</tr>
<tr>
<th>National Universities and Liberal Arts Colleges</th>
<th>Universities (Master's) and Comprehensive Colleges (Bachelor's)</th>
<th>National Universities and Liberal Arts Colleges</th>
<th>Universities (Master's) and Comprehensive Colleges (Bachelor's)</th>
</tr>
<tr><td>Peer assessment</td><td>25%</td><td>25%</td><td>Peer assessment survey</td><td>100%</td><td>100%</td></tr>
<tr><td rowspan="4">Student selectivity (Fall 2005 entering class)</td><td rowspan="4">15%</td><td rowspan="4">15%</td><td>Acceptance rate</td><td>10%</td><td>10%</td></tr>
<tr><td>High school class standing—top 10%</td><td>40%</td><td>0%</td></tr>
<tr><td>High school class standing—top 25%</td><td>0%</td><td>40%</td></tr>
<tr><td>SAT/ACT scores</td><td>50%</td><td>50%</td></tr>
<tr><td rowspan="6">Faculty resources (2005)</td><td rowspan="6">20%</td><td rowspan="6">20%</td><td>Faculty compensation</td><td>35%</td><td>35%</td></tr>
<tr><td>Percent faculty with top terminal degree</td><td>15%</td><td>15%</td></tr>
<tr><td>Percent full-time faculty</td><td>5%</td><td>5%</td></tr>
<tr><td>Student/faculty ratio</td><td>5%</td><td>5%</td></tr>
<tr><td>Class size, 1-19 students</td><td>30%</td><td>30%</td></tr>
<tr><td>Class size, 50+ students</td><td>10%</td><td>10%</td></tr>
<tr><td rowspan="2">Graduation and retention rate</td><td rowspan="2">20%</td><td rowspan="2">25%</td><td>Average graduation rate</td><td>80%</td><td>80%</td></tr>
<tr><td>Average freshman retention rate</td><td>20%</td><td>20%</td></tr>
<tr><td>Financial resources</td><td>10%</td><td>10%</td><td>Average educational expenditures per student</td><td>100%</td><td>100%</td></tr>
<tr><td>Alumni giving</td><td>5%</td><td>5%</td><td>Average alumni giving rate</td><td>100%</td><td>100%</td></tr>
<tr><td>Graduation rate performance</td><td>5%</td><td>0%</td><td>Graduation rate performance</td><td>100%</td><td>0%</td></tr>
<tr><td>Total</td><td>100%</td><td>100%</td><td>—</td><td>100%</td><td>100%</td></tr>
</table>
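<p>Because the category and subfactor weights are nested, a subfactor's effective share of the overall score is the product of the two. A small illustrative sketch (only two categories shown, with the national universities and liberal arts colleges column from the table above):</p>

```python
# Effective overall weight of each subfactor = category weight x
# subfactor weight within that category. Only a subset of the table
# is reproduced here for illustration.

categories = {  # name: (category weight, {subfactor: weight within category})
    "Peer assessment": (0.25, {"Peer assessment survey": 1.00}),
    "Student selectivity": (0.15, {
        "Acceptance rate": 0.10,
        "High school class standing, top 10%": 0.40,
        "SAT/ACT scores": 0.50,
    }),
}

effective = {
    sub: cat_w * sub_w
    for cat_w, subs in categories.values()
    for sub, sub_w in subs.items()
}
print(round(effective["SAT/ACT scores"], 3))  # 0.075
```

<p>So SAT/ACT scores, for example, carry 15% × 50% = 7.5% of the total, while the peer assessment survey alone carries the full 25% of its category.</p>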

<p>Weights for national universities and liberal arts colleges</p>

<p>This graph shows the relative weights assigned to each category of indicator for national universities and liberal arts colleges.</p>

<p>Weights for universities-master's and comprehensive colleges-bachelor's</p>

<p>This chart shows the weights assigned to factors used to rank the universities-master's and comprehensive colleges-bachelor's. Because graduation rate performance is not used to rank these groups, the graduation and retention rate variables receive a higher weight.</p>

<p>Definitions of Ranking Criteria</p>

<p>Acceptance rate. The ratio of the number of students admitted to the number of applicants for the fall 2005 admission. The acceptance rate is equal to the total number of students admitted divided by the total number of applicants. Both the applications and acceptances only counted first-time, first-year students.</p>

<p>Alumni giving. The average percent of undergraduate alumni of record who donated money to the college or university. Alumni of record are former full- or part-time students that received an undergraduate degree and for whom the college or university has a current address. Graduates who earned only a graduate degree are excluded. Undergraduate alumni donors are alumni with undergraduate degrees from an institution that made one or more gifts for either current operations or capital expenses during the specified academic year. The alumni giving rate is calculated by dividing the number of appropriate donors during a given academic year by the number of appropriate alumni of record for that year. These rates were averaged for the 2004 and 2005 academic years. The percent of alumni giving serves as a proxy for how satisfied students are with the school.</p>
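<p>The giving-rate arithmetic defined above is simple enough to sketch; the donor and alumni counts here are invented for illustration:</p>

```python
# Sketch of the alumni giving rate as defined above: donors divided by
# alumni of record, averaged over the two academic years. All counts
# are made up.

def giving_rate(donors, alumni_of_record):
    return donors / alumni_of_record

years = [
    {"donors": 12_000, "alumni_of_record": 60_000},  # 2004: 20%
    {"donors": 13_200, "alumni_of_record": 60_000},  # 2005: 22%
]
avg_rate = sum(giving_rate(y["donors"], y["alumni_of_record"])
               for y in years) / len(years)
print(f"{avg_rate:.1%}")  # 21.0%
```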

<p>Average freshman retention rate. The percentage of first-year freshmen who returned to the same college or university the following fall, averaged over the first-year classes entering between 2001 and 2004.</p>

<p>Average graduation rate. The percentage of freshmen who graduated within a six-year period, averaged over the classes entering between 1996 and 1999. (Note: This excludes students who transferred into the school.)</p>

<p>Class size, 1-19 students. The percentage of undergraduate classes, excluding class subsections, with fewer than 20 students enrolled during the fall of 2005.</p>

<p>Class size, 50+ students. The percentage of undergraduate classes, excluding class subsections, with 50 students or more enrolled during the fall of 2005.</p>

<p>Expenditures per student. Financial resources are measured by the average spending per full-time equivalent students on instruction, research, public service, academic support, student services, institutional support, and operations and maintenance (for public institutions only) during the 2004 and 2005 fiscal years. The number of full-time equivalent students is equal to the number of full-time students plus one-third of the number of part-time students. (Note: This includes both undergraduate and graduate students.) We first scaled the public service and research values by the percentage of full-time equivalent undergraduate students attending the school. Next, we added in total instruction, academic support, student services, institutional support, and operations and maintenance (for public institutions only) and then divided by the number of full-time equivalent students. After calculating this value, we applied a logarithmic transformation to the spending per full-time equivalent student, prior to standardizing the value. This calculation process was done for all schools.</p>
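<p>A hedged sketch of the spending calculation described above, with invented dollar figures and a simplified undergraduate share; the final standardization step across schools is omitted:</p>

```python
import math

# Sketch of spending per FTE student as defined above: research and
# public service scaled by the undergraduate share of FTE students,
# other expense lines added in full, divided by FTE students, then
# log-transformed before standardizing. All figures are invented.

def fte(full_time, part_time):
    """FTE count: full-time students plus one-third of part-time."""
    return full_time + part_time / 3

expenses = {
    "instruction": 200e6, "academic_support": 40e6,
    "student_services": 30e6, "institutional_support": 50e6,
    "research": 300e6, "public_service": 20e6,
}
students_fte = fte(full_time=20_000, part_time=3_000)  # 20,000 + 1,000
undergrad_share = 0.70  # assumed share of FTE students who are undergrads

scaled = (expenses["research"] + expenses["public_service"]) * undergrad_share
total = scaled + sum(expenses[k] for k in
                     ("instruction", "academic_support",
                      "student_services", "institutional_support"))
per_student = total / students_fte
value_for_ranking = math.log(per_student)  # log transform before standardizing
```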

<p>Faculty compensation. The average faculty pay and benefits are adjusted for regional differences in cost of living. This includes full-time assistant, associate, and full professors. The values are taken for the 2004-2005 and 2005-2006 academic years and then averaged. (The regional differences in cost of living are taken from indexes from Runzheimer International.)</p>

<p>Faculty with Ph.D.'s or top terminal degree. The percentage of full-time faculty members with a doctorate or the highest degree possible in their field or specialty during the 2005-2006 academic year.</p>

<p>Graduation rate performance. The difference between the actual six-year graduation rate for students entering in the fall of 1999 and the predicted graduation rate. The predicted graduation rate is based upon characteristics of the entering class, as well as characteristics of the institution. If a school's actual graduation rate is higher than the predicted rate, then the school is enhancing achievement. This measure is only included in the rankings for schools in the National Universities and Liberal Arts Colleges categories.</p>
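<p>The "added value" comparison can be sketched as follows. Note that the prediction function here is a hypothetical toy stand-in; the actual U.S. News model behind the predicted rate is not given in the text above:</p>

```python
# Sketch of graduation rate performance as defined above: actual
# six-year rate minus a predicted rate. predicted_grad_rate below is a
# TOY formula standing in for the unpublished model based on student
# aptitude and spending.

def predicted_grad_rate(sat_percentile, spending_index):
    # Hypothetical stand-in: baseline plus aptitude and spending effects.
    return 0.35 + 0.4 * sat_percentile + 0.15 * spending_index

def performance(actual_rate, sat_percentile, spending_index):
    """Positive means the school graduates more students than predicted."""
    return actual_rate - predicted_grad_rate(sat_percentile, spending_index)

# A school graduating 90% of a class predicted at about 83% is
# "enhancing achievement" under this definition.
delta = performance(actual_rate=0.90, sat_percentile=0.95, spending_index=0.65)
print(f"{delta:+.1%}")
```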

<p>High school class standing. The proportion of students enrolled for the fall 2005 academic year who graduated in the top 10 percent (for national universities and liberal arts colleges) or 25 percent (master's and comprehensive colleges) of their high school class. </p>

<p>Peer Assessment. How the school is regarded by administrators at peer institutions. A school's peer assessment score is determined by surveying the presidents, provosts, and deans of admissions (or equivalent positions) at institutions in the school's category. Each individual was asked to rate peer schools' undergraduate academic programs on a scale from 1 (marginal) to 5 (distinguished). Those individuals who did not know enough about a school to evaluate it fairly were asked to mark "don't know." A school's score is the average score of all the respondents who rated it. Responses of "don't know" counted neither for nor against a school. The survey was conducted in the spring of 2006, and about 58 percent of those surveyed responded. </p>
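<p>The averaging rule quoted above (ratings run 1 to 5, "don't know" responses are dropped, and the score is the mean of the rest) can be sketched as follows; the respondent data is invented:</p>

```python
# Sketch of the peer assessment averaging rule: "don't know" counts
# neither for nor against a school, so only numeric ratings are averaged.

def peer_score(responses):
    """Average of numeric ratings, ignoring 'don't know' responses."""
    rated = [r for r in responses if r != "don't know"]
    return sum(rated) / len(rated)

responses = [5, 4, "don't know", 4, 5, "don't know", 3]
print(peer_score(responses))  # 4.2
```

<p>One consequence worth noticing: a small number of informed raters and a large number of uninformed ones who still pick a number are weighted identically, which is part of what the thread is arguing about.</p>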

<p>Proportion of full-time faculty. The proportion of the 2005-2006 full-time equivalent faculty that is full time. The number of full-time equivalent faculty is equal to the number of full-time faculty plus one third of the number of part-time faculty. (Note: We do not include the following: faculty in preclinical and clinical medicine; administrative officers with titles such as dean of students, librarian, registrar, or coach, even though they may devote part of their time to classroom instruction and may have faculty status; undergraduate or graduate students who are teaching assistants or teaching fellows; faculty on leave without pay; or replacement faculty for those faculty members on sabbatical leave.) To calculate this percentage, the total full-time faculty is divided by the full-time equivalent faculty.</p>

<p>SAT/ACT scores. Average test scores on the SAT or ACT of all enrolled first-time, first-year students entering in 2005. Before being used as a ranking indicator, the scores are converted to the percentile of the national distribution corresponding to that school's scores.</p>

<p>Student/faculty ratio. The ratio of full-time-equivalent students to full-time-equivalent faculty during the fall of 2005, as reported by the school. Note: This excludes faculty and students of law, medical, business, and other stand-alone graduate or professional programs in which faculty teach virtually only graduate-level students. Faculty numbers also exclude graduate or undergraduate students who are teaching assistants.</p>

<p>
[quote]
I guess different links for different folks.

[/quote]
</p>

<p>Yep, especially when the notion of "faculty dedication to teaching" does not further your typical line of reasoning about the superiority of research over ... educating!</p>

<p>Anyway, shouldn't recognizing an incomplete source or failing to find the most appropriate one be part of good ... research?</p>

<p>Oh well, to each his own.</p>

<p>Robert Morse (in his Morse Code) also stated that "In terms of the peer assessment survey, we at U.S. News firmly believe the survey has significant value because it allows us to measure the "intangibles" of a college that we can't measure through statistical data. Plus, the reputation of a school can help get that all-important first job and plays a key part in which grad school someone will be able to get into. The peer survey is by nature subjective, but the technique of asking industry leaders to rate their competitors is a commonly accepted practice. The results from the peer survey also can act to level the playing field between private and public colleges."</p>

<p>What is interesting is that "Plus, the reputation of a school can help get that all-important first job and plays a key part in which grad school someone will be able to get into." is a very different element from what USNews actually measures. Discussing, let alone measuring, the future value or marketability of the degrees earned at the undergraduate level is conspicuously absent from the Best Colleges publication. </p>

<p>Now, is there any doubt about what "The results from the peer survey also can act to level the playing field between private and public colleges" really means, and WHY it represents 25% of the final numbers? In other words, when empirical data fails to yield the EXPECTED results, let's make sure we keep a category that will get us there! </p>

<p>Mr. Morse might come to bitterly regret his decision to use the internet to explain the positions of USNews on a platform where people might read and actually spend the time deciphering the Codespeak!</p>

<p>I do not think my source was incomplete at all as it was actually much more detailed than yours. The authors of yours may have been writing somewhat off the cuff. Mine was the more definitive source IMHO.</p>

<p>Everyone protests the PA, but nobody really has any evidence that it is WRONG. Just whining about how it does not measure effectiveness. Well, when somebody figures out how to measure effectiveness--holding all other variables out--that will be an accomplishment. </p>

<p>I don't think research effectiveness automatically means poor teaching. I happen to think it improves teaching. Do you have any factual proof to the contrary or just those unfounded opinions that top researchers cannot be good teachers?</p>

<p>Sure, Barrons!</p>

<p>Absolutely. You bet on the wrong horse.</p>