<p>Now that the 2008 USNWR have been released, I thought it might be useful to debate the value-added of the data provided. With credit to simba who came up with this idea, please provide your opinions of the following:</p>
<ol>
<li><p>Peer Assessment Score (25% of total score). According to USNWR, this number is supposed to “allow the top academics we consult (presidents, provosts, and deans of admissions) to account for intangibles such as faculty dedication to teaching. Each individual is asked to rate peer schools’ academic programs.”</p></li>
<li><p>Graduation & Retention rank (20% of the total score)
a. % of Freshmen Retained (4%)
b. % of students that graduate in 6 years (16%)</p></li>
<li><p>Graduation Rate Performance: a measurement of the difference between expected and actual graduation rates over a 6-year period (5% of the total score)</p></li>
<li><p>Faculty Resources rank (20% of total score)
a. % of classes with 50 or more students (2%)
b. % of classes with 20 or fewer students (6%)
c. Student/faculty ratio (1%)
d. % of faculty that are full-time (1%)
e. Faculty Salary (7%)
f. % of Faculty with terminal degree in their fields (3%)</p></li>
<li><p>Student Selectivity (15% of total score)
a. Standardized Test scores (7.5%)
b. % of students who are Top 10% (6%)
c. Acceptance Rate (1.5%)</p></li>
<li><p>Financial Resources rank (10% of total score), which is the average spending per student on instruction, research, student services, and related educational expenditures in the 2005 and 2006 fiscal years. Spending on sports, dorms, and hospitals doesn’t count; only the part of a school’s budget that goes toward educating students does.</p></li>
<li><p>Alumni Giving Rate (5% of total score)</p></li>
</ol>
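<p>To make the weighting arithmetic concrete, here is a minimal sketch in Python of how the categories above could combine into a composite score. The sub-scores are invented, normalized 0-100 values; USNWR's actual normalization and ranking steps are not public in full detail, so treat this as an illustration of the weights only:</p>
[code]
# Hypothetical sketch of a USNWR-style composite score using the
# weights listed above. All sub-scores are invented 0-100 values.

WEIGHTS = {
    "peer_assessment": 0.25,
    "graduation_retention": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "grad_rate_performance": 0.05,
    "alumni_giving": 0.05,
}

def composite_score(subscores):
    """Weighted sum of normalized (0-100) category sub-scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights total 100%
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

# Example with made-up sub-scores:
example = {
    "peer_assessment": 90, "graduation_retention": 85,
    "faculty_resources": 80, "student_selectivity": 88,
    "financial_resources": 75, "grad_rate_performance": 60,
    "alumni_giving": 70,
}
print(composite_score(example))  # ~82.7
[/code]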
<p>Here is my personal view:
Peer Assessment: Interesting and most useful to those intending a career in academia; otherwise overrated, and what it means is often distorted by its proponents
Graduation & Retention rank: Overrated, particularly for freshman retention
Graduation Rate Performance: Potentially very useful and perhaps the best indicator of an institution's ability and effort to get its students out on time; the new consideration of Pell grantees makes this a more valid comparison
Faculty Resources: enormously useful, and has the highest impact on the nature of the academic experience a student will encounter
Student Selectivity: very important and possibly the single biggest factor in an undergraduate college education
Financial Resources: along with Faculty Resources, this is the other critical measurement for showing what students will encounter when they actually get to campus</p>
<p>Student selectivity and peer assessment contribute the most to prestige.</p>
<p>Graduation and retention rate, faculty resources, and financial resources contribute the most to a great college experience. This is why a student can have a great college experience at a less selective school.</p>
<p>Alumni giving is useful only in that it means there are rich and loyal alumni. This implies better networking for jobs upon graduation which can be very important.</p>
<p>Faculty salary can be skewed in areas where the cost of living is much higher. I imagine the same person would be paid more at a college in New York City than in Texas. It seems like this would bias the metric in favor of the Northeast/California.</p>
<ol>
<li><p>Peer assessment score. I agree with Hawkette. This is only important to students who intend to go on to graduate school. But I am not sure I agree with her on how significant (or in her case, insignificant) that is. Most students who attend top universities intend to go to graduate school. I think going to a university that is highly regarded by academe greatly enhances one's chances of admission into good graduate programs. It would be cool if the USNWR conducted a similar survey for corporations.</p></li>
<li><p>Graduation rates are important, but whether a school graduates 85% or 95% shouldn't make too much of a difference.</p></li>
<li><p>Graduation Rate Performance is also not that important as far as I am concerned.</p></li>
<li><p>Faculty resources. This is important, but very easily manipulated. In 2006, only 40% of Cornell's classes had fewer than 20 students and a whopping 20% of Cornell's classes had more than 50 students. In 2007, 64% of Cornell's classes had fewer than 20 students and 11% had more than 50 students. Johns Hopkins had a similar leap from 2007 to 2008. Such leaps prove how easily universities can manipulate those stats. When one looks more closely, one can notice how some universities include medical and law school professors in their faculty:student ratio. 70%-80% of classes at every single top 25 university have fewer than 30 students, and 8%-16% of classes at every single top 25 university have over 50 students. Faculty salaries favor schools in expensive areas (the Northeast and West Coast) and schools where a large proportion of the total faculty belong to professional programs, where salaries tend to be high. Truly, when all the manipulating is removed from this equation, very little separates top universities where faculty resources are concerned.</p></li>
<li><p>I agree that this is important, though I am not sure it is THE most important factor. We choose the company we keep, and typically students of equal calibre and inclination move in the same circles.</p></li>
<li><p>This matters, but state schools, which are already very heavily discounted, should be ranked separately from private universities, which are far more expensive.</p></li>
<li><p>I agree, this really has to go.</p></li>
</ol>
<p>My husband and I were just discussing this last night. We were trying to come up with a "better" system in our minds but it is tough to do.
We both agreed that many of these points are automatically biased against public schools, as their mission is generally to provide educational opportunities to many. This means that the graduation rates, alumni giving, spending per student, and selectivity are all going to be lower. Does this mean they don't do a good job (or even a great job) of educating students?
I don't think so.
As far as peer assessment goes... I personally don't think it is a very good indicator, and it seems like it would be difficult for a lesser-known excellent school to get much credit. IMHO 25% is way too much weight for something so subjective.
Ahhh, I never paid much attention to the rankings anyway, but I do find it an interesting topic.</p>
<p>
[quote]
Alumni giving is useful only in that it means there are rich and loyal alumni. This implies better networking for jobs upon graduation which can be very important.
[/quote]
USNWR only measures the number of alumni who donate, not the amount donated. That is, an alum donating $10 is counted the same as one who donates $10,000,000. I agree this one has to go.</p>
<p>FYI: For those commenting on the role that faculty salary plays in the calculation, USNWR states that it makes a cost-of-living adjustment, so this metric should not be geographically biased. </p>
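<p>For illustration only (USNWR does not publish the index or method it uses), such an adjustment could be as simple as deflating nominal salary by a regional price index:</p>
[code]
# Illustration only: USNWR does not publish its cost-of-living index or
# method. This simply divides nominal salary by a hypothetical regional
# price index (1.0 = national average). All numbers are invented.

def col_adjusted_salary(nominal, regional_price_index):
    return nominal / regional_price_index

print(col_adjusted_salary(120_000, 1.25))  # NYC-like index -> 96000.0
print(col_adjusted_salary(100_000, 0.95))  # cheaper region -> ~105263
[/code]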
<p>alexandre,
For the Graduation & Retention ranking numbers, very small differences can have enormous ranking consequences. A difference of 10% (e.g., Stanford at 95% and Penn State at 85%) is a difference in ranking of 5th vs. 34th. Even a smaller difference, say 5% (e.g., Stanford at 95% and U Chicago at 90%), produces a ranking difference of 5th vs. 22nd. Like you, I find this ranking impact a little too dramatic. </p>
<p>The results are even more acute when you consider the Freshman Retention measurements. For example, a 98% Freshman Retention record ties a school for 1st while a 96% ties a school for 21st. A drop of 5%, from 98% to 93%, produces a decline in ranking on this measurement from 1st to 39th. For a metric that determines 4% of the total score, this is IMO much too extreme.</p>
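<p>A quick sketch of why this happens: once a metric is converted to a rank, small absolute differences explode wherever schools are bunched together. The retention rates below are invented, but clustered the way real top-50 numbers tend to be:</p>
[code]
# Invented retention rates, tightly clustered like real top-50 data.
retention = [98, 98, 97, 97, 97, 96, 96, 96, 96, 95,
             95, 95, 94, 94, 94, 93, 93, 92, 92, 91]

def rank_of(value, values):
    """Standard competition ranking: 1 + count of strictly better scores."""
    return 1 + sum(v > value for v in values)

print(rank_of(98, retention))  # 1
print(rank_of(93, retention))  # 16 -- a 5-point drop falls 15 places
[/code]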
<p>GoBlue81,
Do you know how they do the calculation? I hope you are right, because if it is done by ranking, small differences explode into large ranking differences.</p>
<p>Re Penn State, I just chose them because they have an 85% grad rate, which alexandre referenced earlier. But you are right about their whopping advantage in the over/under Graduation Rate Performance measurement (which is a separate 5% weighting). Their +22 score dwarfs nearly every other school's, and no other Top 50 school even makes it to double digits.</p>
<p>That is why the under/over performance is one of the worst criteria in the USNews rankings and an open invitation to more gamesmanship. A school could benefit from underreporting its selectivity numbers and enjoy a nice boost in its overperformance. At a minimum, it goes a long way toward eliminating the value of having a selectivity index. This fact has not escaped the schools that "can" fiddle with their SAT reports as a result of optional policies.</p>
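<p>To see the mechanics (purely hypothetical: USNWR's expected-graduation-rate model is not public, so the linear form and coefficients below are invented), note how a lower reported SAT lowers the expected rate and thereby inflates the over/under differential:</p>
[code]
# Toy model only: assume the expected graduation rate rises with median SAT.
def expected_grad_rate(median_sat):
    return 0.05 * median_sat + 15.0  # invented coefficients

def performance_differential(actual_rate, median_sat):
    """Actual minus expected graduation rate (the over/under measure)."""
    return actual_rate - expected_grad_rate(median_sat)

# Same school, same 90% actual graduation rate:
print(performance_differential(90, 1450))  # 2.5  (SAT reported honestly)
print(performance_differential(90, 1350))  # 7.5  (SAT underreported)
[/code]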
<p>Schools such as Harvey Mudd are savagely punished for having the most selective group of students and one of the hardest engineering programs in the country. Schools with less selective admission criteria and generous credit/graduation policies are ... nicely rewarded.</p>
<p>xiggi,
I need some educating because I don't fully understand your criticism of the over/under and how it relates to selectivity, particularly given that the selectivity measures carry a higher weight in the methodology. How would a college benefit from sandbagging on the selectivity measures without that hurting them in that area?</p>
<p>Hawkette, at the LAC level, compile a list of schools that are ranked 1-10 in selectivity and check their over/underperformance. Then compile a list of schools that are ranked higher in the overall scores than their selectivity might indicate ...</p>
<p>The patterns will be quite visible. </p>
<p>If you have past issues of the USNews, check what happened in the past 3-4 years for the following schools: Swarthmore, Pomona, Harvey Mudd, Smith, Wellesley, and especially Middlebury.</p>
<p>PS Check the total points for the sub-categories of the selectivity criteria and the categories influenced by graduation rates.</p>
<p>The Peer Assessment is purely subjective and nothing more than an opinion survey. We need to keep in mind that these flawed rankings are directed toward students coming from high school and going into undergraduate programs. Unfortunately, PA does not measure the quality of undergraduate education.</p>
<p>The scores assigned by PA will frequently have more to do with a college or university's research reputation, the amount of money it gets in federal funds, and the scientific publication record of its faculty. This score contributes the largest proportion of the total score, yet it may have very little to do with the quality of teaching and learning for undergraduates... and still you have the subjectivity of it all.</p>
<p>If you really think about it, the fact that the survey probably measures "research reputation" rather than "teaching quality" suggests that a "good" score in this category may NOT be good for many students. Unfortunately, faculty who spend more time on research may have less time (and interest!) for teaching undergraduates.</p>
<p>As a matter of fact, that has been both of my parents' experience. Lots of other students I have talked to tell me that the worst classes they have taken are frequently taught by faculty who only care about their research. It gets a lot worse when the faculty member ends up stuck teaching a subject that has nothing to do with whatever they are publishing on or researching at the moment. Schools do not reward their faculty members for being "good classroom teachers". Everyone knows that the prospects for tenure or advancement to full Professor status suffer if the individual does not publish enough.</p>
<p>Alexander Austin, a very respected scholar from UCLA and head of that university's Higher Research Institute, has noted that "there is actually an inverse relationship between how much contact students have with faculty and how much professors publish!!" and that in fact famous professors may not teach at all, leaving the work to graduate students.</p>
<p>For graduate school, it would be a different story.</p>
<p>
[quote]
A school could benefit from underreporting its selectivity numbers and enjoy a nice boost in its overperformance.
[/quote]
</p>
<p>That's a very risky enterprise. People use the test score and high school rank/GPA figures independently of the USNews ranking issue. Underreporting them may help with the "expected grad rate" but will hurt an institution in almost every other way that academic metrics are considered: by the public, by donors, by prospective faculty, by recruiters, by prospective students, and by USNews itself elsewhere in the formula. Also, because one does not know what formula USNews is going to use when one submits numbers, it's a gamble that underreporting will help with the grad performance metric. They may not use it at all, or weight it less than you figured.</p>
<p>Schools are hurt by the ceiling effect, but trying to get around it by underreporting their students' academic credentials would be a very hard sell.</p>
<p>FWIW, the UCLA guru's last name is Astin, not Austin, and he was head of the Higher Education Research Institute.</p>
<p>Hoedown, it is indeed risky unless ... the scores are compiled using different sources for different purposes. </p>
<p>Don't schools with optional SAT/ACT policies have broad latitude to include/exclude scores until they fit the results sought for that year? For instance, didn't Middlebury find a way to report an impossible 75th percentile SAT score that was not a multiple of 10? </p>
<p>Watching how US News did not hesitate to rank Harvey Mudd DEAD LAST in the under/over category must have opened the eyes of a few experts at exploiting the USNews model.</p>
<ol>
<li><p>Peer Assessment Score: I agree with using this as a factor, though I don't think it should be weighted so heavily. By weighting it less heavily, it can still help (or hurt) a college's ranking since some of it is valuable, but it isn't weighted heavily enough to allow the bias, etc., to change a rank much.</p></li>
<li><p>Graduation & Retention rank: Just like the above, while I think these are important to a degree, I don't think they should make up 20% of the ranking.</p></li>
<li><p>Graduation Rate Performance: This should be completely eliminated. It adds so very little to the ranking, when other things could be factored in. And as Gerhard Casper put it, some colleges are penalized for adding "too much value"; the college's curriculum is difficult to the point that fewer graduate than was "expected." Caltech was the example, I think.</p></li>
<li><p>Faculty Resources rank: I agree with the weighting on this, but I think the breakdown should be modified. Faculty salary should be eliminated. The % classes should include all class sizes. The average class size should also be included. Other things should be factored in too somehow, such as class subsections, etc.</p></li>
<li><p>Student Selectivity (15% of total score): The overall weighting is good, though the breakdown isn't. GPA should be factored in and weighted more heavily than the SAT (studies show that GPA is more indicative of one's success in college than SAT, I think). Acceptance rate should not be included; it shows nothing. Perhaps other things could be included, too (such as what factors colleges consider and how important they are).</p></li>
<li><p>Financial Resources rank: I think this should be weighted less heavily.</p></li>
<li><p>Alumni Giving Rate: This shouldn't even be in there. It's obviously biased toward private schools.</p></li>
</ol>
<p>As an added note, I think other factors should be considered -- library holdings (breadth and depth), courses offered (breadth and depth), majors offered, opportunities (extracurricular activities, research opportunities, etc.), housing, resources (computers, help centers), cost and financial aid info (average grant from the university, average debt at graduation, average loan, etc.), cost of living in the area, and more.</p>
<p>You submit different scores to different places at your peril. It is embarrassing to have different numbers reported in different places without valid reason. I assume, too, that USNews does some spot checking and may notice scores that are out of line with, say, the CDS. Furthermore, you don't know what source someone else might elect to use. The USNews numbers are pretty widely available, so if you submit artificially low ones there, who knows how many people may use them to draw conclusions about your student body.</p>
<p>
[quote]
didn't Middlebury find a way to report an impossible 75th percentile SAT score that was not a multiple of 10?
[/quote]
The statistical formula for finding a percentile with interval data could certainly yield a number that is not a multiple of ten. Same with imputation, which they may have done for nonreporting students. I don't know how they figured it, but having a number that wasn't a multiple of 10 wouldn't necessarily raise a red flag.</p>
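<p>For what it's worth, here is a minimal sketch (with invented section scores) of how the standard interpolated-percentile formula can land between multiples of 10:</p>
[code]
# Linear interpolation between closest ranks. Individual SAT section
# scores are multiples of 10, but the interpolated percentile need not be.
def percentile(sorted_scores, p):
    k = (len(sorted_scores) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_scores) - 1)
    return sorted_scores[lo] + (k - lo) * (sorted_scores[hi] - sorted_scores[lo])

scores = sorted([650, 680, 700, 720, 740, 760, 770, 780])
print(percentile(scores, 75))  # 762.5 -- not a multiple of 10
[/code]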
<p>I understand what you are saying about the potential, but I think this particular concern is unwarranted. I'd be more concerned about schools finding "justified" ways to report inflated academic metrics (across the board, to everyone, not just USNews), not deflated ones.</p>
<ol>
<li><p>Peer Assessment Score. Important. This likely correlates with how the college is perceived nationally/locally, which matters for employment options.</p></li>
<li><p>Graduation & Retention rank. Useful, but highly overweighted.</p></li>
<li><p>Graduation Rate Performance. Useful, but highly overweighted.</p></li>
<li><p>Faculty Resources rank.
a. % of classes with 50 or more students. Very important.
b. % of classes with 20 or fewer students. Less important.
c. Student/faculty ratio. Not important.
d. % of faculty that are full-time. Not important.
e. Faculty Salary. Not important, highly overweighted.
f. % of Faculty with terminal degree in their fields. Medium importance.</p></li>
<li><p>Student Selectivity.
a. Standardized Test scores. Extremely important. Most important thing on this entire list.
b. % of students who are Top 10%. Not important.
c. Acceptance Rate. Not important.</p></li>
<li><p>Financial Resources rank. Medium importance.</p></li>
<li><p>Alumni Giving Rate. Not important.</p></li>
</ol>
<p>In conclusion, a lot of dead weight. The one thing that should have huge weight (standardized test scores) doesn't.</p>