The Wisdom of US News Peer Assessment Rating

<p>I can only imagine what the U Michigan posse would do then…:)</p>

<p>Somebody posted an article about this the other day.</p>

<p>[News:</a> ‘Manipulating,’ Er, Influencing ‘U.S. News’ - Inside Higher Ed](<a href=“http://www.insidehighered.com/news/2009/06/03/rankings]News:”>http://www.insidehighered.com/news/2009/06/03/rankings)

</p>

<p>Overall I find this analysis to be good. It shows that there is a high correlation between PA and more concrete factors such as graduation rate and SAT scores. To me, this shows that prestigious schools have smart students (SAT scores) and are able to have those students graduate. Isn’t that a mark of a school worth going to? If a college has great name recognition, smart students, a high graduation rate, strong faculty, good research opportunities, etc., doesn’t that make the college worth the big bucks people spend to attend? This model shows that the PA numbers don’t come from nowhere and actually have some correlation with how strong a school is and how strong the students are.</p>

<p>Are these the partial slopes for the multiple regression? My question is why yield has a negative slope. Shouldn’t a school with a higher yield (Harvard) have a higher ranking? Did you check for collinearity? There is probably rampant collinearity in this model, which could be distorting the results.</p>

<p>Also, what program are you using to run your model? R? SAS? JMP?</p>

<p>Venkat89-
Yes, there is multicollinearity in this model, but that does not detract from the predictive power of the model. However, the regression weights for the individual predictors can be deceiving, as with “yield”. Multicollinearity is not a bad thing when the goal is to explain and predict with an overall model. I posted the individual correlations so you can see that yield has a positive correlation with PA.</p>

<p>I used SAS.</p>
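<p>For those curious about the sign flip: here is a minimal sketch in Python (not the SAS code used for the actual model, and with made-up data) showing how a predictor such as yield can have a positive simple correlation with PA yet receive a negative partial slope once a collinear predictor like SAT is in the regression.</p>

```python
import numpy as np

# Synthetic illustration only -- NOT the actual US News / PA data or model.
rng = np.random.default_rng(0)
n = 500

sat = rng.normal(size=n)                               # standardized "SAT" factor
yld = 0.9 * sat + 0.4 * rng.normal(size=n)             # "yield" tracks SAT closely (collinear)
pa = 1.0 * sat - 0.3 * yld + 0.1 * rng.normal(size=n)  # "PA" built so yield's partial effect is negative

# Zero-order correlation between yield and PA is positive...
print("corr(yield, PA):", np.corrcoef(yld, pa)[0, 1])

# ...but the multiple-regression slope on yield comes out negative.
X = np.column_stack([np.ones(n), sat, yld])
beta, *_ = np.linalg.lstsq(X, pa, rcond=None)
print("partial slopes (intercept, SAT, yield):", beta)
```

<p>The overall fit is unaffected; only the interpretation of the individual weights gets tricky, which is the point made above.</p>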


<p>MWFN, why would the elimination of ED have a negative impact on USNews’ selectivity? </p>

<p>The elimination of ED should technically increase the number of applications. The school does not forgo all its admission crutches; the answer to losing the high-yielding ED students is simply to compensate by using the waiting list. Since numerous studies (check Avery) have purported that the ED pool has slightly less competitive statistics, there is no reason to believe that the RD admitted pool will show a drop in the number of top-10% students, and there is no reason to believe that the SAT scores of admitted students would be lower. </p>

<p>All in all, while dropping ED admissions should (in theory) cause a slight drop in yield, it should really result in higher selectivity, with increased applications, constant admits, and higher stats from ADMITTED students. </p>
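<p>To put hypothetical numbers on the direction of that effect (these figures are invented, purely for illustration):</p>

```python
# Invented figures: dropping ED raises applications while admits stay constant,
# so the admit rate (the "selectivity" input) improves.
admits = 2_000
apps_with_ed = 20_000
apps_without_ed = 23_000   # assume the former ED applicants apply RD, plus some extras

print(f"admit rate with ED:    {admits / apps_with_ed:.1%}")    # 10.0%
print(f"admit rate without ED: {admits / apps_without_ed:.1%}") # ~8.7%
```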

<p>And, as we know, yield has absolutely NOTHING to do with USNews statistics. Unless it is buried in the PA model. ;)</p>

<p>PS And, fwiw, increasing one’s selectivity (admissions) might bring diminishing returns. The price of higher selectivity is a higher graduation expectation, and the penalties in the US News model are a lot higher than the benefits of a better selectivity rank. Again, look at the impact of those two elements on schools such as Caltech and Harvey Mudd.</p>


<p>Could your theory explain the changes at the University of Chicago and their impact on the USNews rankings? Did Chicago play games?</p>


<p>Many of the most common forms of statistical “manipulation” are cited in the article for which lockn provided the link (post #183^^^^). Most of these don’t involve actual lying. US News rankings are driving schools to alter their behaviors in ways that do not necessarily improve educational quality but do push in the direction of stronger US News statistics. </p>

<p>One easily manipulated factor is class size. As the article documents, Clemson imposed 19-student caps on a bunch of already relatively small (20-25 person) classes to boost its reported percentage of “small” classes as arbitrarily defined by US News (under 20 students). Does this benefit educational quality? Probably not. The qualitative difference between a 19-person class and a 22-person class is trivial. In fact, setting a cap of 19 without increasing the number of such classes offered would tend to impair educational opportunities by making it more difficult for students to get into these smaller classes, perhaps leaving them to scrounge for more large lecture classes—precisely the opposite effect from the one we’d want to encourage. At the other end of the scale, U.S. News defines a “large” class as one with 50 or more students. But that means as far as U.S. News is concerned a 50-person class is just as much a negative as a 100-person class or a 1,000-person class. This has led some schools, including Clemson, to combine what once were multiple-section introductory classes of perhaps 50-75 students into a single mega-section of hundreds of students, because paradoxically by jamming hundreds of students into a single enormous lecture they can dramatically reduce their reported percentage of “large” (50+) classes. Once again, the U.S. News rankings are driving perverse changes.</p>
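<p>A quick sketch of the threshold effect (the 20- and 50-student cutoffs are US News’s definitions; the section sizes below are invented):</p>

```python
# Hypothetical course schedule before and after the kind of restructuring
# described above: cap 20-25 person sections at 19, and merge two ~60-person
# sections into one mega-lecture. Enrollments are made up.
before = [22, 23, 24, 60, 60, 18, 15]
after = [19, 19, 19, 120, 18, 15]

def usnews_shares(sizes):
    small = sum(s < 20 for s in sizes) / len(sizes)   # "small" class: under 20
    large = sum(s >= 50 for s in sizes) / len(sizes)  # "large" class: 50 or more
    return small, large

print("before:", usnews_shares(before))  # ~29% small, ~29% large
print("after: ", usnews_shares(after))   # ~83% small, ~17% large -- "better" on paper
```

<p>The reported numbers improve even though the same students are now either squeezed out of the capped sections or sitting in a much bigger lecture.</p>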

<p>Other easily manipulated factors include “faculty resources” (jack up tuition and boost faculty salaries; if you retain the very same faculty and boost their salaries 20%, U.S. News will say you’re doing a better job) and “financial resources,” i.e., spending per student (jack up tuition and rebate the increased revenue right back to the students in the form of increased financial aid to offset the tuition increase; your net revenue will remain constant but you’ll show higher spending per student, and consequently your U.S. News score will go up). Then, of course, there’s the widespread practice of going SAT-optional on the theory that only the applicants with the strongest scores will submit their SATs, so you can boost your reported SAT 25th-75th percentile numbers without actually making any change in the composition of the entering class. Another relatively easy way to boost your 25th-75th percentile SAT scores is to reduce the size of the entering freshman class to make it more selective in GPA and SAT scores (and in admit rate, for that matter), while filling those empty seats with (possibly less qualified) transfer students whose stats escape the US News statistical radar, which is focused exclusively on the freshman class.</p>
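<p>The SAT-optional effect is just as mechanical. A toy example (random, made-up scores) showing how the published 25th-75th percentile band rises if only the stronger scorers submit:</p>

```python
import numpy as np

# Made-up score distribution for an entering class of 1,000.
rng = np.random.default_rng(1)
scores = rng.normal(1250, 150, size=1000).clip(600, 1600).round()

# Under test-optional reporting, suppose only the top 60% of scorers submit.
submitted = np.sort(scores)[-600:]

print("all admits  25th/75th:", np.percentile(scores, [25, 75]))
print("submitters  25th/75th:", np.percentile(submitted, [25, 75]))
```

<p>The class is identical; only the reported band changes.</p>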

<p>Where Clemson apparently lied outright (according to the article) was in intentionally lowballing its competitors in the PA survey. This is disturbing. I don’t know how widespread this practice is. If it’s just a few schools, the effect would be pretty trivial. If it’s a widespread practice, the lowballed scores should largely cancel each other out, though it would tend to push overall PA scores down a bit. But there’s no particular reason to think it would work systematically against any particular school or group of schools, consequently the distorting effects should be minimal.</p>

<p>Bob Morse, US News’ rankings guru, was moved to offer a feeble response to Clemson’s reported statistical manipulations today:</p>

<p><a href="http://www.usnews.com/blogs/college-rankings-blog/2009/06/04/clemson-and-the-college-rankings.html#read_more">Clemson and the College Rankings - Morse Code: Inside the College Rankings (usnews.com)</a></p>


<p>Would it not be *extremely* simple for both Clemson AND Robert Morse to show good faith and integrity? All it would take is to make the surveys completed by Clemson in the last 5 years … public. </p>

<p>Should we wait with bated breath for a modicum of transparency?</p>

<p>^ Shall we also make xiggi’s voting record for the past 5 elections … public?</p>

<p>Making voting records public would likely reduce participation in future surveys. But perhaps this is the ultimate goal of xiggi’s suggestion.</p>

<p>Xiggi, you don’t get it.</p>

<p>Or maybe I don’t get it. ;)</p>

<p>If the surveys become public, it’s the end of the surveys. :)</p>

<p>Hmmm… Maybe everything else should be made public. I think everyone should be filmed when taking the SAT. </p>

<p>In fact, I think we should document everything a high school student does so we can see if they lie on their apps.</p>

<p>And all admission committees, when choosing among students, should have their proceedings broadcast live on CSPAN2.</p>

<p>UCBChem…;)</p>

<p>Quote from Morse in the article bclintonk posted:</p>

<p>^^–^^</p>

<p>Out of curiosity, did you think I doctored Morse’s quotation in my prior post?</p>


<p>Don’t you think that insisting on keeping everything hidden from the public eye will hasten the demise of the surveys? </p>

<p>Does the fact that more and more colleges and universities have resisted publishing their CDS help or hurt the students and their families? Do the schools that still prefer to hide their reports look better or … worse? </p>

<p>And, last but not least, if there is nothing to hide, why would the disclosures hurt the process? After all, shouldn’t a reputational survey want to maintain a certain … reputation?</p>

<p>PS And, unfortunately, DS, I am afraid you are completely correct when stating, “Or maybe I don’t get it.” Or, perhaps this is still part of your comical routine!</p>

<p>^ Hah, I didn’t even notice your quote…I just read your comment.</p>

<p>bc,
You’re speculating. Even on what has gone on at Clemson. Look at the facts for something like class sizes, which you reference above, and you reach a different conclusion.</p>

<p>To judge Clemson, I looked at the detail in their Common Data Sets and their Instructional Class Size sections from two different years. There is a breakout for the # of classes over 50. There is ALSO a breakout for the # of classes over 100. </p>

<p>To check on Ms. Watt’s claims and to see if your speculations ring true, I dug around and discovered the 2002-03 CDS and the section on Instructional Class Size. I compared that to the most recent edition of 2008-09. If there was great and easy “manipulation” going on, it should be simple to spot in a comparison of data that is six years apart. Right? </p>

<p>Here is the data on Clemson on their class sizes for Fall, 2008 and Fall, 2002.</p>

<p>Fall 2008
Total Classes: 2407
Classes with 50+ students: 275 (11.43%)
Classes with 100+ students: 71 (2.95%)</p>

<p>Fall 2002
Total Classes: 1936
Classes with 50+ students: 196 (10.12%)
Classes with 100+ students: 48 (2.48%)</p>

<p>So, are you labeling this increase from 2.48% to 2.95% “manipulation?”</p>

<p>BTW, your U Michigan compares worse on this than Clemson. U Michigan added classes with 100+ students at a rate faster than Clemson. </p>

<p>Fall 2008
Total Classes: 3124
Classes with 50+ students: 556 (17.80%)
Classes with 100+ students: 240 (7.68%)</p>

<p>Fall 2002
Total Classes: 3029
Classes with 50+ students: 492 (16.24%)
Classes with 100+ students: 204 (6.73%)</p>
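<p>For anyone who wants to check the arithmetic, the percentages above follow directly from the CDS counts:</p>

```python
# Recomputing the percentages quoted above from the raw CDS counts.
cds = {
    "Clemson Fall 2008":  (2407, 275, 71),
    "Clemson Fall 2002":  (1936, 196, 48),
    "Michigan Fall 2008": (3124, 556, 240),
    "Michigan Fall 2002": (3029, 492, 204),
}
for label, (total, over_50, over_100) in cds.items():
    print(f"{label}: 50+ = {over_50 / total:.2%}, 100+ = {over_100 / total:.2%}")
```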

<p>Have you ever done a “peer review” at work? Most of these surveys are anonymous because you’ll likely get more honest feedback.</p>

<p>UCB, when honesty is directly questioned, the cat is out of the bag. This is no longer about using the cloak of anonymity to answer questions in a survey; it has everything to do with demonstrating that the process was done with the required integrity. It is also USNews’ burden to SHOW how they entered Clemson’s data and whether, in fact, their “safeguards” threw out Clemson’s votes. The silence is nothing more than an admission of guilt! </p>

<p>And, look at the direction of the discussion at Clemson. The school issued a defensive statement …</p>


<p>So, here you have it. Quis custodiet ipsos custodes?</p>

<p>Hawkette, but what about % of classes below 20 students - which would improve Clemson’s ranking?</p>

<p>Xiggi, no joking.</p>

<p>Do you really think you will get more accurate results if the surveys were public?</p>

<p>If I were talking to a friend, and I said to my friend, “John is a sleazeball,” and John walked into the room, would I call John a sleazeball?</p>

<p>Ok. maybe I would. ;)</p>

<p>But I don’t think the college presidents would answer the surveys more honestly if they were made public.</p>


<p>Then I don’t think that the Sarbanes-Oxley Act of 2002 made any difference in producing more honest disclosures either. </p>

<p>Fwiw, why would any official who is considered “expert” enough to render an opinion about another academic institution be worried about the honesty and integrity of his opinion? Isn’t the entire academic world built on integrity and … disclosures? Isn’t having the courage of one’s opinion important among deans, provosts, and presidents of our bastions of education?</p>

<p>Perhaps the public disclosure will simply demonstrate what many have said about the PA: the respondents often have enough trouble evaluating their OWN institution, let alone offering an educated opinion of their peers, especially peers they have never visited and perhaps only read about in last year’s survey. </p>

<p>And, if the public scrutiny would force the “experts” to stand behind their opinion or force them to admit “I do not know enough” this would be a giant victory for the public. A giant one!</p>