<p>^ LOL! Xiggi, compliance with SOX is required by law…and the info is not opinion-based. </p>
<p>Answering USNWR’s PA is strictly voluntary.</p>
<p>If academics’ PA survey results are going to be made public and subjected to ridicule and questioning, then academics will decide that completing the survey isn’t worth their time.</p>
<p>ucb,
Good point on the < 20 class-size data for Clemson and how that figure might be inflated by moving classes from the 20-30 range into the < 20 range. </p>
<p>In fairness to Clemson, the 20-30 range was their largest group of classes in 2002, and so it was ripe for moving large numbers into the < 20 category. But I would agree that the degree to which this happened is eye-catching. I’m not personally upset by this, as I like smaller class sizes; I think the student is a winner in such circumstances. But I can see how some might suspect Clemson of acting for reasons other than purely the students’ benefit. </p>
<p>Here are all of the class-size numbers for Clemson:</p>
<p>Fall 2008
Total Classes: 2407
Classes with < 20 students: 1185 (49.23%)
Classes with 20-30 students: 360 (14.96%)
Classes with 30-40 students: 406 (16.87%)
Classes with 40-50 students: 181 (7.52%)
Classes with 50+ students: 275 (11.43%)
Classes with 100+ students: 71 (2.95%)</p>
<p>Fall 2002
Total Classes: 1936
Classes with < 20 students: 428 (22.11%)
Classes with 20-30 students: 646 (33.37%)
Classes with 30-40 students: 464 (23.97%)
Classes with 40-50 students: 202 (10.43%)
Classes with 50+ students: 196 (10.12%)
Classes with 100+ students: 48 (2.48%)</p>
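<p>To make the shift easier to see, here is a minimal Python sketch that recomputes the percentages from the counts above. (The counts are copied from this post; note that the 100+ group appears to be a subset of the 50+ group, since the first five rows already sum to the total.)</p>
<pre><code>
# Recompute the Clemson class-size distributions from the reported counts.
# The 100+ bucket is apparently a subset of the 50+ bucket, since the first
# five buckets already sum to the total number of classes.

def distribution(total, counts):
    """Return each bucket's share of total classes, as a percentage."""
    return {bucket: round(100 * n / total, 2) for bucket, n in counts.items()}

fall_2002 = {"<20": 428, "20-30": 646, "30-40": 464, "40-50": 202,
             "50+": 196, "100+": 48}
fall_2008 = {"<20": 1185, "20-30": 360, "30-40": 406, "40-50": 181,
             "50+": 275, "100+": 71}

d02 = distribution(1936, fall_2002)
d08 = distribution(2407, fall_2008)

for bucket in fall_2002:
    print(f"{bucket:>6}: {d02[bucket]:6.2f}% -> {d08[bucket]:6.2f}%")
# The <20 share jumps from ~22% to ~49%, while 20-30 drops from ~33% to ~15%.
</code></pre>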
<p>But the two are related. If Princeton accepts 50% of its class ED, it only has to worry about admitting enough RD applicants to fill the remaining spots. However, to fill 100% of the spots through RD alone, the number of accepted students has to go up, because Princeton has no way of knowing how many of those admitted students have Princeton as their top choice.</p>
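<p>A rough sketch of that arithmetic, with hypothetical yield figures chosen purely to illustrate the mechanism (these are not Princeton’s actual numbers): if ED admits matriculate at essentially 100% but RD admits at, say, 50%, the total number of admits needed to fill a class rises sharply once ED is gone.</p>
<pre><code>
# Hypothetical illustration of why dropping ED forces more total admits.
# The class size and yields below are made-up round numbers.

def admits_needed(class_size, ed_admits, ed_yield, rd_yield):
    """Total admits required to fill the class, given ED and RD yields."""
    ed_matriculants = ed_admits * ed_yield
    rd_admits = (class_size - ed_matriculants) / rd_yield
    return ed_admits + rd_admits

# With ED: 600 ED admits at ~100% yield; the rest filled at 50% RD yield.
with_ed = admits_needed(class_size=1200, ed_admits=600, ed_yield=1.0, rd_yield=0.5)
# Without ED: the entire class must be filled at the 50% RD yield.
without_ed = admits_needed(class_size=1200, ed_admits=0, ed_yield=1.0, rd_yield=0.5)

print(with_ed, without_ed)  # 1800.0 vs. 2400.0 admits
</code></pre>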
<p>^ The data generally confirm what was reported: Clemson appears to have shrunk a lot of classes enough (just enough?) to get them below US News’ cut-off for “small” classes (<20), and at the same time significantly increased the number of very large (100+) classes, going from 48 in this category in 2002 to 71 in 2008—an increase of nearly 50%. And we don’t know from this data how many of those new 100+ classes are 200+, or 300+. </p>
<p>I’m not “speculating” about anything, hawkette. I don’t claim any independent knowledge as to what Clemson did or didn’t do, nor do I particularly care what goes on there. I believe I was very careful to tie what I said about Clemson in particular to the published news reports about what a former Clemson employee publicly stated was going on there. </p>
<p>Nor did I attribute the other kinds of data manipulation I discussed to Clemson or any other specific school. These are just methods of “gaming” the US News rankings that are widely known and widely discussed in the industry, and widely believed to be occurring at some schools—though I have no specific knowledge about which ones are used at which schools. Nor do I have any particular beef against Clemson. You want to make this a Clemson-v.-Michigan pi**ing match. I don’t. I don’t give a hoot about Clemson, and it may well be that Michigan is doing similar kinds of things. Frankly, it wouldn’t surprise me. My own suspicion is that these practices are in fact widespread in the industry, and in my opinion they point to the utter bankruptcy of the “objective” data US News uses in its rankings, and therefore to the bankruptcy of the rankings themselves. </p>
<p>But the US News data are, in my opinion, worse than worthless—they’re downright pernicious. Because US News is now so widely used as a barometer of academic quality, its rankings have become a make-or-break factor for many applicants, donors, legislators—and increasingly even for faculty. That makes gaming the US News rankings an important consideration for many colleges and universities, and a dominant preoccupation for some. As the Clemson story (if accurate) suggests, these considerations are distorting even the most basic resource allocation decisions in higher education. I do not think we are getting a better system of higher education out of it.</p>
<p>Again, yield has no relevance to the current USNews rankings. The selectivity index is based on standardized test scores, the percentage of students from the top ten percent of their high school class, and admission rates. </p>
<p>Here is what happened at Princeton when they dropped their ED. Based on the numbers made available then, Princeton admitted 1,791 out of 18,942 applicants to the Class of 2011, including 597 via ED. The overall admit rate was 9.46%. One year later, applications jumped to 21,369 and they admitted 1,976 students, for an admit rate of 9.25%. </p>
<p>In the same period, Harvard went from 22,955 applications to 27,462. Its admission rate dropped from about 9% to a bit above 7%. They admitted fewer students in April (1,948) than in Dec/April of the prior year (2,058). Of course, as I mentioned earlier, the schools could rely on a more extensive use of the waiting list to round out their final classes, and … preserve their lofty yield numbers for bragging purposes. </p>
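<p>As a quick sanity check, the rates quoted in these posts follow directly from the raw numbers:</p>
<pre><code>
# Verify the admit rates quoted above from the application/admit counts.
for label, admits, apps in [
    ("Princeton, Class of 2011", 1791, 18942),
    ("Princeton, one year later", 1976, 21369),
    ("Harvard, prior year",       2058, 22955),
    ("Harvard, April admits",     1948, 27462),
]:
    print(f"{label}: {100 * admits / apps:.2f}%")
# Princeton: 9.46% -> 9.25%. Harvard: 8.97% -> 7.09% ("a bit above 7%").
</code></pre>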
<p>However, for USNews purposes, the admission component of the ranking (weighted at 1.5%) remained close to the original, or even improved. If Avery et al. were correct in their research about ED, the other statistics should have improved as well. </p>
<p>From Harvard’s, and probably from Princeton’s, point of view, the only problem with dropping their early admission programs is that they did not think of it sooner, as the stratagem simply increased their competitive advantage at the top of the food chain!</p>
<p>I remember, several months ago, discussing in a similar forum (with some of the same participants here) why I felt that the PA was nothing more than a glorified subjective survey that was easily manipulated. My point was that all these directors, presidents, and deans could do EXACTLY what Ms. Watts reported Clemson does. </p>
<p>At the time, some people posted that “things like that were not likely to happen among these distinguished people,” and that they believed blindly in those scores.</p>
<p>I am smiling big time now…I just wanted to share that with all of you.</p>
<p>As a former PA defender, I have lost all confidence in the PA system and the USNews report. We should never have ranked schools in the first place :(</p>
<p>I’m pretty sure the top-tier colleges secure their top spots because they talk to each other, assure each other top PA scores, and prevent others from stealing their spots.</p>
<p>“schools that make big jumps in “objective” US News factors but not in PA are probably the schools that are most heavily invested in manipulating their statistics so as to increase their US News rankings….That Vanderbilt should be in this group doesn’t surprise me. USC, Wake Forest, Tufts, Georgetown, and Notre Dame are also heavily invested in this game.”</p>
<p>In # 205, you said:</p>
<p>“Nor did I attribute the other kinds of data manipulation I discussed to Clemson or any other specific school.”</p>
<p>Maybe I misunderstood your comments, but I read # 168 as a pretty unambiguous accusation of manipulation on the part of these schools. Perhaps not the specific actions that you describe, but your intent to discredit these schools and their statistical improvements is clear. </p>
<p>Re the balance of your comments, those of us who like the objective statistics provided by USNWR believe that these are useful in assessing the nature of the classroom in which a student will participate. For myself, I have long stated that the undergraduate academic experience is most heavily shaped by four things:</p>
<ol>
<li> the quality of one’s student peers (stronger students are preferred)</li>
<li> the size of the classroom in which you learn (smaller classes are better than larger ones)</li>
<li> the quality of the instruction that you receive (teaching by professors is preferred over teaching by TAs)</li>
<li> the depth of the institution’s resources and their willingness to commit them to support undergraduate students (more money is better than less)</li>
</ol>
<p>Lastly, your choice of the word “pernicious” is one that I have also used on several occasions to describe aspects of USNWR’s ranking methodology. Our difference, of course, is that I think it is the PA aspect that is the villainous feature, one which pollutes and undermines their rankings far more than any other factor.</p>
<p>rjkofnovi: No, I posted somewhere else in the forum questioning whether the “objective data” used by USNews are even scientifically and rigorously proven to be an accurate measurement of quality at different undergraduate institutions…</p>
<p>USNews = beauty contest that schools tweak for cosmetic purposes. I personally believe that schools have a ton more administrative crap to worry about than a beauty contest like USNews…</p>
<p>You might be interested in this article on “predictive validity” from Wikipedia. It states that a correlation of .35 can be substantial. I obtained individual correlations with PA of .5 to .75, and a multiple correlation over .9. It is amazing to find relationships this strong in the social sciences; PA is an extremely valid and solid indicator.</p>
<p>In case there are any prospective students following this thread, the techniques I used are taught in courses in the areas of industrial engineering and operations research.</p>
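<p>For anyone curious what a “multiple correlation” is, here is a minimal sketch of the computation using synthetic, randomly generated data. This is emphatically not the PA dataset discussed above; it only illustrates the technique.</p>
<pre><code>
# Minimal sketch: the multiple correlation R is the correlation between an
# outcome and its fitted values from a regression on several predictors.
# The data here are synthetic and only stand in for the real PA analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                  # a selectivity-like predictor
x2 = rng.normal(size=n)                  # a resources-like predictor
y = 0.7 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)  # a PA-like outcome

X = np.column_stack([np.ones(n), x1, x2])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
y_hat = X @ beta

R = np.corrcoef(y, y_hat)[0, 1]  # the multiple correlation
print(f"r(y, x1) = {np.corrcoef(y, x1)[0, 1]:.2f}, "
      f"r(y, x2) = {np.corrcoef(y, x2)[0, 1]:.2f}, R = {R:.2f}")
</code></pre>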
<p>The problem, of course, is that—as the Clemson allegations reveal—we can’t have any confidence that US News’ “objective” statistical measures are accurate reflections of any of these laudable educational goals. Virtually all their metrics are subject to really rather egregious gaming, and schools have exploited these vulnerabilities. I can’t tell you which schools have used which particular methods. I do know that many of these methods are common knowledge in the industry, and there is a widespread belief, from time to time backed up by off-the-record confirmation from people well placed in admissions offices, dean’s offices, and central administration, that these tactics are being employed. </p>
<p>I use the term “pernicious” not to characterize the practice of any particular school, but to criticize the degree to which really quite fundamental college and university resource allocation decisions have been shaped, and in my opinion distorted, by efforts on the part of colleges and universities to make themselves look better against the particular, and in many cases rather arbitrary, criteria US News tracks, which form the basis for its overall ranking. I think I have amply demonstrated the multiple ways in which this can happen. These are only some of the most widely known. Probably there are others that people cleverer than I, and employed in the pursuit of improving their school’s reputation, image, and (crucially in all this) US News ranking, are pursuing on the quiet. There tends to be a huge first-mover advantage in all this, since as soon as any form of statistical manipulation becomes generally known, others will emulate it and it loses its punch. </p>
<p>To take your points one by one:</p>
<ol>
<li><p>I agree that stronger students are preferred. But the US News selectivity data don’t really tell us that. Not when you can engage in measures to boost your reported SAT 25th/75th percentiles without boosting student quality. First, notice that the 25th and 75th percentiles are the only figures that matter in the US News rankings: a student 10 points above the 75th percentile is as good as a student 150 points above it as far as US News is concerned, and a student 150 points below the 25th percentile is as good as a student 10 points below it. So smart schools will concentrate on what’s happening at the 25th and 75th percentiles, and micromanage admissions and financial aid decisions to nudge those figures upward, even if it means losing some top-caliber applicants who will be hard to land and arguably could contribute more to the entering class, but who can be more cheaply replaced by students just above the 75th percentile, students who contribute just as much to the US News ranking as students with SATs 150 points higher (see the first sketch after this list). Not when schools can target “merit” aid NOT toward the most highly qualified (as many applicants and parents naively expect), but toward the students who at the margins will bump the 25th and 75th percentile figures upward. Not when schools can boost those percentiles just by going SAT-optional, since weaker scorers simply don’t submit and never enter the reported numbers. Not when schools can boost their entering-class percentiles by reducing the size of the freshman class (the only students who count for US News) and filling those empty chairs with transfer students, in many cases less well qualified, who pay the same tuition but don’t drag down the school’s precious 25th/75th percentiles. In short, marginal differences in reported SAT scores, a huge factor in the overall US News ranking, are pretty much meaningless. Sure, if the differences are in the 100 to 200 point range they probably reflect something real. Less than that, probably not so much. </p></li>
<li><p>Classroom size. I don’t need to comment on this much. The possibilities are amply demonstrated by the Clemson stories. Cap enrollment in smallish (20-30 student) classes at 19, so as to dramatically increase your reported percentage of “small” classes; in Clemson’s case, that percentage more than doubled. Collapse multiple sections of popular classes into a smaller number of 100+ mega-classes, so as to reduce your reported number of “large” classes (50+, as US News measures it) by replacing classes in the 50-100 range with a smaller number of even larger classes, thereby reducing student scheduling flexibility and perversely (there, I said it) putting students in even larger classes while accruing credit on US News for your efforts. Hire more adjuncts and/or TAs to teach more sections of small (<20) classes to boost the percentage of small classes in the overall mix. (See the second sketch after this list.)</p></li>
<li><p>As for quality of instruction: unless I’m missing something, US News doesn’t seem to have any metric for this. In the first place, many professors are lousy teachers and many TAs (or GSIs) are good teachers, though that varies by institution, field, and individual instructor. But remember, tomorrow’s professors are today’s TAs/GSIs, so if we’re going to dismiss TAs/GSIs as a lot, what do we say about the professors who started their careers as TAs/GSIs? But granting that more teaching by professors is on the whole beneficial—just where is this reflected in US News rankings? I just don’t see it. A lot of schools—Clemson possibly among them, though I’m not sure—have concluded they’ll show better in the US News rankings by having more TAs doing more teaching of small classes (producing a higher percentage of small classes in the overall mix, something that is measured), instead of confining TAs to their traditional role of leading one-hour recitation or discussion sections of large lecture classes. Perversely then (there, I said it again), the US News ranking may actually push in the direction of more teaching by TAs, not less.</p></li>
<li><p>Money is good. No doubt. But as I’ve argued, there are easy ways to waste money or create phony paper increases in “spending-per-student” that have no bearing on educational quality. Many schools are tempted to boost faculty salaries because it’s a boon to their “faculty resources” and “spending-per-student” marks. Faculty love it, of course, but it may or may not pay off in enhanced recruitment and retention of high-quality faculty. It may just be a windfall to a bunch of unproductive deadwood. Either way, US News will credit you just the same. Schools are also tempted to intentionally raise tuition and offset the tuition hikes with enhanced financial aid, because collecting more and spending more means more “spending-per-student” in US News. High-tuition/high-FA schools will always look better in the US News rankings than low-tuition/low-FA schools, simply because the former have a higher throughput of cash and therefore show higher “spending-per-student,” even if the net cost to the student and the net tuition revenue to the school are the same under either model (see the third sketch after this list). Perversely, then (there, I said it again), the US News metrics may actually be pushing schools in the direction of higher tuition and bloated costs, simply because they get rewarded for it in the metric that many applicants, some donors, and even some faculty now take as the “Bible” on college rankings. </p></li>
</ol>
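<p>To make point 1 concrete, here is a tiny illustration with made-up SAT scores for a made-up eight-student class: replacing the strongest admit (a 1650) with a 1530 leaves both reported percentiles untouched, even though the class got measurably weaker.</p>
<pre><code>
# Made-up illustration of why only the 25th/75th percentiles matter to USNWR.
import numpy as np

original = [1310, 1350, 1400, 1450, 1480, 1500, 1520, 1650]
swapped  = [1310, 1350, 1400, 1450, 1480, 1500, 1520, 1530]  # lost the 1650

for label, scores in [("original", original), ("swapped", swapped)]:
    p25, p75 = np.percentile(scores, [25, 75])
    print(f"{label}: 25th = {p25:.1f}, 75th = {p75:.1f}, "
          f"mean = {np.mean(scores):.1f}")
# Both classes report 25th = 1387.5 and 75th = 1505.0,
# but the mean drops from 1457.5 to 1442.5.
</code></pre>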
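<p>A similarly concrete sketch for point 2, again with made-up numbers: splitting one 38-student class into two 19-student sections manufactures “small” classes without changing how many seats are taught.</p>
<pre><code>
# Made-up illustration of the class-size binning game.
# Capping sections at 19 converts medium classes into "small" ones as
# USNWR counts them (< 20 students), with the same total seats taught.

def usnews_small_share(class_sizes):
    """Fraction of classes with fewer than 20 students (the USNWR cutoff)."""
    return sum(1 for s in class_sizes if s < 20) / len(class_sizes)

before = [38, 25, 25, 120]       # no class falls under the cutoff
after  = [19, 19, 25, 25, 120]   # the 38-student class split into two 19s

print(usnews_small_share(before))  # 0.0 -> no "small" classes
print(usnews_small_share(after))   # 0.4 -> two of five classes now "small"
print(sum(before), sum(after))     # 208 seats taught either way
</code></pre>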
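<p>And for point 4, the high-tuition/high-FA arithmetic, with hypothetical dollar figures and the simplifying assumption (as argued above) that tuition recycled as financial aid still counts toward reported spending: two schools with identical net revenue per student can report very different “spending-per-student.”</p>
<pre><code>
# Hypothetical illustration of the high-tuition/high-aid game. Assumes, per
# the argument above, that tuition collected and recycled as financial aid
# still counts toward reported per-student spending.

def profile(sticker, aid):
    net_to_student = sticker - aid   # what the student actually pays
    reported_spend = sticker         # gross throughput per student
    return net_to_student, reported_spend

low_tuition  = profile(sticker=30_000, aid=0)
high_tuition = profile(sticker=50_000, aid=20_000)

print(low_tuition)   # (30000, 30000)
print(high_tuition)  # (30000, 50000)  same net cost, higher reported spending
</code></pre>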
<p>Is this any way to run our colleges and universities? I think not. It’s perverse. It’s pernicious. The rankings are not to be trusted because they’re highly manipulable, and in manipulating them colleges and universities are losing sight of their core mission and misallocating resources that could be better spent on real improvements in educational quality.</p>
<p>Oh, and by the way: the PA rating is the least of it. Lowballing your individual PA rating of your peers is likely to be ineffective because any individual survey will have little impact, and if the practice is widespread the results should cancel each other out. It’s only if there’s a widespread conspiracy against certain schools—a dubious prospect, and certainly something for which no real evidence has emerged—that lowballing of PA scores is likely to punish particular schools or groups of schools. The most egregious problems, and the greatest latitude for manipulation, are in US News’ so-called “objective” metrics.</p>
<p>“Our difference, of course, is that I think it is the PA aspect that is the villainous feature, one which pollutes and undermines their rankings far more than any other factor.”</p>
<p>Hawkette, I completely agree with your four points as being necessary to an excellent education, but what is missing is the <em>quality</em> and the <em>type</em> of teaching. Are the professors knowledgeable enough to provide incredible depth? Do they engage the students? Do they encourage independent thought? Are their students well-prepared after they leave the classroom? These are the intangibles that PA attempts to measure.</p>
<p>Where you and I differ (I think) is that I don’t believe quality education can be quantified. If an attempt is going to be made to do so, as is the case with USNWR, then a factor that measures the above intangibles, called PA in this case, should be used. PA does not pollute but rather adjusts.</p>