Clemson--Gaming US News Ranking


<p>Good to hear those scores have finally reached the top of the third decile. In contrast, William & Mary’s accepted-student SAT scores are in the first decile.</p>

<p><a href="http://www.xap.com">Xap Student Center</a></p>

<p>A couple of years ago I was doing some research into start-up costs at various schools around the country, and it got to be funny how many of them stated aspirations to be a “top 20” this or “Top 15” that. Usually in research funding. It’s nothing new to aspire to rise in certain rankings. It’s just that people are particularly troubled by the USNews rankings, because (a) they don’t want them legitimized in that way and (b) some of the actions taken may not truly make the school “better” in the way the rankings would suggest.</p>

<p>The other question that came to mind is how many are doomed to failure when everyone is aiming for a top spot on the same list. There can only be 20 “Top 20” schools, and 20 of them are already sitting there. It’s not feasible for 80 others to think they’re going to be there in the near future.</p>

<p>hoedown,
Actually, there can be a lot of “Top 20” schools if you narrow the rankings down to the specific elements that make them up. </p>

<p>For example, a school like Wake Forest could make a lot of noise that it’s a Top 20 school for students who are interested in being on a campus with small classes. </p>

<p>Or a bigger research university like a State U could say that it is Top 20 in research funding. </p>

<p>Or another school (Tufts) could say that it is Top 20 in standardized test scores. </p>

<p>Or a school could even say it is Top 20 in terms of the combined percentage of students of color on their campus (UCSD).</p>

<p>Schools can and should get more creative about presenting themselves to prospective students and families and not be slaves to the overall USNWR rankings. They can best do this by comparing themselves on things that they do well and which are of value to their target market. Forget the broad rankings and focus on what your target student cares about and then do everything possible to deliver that element to that student.</p>

<p>The article just reinforces my posted views of what a joke peer assessments are as a factor in USNWR rankings. </p>

<p>My takeaway is that Clemson is doing what a lot of colleges are doing; they just had someone be brutally honest about it. If Clemson is attracting better students over the last several years, more power to them. Still, it would be interesting if someone were to request peer assessment records from Clemson and other public universities under disclosure laws and report the results. Now, THAT would be interesting.</p>

<p>A few of the stats that are being discussed here could fairly be described as “gaming,” but if Clemson is truly seeking to enhance all the criteria upon which rankings are based - e.g. student selectivity, faculty-student ratios, per-student expenditures, etc. - they are, in fact, becoming a better university in the process.</p>

<p>Is there evidence that high selectivity, low faculty-student ratios, and increased per-student expenditures actually improve the quality of undergraduate education? Just because USNews treats these as objective indicators of a quality education does not make dubious data valid. </p>

<p>What’s worse is … “…Clemson went from the 38th best college in the country to the 22nd best without changing the faculty or the curriculum.”</p>

<p><a href="http://blogs.indystar.com/education/2009/06/do_college_rank.html">http://blogs.indystar.com/education/2009/06/do_college_rank.html</a></p>


<p>Expressions of honesty about the abuses of the Peer Assessment are not novel. Countless school officials have admitted to using the PA to punish their foes and help their friends. While geographical cronyism is blatant, it is simply swept under the rug of “business as usual.”</p>

<p>It will take many more admissions of manipulation to convince the PA defenders that there is something rotten in a process that is as opaque as it is lacking in integrity.</p>


<p>I don’t know about class size, but I know some institutions do fudge numbers. I was told this, to my face, by a colleague at an institution that I respect and admire. And if it’s happening there, well, I guess I don’t know what to believe any more. In their case, it was partly a lack of data (it wasn’t collected in a way that fit the USNews parameters) and in crafting a number that would meet USNews requirements, they came up with something that is probably more flattering than the truth. In any event, it’s not comparable to other campuses–despite the fact that this is exactly what USNews does with it.</p>

<p>I have to say that I am now more careful not to assume small differences in USNews-reported numbers reflect true differences between institutions.</p>

<p>^^ The most damning part of the Clemson allegations in my opinion is not what it says about PA, but what it says about the utter bankruptcy of the so-called “objective” data US News uses in its rankings. Most of these data are highly manipulable, and most of the methods schools can use to improve their US News ranking do not require outright lying. Schools can and do make cosmetic changes that reflect no real improvement in educational quality but show them in a more attractive light in the US News rankings. The whole enterprise is pretty much a sham.</p>

<p>This is why I’ve always felt these rankings are a bunch of garbage. I laugh when I read that schools like Samford, Arkansas, Howard, TCU, Catholic, Washington State and Colorado State are ranked and we’re not.</p>


<p>I think you’re missing the point. Schools can increase their selectivity scores as measured by US News without actually increasing their selectivity. Really increasing your selectivity is hard. Improving your selectivity score is easier. Two obvious methods: 1) Go SAT-optional. Only the top SAT-scorers will report their scores. The school’s reported 25th-75th percentile SAT scores will go up, even if they’re admitting exactly the same applicants. 2) Reduce the size of the entering freshman class so as to actually increase selectivity (and derivatively, reported selectivity stats) for the entering freshman class, and make up for the lost revenue by filling the empty chairs with transfer students whose SAT scores and other stats will go unreported on US News (because it measures only enrolled freshman stats). Reported selectivity will rise; the actual selectivity of the freshman class will rise; but the selectivity of the school as a whole—and the strength of its student body—will remain constant or possibly even decline a little. It’s a “costless” move in the sense that it’s revenue-neutral. It will make the school look better in US News. But it won’t make it a better school. </p>
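<p>To put rough numbers on method 1, here is a quick back-of-the-envelope sketch in Python. The scores are invented; the only point is that the enrolled class never changes, just which scores get reported.</p>

```python
# Back-of-the-envelope sketch of method 1 (going SAT-optional).
# Scores are invented; the enrolled class is identical in both cases,
# only the set of reported scores changes.

enrolled_sats = [1050, 1100, 1150, 1200, 1250, 1300, 1350, 1400, 1450, 1500]

def pct(scores, p):
    """Crude percentile: the value at fractional position p in the sorted list."""
    s = sorted(scores)
    return s[round(p * (len(s) - 1))]

# Everyone reports: the true 25th-75th percentile range
print(pct(enrolled_sats, 0.25), pct(enrolled_sats, 0.75))  # 1150 1400

# SAT-optional: suppose only the stronger scorers choose to submit
reported = [s for s in enrolled_sats if s >= 1300]
print(pct(reported, 0.25), pct(reported, 0.75))            # 1350 1450
```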

<p>Same for spending-per-student. If a school hikes tuition by 20% and gives the faculty a 20% raise, is that a better school? It could be, if they use the enhanced salaries to attract and retain a stronger faculty. But if the faculty remains the same, is it really a better school just because it spends more? If a school hikes tuition 20% over 2 years and recycles all that additional revenue back into increased financial aid to offset the tuition increase (another revenue-neutral move), the increased financial aid will boost reported “spending-per-student.” But the school will have exactly as much net tuition revenue, the net cost to students will be exactly the same, and no additional resources will be created to go into improving or upgrading educational or other services for students. Yet through the magic of an accounting gimmick, the school will look like it’s spending more per student. Is that a better school? I don’t think so.</p>
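<p>And to make the accounting gimmick concrete, here is the same kind of toy arithmetic. The numbers are invented, and it assumes, as just described, that institutional financial aid counts toward the reported per-student spending figure.</p>

```python
# Toy numbers showing the tuition-recycling accounting effect.
# ASSUMPTION (as in the post above): institutional financial aid counts
# toward the "spending per student" figure that gets reported.

students = 1_000
sticker_tuition = 40_000      # per-student sticker price before the hike
aid_budget = 10_000_000       # existing institutional aid
other_spending = 50_000_000   # instruction, services, etc. (unchanged)

def report(sticker, aid):
    gross_tuition = sticker * students
    net_tuition_revenue = gross_tuition - aid
    reported_spend_per_student = (other_spending + aid) / students
    avg_net_cost_per_student = net_tuition_revenue / students
    return net_tuition_revenue, reported_spend_per_student, avg_net_cost_per_student

before = report(sticker_tuition, aid_budget)

# Hike tuition 20% and recycle every extra dollar into financial aid
extra = 0.20 * sticker_tuition * students
after = report(sticker_tuition * 1.20, aid_budget + extra)

print("net tuition revenue:  ", before[0], "->", after[0])  # unchanged
print("reported $/student:   ", before[1], "->", after[1])  # goes up
print("avg net cost/student: ", before[2], "->", after[2])  # unchanged
```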

<p>It’s all a sham, and the Clemson situation has blown the lid off.</p>

<p>I totally agree bclintonk. I have to laugh at people who think this incident is just an indictment on the PA at USNWR.</p>

<p>FindAPlace and Hoedown, one of the statistical quirks regarding the % of classes over 50 students isn’t something the colleges fudge one way or another…it’s just an easily misunderstood aspect of the stats. The crux of the issue is that people are likely to think that if x% of classes have over 50 students in them, then x% of the average student’s courses will have over 50 students. But since the over-50 classes have more students IN them, they account for a BIGGER % of all the students’ experiences than x%. Get it? Likewise, if y% of the classes are under 20 students, they will have a SMALLER impact on the overall students’ experiences, so, necessarily, the average student’s % of classes with under 20 students in them will be LOWER than y%. In other words, x and y refer to the % of CLASSES. If a school offered just two classes, one with 10 students and one with 90 students, it could say that 50% of its CLASSES had under 20 students. But in fact only 10% of the STUDENTS (NOT 50%) would have had a class with under 20 students.</p>
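<p>Since this trips people up every year, here is that two-class example worked out in a few lines of Python (toy numbers, just to show the gap between the unweighted class count and the student-weighted view):</p>

```python
# The two-class example from above: one class of 10, one class of 90.
classes = [10, 90]  # enrollment in each class

# Share of CLASSES with fewer than 20 students
pct_small_classes = sum(1 for size in classes if size < 20) / len(classes)

# Share of STUDENT SEATS that sit in those small classes
pct_students_in_small = sum(size for size in classes if size < 20) / sum(classes)

print(f"classes under 20:          {pct_small_classes:.0%}")      # 50%
print(f"students actually in them: {pct_students_in_small:.0%}")  # 10%
```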

<p>I know, Schmaltz, it’s an important fact and one that several people have carefully explained on CC. </p>

<p>But it’s also the case that class size figures into the numerical formula USNews uses to determine a school’s “ultimate” ranking. Schools do have an incentive to post good numbers in this area, even if the misperceptions make those percentages hard for people to interpret accurately.</p>


<p>Thanks for bringing these up. They may be obvious to people who study these things, but for the rest of us they’re valuable info to consider. </p>


<p>So you’re saying that the current system of reporting class sizes is unweighted (i.e., the fact that 90 kids took the large class is not factored in)? That’s outrageous.</p>

<p>Yes, unweighted…that’s why I brought it up even though it’s been discussed before on collegeconfidential. It needs to be repeated because (1) it’s a little hard to grasp, and (2) a new group of people (high school seniors) needs to be aware of this every year.</p>

<p>Another misleading aspect of SATs that needs to be repeated periodically is that the SAT stats that are given are usually of those ADMITTED, not those who ENROLLED. Typically the admitted stats would be higher than the enrolled ones, because a lot of the people who were at the top of the ADMITTED spectrum would also have been admitted to more competitive schools and would have actually enrolled at one of them.</p>
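<p>A toy example of why admitted stats run higher than enrolled stats (scores invented, and it simply assumes the strongest admits enroll elsewhere):</p>

```python
# Invented scores: the admitted pool vs. who actually shows up.
import statistics

admitted = [1200, 1250, 1300, 1350, 1400, 1450, 1500, 1550]

# Assume the strongest admits enroll at more competitive schools instead
enrolled = [s for s in admitted if s <= 1400]

print("median of ADMITTED:", statistics.median(admitted))  # 1375.0
print("median of ENROLLED:", statistics.median(enrolled))  # 1300.0
```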

<p>Don’t know if anybody mentioned this, but isn’t it also true that the “percent accepted” stats usually don’t include waitlisted students who are eventually accepted?</p>

<p>And a year later, in the 2011 USNews rankings, Clemson has dropped to # 64 !!!</p>

<p>Clemson 2011
Peer assessment score (out of 5): 3.2
High School Counselor score (out of 5): 3.9
Undergraduate academic reputation index (100 = highest): 69</p>

<p>@bclintonk</p>

<p>Thank you. When a university can do things to move up the rankings chart without doing anything that makes it a better university, you know the rankings are crap.</p>

<p>Clemson 2011
Peer assessment score (out of 5): 3.2
High School Counselor score (out of 5): 3.9
Undergraduate academic reputation index (100 = highest): 69</p>

<p>And, by the way, that is up from 3.1 last year. It seems that even bad publicity is better than no publicity!</p>