<p>Why should anyone care about this?</p>
<p>Oh, how I just love that question. If only because the search for Dulcinea is a saga that has been going on for a good ten years now - and the time certainly is ripe for the Annapolis Group coalition to hitch their metaphorical wagon to admissions reform and get people to listen. If only because we are a society enamored of, and often obsessed with, "top ten" lists. If only because we get a good reason to discuss NSSE and what selectivity, prestige, etc. really mean in the context of quality education. If only because observing, and trying to come to grips with, the dynamics of collective action and admissions reform - in this case carried out by a relatively small but growing number of respected colleges - is reason enough.</p>
<p>The "rankings revolt" is now in the next stage - call it a watershed or turning point - but the letter is out and whether or not the Annapolis group et al. will sway enough support to make a dent in or actually change the world of college rankings this time around - that's the tale. Unless we have a viable substitute for USNews rankings and unless the number of dissenting colleges willing to pull out reaches critical mass a successful outcome is unlikely - that does not make it any the less fascinating or important.</p>
<p>
[quote]
Unless we have a viable substitute for the USNews rankings, and unless the number of dissenting colleges willing to pull out reaches critical mass, a successful outcome is unlikely - but that does not make it any less fascinating or important.
[/quote]
</p>
<p>I really don't have an ax to grind here, but why are you making the assumption that "dissenting colleges [pulling] out" would be a "successful outcome"? Have you considered the distinct possibility that anything the college presidents come up with will be worse, by whatever measure you choose, than the US News rankings? </p>
<p>To use an analogy, would you rather have auto manufacturers rating cars or some organization like Consumers Union?</p>
<p>Notably missing from the Annapolis group list referenced above is Smith College. Any idea why, mini and thedad?</p>
<p>Washdad, just as I would advise against reading too much into the meaning of rankings, I would certainly advise against reading too much into the meaning of "successful outcome" in the sentence quoted above, which must be understood in context. By successful outcome, I simply refer to the specific goal of the dissenting colleges in terms of their collective action. For the organizing colleges leading the revolt - and remember, the Annapolis Group is not acting as a united coalition on this, since each college must take a stand, get off the fence, and sign up or not - success implies that they garner enough support for an effective boycott or, at the very least, launch a protest loud enough to seriously compromise the premise of benevolent collaboration that validates and legitimates the ranking system - not just in the eyes of the public but in the eyes of the educators who fill out the forms and take part in the survey. The subsequent impact of their aim is quite another and different question. Other projects promise to serve the public's need to know - NSSE, for one, and let's not forget that we haven't heard from the Education Sector yet. Rankings and top ten lists (which I love) we have galore. Now, I do agree that what we really need are the tricks of the trade to be able to tell when a car salesman is selling us a lemon.</p>
<p>
[quote]
Robert Morse may be one of the most powerful people you've never heard of.</p>
<p>Morse is the director of data research at U.S. News and World Report, the man behind the college rankings that appear every September like Elvis Presley walking onstage at the Vegas Hilton. The special issue has become ingrained in our national consciousness, its cover blaring out from the newsstand with glittering rhinestone phrases: "AMERICA'S BEST COLLEGES"; "ALL NEW EXCLUSIVE RANKINGS"; "#1 BEST SELLER." Add a few stars and some cartoon graphics of a faceless child (could be yours!) accepting a diploma amidst massive Ionian columns and what you have, my friend, is what has become known as the swimsuit issue of news magazines. What parent could resist?</p>
<p>As well known as the yearly issue is the controversy surrounding it. School administrators decry the rankings as a ridiculously inaccurate measure of an institution's quality. Some, like Leon Botstein, AB'67, president of Bard College, are moved to anger-fueled hyperbole. "It is the most successful journalistic scam I have seen in my entire adult lifetime," Botstein told the New York Times recently. "A catastrophic fraud. Corrupt, intellectually bankrupt and revolting."</p>
<p>Yet critics cannot deny its sway over prospective students and their parents. U.S. News sells 2.4 million copies of its college-rankings issue annually-driving newsstand sales up 40 percent-and 700,000 copies of a companion guide to schools. According to a 1997 UCLA study using data from more than 220,000 first-years, 41 percent of students find the rankings to be somewhat or very important in their college choice. </p>
<p>Not only is the special issue consistently one of U.S. News's best-sellers, but it sells to the top students. The UCLA study concluded that the U.S. News and other newsmagazine school rankings are "a phenomenon of high-socioeconomic status, high-achieving students" for whom a school's academic reputation is a powerful influence, "more powerful than the advice of professional advisors or the influence of families."</p>
<p>Morse is the man primarily responsible for U.S. News's sway over millions of future professionals, academics, and politicians-he is referred to by his boss, special projects editor Peter Cary, as "the brains of the operation, the heart and soul of the engine." In 1989 Morse devised the magazine's first methodology to judge schools on such points as SAT scores, selectivity, and endowment income. He is still waiting to hear the end of it.
[/quote]
</p>
<p>
[quote]
The "rankings revolt" is now in the next stage - call it a watershed or turning point - but the letter is out and whether or not the Annapolis group et al. will sway enough support to make a dent in or actually change the world of college rankings this time around - that's the tale.
[/quote]
</p>
<p>Asteriskea, have you seen the letter? And if you did, could you let us know what the letter is requesting, or more importantly is ... proposing? </p>
<p>While I believe that the motives of the officials of schools such as Drew and SLC are crystal clear, I am wondering how the group might reconcile its various positions. For instance, it is an open secret that the Claremont schools do NOT like the rankings any more than many of their peers do. This said, changes in the manner in which USNews 'handicaps' schools through highly questionable and subjective weights (a reference to the cronyist peer assessment) should have little negative impact, if any. On the other hand, I don't see how schools that have seen their ranking boosted by the shenanigans in the PA and by the 'generous' offsetting of lower selectivity through a lower expected graduation rate would support comprehensive changes.</p>
<p>Xiggi says that US News uses "highly questionable and subjective weights." I've read this dozens of times (at least) on CC, and I don't even disagree. A lot of posters go on to write as if there would be a renaissance of open information sharing if the colleges just started their own system (an argument not made by Xiggi in the above post, by the way). Once again, my question is: what in the history of education makes us think that a college information system for prospective students would be less questionable and less subjective? I think there has been a LOT of assuming facts not in evidence on this topic.</p>
<p>Washdad, having repeated --and posted-- my main criticism of the USNews rankings so many times, I tend to take a few shortcuts. To clarify my position, the "highly questionable and subjective" is a reference to the first column of the ranking: the peer assessment. In so many words, I believe that the 25% weight given to the PA is ridiculous considering the openly recognized manipulation and cronyism involved in the process. And this leads me to the next point: can we trust the schools to provide the information correctly and spontaneously and offer COMPLETE transparency? My answer is a resounding ... NO. Hell no! </p>
<p>For the record, it is this same belief that forces me to look (down) at Lloyd Thacker with such dismay. I find his naive insistence on seeking 'solutions' by merely polling the people who love to operate behind the dark curtains of their ivory towers to be an exercise in futility. If this group of educators TRULY wanted to change how they are perceived by the public, a good start would be to make the famous Thacker meetings open to the public. </p>
<p>Lastly, since the MOST important section of the article quoted previously was removed, I am reposting it:
[quote]
U.S. News makes no bones about viewing education as a product and students as customers-a distasteful metaphor to administrators, but a metaphor not without merit. "The schools won't accept the premise that we're providing a service that the marketplace believes has value," says Morse, "and that our main market for doing this is the consumer."
[/quote]
</p>
<p>All I can say is that the time for colleges to start considering students as CUSTOMERS is well, well overdue. Indeed, a distasteful metaphor to administrators!</p>
<p>
[quote]
All I can say is that the time for colleges to start considering students as CUSTOMERS is well, well overdue. Indeed, a distasteful metaphor to administrators!
[/quote]
</p>
<p>Amen and amen.</p>
<p>
[quote]
If the prestige privates really wanted some accountability, they could publish their COFHE data (that would be a real eye-opener - H. at 27th out of 31 in perceived academic quality and quality of campus life would take some getting used to) or take part in NSSE. </p>
<p>Fat chance of that!
[/quote]
Sure, that makes sense. Publish a ranking where nobody knows any other school. Get real.</p>
<p>I have a modest suggestion for a replacement system that would meet most of the objections to USNews--more of a thought experiment than anything that is likely to actually happen: First, use the new Carnegie Classification scheme, which would divide the top schools among several categories instead of concentrating "the schools that matter" into two. Then, simply list schools alphabetically within these classifications, as originaloog suggested, providing as much data from the CDS and NCES as anyone could ever want, far more than USNews currently includes in their methodology. (And of course USNews could make the data sortable on any of the measures, just as they do now in the on-line version of the rankings.)</p>
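<p>Just to make the thought experiment concrete, here is a rough sketch of what "sortable on any of the measures" might look like - the school names and numbers below are invented for illustration, not real CDS or NCES data:</p>
<pre><code># Toy illustration only: invented records standing in for CDS/NCES data.
schools = [
    {"name": "College A", "grad_rate": 0.91, "endowment_per_student": 450_000},
    {"name": "College B", "grad_rate": 0.86, "endowment_per_student": 610_000},
    {"name": "College C", "grad_rate": 0.94, "endowment_per_student": 380_000},
]

# Default presentation: alphabetical within a Carnegie classification, no rank numbers.
alphabetical = sorted(schools, key=lambda s: s["name"])

# Reader-driven view: sort by any single measure, with no composite overall rank.
def sort_by(records, measure, descending=True):
    return sorted(records, key=lambda s: s[measure], reverse=descending)

for s in sort_by(schools, "grad_rate"):
    print(s["name"], s["grad_rate"])
</code></pre>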
<p>Would something like this satisfy those who are critical of this little revolt? Or is it really the ranking numbers themselves and the ability to see a numbered, 1-100, hierarchy of implied educational quality that you value and want to preserve? </p>
<p>That seems to me to be one of the root questions about USNews rankings in their present form, and it somehow gets lost in the shuffle in talk about depriving students/parents information, alternative outcome-based measures, whether or not the PA is any more subjective than other data points, etc.</p>
<p>College students as customers - this is the whole point now, isn't it? Most colleges do indeed think of students and parents as customers and of higher ed as a marketplace. College marketing is big business - and that includes promoting colleges and universities, developing brand loyalties and brand recognition. Savvy customers ask questions about what kind of service is being delivered and how, in order to assess its quality and value. USN rankings provide a service too - a great and much needed one - and savvy customers have not just the right but the obligation to question the product. These days, customers do want more quality and value from colleges and from ranking services that profit from the admissions frenzy - and that certainly does mean getting real about all of those services aimed at putting parents and students more in control of the college selection process. The real question, then, is whether the customer is always right. The question is also just how many elite colleges with strong brand recognition and loyalty linked to academic reputations will sign up to opt out.</p>
<p>
[quote]
Colleges and universities are engaged in a competition for their share of the education market; competing for students not only in terms of academic programs, prestige, and reputation, but also on the quality of student service delivery and value of student experiences outside of the classroom....
[/quote]
</p>
<p>
[quote]
The real question, then, is whether the customer is always right.
[/quote]
</p>
<p>Is the provider more likely to be right than a lot of customers who have had the opportunity to shop around?</p>
<p>To me the problem is that "prestige hounds" need a ranking list so they can say they're at "the #1 school". There is a huge market for this product (a list saying WHO'S THE BEST) and I expect there always will be. There are other ways to evaluate schools--read the Fiske, Insider's, Princeton, etc. guides, talk to people, visit some campuses. The best I can see happening is augmenting the US News kind of rankings with other "top 10" (or 100--or 300) lists addressing things like student engagement, success in grad school/med school admissions, etc. Some of this data is out there, but I get the feeling Thacker is just trying to say something like, "US News rankings are not the only way to measure excellence." I think he'll have achieved a lot if he can educate the general public, as Loren Pope has had some success doing, that a great education can be had in thousands of places, that prestige is just that and doesn't, by itself, ensure a great education, and that often a better fit and a better learning environment for a given student can be found outside of the US News top 10, 25 or (gasp) 100 schools.</p>
<p>I have serious doubts that the surveys used by the COFHE schools and the new NSSE surveys would be even slightly useful as a ranking system across a range of colleges. These surveys are best suited for longitudinal comparisons at a single school over time. As an analogy, presidential tracking polls over time are somewhat informative, but any one poll is often a flawed predictor of future election results.</p>
<p>Here's the problem. I was looking at the survey results for a school whose students gave it a 2.5 (out of a possible 4) for "racial/ethnic diversity", a well above average score. So, this school must be doing pretty well in the area of racial/ethnic diversity, right? </p>
<p>Uh. Not exactly. The school is as white as the driven snow. 81% white. Only 12% US minority students. There have been KKK meetings with more than that.</p>
<p>Similarly, the score for "Housing" was above average, despite the fact that the school provides housing for less than half of its third and fourth year students. How can you legitimately be above average when you don't even provide housing for your students?</p>
<p>The qualitative responses on many of these survey questions are highly dependent on student expectations and their frame of reference.</p>
<p>The thought of outcome-based surveys like NSSE as the basis for rankings could turn even me into a USNews rankings lover. It's frightening for more reasons than just the methodological ones that interesteddad rightly points out. (And there would be enough of these that the arguments over USNews's methodology would look mild.) Even worse, it seems to me:</p>
<p>There would be pressure to adopt homogeneous curricular practices (like lots of writing-intensive classes rather than genuine research that might produce a less tangible "product") whether or not such courses fit the curricular needs of a particular department or institution. </p>
<p>Institutional priorities could shift to fit the measurement instrument (NSSE includes such measures as whether a college provides dependent care for students, for example; it also asks about community-based work in the context of a course, but not as a volunteer project and the like).</p>
<p>And if colleges are now guilty of manipulating such things as admissions practices in an attempt to move up in the rankings, imagine if they began to make decisions about (even more) of their educational practices based on some artificial ranking.</p>
<p>It's basically ALL artificial.</p>
<p>All things considered, I think USNEWS does a pretty decent job. Sure, I quibble with some of their weights, but as a rough guide, I find their rankings useful. I couldn't care less if a school is #17 or #18, but it's handy for seeing where a school falls in the big picture. Very handy if you start using the sortable fields in the tables.</p>
<p>I understand why they have the peer assessment. They almost have to in order to prevent anomalies. For example, when I was tallying up PhD production, I would occasionally see some tiny, unknown Baptist seminary at the top of the charts...because all of their grads were getting doctor of theology degrees. Likewise, if you look at per-student endowment, you'll see a couple of minuscule conservatories near the top because they have huge endowments and 57 students. The peer assessment filters that kind of stuff out of a pure data-driven system.</p>
<p>Having said that, a few things I would like to see improved.</p>
<p>Faculty/student ratio: The USNEWS method is a joke. Universities get to count professors even if they are given full waivers to do no teaching and concentrate only on paid research. They are counted even if they only teach one undergrad course. Or if they teach a grad school course open to undergrads. I would propose totaling up the number of actual student courses (for example, 500 students in an Econ course and 6 students in a Swahili course would be 506 student courses). Then divide by the number of professors (2) to give an average of 253 students per professor course. That would be meaningful data. Plus, can we please quit pretending in these rankings that nobody uses TAs. Come on. TAs per course should be one of the columns of the rankings.</p>
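<p>To put that in back-of-the-envelope form, here is a rough sketch of the calculation - the enrollments and professor count are just the made-up figures from the example above, not data from any real school:</p>
<pre><code># Back-of-the-envelope version of the proposed "students per professor-course" measure.
# Enrollments and professor count are the hypothetical figures from the example above.
course_enrollments = [500, 6]   # a big Econ lecture and a small Swahili seminar
num_professors = 2              # professors actually teaching those undergrad courses

student_courses = sum(course_enrollments)              # 506 student-courses
avg_per_professor = student_courses / num_professors   # 253.0

print(f"{student_courses} student-courses / {num_professors} professors = "
      f"{avg_per_professor:.0f} students per professor-course")
</code></pre>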
<p>Meaningful per-student endowment and per-student spending: The way this is handled is another joke and woefully inadequate. Let's come up with a standardized way of reporting an honest per-undergrad endowment and per-undergrad spending. Stop burying the undergrad data in an avalanche of money going to the med school.</p>
<p>A measure of diversity: Pretty simple. How about the percentage of non-white and international students. Nice simple column in the rankings. Count it.</p>
<p>A measure of net price: The actual average per student net price charged. All colleges calculate this. Give us a column with the data. </p>
<p>A measure of outcomes: The PhD completion data is already available. Colleges could publish real med school acceptance rates and law school acceptance rates if the Common Data Set provided a standardized method. Do it. Let's see some outcomes included in the ranking weights.</p>
<p>The IHEP report on college and university rankings includes a monograph by former USNWR managing editor Alvin Sanoff:</p>
<p>The worst unintended consequence of the USNWR rankings is that they have become the 'invisible hand' that underlies the strategic planning process and action plans of universities, but especially of non-taxpayer-subsidized LACs. The higher-ranked LACs become the benchmarks for the lower-ranked colleges. They become the 'standard', so to speak. The LACs that are ranked 25 to 50 benchmark those ranked 25 to 1. Those ranked 50 to 100 benchmark those ranked 25 to 50. As the strategic literature suggests, this neurosis leads to a homogenization of strategies and tactics.</p>
<p>The blessing of the American college system in the first half of the 20th century was that there was such diversity in strategy and tactics. There was intense debate as to what liberal education was, and colleges took a position based upon their unique resource capabilities and forged identity. Now, imitation trumps experimentation. 'How does Williams do this and that? How can we do what they do?'</p>
<p>Such benchmarking is healthy, but it becomes unhealthy when decisions are guided by a kind of academic 'bean counting': increase retention rates (turnover is not necessarily a bad thing), create some classes with 100 students so that you can free up faculty to teach classes with 20 or fewer students, refuse a bright student who does not score well on the SAT because it drives your numbers down, and so on, all to influence the surrogate measures deployed to rank the schools.</p>
<p>What the lower-ranked LACs do not realize (a simple strategic principle) is that the highly ranked schools attained their brand ID over many, many decades. They can coast on their brand ID, or, if the college is really good, they will always be a step ahead because they have the 'deep pockets' to develop programs and offer scholarships to those with strong HS records. The lower-ranked schools can never really catch up, as they are continually reacting - there are exceptions. These lower-ranked colleges would be better off refusing to submit their data to any third party that deploys poor measurement techniques, but, more importantly, they need to stop benchmarking and begin innovating. The college claims for itself competency in reflexive action, yet colleges seem unable to apply that critical thinking and reflexivity to themselves as managed organizations. We would be the beneficiaries of experimentation, which, I suspect, would have the secondary benefit of getting consumers out of the mind-set that colleges can be rank-ordered.</p>