2008 US News Rankings

<p>Hawkette. And, by the way, lots of high profile teachers absolutely do teach.</p>

<p>I'm sure that many high profile teachers teach. That has never been my point. My point is to see if their teaching performance (which is what really matters to the undergraduate student) meets their reputation among academics. You are saying to "just trust the academics." My response would be, in the words of Ronald Reagan, to "trust, but verify."</p>

<p>The point is whether great teaching = a higher Peer Assessment score.</p>

<p>Ok. I agree with "verify." Why not? But not with the students' opinions. Again, the students are very valuable. But, not, in my opinion, in a national ranking. It is really so interesting how we come from different perspectives...How does the saying go? Something like "...never the twain shall meet...???"
If there is a better way to do peer assessment, I am certainly all for that. But no peer assessment for me means no value to the USNWR rankings. If I can't trust the experts in the field...and, no...for me the students do not fit into this equation for these purposes...then the rankings are meaningless to me, and I would have to seek other credible opinion.</p>

<p>No, great teaching does not = higher PA. Great reputation = higher PA.</p>

<p>If anybody here has found a way to rank universities according to quality of instruction, please let me know. I assume you will take into account different individual learning styles (there are only 1,000 or so such styles). A good professor will find a way to appeal to all 1,000+ learning styles. I also assume that such a ranking will find a way to measure each instructor's passion for teaching undergraduate students.</p>

<p>hawkette, the problem with student ranking is that students have no yardstick to compare their school with (except for a few transfers, and even then they have only experienced one, or at most two, other schools). Besides, students at different colleges have very different levels of expectation. Students at Harvard may very well require a much higher level of teaching to satisfy themselves than students at Podunk U. And students may often rate on factors that have little to do with the quality of the education they receive, and more to do with their grades and the amount of work (or lack thereof) that they do. Student assessments have some use for comparing the teaching of professors within a single institution (particularly if you control for average course grade), but little value in comparing institutions to each other.
The problem with employer assessment is that, frankly, it is likely to end up as basically a proxy for selectivity. Employers like smart, hard working employees. Schools that are more selective to begin with will have more of those regardless of educational quality. I don't think an employer ranking will really reflect the quality of education.
Of course, none of this makes PA a terrific assessment of undergraduate education. Anyone who pretends that it directly measures the quality of teaching is deluding themselves, as no administrator can claim to have sat in on enough classes at all of his peer colleges to judge the teaching himself, or even to have met very many students from each of those colleges. It's probably not even that great a direct indicator of research strength (though this indirectly creates the prestige that does drive PA). That would be better served by asking a broad range of active faculty members to do the rankings. But again, such a ranking would probably not be that relevant. The main purpose served by PA is to keep top public universities at a decent position in the rankings. Though it's hardly an objective purpose, I'm not so sure that it's really that bad of one.</p>

<p>One thing usnews should do to show undergraduate strength is to include percent of courses taught by faculty in the rankings.</p>

<p>Hawkette, your insults no longer have an effect on me. I have grown accustomed to them by now. But if I had no interest in the well-being of undergrads, I would not waste my time on this forum. I care immensely for the future of students and that is precisely why I frequent these boards. But I also believe learning is an individual pursuit. Each student must choose a path that fits her/his needs and, once chosen, must strive to make things happen. All I was saying is that measuring the quality of education is impossible and that the PA measures the reputation (not prestige, mind you) of universities as seen through the eyes of leading academics. I don't see how any of those concepts is controversial.</p>

<p>Hawkette. The answer to your question is certainly not students, and certainly not employers. I am an expert-oriented person. I trust credible authority. I think that the people who run USNWR, although probably not doing a perfect job in amassing relevant data, are pretty smart people. Its chairman and editor-in-chief, Mort Zuckerman, is a very impressive man who is certainly no fool. I guarantee you that he did a fine job educating his own children, and I believe that he is perfectly capable of assuming and assuring a reasonable amount of quality control over his product.
Here is what I have faith in: since the people who publish this report are, in my opinion, intelligent, very accomplished people, I will put my trust in the way they go about obtaining the peer data. I will let them figure out how best to control the quality and accuracy of their information. One thing I can tell you. Mr. Zuckerman would certainly disappoint me if one day he decided to include student and employer opinions of undergraduate teaching to "balance out" the opinions of the professionals that the editorial board deems worthy.
One thing you will have to take on faith. I am very familiar with the way that newspapers and other news-oriented publications work. I know from lots of experience that they are not the most honest brokers, often twisting the truth and fabricating stories to sell their publications. This, I know only too well. So I am the last person to be Pollyanna-ish about the goal of these publications, which is to sell, sell, sell. They are businesses, after all. I know all of this. But one has to rationally judge what the purpose would be in putting out information that could mislead and adversely affect many thousands of children and their families. None. There is no bad intention here on the part of USNWR. Their goal is to educate the public about colleges and universities. Does that mean that every little ranking has to be taken so to heart? If a peer assessment is a 4.6 or a 4.3, is the reader meant to eliminate a particular school because that assessment is some tenths of a point lower than the other? Of course not. These are general findings, meant to be looked at broadly. No one should have her child choose a school because it has a 4.6 rank over a 4.3. That is just ridiculous, and I hope that does not happen. But does that mean that PA is unimportant? Of course not. This is how the reader knows what academia thinks of a particular school. This bit of important information is then imparted to the public.</p>

<p>Please. Let the kids and their parents use the PA data in the spirit in which it is given. Please stop confusing the issue. Let the readership have faith in professional opinion. I just cannot understand how one might think that a student's opinion is more relevant than a provost's or a college president's. Are you really serious?</p>

<p>"One thing usnews should do to show undergraduate strength is to include percent of courses taught by faculty in the rankings."</p>

<p>I think this would be an interesting statistic. However, the USNWR must clearly define what they mean by "courses". Are we talking purely about lectures, or about lectures and discussion groups? The PR already does this, but most public universities (with the exception of the UCs) include all courses, including discussion groups. Most private universities (and some publics like Cal) only include lectures. The result is disastrous for the former (65%-75% of "courses" taught by professors) and impressive for private universities (95%-100% of "courses" taught by professors).</p>

<p>Either way, in most cases, universities will have very similar stats, but apples must be compared to apples to make it work, something the USNWR does not do in most other criteria rankings.</p>
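<p>To make the definitional problem concrete, here is a small illustrative sketch (Python, with completely invented section counts rather than data from any real school) of how the same institution can land in either of the ranges above depending on whether discussion sections count as "courses":</p>

[code]
# Illustrative only: invented counts for a hypothetical large university.
lectures_by_professors = 950
lectures_by_others = 50            # adjuncts, lecturers, TAs
discussion_sections_by_tas = 400   # TA-led sections attached to lectures

# Definition A: count lectures only (how many privates, and Cal, report it).
lectures_total = lectures_by_professors + lectures_by_others
pct_lectures_only = 100 * lectures_by_professors / lectures_total

# Definition B: count lectures plus discussion sections (how most publics report it).
all_courses_total = lectures_total + discussion_sections_by_tas
pct_all_courses = 100 * lectures_by_professors / all_courses_total

print(round(pct_lectures_only))  # 95
print(round(pct_all_courses))    # 68
[/code]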

<p>hawkette;</p>

<p>don't contradict yourself...</p>

<p>
[quote]
Re the problems that I am trying to address, how does the current ranking system using PA account for the following situations involving high reputation faculties?

1) they don’t teach undergraduates...
[/quote]
</p>

<p>In fact, it WAS your point.</p>

<p>I have not closely followed this debate so please forgive a few errors.</p>

<p>TA teaching is not necessarily bad. A TA from East Asia who cannot speak English would be bad. A TA from Oxford, who was a debater in the Oxford Union and a star Oxbridge grad, may be the best teacher around. There are not 1,000 learning styles. I don't know how many precisely, but no more than a few. Experimental, cognitive psychologists find few. The touchy-feely education types are forever finding new ones, one per week.</p>

<p>Student assessment is useless or close to useless. The most popular course at Harvard is the psychology of happiness or some such thing, beautifully pandered, by title and theatrics, to the immature, impressionable 18-year-old. Education, like wisdom, is indeed wasted on the young. If professors were to choose the set of courses every student must take, and students were to choose their own, the former would contain fewer fluff courses. Also, student ratings of good teachers or teaching will have little meaningful basis.</p>

<p>PA is a useful metric. But it must be narrowed and refined so that the raters are people who intimately know the schools (or enough of their departments) they are rating, and all rate on the same basis and with the same weight. That is difficult to achieve in practice. It is not comparable to rating the best surgeons, etc., as someone keeps mentioning. In surgery and similar fields, most practitioners know their counterparts, attend conferences with them, etc. It is OK if an aeronautical engineering prof rates other aero profs, but not OK if that aero prof, who is now a provost, rates Yale, which has little engineering and no aero program.</p>

<p>Employer assessment is quite useful for non-academic measures. Employers are savvy. I spoke to engineering employers, since my son is interested in engineering. They rated Purdue and Michigan engineers higher than MIT engineers, and uniformly said they had a better work ethic, were humble, could take direction, would stick around, etc.</p>

<p>I find the World Rankings (from S.T. univ in Hong Kong) and the Florida-based Center for Measuring University Performance useful. Very objective. There are many vital factors that cannot be measured, teaching quality for example. Instead of pretending to measure them with flawed measures, why not confine oneself to that which can be measured?</p>

<p>Output factors: the number going to top grad and professional schools, winners of Rhodes and other prizes, the number going to top 20 employers, the number earning PhDs, etc. These are very important and can be measured. Thanks</p>

<p>Why not just replace PA with a more stringent Revealed Preference Ranking, include everything that ramaswami mentioned at the end, and get rid of alumni giving rate?</p>
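<p>For what it's worth, the core of a revealed preference ranking is simple enough to sketch: treat each cross-admitted student's enrollment decision as a head-to-head "win" for the school she chose. The toy version below (Python, with made-up matchups) uses raw win rates; the published revealed-preference studies fit a statistical model to this kind of data rather than using raw rates, but the underlying idea is the same:</p>

[code]
from collections import defaultdict

# Each tuple is one cross-admitted student's decision:
# (school enrolled at, school turned down). Matchups are invented for illustration.
choices = [
    ("School A", "School B"),
    ("School A", "School C"),
    ("School B", "School C"),
    ("School C", "School B"),
    ("School A", "School B"),
]

wins = defaultdict(int)
matchups = defaultdict(int)
for winner, loser in choices:
    wins[winner] += 1
    matchups[winner] += 1
    matchups[loser] += 1

# Rank by the share of head-to-head matchups each school wins.
ranking = sorted(matchups, key=lambda s: wins[s] / matchups[s], reverse=True)
print(ranking)  # ['School A', 'School C', 'School B']
[/code]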

<p>We can argue all we want about the strong correlation between the strength of entering students and the strength of a university, but that is immaterial. The most academically inclined will want to go to a place which is more academically inclined. This helps to "enhance" the student body and the quality of peer engagement, something very important to an impressionable 18-year-old. Thus, I agree with rama's suggestion.</p>

<p>bluebayou,
Something is getting lost in the translation. Yes, I am asking about situations where these things happen (profs don't teach undergrads, or do so only in very large classes, or are lousy teachers), but that is not meant to say that this is the norm for profs. I am only saying that where these situations exist, student opinion could be helpful in revealing a possible mismatch between a faculty's reputation among academics and what the undergraduate student actually encounters.</p>

<p>gabriellah,
I concur with your comments on USNWR and the media in general as they apply to publication. Their job is to increase sales as much as possible. I don't share your view that the editors at USNWR have all-seeing wisdom and that their decision to collect data only from academics is the only way to go.</p>

<p>The process, the results, and so much of what happens in academia seem to me heavily slanted in favor of the institutions, and particularly the institutional status quo. The "truth" (reputation vs. practice) about what an undergraduate will experience on a college campus will only emerge when the voices of students and employers are heard. I liken this to the three branches of government and how they act as checks and balances. While not the greatest analogy, it seems that we only have one branch making all of the decisions right now, and the opinions of the others are ignored or given little weight. Employers can take care of themselves because they have the ultimate power in terms of being able to offer jobs, and they have plenty of places to draw students from. But who speaks for the students? The editors at USNWR? The institutions who are often more concerned with their research efforts than teaching undergrads? If I were the students, I'd hire a lobbyist.</p>

<p>D.T., I definitely agree that the PA can be sharpened. I think the first step is to make it more transparent rather than anonymous.</p>

<p>An employer assessment score would be very useful for the career-conscious. It would be important to divide the score into specific industries (Manufacturing, Pharma, Biotech, Aerospace, Management Consulting, IT, Investment Banking, Oil & Gas, Government, etc.) and geographic areas (Northeast, Midwest, Great Plains, Northwest, West, Southwest, South, Southeast and Mid-Atlantic).</p>

<p>Output factors are certainly very important indicators: detailed graduate school matriculation information (not just for top 5 programs and for professional programs, but for top 25 programs, assigning more weight to more highly ranked programs, and across all disciplines), in-depth employment statistics (across all industries and regions), and prestigious award winners (Rhodes, Marshall, Truman, Nobel, Fields, etc.), all tracked not just for one or two years, but over the course of a decade.</p>
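<p>As a purely hypothetical sketch of the kind of weighting described above (nothing USNWR actually computes; the decay rule and every number below are invented), one could weight each graduate-program placement by the rank of the program a graduate enters:</p>

[code]
# (rank of graduate program entered, number of graduates placed there)
# for one hypothetical college; all figures invented for illustration.
placements = [(2, 5), (8, 12), (15, 20), (24, 30)]

def weight(rank, top_n=25):
    # Simple linear decay: a #1 program is worth 25 points, a #25 program 1 point.
    return max(top_n + 1 - rank, 0)

score = sum(weight(rank) * count for rank, count in placements)
graduating_class = 500
print(score, score / graduating_class)  # 616 1.232 points per graduate
[/code]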

<p>Hawkette. I very much believe that employers' opinions should be considered, but not on the undergraduate level. I think that those opinions do not add anything because of the nature of what kids do after college today, and because, most often, careers are not really established until after grad school.
If, however, USNWR were to consider such a ranking, I think Alexandre is on the right track, in terms of dividing up the business sector. But again, this is probably better saved until students have settled on a career, which is more likely than not, after grad school.</p>

<p>gabriellah,
Many have posted here a desire for output measurements, and that is partly what prompts my suggestion of an employer role in the evaluation of faculty. My belief is that employers DO know where the top students are, and they also know which colleges do the best work at preparing students for postgraduate life. We all spend a lot of time debating the usefulness of various factors for college admissions, but perhaps the more important issue is really what the student gets out of it (work, grad study, whatever), and none of that is usefully measured anywhere that I know of (unless you consider the WSJ feeder rankings useful). An employer survey (with ranking consequences) would be a good start.</p>

<p>One benefit of an employer survey is that I think it would lessen a lot of the frenzy in college admissions by revealing the extent of the role that more regional colleges play in filling entry-level positions at a variety of companies across a variety of geographies. You know that I have often argued that recruiting is a lot more regional than top schools would have us believe, and such a survey would either reinforce or disprove my view.</p>

<p>Can someone please tell me the criteria that USNWR uses and what weight they assign to each factor?</p>

<p>thanks</p>

<p>The U.S. News rankings have been fatally flawed for quite some time now. The rankings reward certain types of institutions by using statistical metrics that may not have any relevance to educational quality. People who study this have pointed out many fundamental concerns, including institutions going so far as to attempt to boost their rankings in the short term (e.g., by taking out loans to build new facilities) even if it might bankrupt them in the long term.</p>

<p>A few concerns (note: a small portion of the text below is taken from informal personal discussions with university provosts, rather than coming directly from me):</p>

<p>-- Awarding ranking points for the percentage of faculty who are full-time and/or who have Ph.D.s penalizes universities and colleges that have particularly strong or large arts or music-related programs, as faculty in those areas often have special term appointments and also often lack Ph.D. or equivalent degrees. In one case, a Midwestern institution developed strong partnership agreements with several local school districts and their teachers, who participated in the design and delivery of teacher preparation programs. While U.S. News considers them "adjuncts," they can be seen as a strong asset for the program, even though they are part-time. They bring real-world applications to the table to balance the theorists.</p>

<p>-- The point system measuring student-to-faculty ratios fails to take into consideration certain factors, even at large universities like Harvard and Yale, such as cases where faculty in various professional schools (e.g., business, law, medicine, architecture) may teach undergraduates yet are not counted as full-time “faculty resources” or towards the student-to-faculty ratio because their full-time appointments are not within arts and sciences. This is one of those areas where institutions are very inconsistent in the info they provide to U.S. News.</p>

<p>-- Measures of class sizes using percentages and particular cut-off points are easy to manipulate. More meaningful measures could easily be developed. Statisticians know that we can have 10 classes with 1 student and 10 classes with 39, for an average of 20, which in no way reflects the experiences of any of the students (see the sketch after this list).</p>

<p>-- Awarding points based on faculty salaries punishes colleges and universities with strong arts, language or humanities programs, because science and business professors tend to draw significantly higher salaries on average. Schools like Johns Hopkins or Purdue that have a relatively high(er) percentage of their faculty teaching in the sciences get an enormous boost while other universities suffer. This punishes schools for hiring an additional English professor versus an additional biology professor. This also punishes schools with strong religious traditions, particularly Catholic schools where you may have members of the order who take very low pay as part of their vocation. The whole idea that the more you spend, the better the students learn is pretty absurd.</p>
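<p>To put numbers on the class-size point above: below is a minimal sketch (Python, purely illustrative) of the gap between the class-size average a school can report and the class size the average student actually experiences, using exactly the 10-and-10 scenario described in that item.</p>

[code]
# The hypothetical scenario from above: 10 classes of 1 student, 10 classes of 39.
class_sizes = [1] * 10 + [39] * 10

# Average class size from the institution's point of view (what gets reported).
institutional_average = sum(class_sizes) / len(class_sizes)

# Average class size from the students' point of view: weight each class
# by how many students are sitting in it.
total_students = sum(class_sizes)
student_experienced_average = sum(s * s for s in class_sizes) / total_students

print(institutional_average)        # 20.0
print(student_experienced_average)  # 38.05 (most students sit in the classes of 39)
[/code]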