Are public universities hurt or helped by USNWR methodology?

<p>Also, money spent per student can be misleading, let alone endowment per student. (Private schools need larger endowments because they lack public funding.) As for spending per student, you should consider economies of scale and how the money is actually being spent. If you go to a school with 1,000 students that spends $50,000 per student and I go to a school with 25,000 students that spends $40,000 per student, how can you say that the school with 1,000 students is better? This is not an ABSOLUTE measure; you have to consider the context of these numbers.</p>

<p>In my opinion, when ranking public schools, honors programs should be ranked separately from the rest of the school. Many people have said that public schools get an advantage in the rankings because they are so research-oriented, especially compared to private schools, but this is not necessarily so. A large public school may only look research-oriented because of the large number of research grants it awards compared to a small private school, and that difference is driven mostly by the number of students. Relative to enrollment, the numbers are actually not that large. For example, the University of Minnesota Twin Cities awards 500 undergraduate research grants each year, but it has 29,000 undergraduates, so it awards grants to about 1.7% of its undergraduates annually. That is not actually very many, even though it may appear that way when you compare those 500 grants to the roughly 75 grants the University of Chicago would need to award to match that percentage. (Sorry, I cannot find a source detailing the number of research grants the University of Chicago awards to undergraduates each year, but I would be willing to bet it is more than 75.)</p>

<p>Back to my honors program idea: I think many public school honors programs match the quality of education provided by many top 20 schools. Looking at the University of Minnesota Twin Cities and its #67 ranking in US News, it appears somewhat mediocre, at least compared to the "top 20," but that is definitely not so. I think its Honors Program provides an excellent education with numerous opportunities. It offers the small class sizes, abundant research opportunities (most of the grants go to Honors students), scholarships (making it even cheaper than it appears, since in UMN's case most current-student scholarships are reserved for Honors students, and Biology Honors students in particular), and graduate professors characteristic of top 20 schools but not of the University of Minnesota as a whole. Additionally, UMN provides good athletic teams and quality orchestras that many smaller schools lack. For example, WashU and MIT both have mediocre orchestras (I am going only on what I have experienced, or I would draw more examples). </p>
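<p>To make the per-capita arithmetic above concrete, here is a minimal Python sketch. The Minnesota figures (500 grants, 29,000 undergraduates) come from the paragraph above; the roughly 4,400-undergraduate enrollment used for Chicago is only an assumption implied by the 75-grant figure, not a verified statistic.</p>

<p>
[code]
# Per-capita normalization of research grant counts.
# UMN figures are from the post; the Chicago enrollment is an assumption.

def grants_per_undergrad(grants, undergrads):
    """Fraction of undergraduates receiving a research grant each year."""
    return grants / undergrads

umn_rate = grants_per_undergrad(500, 29_000)
print(f"UMN grant rate: {umn_rate:.1%}")                          # ~1.7%

# Grants a smaller school would need to match that rate,
# assuming roughly 4,400 undergraduates.
print(f"Equivalent at 4,400 undergrads: {umn_rate * 4_400:.0f}")  # ~76, in line with the ~75 above
[/code]
</p>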

<p>I think mommusic brought up an excellent point when she said that each department of a school should be considered separately. Just as with graduate school rankings, undergraduate rankings should be specific to each program. For example, instead of ranking the University of Minnesota as a whole, its College of Biological Sciences should be ranked separately from its Carlson School of Management. (Sorry, I am going to use the University of Minnesota for most of my examples because that is where I will be going next year.) For the University of Minnesota overall, the middle 50% for the math portion of the SAT is 580-690, whereas for its Institute of Technology the average score is a 760. Likewise, the university's ACT middle 50% is 23-28, whereas the average Institute of Technology student scores a 32 on the English section and a 34 on the math section (I would post the other sections, but the website doesn't list them, sorry). </p>

<p>I also think the rankings should reflect the opportunities available to students at large public schools. </p>

<p>Could someone please share their own personal take on the rankings, including as many schools as you want, and making sure to go at least as far down as the University of Minnesota, please : ). Then could you briefly, or thoroughly, whichever you prefer, explain the criteria on which you base your rankings? </p>

<p>Instead of job placement, or in addition to it, I think rankings should reflect students' satisfaction and the fulfillment they get from their education. For example, at the University of Chicago, students sometimes have trouble getting into graduate schools because of their deflated grades. However, the students seem so satisfied with their education and fulfilled by their growth that the initial struggles they may face seem trivial compared to the lasting rewards. </p>

<p>Also, someone said something about awards not being important. I completely disagree. If one is conducting research alongside a professor, as undergraduates usually do, it would be nice to know that the professor is a leader in his or her field. I realize that an award is not necessary to be a successful researcher, but it helps you know where your professors stand and better gauge the quality of their instruction. </p>

<p>A final thing I think would be extremely useful would be rankings, or at least statistics, showing how students' GPAs change from high school to college, along with the average GPAs of students currently attending each college. The same goes for the average GRE, MCAT, LSAT, and DAT scores of current students, and perhaps even the percentage of students applying to top 20 graduate schools in their fields who get accepted.</p>

<p>
[quote]
These are absolutes regardless of institutional type, eg, a stronger student body is better than a weaker one. A lower student/faculty ratio is better than a high one. Having more money to support students and faculty is better than having less money. Would you support the other side of any of those choices and consider that a virtue in a college?

[/quote]
</p>

<p>Now you're being funny. You can't seriously expect me to argue that lower median SATs are definitely better, etc.</p>

<p>However, I do believe that these measures can be less meaningful--and hardly even relevant, in some cases--for some students. For just one example, a student who plans to major in engineering may be smarter picking Illinois over other choices, even if Illinois has "weaker" students according to US News, even if Illinois has a "worse" student/faculty ratio. </p>

<p>And I would absolutely support students making decisions in the other direction if they personally felt the fit was better at a "lesser" school with "weaker" students and "poorer" resources. </p>

<p>I have to keep rubbing my eyes to see if it's you arguing this, frankly, because not too far back you were making a strong case that if only we could survey employers and find out what they value, students would (and perhaps should) go to these "lesser" colleges in droves. Did you have a change of heart, or am I misunderstanding?</p>

<p>Would someone please consider my post? I really do believe you guys know an enormous amount about what you are talking about and would therefore be extremely interested to see an example of how you would rank certain schools. If possible, I would be interested to see where Minnesota would be ranked, and how you would rank schools for someone planning on going into math and someone planning on going into Biology, or just undergraduate universities as wholes. I know this isn't the Minnesota board, but no one on that board talks or even knows anything about the school. Please.</p>

<p>Anhydrosis2000,
I agree that context can matter, eg, one school may spend all of its money on research activities for graduate students while another is more undergraduate focused and/or focused on non-technical areas of study or resources for classroom instruction. </p>

<p>AM040189,
You make several excellent suggestions (separate status for honors students, rankings of undergraduate departments, student satisfaction, grad school avg test scores, acceptance rates at competitive grad schools, etc). If only the schools had the will and the resources to make such information available. </p>

<p>hoedown,
I'm glad to see that you agree that there are some absolutes, eg, higher average SAT scores, smaller class sizes, and greater financial resources are preferred elements, and that these are among the valid comparisons for a student (or USNWR) to use in comparing colleges.</p>

<p>As I and others have stated before, the single biggest determinants of the quality of an undergraduate education are:
1. Strength of students
2. Size of the classroom
3. Strength of faculty</p>

<p>And to those I would add a fourth measure: the institutional resources, and the will, to invest in and support undergraduate students and faculty. </p>

<p>Re your question on employers, I think we both know that a college's employment strength is heavily impacted by geography and not nearly as much by the brand name of the school as some of the "elites" would like for us to believe. Graduates of schools like U Illinois and U Michigan will do just fine in the Midwest and less well as they move out of their home region. The same is true for schools all over the country, save for a very few with national recruiting power. And this is especially true for the non-technical areas.</p>

<p>AM040189,</p>

<p>I'm sorry, but I think I'm generally better informed about higher education at the macro level. I know about a lot of schools and have been on many campuses, but I cannot really speak to the specific features of honors colleges, or rank Minnesota credibly.</p>

<p>It's not that I'm ignoring your post; I just would be overstepping the bounds of my knowledge to address a lot of your questions. </p>

<p>I will say this much: Honors colleges can be a wonderful way to immerse yourself in a milieu of well-prepared, ambitious students. I think they are a nice way to try to approximate the environment you might get from a smaller, more elite school (that's generally the intent of such programs, anyway). I don't think it's really feasible to rank them separately, however.</p>

<p>
[quote]
Sakky, I think even if someone agrees with the methodology, you have to keep in mind that the authors themselves said they set it up as an EXAMPLE of how one could do a revealed-preference ranking. It's a methodology paper, where they make clear that their study merely demonstrates how such a ranking could be done. They don't represent their results as producing a reliable ranking; rather, they show how a study like theirs could produce a ranking, if done more completely.

[/quote]
</p>

<p>Again, nobody is saying that the RP paper is a complete paper. The fact is, there are no complete papers out there. </p>

<p>But again, I would reiterate that the RP paper seems to be no worse than any of the other rankings out there, and arguably better. Like I said, at least they have a working model that uses a mainstream methodology to attack one of the major problems with cross-admit data - namely that a lot of 'preference' is missing when people simply don't apply to particular schools, or don't get into them (hence, either way, never even having the choice of that school). What the RP paper does is attempt to ascertain what people WOULD HAVE DONE if actually given the choice. Now, you can quibble about how well the model does that, and how complete the dataset is, but the fact remains that no other paper I am aware of even attempts to attack the missing-preference problem at all. If nothing else, we should give the authors credit for trying to attack the problem when nobody else does.</p>
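<p>For readers curious about the flavor of that kind of approach, here is a minimal Python sketch of the general idea - inferring a latent "desirability" score from head-to-head matriculation choices - using a plain Bradley-Terry fit. To be clear, this is my own illustration, not the authors' actual model, and the schools and counts are invented.</p>

<p>
[code]
# Toy revealed-preference-style ranking: fit Bradley-Terry strengths to
# head-to-head matriculation outcomes. NOT the RP paper's actual model;
# the schools and counts below are invented for illustration.

# wins[(a, b)] = number of cross-admits who chose school a over school b
wins = {
    ("A", "B"): 40, ("B", "A"): 10,
    ("B", "C"): 30, ("C", "B"): 20,
    ("A", "C"): 15, ("C", "A"): 5,
}

schools = {s for pair in wins for s in pair}
strength = {s: 1.0 for s in schools}

# Standard minorization-maximization updates for Bradley-Terry strengths.
for _ in range(200):
    new = {}
    for s in schools:
        won = sum(n for (a, b), n in wins.items() if a == s)
        denom = sum(n / (strength[a] + strength[b])
                    for (a, b), n in wins.items() if s in (a, b))
        new[s] = won / denom
    total = sum(new.values())
    strength = {s: v / total for s, v in new.items()}  # normalize each pass

# Higher strength = more often chosen when students actually had the choice.
for s in sorted(strength, key=strength.get, reverse=True):
    print(s, round(strength[s], 3))
[/code]
</p>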

<p>Yes, as a social science researcher I understand very well that nearly every study has limitations related to response rates or the size of the sample. Usually, study authors will discuss this limitation and explain why the results are generalizable (or not). </p>

<p>But this is a case where the authors themselves say this is NOT that kind of study. It's not a study that tries to be generalizable to a larger sample, or to the college admissions arena. They say it's an EXAMPLE of how you can do such a ranking. I don't know how many different ways to say it is a methodology paper, but that is what it is, that is what the authors say it is. That's a very important distinction. </p>

<p>As such, it's a valuable contribution for people who are interested in how rankings might be done, or how they might be improved. But its results are not meant to be used to draw conclusions about college preferences. You can draw conclusions about the methodology (since that is the TOPIC of the paper) but not from or about the results. </p>

<p>I am not attacking the authors when I say this, or the methodology, or the quality of the ranking that might result if a college ranking were done that used their methodology. This is not what I am arguing with you about. I am challenging YOUR interpretation of the paper's purpose, and of their findings. </p>

<p>You have urged people to read it closely. That's excellent advice, because anyone who reads it closely finds that the authors state it is a METHODOLOGY paper only (on how to do a ranking, if you had the data, which they do not), and NOT a ranking.</p>

<p>It seems to me that the USNWR ranking is very different from all the other national and global university rankings I have read. I feel that USNWR purposely keeps public universities out of the top 20 by modifying the methodology it uses every year. Now even academic quality is no longer an evaluation parameter. What is going on? Oxford and Cambridge are public universities too, and they are still ranked among the top universities in the UK and in the world!!</p>

<p>In my opinion, UCLA, the University of Michigan, and Berkeley can easily be ranked as top 10 national universities (with Berkeley among the top four or five) given their commitment to public education, world-class faculty, top-notch learning and research facilities (e.g. their library systems), some of the highest percentages of incoming students from the top 10% of their high school classes, quality alumni, affordable tuition, an electric and fascinating campus atmosphere, strong athletics and a well-balanced college life, and their international reputations. </p>

<p>As more and more different college rankings come out, I expect that fewer HS students will choose their colleges based on the USNWR ranking. In fact several students I know in my area gave up lower Ivy League schools for Berkeley and UCLA this year. Many private universities ranked from 10 to 20 by USNWR, in my opinion, are not even close to those top public schools.</p>

<p>
[quote]
Yes, as a social science researcher I understand very well that nearly every study has limitations related to response rates or the size of the sample. Usually, study authors will discuss this limitation and explain why the results are generalizable (or not). </p>

<p>But this is a case where the authors themselves say this is NOT that kind of study. It's not a study that tries to be generalizable to a larger sample, or to the college admissions arena. They say it's an EXAMPLE of how you can do such a ranking. I don't know how many different ways to say it is a methodology paper, but that is what it is, that is what the authors say it is. That's a very important distinction. </p>

<p>As such, it's a valuable contribution for people who are interested in how rankings might be done, or how they might be improved. But its results are not meant to be used to draw conclusions about college preferences. You can draw conclusions about the methodology (since that is the TOPIC of the paper) but not from or about the results. </p>

<p>I am not attacking the authors when I say this, or the methodology, or the quality of the ranking that might result if a college ranking were done that used their methodology. This is not what I am arguing with you about. I am challenging YOUR interpretation of the paper's purpose, and of their findings. </p>

<p>You have urged people to read it closely. That's excellent advice, because anyone who reads it closely finds that the authors state it is a METHODOLOGY paper only (on how to do a ranking, if you had the data, which they do not), and NOT a ranking

[/quote]
</p>

<p>I agree, if you want to be so strict, it's not truly a 'ranking'.</p>

<p>But again, it's all relative. Whatever quibbles you may have about the RP study, I would say that it's * still more fundamentally sound * than any of the other "rankings" that people on CC are so liable to quote. I said it before, I'll say it again. At least the RP study takes a stab at using a model to ascertain people's true preferences. Other "rankings" don't even bother to do that - or to use any other rigorous criteria, for that matter. Take USNews, for example. Where is it written that alumni contributions have to be worth 5% of the score, and that faculty resources have to be worth 20% (or whatever it is)? Why can't those weightings be something else? It's completely arbitrary. Or take Gourman. Exactly what is the justification for its methodology? Or Shanghai Jiao Tong? Or any of the other rankings? </p>
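<p>To illustrate how much those arbitrary weights matter, here is a small Python sketch. The two schools, their category scores, and both weight vectors are invented; the categories are only loosely in the spirit of the USNews factors mentioned above, not its actual formula.</p>

<p>
[code]
# Same underlying numbers, two different (equally arbitrary) weightings,
# two different orderings. All figures here are made up for illustration.

scores = {  # normalized 0-100 category scores
    "School X": {"peer": 95, "faculty_resources": 50, "alumni_giving": 30},
    "School Y": {"peer": 80, "faculty_resources": 80, "alumni_giving": 75},
}

def composite(weights):
    return {school: sum(weights[cat] * val for cat, val in cats.items())
            for school, cats in scores.items()}

w1 = {"peer": 0.80, "faculty_resources": 0.15, "alumni_giving": 0.05}
w2 = {"peer": 0.40, "faculty_resources": 0.40, "alumni_giving": 0.20}

print(composite(w1))  # School X ahead (85.0 vs 79.75)
print(composite(w2))  # School Y ahead (64.0 vs 79.0)
[/code]
</p>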

<p>At least the RP study provides a theoretical justification for its model and methodology. The other "rankings" out there don't even do that. Yet people seem to have no problem accepting USNews or other such "rankings" as bona fide. That's the point: * it's all relative *. Yes, the RP study is incomplete, and yes, strictly speaking, it's not really a true 'ranking'. But so what? By those standards of proof, frankly, neither are any of the other 'rankings' out there. That's why I believe that RP, for all its flaws, is the best of what is available.</p>

<p>
[quote]
Whatever quibbles you may have about the RP study

[/quote]
</p>

<p>I don't think you understand me. </p>

<p>I don't have "quibbles" with the revealed preference ranking study. Calling something a "methodology" paper is not pejorative, as least not in the social sciences. Methodology papers are an important contribution. I am not objecting to what the authors are proposing. I do, however, object to posters ignoring what the study is, and presenting at as something other than what it is and what the authors themselves state that it is. This what you are doing.</p>

<p>This has nothing to do with what I think of USNews or other ranking systems or their methods or lack thereof. That is immaterial to what I am trying to explain. Maybe what the authors are proposing IS a better ranking system. Maybe it WILL yield better results. Maybe it represents an improvement over the flawed rankings that are out there. That's not what I am discussing with you. </p>

<p>I am stating that you are misrepresenting the paper and you are misrepresenting the paper's findings. The paper showed HOW to rank colleges. It did not, in fact, rank colleges, except as an exercise to show what their proposed methodology could possibly yield. The authors state this EXPLICITLY.</p>

<p>
[quote]
don't think you understand me. </p>

<p>I don't have "quibbles" with the revealed preference ranking study. I do, however, object to posters ignoring what the study is, and presenting at as something other than what it is and what the authors themselves state that it is. This what you are doing.</p>

<p>This has nothing to do with what I think of USNews or other ranking systems or their methods or lack thereof. That is immaterial to what I am trying to explain. Maybe what the authors are proposing IS a better ranking system. Maybe it WILL yield better results. Maybe it represents an improvement over the flawed rankings that are out there. That's not what I am discussing with you. </p>

<p>I am stating that you are misrepresenting the paper and you are misrepresenting the paper's findings. The paper showed HOW to rank colleges. It did not, in fact, rank colleges, except as an exercise to show what their proposed methodology could possibly yield. The authors state this EXPLICITLY.

[/quote]
</p>

<p>I am well aware of what the authors have stated explicitly. </p>

<p>But that's not the point. The point has ALWAYS been that the RP authors, even though they did not set out to do so, STILL came up with a methodology that is BETTER than the existing ranking systems that DO purport to actually rank colleges. Hence, once again, it's * all relative *. </p>

<p>Again, look at the RP methodology - its theoretical underpinnings are clear and mainstream. What are the theoretical underpinnings of the other ranking systems? Did the creators of those rankings even bother to tell us what the underpinnings are? Look at the RP model itself - again, it is based on mainstream techniques that follow naturally from the theory. Is that the case for the other ranking systems (presuming there actually were theoretical underpinnings for the methodology to flow from)? </p>

<p>THAT's the point. Yes, the authors didn't deliberately set out to create a strong ranking system. Nonetheless, THEY DID SO ANYWAY. And yes, I agree with the caution expressed by the authors that it isn't a completely fleshed out model. That's the standard cautionary and conservative mindset of a proper academic. But, frankly, it's STILL ALREADY BETTER than the other rankings that are out there. I don't see USNews or Shanghai Jiao Tong or THES expressing any caution about the veracity of their results. Yet I don't see you giving them any flak about it. Why not? </p>

<p>Look, nobody is saying that the RP study is perfect. Of course it is not. Heck, nobody is saying that it is even half-perfect. All we are saying is that it is better than what else is available. Let's be perfectly honest here - the existing rankings are not that good. We should give credit to the RP authors for creating a ranking that is almost certainly better. Far from perfect, but still better. </p>

<p>I personally find it interesting indeed that the RP study gets so much resistance for not being a "complete" study, yet the same detractors never seem to be particularly incensed by the problems in the other rankings. Why are the other rankings accepted as being "complete"? Because the authors of those rankings said so? So because USNews and Jiao Tong and THES all say that their rankings are "complete," we have to take that at face value, but when the RP authors say that their paper is incomplete, that's a reason to ditch RP entirely?</p>

<p>NY05405,
I am intrigued by your comments about public universities. California is blessed with several very high quality colleges and many terrific students, and I suspect that this shapes your perspective. What aspects of the USNWR methodology do you feel unfairly reflect on the public universities? I have pointed to several areas where I feel the publics are advantaged (the heavy weighting of Peer Assessment at 25%, the use of 6-year rather than 4-year graduation rates, the weighting of the percentage of students from the top 10% of their high school class at 6%, the low weighting of acceptance rate at 1.5%) and one where they are disadvantaged (Alumni Giving at a 5% weight). </p>

<p>I also agree that cost should be a consideration and the publics do have an advantage here, but for non-Californians, the numbers of OOS students going to the premier UCs (UCB, UCLA, UCSD) are pretty small (7%, 7%, 3%), so is it that relevant a statistic? </p>

<p>Finally, a lot of your comments seem to reflect a graduate school and heavy research perspective. Do you have a view on how important those things are to a university’s undergraduates, particularly those not involved in the technical areas? </p>

<p>As for your comments about the privates ranked 10-20 by USNWR, do you know much about these schools and their students and have you compared their quality to the schools you reference (UCLA, UCB, U Michigan)? These private schools are replete with students who “gave up” the lower Ivies to matriculate there. Take a look at their profiles and I think you will see that the quality may be higher than you realize.</p>

<p>No, I think revealed preference is an interesting way to approach college rankings. I am not "ditching" RP, either entirely or in part. Again, my complaint is how the RESULTS of this paper are being misused on CC. </p>

<p>Is it possible you are confusing me with others on this thread who have criticized the methodology, such as dstark or barrons? My complaint is not with RP. My complaint is with people who look at the study and say, “Look, XX is a preferred school because the Avery, Glickman, Hoxby, et al study found it was.”</p>

<p>I have been looking over this thread to try to figure out why we are talking past each other—I agree with you in post #117, where you stress that the paper is presenting a MODEL. I guess it's in later posts that you begin to talk about it as if it is also a "FINDINGS" paper, saying in posts 122 and 129 things like "the paper would be better if they had a larger sample" or "the paper is incomplete." I guess that's true in one sense, but I would argue somewhat otherwise—if they had a bigger sample it would be a DIFFERENT paper: not just a methodology paper (as it is now, and as the authors say it is) but also a first look at what kind of ranking such a methodology would produce, and what those findings suggest about the accuracy of other rankings like US News or Gourman. </p>

<p>Honestly, I don’t think we are so far apart; it’s just that when I see you (or anyone) saying things like “the RP model shows that Williams is more (or less) favored than School X,” that’s when I think the paper is being misread and misused. You may understand the paper better than most, and I’m glad for that, but I remain puzzled that you seem to challenge my (and the authors') assertion that this paper is merely showing HOW RP might be used to generate a ranking.</p>

<p>Please understand--I am not the one "resisting" the study. I can't speak for those who dislike the methodology. I am simply frustrated by the misuse of the paper and the analysis of their example as if it were a finding.</p>

<p>Hawkette,</p>

<p>Thank you for posting this interesting thread. I learned a lot from you and others who have contributed here. However, I think this debate will continue for many years to come.</p>

<p>In my opinion, the methodology USNWR uses for ranking American colleges is unfair and one-sided, as it is based largely on measures that may not be relevant to the quality of a program, thereby producing a ranking that still misleads some HS students.</p>

<p>Here are some of my personal opinions (they may not all be correct, but they are different from yours!) on what USNWR has done unfairly and what needs to be corrected in the future. </p>

<p>1) USNWR should not call its undergraduate ranking “America’s Best Colleges” or “Top National Universities,” which really misleads students. It should use “best undergraduate programs,” just as it does for PhD graduate programs. Do you agree?</p>

<p>2) Alumni giving is a tradition at privates, not a measure of program quality. Alumni giving rank and alumni giving rate are redundant with each other. They should be eliminated altogether and replaced with “alumni achievements” or “successful and notable alumni” - award winners, or people who have accomplished a great deal for society using what they learned at their schools. Money should not play any role here. </p>

<p>3) “Selectivity” is largely under a school’s control: you can improve it simply by reducing the number of admitted students, but that does not automatically improve the school’s quality. For example, Berkeley received 56,000 applications in 2007 and has tried to give many quality students the opportunity (commitment to public education!). If Berkeley had decided to admit only 2,000 students, as the large privates do, its acceptance rate would have been roughly 3.6% (see the small arithmetic sketch at the end of this post). USNWR can keep “selectivity,” but it needs to consider the number of applicants per year in the methodology as well! The number of applicants per year is decided by HS students and shows how attractive a college is. It would also offset the SAT/ACT percentiles, which depend on the number of admitted students.</p>

<p>4) Any educated person should know that student/faculty ratio, the number of classes with fewer than 20 students, and the number of classes with 50 or more are essentially measuring the same thing. Why is the methodology so redundant here, and why does this carry so much weight? If you wanted, you could add yet another category, say “number of classes with fewer than 18.” USNWR knows what it’s doing. I personally believe all three of these categories should be eliminated and replaced with “quality of faculty members.” Let me give you a worst-case scenario: if you have a mediocre professor teaching you one-on-one, do you think you will learn a lot? Undergraduates are supposed to be grown-ups, and colleges need to prepare students for the real world. What I really mean is that “babysitting” is not appropriate for students at this level.</p>

<p>5) Graduation/retention rank, average freshman retention rate, and the 2005 predicted graduation rate (who cares about the predicted rate!) are again all about retention. Why so much redundancy here? Some schools have higher graduation rates simply because they give failing students multiple chances to graduate. That is not something a quality program should do.</p>

<p>6) Add “international reputation” to the methodology, because higher education is now global. </p>

<p>7) Add “learning resources and research facilities,” such as the library system. These are very important for students not only in their senior year but throughout their college careers. For example, Berkeley’s library system is the third largest in the nation, behind Harvard and Yale. Without good enough resources, how can you succeed? Research universities operate at a higher level and try to prepare students to be visionary and to compete for jobs after graduation. Students at privates usually depend heavily on alumni connections.</p>

<p>8) Put “academics” back to the methodology. This is what students go to college for.</p>

<p>Really, it depends on what kind of ranking you want to see. By changing the methodology, you can get totally different rankings. I am just glad to see that many other ranking systems are coming out to challenge the USNWR system. Hopefully the influence of USNWR on HS students' decisions will soon not be as strong as it used to be.</p>
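<p>Here is the small arithmetic sketch referenced in point 3 above, in Python. Acceptance rate is simply admits divided by applicants, so a school can push the number down by shrinking its admit pool without its program getting any better. The 56,000-application figure comes from the post; both admit counts are hypothetical.</p>

<p>
[code]
# Acceptance rate depends on how many students a school chooses to admit,
# not on program quality. Admit counts below are hypothetical.

def acceptance_rate(admits, applicants):
    return admits / applicants

applicants = 56_000  # Berkeley applications in 2007, per the post above
print(f"{acceptance_rate(12_000, applicants):.1%}")  # ~21.4% with a hypothetical large admit pool
print(f"{acceptance_rate(2_000, applicants):.1%}")   # ~3.6% if only 2,000 were admitted
[/code]
</p>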

<p>NY05405,
We agree on most of the points you raise:
1) I agree. Titling to specifically reference undergrad would definitely be an improvement.</p>

<p>2) Alumni Giving. I agree with the suggestion to drop this if USNWR wants to rank publics with privates. This definitely favors privates. However, I think the value added of "famous alumni" is close to zero. I would probably drop the category altogether. </p>

<p>3) Selectivity. You focus on acceptance rate, but this is a tiny factor in the rankings (1.5%). Top 10% is, to my mind, not particularly useful as we don't have any way to judge quality across school districts and its weighting is several times that of acceptance rate. </p>

<p>4) a) Class sizes. I think we disagree on this. After the quality of the students, I think class size is the single biggest factor in an undergraduate education. I know this measurement works against publics, and frankly it is a key argument for why a private education might be the better alternative. I do not consider small classes "babysitting"; I consider them a positive attribute that gives the student and the instructor the best conditions in which to learn and teach.</p>

<p>b) Quality of faculty members. An admirable idea that I agree with in concept, as I believe the third most important thing (after student quality and class size) in an undergraduate education is quality of teaching. Notice I said teaching and not research. At nearly every college in America, students and teachers in non-technical fields of study greatly outnumber those in the technical areas, yet the statistical measures used to evaluate faculty nearly all deal with technical research (and much of that is related to graduate activity). How do you measure faculty quality? Is it research-based, or does it reflect what actually happens in the classroom? I have posted several times that a broad evaluation of faculty would be my preference, including reviews by other academics, students, alumni and employers. That would likely produce a different list of "quality faculty" than one based on research activity.</p>

<p>5) Grad/retention ranks. I agree with you and think the weighting of this is much too high. I think the Ivies benefit most from this measurement. Freshman retention adds very little value for at least the Top 50. Grad rates can also reflect grading patterns rather than a school's efforts to get its students out on time. If USNWR is going to measure grad rates, I would probably suggest adding a 4-year measure to the 6-year one. </p>

<p>6) International reputation. I'm not sure about this one, as it goes right back to the research issues raised earlier and further reinforces the whole "prestige" line of thinking (which I personally think is overrated as a source of anything meaningful). How would you measure it?</p>

<p>7) Learning resources and research facilities. Sounds reasonable to me. As noted previously, I think the internet will ultimately render a lot of library facilities obsolete. How would you measure this, and how would it apply to a student in the technical fields versus one studying the broader social sciences?</p>

<p>8) Academics in methodology. Students go to college for many more things than academics. I would say that most go just to get a job. Many go in order to build a social network that will benefit them personally and professionally for the rest of their lives. Others go just to study and become scientists or something academic. Also, how would you propose to measure academics?</p>

<p>Re other areas of potential evaluation, I think that cost of attendance and job placement are two obvious shortcomings of the USNWR survey. Students really want to understand the cost-benefit relationship, and right now it's very hard to figure out what College A offers in terms of expected outcomes versus what College B can offer. To me, that is a lot more relevant than how many NAS faculty members there are or how much money a school receives from the Feds to do research, and this is particularly so if I am one of the great majority of students pursuing a non-technical degree.</p>

<p>
[quote]
No, I think revealed preference is an interesting way to approach college rankings. I am not "ditching" RP, either entirely or in part. Again, my complaint is how the RESULTS of this paper are being misused on CC. </p>

<p>Is it possible you are confusing me with others on this thread who have criticized the methodology, such as dstark or barrons? My complaint is not with RP. My complaint is with people who look at the study and say, “Look, XX is a preferred school because the Avery, Glickman, Hoxby, et al study found it was.”</p>

<p>I have been looking over this thread to try to figure out why we are talking past each other—I agree with you in post #117 where you stress that the paper is presenting a MODEL. I guess it’s later posts where you begin to talk about it as if it is also a "FINDINGS" paper, such as saying in post 122 & 129 things like, “the paper would be better if they had a larger sample.” Or “the paper is incomplete.” I guess that’s true in one sense, but I would argue somewhat otherwise—if they had a bigger sample it would be a DIFFERENT paper. Not just a methodology paper (as it is now, and as the authors say it is) but also a first look at what kind of ranking such a methodology would result in, and what those findings suggest about the accuracy of other rankings like US News or Gourman.

[/quote]
</p>

<p>Yes, it is of course true that if they had a larger and more representative sample size, the paper would be different. No doubt - nobody disputes that. </p>

<p>But again, I return to my original point. Even though, yes, strictly speaking, it is just a methodology paper and not a paper that presents true findings, the example findings the paper does present * already * cast serious doubt upon the other ranking systems, and hence it is, at least in my opinion, * already better * than the other rankings. Granted, that's probably because those other rankings have severe defects, but * I think that's the point *. </p>

<p>So, while I certainly agree with you that the paper presents only sample 'findings' in the academic sense, and is indeed strictly only a methodology paper, frankly, the sample findings are arguably better than the "true findings" of the rankings. That's why I tout the RP paper - not because I think the paper is complete or perfect (it certainly is neither) - but because it's better than the rankings.</p>

<p>Look, the authors of RP are highly conservative in what they claim their paper finds, and that is proper, because academics should rightfully be conservative. Fine. But the authors of the rankings are * not conservative at all * in their claims. USNews claims to have found a system for ranking "America's top colleges." So do Gourman and Jiao Tong and THES and all the others. Hence, given that the alternatives all claim to present findings that represent true rankings, I don't think it is unfair to compare them to the RP findings.</p>

<p>
[quote]
3) Selectivity. You focus on acceptance rate, but this is a tiny factor in the rankings (1.5%). Top 10% is, to my mind, not particularly useful as we don't have any way to judge quality across school districts and its weighting is several times that of acceptance rate.

[/quote]
</p>

<p>To this I would have to add that selectivity also should include selectivity of transfer students. Certain schools that shall remain unnamed have student bodies that largely consist of transfer students. Yet, as far as I can tell, these students are ignored when USNews measures selectivity. </p>

<p>
[quote]
5) Grad/Retention ranks. I agree with you and think the weighting of this is much too high. I think that the Ivies benefit most from this measurement. Freshman retention is very low value added for at least the Top 50. Grad rates also can reflect grading patterns and not a school's efforts to get its students out on time. If USNWR is going to measure grad rates, I would probably suggest adding 4-yr to the 6-yr measurement.

[/quote]
</p>

<p>On this score, I'm afraid I have to disagree with both of you. If anything, I believe graduation and retention rates should be weighted HIGHER. After all, what's so great about a school that admits students who aren't going to graduate or be retained? Put another way, as a prospective student, ceteris paribus, you would prefer to attend the school you have the greatest chance of graduating from. After all, you don't go to college just for the hell of it. You go to college to get a degree. </p>

<p>I agree that the Ivies benefit greatly from this measure. But I think that's exactly the way it should be. It indicates that the Ivies are very safe schools that provide plenty of support and second chances. If you're a student, that's * exactly * what you want. It's a LOT better than going to a school that doesn't really care whether you make it, and that has no problem flunking you out. I've certainly seen schools with that sort of attitude toward their students. </p>

<p>As far as your notion that grading patterns affect graduation rates, I would say that your fear is unfounded. As long as the employers * think * that your graduates are good, then it, frankly, doesn't matter if your grading is 'soft'. As a case in point, take the engineering programs at Stanford and at Caltech. I think most people would agree that the Caltech engineering program is more strict/rigorous and grades much harder. Yet, Stanford engineering is still ranked higher, and Stanford engineering grads get very good jobs - arguably better jobs than the Caltech engineers get. Stanford engineering grading may be relatively 'soft' - but who cares? It works. Stanford is living proof that you don't need to use harsh grading to produce highly regarded engineers. </p>

<p>The sad part about the situation is that some people who go to Caltech will flunk out, when if they had instead gone to Stanford, they might have made it. Sure, maybe they would have gotten only mediocre grades at Stanford. But hey, at least they would have graduated.</p>

<p>sakky,
You and I have gone round already on this graduation rate issue and its heavy weighting by USNWR, so I won't persist with this although if others would like to weigh in, I'd be interested to read some other viewpoints.</p>

<p>On the issue of transfers, I completely agree. Same with the September/February admissions policies that some schools use. The standards are almost certainly different for students entering at these times and they elude the measurements that USNWR uses to measure student body strength and selectivity.</p>