^My guess is someone used the full expenditure amount, without excluding the portion spent on admin.
@IWannaHelp I just reported the numbers… UCLA does pay its Full Professors more than its public research university peers (Medical Schools excluded).
University of California-Los Angeles $187,817
University of California-Berkeley $178,881
University of Michigan-Ann Arbor $164,802
And compared to UCB, they do award $50 million a year more in need-based aid.
But really, no one has figured out (this has popped up in other threads) why UCLA is out of the normal UC range, especially as compared to UC-Berkeley. It could be due to UCLA using a different methodology for generating these self-reported numbers, or it could be a long list of items that add up.
On a cost of living basis, the UMich profs got the best deal, by far.
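The cost-of-living point can be sketched with a simple adjustment. The salaries below are the ones listed above; the cost-of-living index values are hypothetical placeholders for illustration (1.0 = national average), not real data:

```python
# Hedged sketch: compare the posted salaries after a cost-of-living
# adjustment. The COL indices below are ASSUMED placeholder values
# (1.0 = national average), not actual cost-of-living data.
salaries = {
    "UCLA": 187_817,
    "UC Berkeley": 178_881,
    "Michigan": 164_802,
}
col_index = {"UCLA": 1.4, "UC Berkeley": 1.5, "Michigan": 1.0}  # assumed

# Divide nominal pay by the local index to get COL-adjusted pay.
adjusted = {school: round(pay / col_index[school])
            for school, pay in salaries.items()}
for school, pay in sorted(adjusted.items(), key=lambda kv: -kv[1]):
    print(f"{school}: ${pay:,}")
```

With placeholder indices in that ballpark, Michigan comes out on top after adjustment even though its nominal salary is lowest, which is the point being made.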
US News releases its 2017 Top World University Rankings:
http://www.usnews.com/education/best-global-universities/rankings
I think the most “objective” way (demand and supply) to rank the schools should be based on and divided into STEM and Humanities areas:
- Percentage of admitted applicants
- Percentage of enrolled students out of the admitted applicants.
- Test scores, GPAs and percentile in high schools
Based on the above, I would rank Stanford in the top 3 because, in my experience, it is a very tough school to get into. Same with Harvey Mudd and Pomona, which are also very hard to get into.
I mean, MIT is top 5 in STEM, but not many would go there to major in History or English Literature. But schools like Stanford, UC Berkeley and Cornell are good in almost every category of majors. Personally, I would not be very impressed if someone went to Harvard to major in Computer Science, but I would be impressed if someone went to UC Berkeley to major in Comp Sci. I am not knocking Harvard’s Comp Sci department, but I never heard any top HS student who wanted to major in Comp Sci say they wanted to go to Harvard or Yale for that.
For example, if you wanted to major in International Business, the University of South Carolina is ranked number 1 in this major. Yeah, Univ of South Carolina. So my point is it all depends on what you want to major in. But I can tell you that there are some schools which are strong to very strong in almost ALL majors, and Cornell and UC Berkeley fit into this category.
Interesting, @websensation , that you assume selectivity is the best measure of a school’s quality, when yield rate is no longer even a factor in USNews’ formula (for good reason).
My nephew attended Yale and majored in CS. He chose Yale to get the benefit of a liberal arts education. He had several great offers upon graduation.
This is the most easily gamed of all college rankings criteria. Get lots of unqualified students to apply, and your selectivity goes way up.
I am assuming this is a proxy for the reputation of colleges among high school students. This is likely to be very circular, as HS students’ views on the best colleges are likely to be based mostly on last year’s rankings, and even more uninformed than the college dean surveys used in USNWR. It would accelerate the push to early decision, which I think is bad for both students and colleges in finding the best match.
^ Admission rates count for only 1.25% of the US News rankings.
If a college admits many unqualified students, presumably this will show up in lower average test scores.
Average test scores count for over 8% of the rankings.
US News does not use admission yield as a ranking criterion.
I would think circular effects are likely to be strongest in the peer assessment and high school guidance counselors’ ratings (which count for ~15% and ~7.5% respectively).
In my opinion, the US News rankings aren’t all that easy to “game”, nor have I seen good evidence that it is being done very effectively by very many colleges. USNWR uses many criteria, most of which seem to be mutually corroborating (more or less). So jacking up one measurement wouldn’t necessarily have much effect, unless it was part of an overall effort to improve college quality … but that’s something we want colleges to do, anyway, isn’t it?
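The weights cited in this thread (peer assessment ~15%, counselor ratings ~7.5%, test scores ~8%, admission rate ~1.25%) imply a weighted composite score. Here is a minimal sketch of how such a composite behaves, using only those four weights and an invented school profile; it is NOT the actual US News formula, which has more criteria and its own normalization:

```python
# Minimal sketch of a weighted composite score. The weights are the
# four cited in the discussion above; the normalized sub-scores (0-100)
# for "Example U" are invented for illustration. This is NOT US News's
# actual formula.
weights = {
    "peer_assessment": 0.15,
    "counselor_rating": 0.075,
    "test_scores": 0.08,
    "admission_rate": 0.0125,
}
example_u = {  # hypothetical normalized sub-scores, 0-100
    "peer_assessment": 80,
    "counselor_rating": 75,
    "test_scores": 90,
    "admission_rate": 60,
}

# These four criteria cover only ~31.75% of the total weight.
partial_weight = sum(weights.values())
composite = sum(weights[k] * example_u[k] for k in weights)
print(f"partial composite: {composite:.3f} out of {100 * partial_weight:.2f}")
```

The takeaway: a criterion weighted at 1.25% (admission rate) can swing the composite by at most 1.25 points even if a school moves it from 0 to 100, which is why gaming a single low-weight input moves the needle very little.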
@tk21769, I was responding to @websensation above.
@tk21769 they’re very easy to game. Schools can increase their average test scores, which boosts them a lot. They can also reduce their class sizes a lot. Northeastern U. literally went up from 127 in 2003 to 39 in 2016.
I have to agree with guitar321 on this one. There are many ways universities can, and do, game the rankings. Individually, they do not necessarily make such a big difference, but combined, they can significantly alter the final outcome. Besides, even if the data were accurately and consistently reported and tabulated, the ranking still does not make the necessary adjustments to account for the size and type (publicly vs privately funded) of institution.
I’m not sure those measures amount to “gaming”. They sound more like actual improvements to me. If the rankings are incentivizing some colleges to make actual improvements (e.g. reducing class sizes), isn’t that a good thing?
The Northeastern case does look suspicious. If so, is it evidence of systemic flaws in the rankings, or is it an outlier? Overall, the university rankings have been rather stable over 30+ years (at least for most of the top ~30 schools.)
http://web.archive.org/web/20070905010206/chronicle.com/stats/usnews/index.php?category=Universities&orgs=&sort=1983
@guitar321 IMO if NEU boosts its ranking by making class sizes smaller that’s not gaming, that’s making a change that makes the quality of their education better. So I agree with @tk21769 on that.
NEU also became more popular, lowering the acceptance rate and raising average scores. After the recession I think co-op programs became more popular in general, and NEU also went on a building spree and has a lot of lovely new buildings. I also think that during that time it got a break when the ranking shifted its weighting from the 4-year graduation rate to the 6-year graduation rate, since it’s a co-op school. If all that makes it rank higher, then rankings are doing their job.
No connection to NEU except that my D seriously considered going there. In part because of the changes that made it rank higher.
Though there’s an underlying assumption there: that smaller class sizes correlate with better education in the postsecondary context.
That may well be true, but it’s an empirical question, really—and that and other such claims about many of the things that feed into the USNWR (and other) rankings are often quite untested. (Not to mention that, e.g., smaller class sizes may be beneficial in certain disciplines, or for certain students, or in certain [insert other contextual variable here], which would make such measures problematic when extended to the institution as a whole.)
What’s a plausible argument that smaller classes are not better (all else being equal)?
Certainly they are more costly, and perhaps the benefits aren’t worth the extra cost … but why might one think they aren’t worthwhile at all?
IMO, the Socratic method is a pretty good approach to humanities and social science education;
discussion of primary source materials is an important part of liberal education;
it is worthwhile to have significant writing assignments reviewed by a mature professor.
I don’t believe these activities are best supported in large classes or (in most cases) under the direction of an inexperienced TA. For these reasons, I accept average class size as a reasonable criterion for measuring undergraduate academic quality.
Typically, the richest, most selective, most prestigious colleges do tend to have the smallest average class sizes. If you don’t think this feature is worth the extra cost, there are cheaper alternatives.
One could say the same about other US News faculty and financial “resource” measurements.
^A lot of STEM majors, where there’s one right answer, don’t lend themselves to classroom discussions, at least at the undergrad level. As long as the students can read the chalkboard or PowerPoint from the back of the room, the larger classroom could still work.
“Overall, the university rankings have been rather stable over 30+ years (at least for most of the top ~30 schools.)”
There have actually been subtle but distinct changes, with public universities initially ranked in the top 10 now no longer ranked among the top 20 and several private universities leaping 10+ spots in the rankings.
“IMO if NEU boosts its ranking by making class sizes smaller that’s not gaming, that’s making a change that makes the quality of their education better.”
Sure, if they indeed have made their classes smaller. But most universities have not made their classes smaller. Before the US News rankings, most elite private universities had a 50/50 split between classes with fewer than 20 students and classes with over 50 students. They also mostly had student to faculty ratios in the 10:1-15:1 range. Magically, once those two statistics became part of the ranking, almost overnight, private elites reported a 70/30 distribution in their 20-/50+ classes and student to faculty ratios in the 5:1-8:1 range. But were those changes real or simply a manipulation of data? As it turns out, it was largely the latter. In virtually all cases, universities that claim to have a majority of their classes in the “fewer than 20 students” category have simply flooded their course catalogues with useless seminars, and the student to faculty ratios merely omitted graduate students. In other words, they gamed the rankings…big time! And it goes beyond “faculty resources”. Those same universities are likely to manipulate “financial resources” data as well. The US News ranking is a sham.
Furthermore, as dfbdfb aptly points out, the US News methodology makes many assumptions. Do smaller classes at the university level always lead to a better outcome? Do higher alumni donation rates really mean greater alumni satisfaction? etc…
@tk21769, I didn’t say that smaller class sizes aren’t better, I said that the claim that smaller class sizes are necessarily better should be treated as an empirical question, not an assumption.
(Not to mention that a bad teacher with a small class and a great teacher with a large class…Which would be better? I also don’t actually know that one, but actually researching it would be better than assuming one or the other.)
Aren’t there reasonable explanations for these trends that don’t involve “gaming”?
- After big investments in public universities during the height of the cold war, hasn’t that investment gradually flattened/declined? States have reduced their own investments in public flagships. Federal defense RDT&E investments have flattened out. Correct? (I haven’t looked up the numbers recently.)
- After the first rankings, which were based entirely on opinion polls, USNWR added measurements that did not corroborate all those opinions. The PA scores perhaps reflected opinions formed years prior to the assessments (or heavily influenced by continued high levels of graduate research production). In some cases, opinions lagged a changing reality.
One private university in the current top 20 that has leaped ~10 spots is the University of Chicago.
In the first USNWR ranking (1983) it ranked 6th, only a few positions worse than it is now. At one point it did rank as low as 15th. Since then it massively increased its direct mail marketing efforts, resulting in big increases in the number of applications and a large drop in admission rates. At about the same time, it invested heavily in new dorms as well as other facilities (such as the library system). One driving factor was a desire to increase the ratio of undergraduate to graduate students, primarily for financial reasons. It became a more attractive college.
What are some other examples? Columbia? Its first recorded ranking, 18th in 1988, looks like an aberration to me. Almost every other year, it’s in the 9-11 range. Pretty stable. WUSTL maybe?
Is it not the case that today, at the undergraduate level, Chicago, Columbia and WashU are all stronger institutions than the 4 state universities that USNWR ranked in the top 20 in 1983 (Berkeley, Michigan, Illinois, Wisconsin)? Universities are complex institutions, so depending on the criteria one uses, one probably could find evidence for different answers.