<p>From the January 12, 2007, Chronicle of Higher Education:</p>
<p>A new objective ranking of faculty scholarly productivity, different from the National Research Council's.</p>
<p>It examines faculty members listed on PhD program web sites, a total of 255,475 names.</p>
<p>Faculty members are judged on three factors, weighted differently depending on discipline (a rough sketch of the weighting appears after the list):</p>
<ul>
<li>publications (books, journal articles, citations)</li>
<li>federal grant dollars awarded</li>
<li>honors and awards</li>
</ul>
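<p>Purely as an illustration, here is a minimal Python sketch of how a weighted index of this kind might be assembled. The factor values and weights below are invented; Academic Analytics' actual formula isn't given here, beyond the fact that the weighting differs by discipline.</p>
<pre><code># Hypothetical sketch of a discipline-weighted productivity index.
# The factor names, weights, and input values are illustrative assumptions
# only; they are not the actual Academic Analytics formula.

# Per-faculty measures for one program, assumed already standardized
# (z-scored) within the discipline so the units are comparable.
factors = {
    "publications": 0.9,    # books, journal articles, citations
    "grant_dollars": 0.4,   # federal grant dollars awarded
    "honors": 0.2,          # honors and awards
}

# Discipline-specific weights (made up; the write-up says only that the
# weighting varies by discipline).
weights = {
    "publications": 0.6,
    "grant_dollars": 0.3,
    "honors": 0.1,
}

# Weighted composite score for the program.
fsp_index = sum(weights[k] * factors[k] for k in factors)
print(round(fsp_index, 2))  # 0.68
</code></pre>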
<p>It ranks research universities overall and in 104 disciplines. I will post more description of the method later. It is perhaps most relevant to graduate programs.</p>
<p>For example, Electrical Engineering:</p>
<table>
<tr><th>Rank</th><th>Institution</th><th>Faculty Scholarly Productivity Index</th><th>Number of faculty</th><th>Percentage of faculty with a book publication</th><th>Books per faculty</th><th>Percentage of faculty with a journal publication</th><th>Journal publications per faculty</th><th>Percentage of faculty with a journal publication cited by another work</th><th>Citations per faculty</th></tr>
<tr><td>1</td><td>Cornell U.</td><td>1.97</td><td>60</td><td>-</td><td>-</td><td>90%</td><td>11.12</td><td>85%</td><td>33.28</td></tr>
<tr><td>2</td><td>Princeton U.</td><td>1.92</td><td>31</td><td>-</td><td>-</td><td>84%</td><td>14.52</td><td>77%</td><td>87.61</td></tr>
<tr><td>3</td><td>Rice U.</td><td>1.82</td><td>36</td><td>-</td><td>-</td><td>81%</td><td>8.44</td><td>75%</td><td>45.08</td></tr>
<tr><td>4</td><td>Massachusetts Institute of Technology</td><td>1.78</td><td>141</td><td>-</td><td>-</td><td>87%</td><td>8.54</td><td>79%</td><td>44.90</td></tr>
<tr><td>4</td><td>Yale U.</td><td>1.78</td><td>19</td><td>-</td><td>-</td><td>84%</td><td>7.47</td><td>84%</td><td>36.58</td></tr>
<tr><td>6</td><td>U. of California at Los Angeles</td><td>1.69</td><td>52</td><td>-</td><td>-</td><td>73%</td><td>14.00</td><td>69%</td><td>53.35</td></tr>
<tr><td>6</td><td>Columbia U.</td><td>1.69</td><td>24</td><td>-</td><td>-</td><td>92%</td><td>12.46</td><td>75%</td><td>27.29</td></tr>
<tr><td>8</td><td>U. of Washington</td><td>1.64</td><td>51</td><td>-</td><td>-</td><td>88%</td><td>9.57</td><td>82%</td><td>18.10</td></tr>
<tr><td>9</td><td>Duke U.</td><td>1.61</td><td>33</td><td>-</td><td>-</td><td>85%</td><td>10.15</td><td>76%</td><td>23.12</td></tr>
<tr><td>10</td><td>Pennsylvania State U. at University Park</td><td>1.60</td><td>51</td><td>-</td><td>-</td><td>80%</td><td>11.39</td><td>76%</td><td>40.18</td></tr>
</table>
<p>I'm sure that the methodology of this is as flawed as any other ranking, but I think two things about this and similar measures are healthy:</p>
<ol>
<li><p>It encourages reflection on the dominant USNews rankings by mixing things up.</p></li>
<li><p>It concentrates, with detailed data, on a single measure, faculty research, without drawing any more sweeping conclusions.</p></li>
</ol>
<p>It would be neat to have such thorough data across lots of measures: selectivity, student research, endowment, student workload, grad school placement, and so on, unlinked from some comprehensive "best" ranking. That would shift the focus, however slightly, off prestige and toward fit. I know that's just a pipe dream, since such lists would only be marketable to hopeless nerds like those of us on CC :)</p>
<p>Number of times a school was ranked first in a discipline, out of 104 disciplines (schools with at least two #1 ranks):</p>
<p>Harvard 9
Caltech 6
Vanderbilt 6
UC Berkeley 5
U Washington 5
Columbia 4
Cornell 4
NYU 4
Penn State 4
Princeton 4
Yale 4
U Maryland 3
UC San Francisco 3
Stanford 3
U Wisconsin 3
U Arizona 2
U Illinois 2
Johns Hopkins 2
U Kentucky 2
MIT 2
Michigan State 2
U Pennsylvania 2
Washington U St Louis 2</p>
<p>[collegehelp] Thanks for your reply. I guess I'll have to subscribe to the Chronicle then!</p>
<p>Anyway, I find it curious that Princeton and Berkeley are ranked top 10 (or even top 5) in a fairly large number of disciplines and yet did not make the top 10 overall. How is the overall score computed?</p>
<p>Also, I guess there are certain disciplines where the impact factor (citations per paper) and the overall number of publications tend to be intrinsically higher than in other fields. Medicine and the biological sciences come to mind as examples. Wouldn't that bias the overall rankings in favor of schools with a large number of faculty in those areas, e.g. universities that have medical schools, like Harvard, Yale, WUSTL, Duke, etc.?</p>
<p>Vanderbilt's prominence and the state universities on this list are interesting. It makes sense that Harvard is first and that the more research-oriented Ivies, Cornell, Princeton, Yale, and Columbia (all ahead of Stanford), are high up. Penn is on the low end, and Brown and Dartmouth, predictably, fail to appear. Although UC Berkeley and U of Washington are not unexpected, the presence of Maryland, Arizona, and especially Kentucky is surprising.</p>
<p>[redcrimsonblue] As I said before, if you take into consideration the variables used to compute the FSP index, I believe a school like Princeton is at a natural disadvantage, given that its strongest programs are in areas such as the humanities (English, History), the social sciences (Economics, Political Science, Sociology), and other highly theoretical subjects (e.g. Pure Mathematics and Astrophysics), all of which tend to have fewer publications per faculty member and lower impact factors, and also normally receive less federal funding. Conversely, schools that have a large number of faculty members working in, e.g., the biological/biomedical sciences, experimental physics, and/or chemistry are naturally favored by an index like the FSP.</p>
<p>Also, it appears that, according to the Chronicle, several universities have raised concerns about the quality of the data-gathering methods used by Academic Analytics, especially the fact that the FSP index is based on lists of faculty members obtained from university websites and not verified by all universities. There do seem to be several anomalies in the rankings, as mentioned in the article. Still, the FSP ranking looks credible in many disciplines and may prove a valuable tool for students who plan to go to graduate school and value academic research productivity above any other quality measure.</p>
<p>I think the biggest problem is that subjective judgments about which research is important are given no weight whatsoever. On the other hand, this is a useful corrective to general reputational/peer surveys, which change slowly <em>even</em> when faculties change completely.</p>
<p>I have always suggested the following procedure: provide a group of top scholars with faculty lists in their fields (no college names given) and ask them to sort the lists from top to bottom according to their preferences. This peer rating would also be subjective, BUT it would at least be based on a professor's view of which professors actually were in which departments. It would not be subject to the vapid "which university is better than the other?" question. And I bet a person could do the sorting quite rapidly, in a couple of hours in an afternoon. That would better tell you what the top people think of various faculties.</p>
<p>Otherwise, schools that have transformed their departments, such as NYU or Duke, may be unfairly hurt or helped by previous views of their reputation.</p>
<p>They used the faculty listings on each department's website. This methodology may be flawed for a number of schools. Do they consider professors emeriti? What if a website lists retired professors under the faculty list? What if they are on a separate list? Many retired professors no longer publish, which could lead to errors at particular institutions.</p>
<p>ophiolite-
I checked the article. They get an initial list of faculty from the web and send that to the university for verification. However, half of the schools didn't respond to the request for verification.</p>
<p>Rankings for individual departments are normalized; the reported scores are merely the number of standard deviations above or below the mean within an individual field of study. As such, overall scores are not skewed in the regard you imply (a good English program will have a similar score to a good biochemistry program). That said, there tend to be many more PhD options (subfields) in the sciences, which tends to give the sciences extra weight when some sort of summation of PhD program rankings is done.</p>
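<p>For anyone curious what that normalization looks like in practice, here is a minimal Python sketch of within-discipline z-scoring. The program names and raw scores are invented; only the standardization step mirrors what the post above describes.</p>
<pre><code>from statistics import mean, stdev

# Invented raw productivity scores for programs within a single discipline.
raw_scores = {
    "Program A": 42.0,
    "Program B": 35.5,
    "Program C": 28.0,
    "Program D": 51.2,
}

mu = mean(raw_scores.values())
sigma = stdev(raw_scores.values())

# Reported score = number of standard deviations above or below the
# discipline mean, so a strong English program and a strong biochemistry
# program end up on the same scale.
z_scores = {name: round((x - mu) / sigma, 2) for name, x in raw_scores.items()}
print(z_scores)
</code></pre>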