Ph.D. Productivity as a Proxy for Academic Quality

<p>I didn’t say high PhD production rate = “joke PhDs.” I was merely stating the well-known fact that there is a huge oversupply of PhDs relative to demand for them in academia (which has always been the largest source of employment for PhDs) and in the private sector (which absorbs many, especially in STEM fields, but often in jobs for which a PhD is unnecessary or at which the pay barely exceeds that commanded by the holder of a BS degree).</p>

<p>Here are a couple of recent articles in a special issue of the highly respected journal Nature on the oversupply of PhDs in STEM fields:</p>

<p><a href="http://www.nature.com/news/2011/110420/full/472276a.html">Education: The PhD factory | Nature</a></p>

<p><a href="http://www.nature.com/news/2011/110420/full/472261a.html">Reform the PhD system or close it down | Nature</a></p>

<p>The situation is much worse in the humanities and social sciences, where despite a well-documented, chronic, 40-year oversupply of newly minted PhDs relative to static or shrinking demand, universities continue to crank out large numbers of PhDs who have only the slimmest chance of landing full-time, tenure-track academic jobs. Most will end up as low-wage adjuncts, driving taxis, delivering pizzas, or selling real estate.</p>

<p>So until I see evidence that the graduates of any college are landing in the top handful of PhD programs, I’m not going to be impressed by that college’s PhD productivity rate. A sizable fraction of the people going into PhD programs these days are just too obtuse to read the signals from the labor market telling them they’re heading down a dead-end path.</p>

<p>True. But then, why doesn’t a single Ivy make the top 10? Why not Dartmouth? Why not Harvard, which isn’t much bigger than Chicago (and, like the others, does not have undergraduate business, nursing, or very robust engineering programs)?</p>

<p>Moreover, we have this study:
“Privileging History: Trends in the Undergraduate Origins of History PhDs” (Perspectives on History, AHA).
The “Select 25” table does not show PhDs per capita for the entire school population. It shows per capita history PhD productivity only for history majors at these schools. So there is no issue of nursing or engineering majors diluting the pool. Yet still, LACs represent 7 of the top 10. Half the schools from the OP’s list show up again in the top 10 of this one (Chicago, Bryn Mawr, Swarthmore, Reed, and Oberlin).</p>

<p>If you still think these results are just a statistical distortion, look at other outcome-focused metrics. Consider the current CC thread on mean LSAT scores (<a href="http://talk.collegeconfidential.com/college-search-selection/1190895-mean-lsat-score-undergraduate-college.html">http://talk.collegeconfidential.com/college-search-selection/1190895-mean-lsat-score-undergraduate-college.html</a>). Small LACs are disproportionately represented at the head of that pack, too. In many cases, the schools with the highest LSAT scores aren’t surprising, because they are also the schools with the highest SAT scores. But look at the set of schools that make the top 25 by LSAT score … even though they don’t make the top 25 by SATs: Hamilton, Carleton, Wheaton, Wesleyan, W&L, Reed, Haverford, and Georgetown. All but one of these are LACs.</p>

<p>What we are debating here comes down, largely, to selection effects vs. <i>treatment effects</i> (<a href="http://cobb.typepad.com/cobb/2009/08/treatment-effects-vs-selection-effects.html">Treatment Effects vs Selection Effects - Cobb</a>). Teasing out one from the other is hard to do in analyzing college admissions and post-graduate outcomes. I believe PhD productivity (and perhaps a few other outcome-oriented metrics) does reflect positive treatment effects from the academic quality and intellectual atmosphere of the top-ranked schools. Small, discussion-based classes (and other features of these schools) seem to make a difference.</p>
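<p>To make that distinction concrete, here is a minimal simulation sketch (Python; all numbers are made up) of a world with a pure selection effect and zero treatment effect: PhD attainment depends only on a student’s entering ability, never on the school, yet the more selective school still posts a far higher per-capita PhD rate.</p>

```python
import random

random.seed(0)

def per_capita_phd_rate(n_draws, ability_cutoff):
    """Admit applicants above an 'ability' cutoff; each graduate's chance of
    eventually earning a PhD depends only on ability, never on the school."""
    admitted = [a for a in (random.gauss(0, 1) for _ in range(n_draws))
                if a > ability_cutoff]
    # Pure selection effect: P(PhD) is a function of ability alone.
    phds = sum(1 for a in admitted if random.random() < 0.05 + 0.05 * a)
    return 100.0 * phds / len(admitted)

# Two schools with identical (zero) treatment effects, different selectivity.
print("Selective school:      %.1f PhDs per 100 grads" % per_capita_phd_rate(20000, 1.5))
print("Open-admission school: %.1f PhDs per 100 grads" % per_capita_phd_rate(20000, -9.9))
```

<p>Of course, this cuts both ways: it shows why a high raw rate alone can’t settle the selection-vs.-treatment question, not that treatment effects are absent.</p>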

<p>Maybe because the PhD is a very narrow measure of “academic quality.” I bet the Ivy League schools would move up if you looked at the productivity of ALL doctoral degrees combined: PhD, MD, JD, DDS, DVM, DD, LLD, etc., not just the PhD. All doctoral degrees require advanced study. And schools of higher quality probably do a better job of preparing their students for success in those programs than schools of lower quality. So why not include those?</p>

<p>This whole focus solely on PhD productivity appears to be largely driven by Reed College and its boosters. Reed touts it on its website, which makes sense given its history of quarrels with USNews and its relatively low USNews ranking. If you can’t win the big game, think of a smaller and more specialized game you can win and pour all your public relations efforts into that. If you slice the baloney thin enough, pretty much any school can get itself ranked in the top ten of some poll or other.</p>

<p>coureur, what makes you think that USNWR is a better measure of academic quality than PhD productivity?</p>

<p>USNWR is a ranking by a for-profit organization weighted heavily toward frivolous, subjective measures such as “peer assessment” and “counselor rankings”. The objective measures it uses in its methodology could hardly be more loosely related to academic quality. We have (1) “freshman graduation/retention rate”, (2) “faculty resources”, (3) “student selectivity”, (4) “financial resources”, (5) “graduation rate performance”, and (6) “alumni giving”.</p>

<p>Only (1) and (5) are directly related to academic quality, and they are not given much weight. (2), (4), and (6) may be secondary measures of academic quality. (3) is wholly unrelated to academic quality. If you add the objective, “relevant” criteria of their methodology together, only 27.5% of the USNWR ranking is an objective measure of academic quality! Very little of it actually measures the results of a university education. But USNWR made sure to weight lots of subjective opinion heavily into their ranking to make it look right. </p>
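<p>For reference, here is the arithmetic behind that 27.5% figure as a short Python sketch. The weights are approximations of the circa-2011 US News methodology and should be treated as illustrative, not authoritative:</p>

```python
# Approximate US News National Universities weights circa 2011 (percent).
# Weights vary from year to year; these are illustrative, not authoritative.
weights = {
    "peer/counselor assessment (subjective)": 22.5,
    "graduation & retention rate": 20.0,   # (1)
    "faculty resources": 20.0,             # (2)
    "student selectivity": 15.0,           # (3)
    "financial resources": 10.0,           # (4)
    "graduation rate performance": 7.5,    # (5)
    "alumni giving": 5.0,                  # (6)
}

# Only (1) and (5) are treated above as direct measures of academic quality.
direct = weights["graduation & retention rate"] + weights["graduation rate performance"]
print("Directly quality-related share: %.1f%%" % direct)  # -> 27.5%
```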

<p>Now let’s compare this to the PhD productivity study. This was conducted by the National Science Foundation, not only a non-profit organization but a respected government institution. The PhD productivity study is 100% objective and 0% subjective. Furthermore, PhD productivity is clearly a better measure of student accomplishment than the “graduation rate”. To add to that, you yourself even admitted that a measure of doctoral productivity would be a good measure of academic quality, of which PhDs are a component!</p>

<p>Granted, the PhD productivity study is not the complete study you would wish for, coureur; but it is a big component of such a study, and you are brushing it off completely. And you’re brushing it off in favor of what? A big-media voodoo opinion survey. You can argue that an overall doctoral productivity study would yield vastly different results than the PhD productivity study alone; but I believe that may be more wishful thinking on your part than reality. PhDs, JDs, and MDs are not that different from each other, and PhD programs cover an array of topics that JD or MD programs cover as well.</p>

<p>The reality of the situation is that HYP are not as good universities as many students and professors are led to believe, and when there is hard data challenging that belief, conservative minds are all too quick to come up with exaggerated flaws in the data. You can think what you want; but, regardless of the validity of the claim, Reed did not lose the bigger battle. USNWR is not a study of “academic quality” but merely a survey of common perception.</p>

<p>The idea that getting a foolish degree measures anything but wishful thinking is hilarious. 90% of students have NO interest in such a wasteful undertaking. They would like a job before they hit 30. And a real full-time job. Not chasing $3,000-a-course adjunct work.</p>

<p>So what are you saying? That Kalamazoo College (11 history Ph.D.s per 100 history graduates) and Earlham (10/100) are better than Harvard (9/100)? That Lawrence University (9/100) is as good as Yale (same)? That Kalamazoo, Earlham, and Lawrence are all better than Princeton (7/100) and Columbia (8/100)?</p>

<p>Get real.</p>

<p>I happen to think fairly highly of Kalamazoo, Earlham, and Lawrence. I think they’re all deeply committed to academic values, and they all do a good job with limited resources. But these are not academic powerhouses. And for schools like that, in particular, I’d want to know which PhD programs their graduates are getting into and completing before I’ll give them credit for a high percentage of PhDs. Maybe they are getting their grads into top programs. Or maybe it’s just bad academic and career advising.</p>

<p>You don’t think the other students are an important part of the learning experience? I feel that having better students definitely raises the level of class discussions and group projects. Unfortunately, there is no good way to measure this, but acceptance rate and the like seem to be the best proxy.</p>

<p>Booster here. This is of interest to very few HS kids (the thin slices :slight_smile: ), and Reed indeed tries to find those few by promoting its best features. The result is barely 3,000 apps per year. This issue is just not relevant to most kids, and it’s a measure of quality to few.</p>

<p>Meh. I’m not impressed by mean LSAT scores, either. The top-ranked school by mean LSAT scores is Harvard, at 166, followed by Yale, Swarthmore, Princeton, and Pomona, all at 165. But a 165 or 166 LSAT won’t get you into Yale Law School (25th-75th percentile LSAT = 171-176) or even Harvard Law (same). Those scores won’t get you into any top 10 law school except maybe UC Berkeley (162-170), if you’re a California resident. The mean LSAT scores at Carleton (162) might get you into law school at Minnesota, Indiana, or UC Davis. The mean LSAT score at Bryn Mawr (158) might get you into law school at Penn State, Chicago-Kent, or Seton Hall, but probably not Temple (159-164, 38.7% admit rate).</p>

<p>As I said, I’m not impressed. Differences in mean LSAT scores at various schools probably have a lot to do with selection bias, i.e., who decides to take the test.</p>

<p>And there’s no way such small schools can ever be powerhouses. The big school boosters should relax a bit! The small schools really aren’t a threat; hardly any students attend them.</p>

<p>No matter what the undergrad or grad school is! ;)</p>

<p>Of course not. I think you’re trying to attribute to me a grand pronouncement, and I’m not making one.</p>

<p>^^Sure you are. You made your big pronouncement back in post #15:</p>

<p>“So in my opinion, PhD productivity is possibly the best outcome-oriented measure we have for academic quality.”</p>

<p>I said no such thing. I said that Ivy League schools would do better in this measure if all doctoral degrees were considered instead of just the PhD. Focusing on one very narrow measure of “quality” instead of a wider one is an example of cherry-picking the data to ensure the desired outcome. I don’t think looking at all doctoral productivity would be a very good measure of quality either, only that it would be better than seizing on one particular doctoral degree. </p>

<p>PhD productivity is like looking at which schools produce the most Rhodes scholars or the most Nobel prize winners (which some school boosters also do on CC from time to time). I’ve even seen a few threads where people have tied the academic quality of a college to whether one of its students wins the Jeopardy College Tournament. PhD productivity, Rhodes, Nobels, and Jeopardy - all four of those are nice and interesting bits of data, but they have almost zero relevance to the vast majority of kids going to the colleges in question.</p>

<p>And the perpetual use of per capita analyses of PhD production compounds the error by statistically stacking the deck in favor of very small schools. So what we are left with is a measurement of quality so narrow as to be almost meaningless, that is then analyzed in such a way as to favor certain schools. And from that we are supposed to conclude that it is “possibly the best outcome-oriented measure we have for academic quality.” Give me a break.</p>
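<p>For what it’s worth, that small-school deck-stacking is easy to demonstrate with a quick simulation sketch (Python; purely illustrative numbers): even if every school sends exactly the same 8% of graduates on to PhDs, the top of a per-capita table is dominated by the smallest schools through sampling noise alone.</p>

```python
import random

random.seed(1)

TRUE_RATE = 0.08  # identical underlying PhD rate at every school (assumed)

schools = [("small-%02d" % i, 300) for i in range(50)] + \
          [("large-%02d" % i, 6000) for i in range(50)]

per_capita = []
for name, n_grads in schools:
    phds = sum(1 for _ in range(n_grads) if random.random() < TRUE_RATE)
    per_capita.append((100.0 * phds / n_grads, name))

# Top of the per-capita table, despite identical "quality" everywhere:
for rate, name in sorted(per_capita, reverse=True)[:10]:
    print("%-9s %.1f PhDs per 100 grads" % (name, rate))
```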

<p>Hey, I’m not a big fan of USNews rankings either. I’ve questioned using them to measure quality too. But I’ll give them credit for at least trying to look at a multiplicity of factors rather than just one. And for better or worse, the USNews ranking is the most famous measure of “quality” today. They are the biggest game in town when it comes to rankings.</p>

<p>In a way I don’t blame Reed. They are merely playing the cards they’ve got. And you can bet your bottom dollar that if the situation were the other way around - that Reed had low PhD productivity but a high USNews ranking, they’d be bragging about USNews on their website instead.</p>

<p>Come on, we all know that New Mexico Institute of Mining and Technology (8.7) has higher academic quality than Yale (8.4), Williams (8.4), Stanford (8.1), JHU (7.7), Cornell (7.6), Brown (7.4), CMU (7.1), Amherst (6.8), and Duke (6.8). :rolleyes:</p>

<p>I wonder why Wesleyan comes in at #26 (7.1) when it spends more on research than any other LAC and more than 6x as much as Reed does. Could it be that Wesleyan is much larger than the LACs that do better in this study (almost 2x the size of Reed)? Perhaps coureur is really onto something. ;)</p>

<p>I don’t have an official source for this, but I think Reed’s ranking is so low because it actively REFUSES to send in data to US News, but it gets ranked anyways. Reed is pretty selective, so I think that’s the only reason it’s not ranked higher. </p>

<p>But even if Reed does produce a lot of PhDs, most students there will never get one, nor will they need one. It’s definitely not a statistic that matters to most people.</p>

<p>Speaking of cherry-picking… the source document includes engineering PhDs, so the real top 30 list looks like this:</p>

<p>Caltech
Harvey Mudd
MIT
Reed
Swarthmore
Carleton
Chicago
Grinnell
Rice
Princeton
Harvard
BrynMawr
Haverford
Pomona
New Mexico Mining
Williams
Yale
Oberlin
Stanford
Johns Hopkins
Kalamazoo
Cornell
Case Western
Washington College
Brown
Wesleyan
Carnegie Mellon
Macalester
Amherst
Duke</p>

<p><a href="http://www.nsf.gov/statistics/infbrief/nsf08311/">nsf.gov - NCSES Baccalaureate Origins of S&E Doctorate Recipients - US National Science Foundation (NSF)</a></p>

<p>Clearly USNWR isn’t measuring the real academic quality of the Ivies, etc., like you said. Why would you continue to support USNWR’s claim that they’re so great anyways? If you really believe PhD productivity is a good measure of academic quality, why not just go ahead and say that Earlham is perhaps better than Columbia or Duke? </p>

<p>On top of that, I guess Case-Western is better than Brown too. Why not? Besides other flawed rankings, there’s not much to tell us otherwise.</p>

<p>Sorry to be so gauche as to quote myself here, but I really want to underscore this point. A young friend of our family who is attending a middle-ranked LAC—let’s just say in the US News 160-175 range—is being actively encouraged by his professors and academic advisers to go to grad school in the social science field that he’s majoring in. He’s got great grades and will have great recommendations and acceptable GRE scores, and probably could get into a second- or third-tier Ph.D. program. I don’t think he has a snowball’s chance of getting into one of the top 5 or 10 programs in his field—the programs out of which newly minted Ph.D.s still have a fighting chance of landing attractive jobs. So I really have to question whether he’s getting sound career advice, or whether instead he’s being encouraged to pursue an arduous dead-end path. I think this is still all too common.</p>

<p>At some of the better colleges, professors and academic and career advisers are encouraging undergrads to think long and hard before deciding to pursue a Ph.D., especially in the humanities and social sciences but increasingly in the hard sciences as well, given the state of the academic job market. Some decide to pursue it anyway, whether out of a genuine passion for the discipline, or in a misguided quest for the prestige of the degree, or in denial of market realities, or because it’s the path of least resistance for a good student who has few real marketable skills and no better idea what to do with her life, or because of a gambler’s mentality that says notwithstanding the odds I might be the one who gets lucky and manages to make a career of it in my chosen field—or for some combination of those reasons. Some get into top programs. A tiny handful do manage to land good but exceedingly scarce tenure-track academic jobs.</p>

<p>But for second- and third-tier colleges to actively encourage their undergrads to pursue PhDs as if this were 1963 and the market for academics looked rosy what with the baby boomers about to come of college age and colleges and universities expanding and the nation fully committed to an academic as well as a material arms race with the Soviet Union—well, that just strikes me as deeply out of touch, and deeply irresponsible. And I can’t help but feel that this attempt to link PhD productivity to academic quality is similarly out of touch. A PhD in political science from Harvard, Princeton, Stanford, Michigan, or Yale is just not the same thing as a PhD in political science from Wayne State or West Virginia. The former may still be a valuable asset; the latter likely not much more than a nice piece of paper. A measure of PhD productivity that doesn’t make those distinctions is worthless, or maybe downright pernicious if it encourages colleges to push more of their students to pursue dead-end PhDs so the undergraduate institution can climb the prestige ladder at its graduates’ expense.</p>

<p>Just pointing this out if someone hasn’t before - but of the PhDs studied, there were 28,280 (about 10%) whose undergraduate school was unknown. That could skew the rankings significantly. In the end I don’t think the study indicates much anyway, but still, not knowing the makeup of nearly 30k PhDs introduces a lot of possibility for error (e.g. a majority of the PhDs that a given school produces could fall under that 30k - we really have no idea).</p>
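<p>To put a rough number on that concern, here is a back-of-the-envelope sensitivity check (Python; the school’s figures are hypothetical, and only the ~10% unknown share comes from the study):</p>

```python
# Hypothetical small college: 1,400 graduates in the study window and
# 120 PhDs attributed to it in the NSF data. These figures are invented
# for illustration; only the ~10% unknown share comes from the study.
grads, attributed = 1400, 120
unknown_share = 0.10  # 28,280 of roughly 283,000 PhDs lacked a known origin

observed = 100.0 * attributed / grads
# If this school's PhDs went missing at the overall unknown rate:
adjusted = 100.0 * (attributed / (1.0 - unknown_share)) / grads

print("Observed: %.1f PhDs per 100 grads" % observed)   # ~8.6
print("Adjusted: %.1f PhDs per 100 grads" % adjusted)   # ~9.5
```

<p>A swing of nearly one PhD per 100 graduates is easily enough to reshuffle schools that the rankings separate by a few tenths of a point.</p>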

<p>I don’t think so. Reed was in USNWR’s top tier of LACs immediately before they decided such general-purpose ranking is bad and stopped participating. Nothing about Reed changed when their ranking plummeted. They asked USNWR to simply remove their listing from the charts, to no avail.</p>

<p>Then there’s little or no correlation between PhD productivity and research money spent. I think Reed is high on the list because they require a research thesis of every graduating senior. I think any school that adopted this policy would see their PhD productivity rise.</p>

<p>Sigh.</p>

<p>I think PhD-productivity rankings are a useful tool for finding schools where students are likely to be interested in research and scholarship. For people who like learning for learning’s sake and plan on going to grad school, that is a measure of academic quality. For people who don’t want to spend their lives interacting with hungover undergraduates and writing articles nobody wants to read, the PhD statistic is, if not meaningless, then significantly less relevant.</p>

<p>I do think a high PhD-production rate is likely to correlate with some contributing factor to a good undergraduate education–a student body that is, on the whole, more intellectually curious than the student bodies of some less academically oriented schools, or livelier, more involved class discussions–but who knows. If such a correlation exists, then it’s an indirect one.</p>

<p>What I’m trying to say is that this statistic is meaningful, but not definitive. It is important if you’re serious about getting a PhD in the future, and unimportant if you’re not. Likewise, if you see going to college solely as a way to maximize your earning potential, then finding information about the average salaries of college grads will be important to you. If you don’t care whether your starting salary is $45,000 or $55,000 as long as you get to work as a, say, marine zoologist, then choosing a school based on its alumni’s average starting salary makes no sense.</p>

<p>If you want to become a petroleum engineer, then going to the New Mexico School of Mining would indeed benefit you more than going to Yale. Does that mean the former “has higher academic quality” than the latter? For one specific, narrow definition of ‘academic quality,’ yes. In most other cases, no.</p>

<p>People seem to be ignoring the vast expanse of meaning between “PhD productivity is the only measure of academic quality that matters” and “PhD productivity is absolutely meaningless.” It’s not an either-or situation, people. PhD productivity can be a meaningful statistic without threatening the reputations of the many great schools that don’t seem to produce lots of future PhDs. “Oh no, Harvard isn’t 1st on this one list, does that mean someone is trying to tell me Earlham is a better school than HARVARD??? NO SCHOOL IS BETTER THAN HARVARD flameflameflameflameflame” is not a rational statement. Of course Harvard is a great school for (almost) every type of person and can provide a budding historian with all the support, instruction and encouragement she needs. It just so happens that out of every 100 history majors at Earlham, 10 will go on to become history professors (or unemployed history PhDs), as opposed to 9 from Harvard (which is a laughably small difference anyway). That is all. Maybe Harvard’s history concentrators are more likely to end up in law school instead, or maybe Earlham happens to have this one really inspirational history teacher on its faculty, or maybe Earlham is just so damn small that subtracting one future history professor from its total would have changed its history-professors-per-capita number to 4/100. Big deal.</p>
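<p>On the “laughably small difference” point, a quick two-proportion z-test sketch (Python; the cohort sizes are guesses for illustration, and only the 10-vs.-9-per-100 rates come from the table) shows the Earlham/Harvard gap is well inside sampling noise:</p>

```python
import math

def two_prop_z(phds1, n1, phds2, n2):
    """Two-proportion z-test: is rate1 statistically distinguishable from rate2?"""
    p1, p2 = phds1 / n1, phds2 / n2
    pooled = (phds1 + phds2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Earlham vs. Harvard history majors: rates from the AHA table; the cohort
# sizes are rough guesses (Earlham graduates far fewer history majors).
z = two_prop_z(phds1=20, n1=200, phds2=90, n2=1000)
print("z = %.2f" % z)  # ~0.45, far below the 1.96 significance threshold
```

<p>A z-score around 0.45 is nowhere near the conventional 1.96 cutoff, so on any plausible cohort sizes the two rates are statistically indistinguishable.</p>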

<p>My point is: Statistics are useful if you know how to interpret them, can do it critically, and are actually interested in what they’re telling you. Which is why ranking schools based on some arbitrary criteria I care nothing about, like alumni giving rates and high-school counselors’ opinions, means nothing to me. Now, if I were highly interested in finding a school that would be guaranteed to impress guidance counselors across the US…</p>

<p>I’m not saying these rankings are useless; I’m saying they’re irrelevant to me, the same way PhD productivity can be irrelevant to someone else.</p>

<p>So people, chill. Nobody is suggesting that Kalamazoo is cooler than Harvard, like, on a cosmic scale. There are no absolutes, no definitive way to compare schools and rank them in order of absolute academic quality. There are only ways to isolate specific functions and byproducts of academic quality, and measuring PhD productivity can be one of them, if you associate academic quality with intellectual curiosity and academic zeal (or whatever qualities you think college professors possess). How important PhD productivity ends up being depends on how interested you are in getting a PhD.</p>