Grad school ranking as a function of Undergrad school ranking?

Suppose we boil the entire graduate selection process down to two components:

  1. The national rank of the undergraduate program that the student attended
  2. All other criteria

I know there are a wide variety of factors which graduate programs use to evaluate students, but let’s just focus on the national ranking / everything else dichotomy.

In your opinion, does the rank of the graduate program which the student is able to attend:
a) correlate more closely to the rank of the undergraduate program
b) correlate more closely to the sum of all other factors

I’m thinking specifically about mathematics PhD programs here, but answers with respect to other fields are definitely welcome.

The rank of the graduate program correlates more closely to the sum of all other factors.

Generally speaking, grad schools don’t really care where you went to undergrad so long as it’s a “real” university (i.e. not a for-profit school) and you have a good GPA, a good GRE, as much research experience as possible (probably THE single most crucial aspect of grad school applications, especially for a PhD), and solid letters of recommendation.

I know people from very small, less-known universities/colleges who got into excellent graduate programs. I know people from excellent undergrad universities who got completely shut out in the graduate school admissions process.

I would venture that there’s zero correlation between rankings.

More importantly, and one of the many, many flaws when it comes to rankings: for most graduate degrees, with the possible exception of the MBA, institutional reputation means nothing. What does matter is the stature of the professor you’ll be working with. If the best professor in the sub-field that you want to concentrate on is at Podunk U, you’ll be best off there.

Professors from just ten graduate schools account for the majority of the faculty at the top 50 computer science universities, and the majority of those who attended those ten graduate schools were themselves graduates of ten undergraduate institutions. Just a data point.

I’m afraid that I must emphatically disagree. If anything, the exact opposite is true: for most graduate degrees, institutional reputation is indeed the primary criterion. That includes not only the aforementioned MBA (for which institutional reputation is certainly dominant) but also master’s degrees generally (especially professional master’s such as MPAs, M.Ed.s, MPHs, M.Arch.s, etc.), law degrees, and the like. Working with a high-status professor matters only for PhD programs, yet PhDs comprise only a minority of all graduate degrees. For example, law school students seldom if ever work with a particular professor.

Even regarding PhD degrees, the status of your advisor may outweigh the general reputation of your institution only for those particular students who are sure they will enter academia. Yet the fact is, many PhD students won’t. A sizable fraction of them won’t even complete the PhD at all, either because they fail their qualifying exams, or because they simply can’t produce a dissertation that is worthy of a PhD. {This may be especially true of the math PhD program as per the inquiry of the OP: there is simply no guarantee that any particular math PhD student will make a mathematical discovery worthy of a PhD.} Those students still must move on with their lives, for which a consolation master’s - or even the status of a former PhD student - from a high-status institution can be a valuable asset.

Moreover, even many students who do complete the PhD will not enter academia anyway. Some of them never planned to do so in the first place: many incoming economics PhD students just want to work at a central bank or a think tank; many finance/accounting PhD students just want to join a hedge fund (or perhaps start their own); many computer science PhD students just want to develop their own technology with which to start their own company. Others find that they can’t garner an academic offer at all, or at least one that pays them sufficiently. For example, while many incoming PhD students might have initially been willing to endure the years of low-paid postdocs commonly required in many fields prior to securing a faculty position, the PhD student years are also a common time to get married and have children…and once you start having children, your financial demands expand dramatically. Many such students therefore rationally determine that they now need to enter industry to support their family.

Also it bears mention that there’s rarely if ever any guarantee that you’ll get your preferred advisor anyway. Let’s say that you do turn down a high-status institution to join Podunk U because it has the Star Prof in your chosen subfield…yet find out once you’re there that you can’t get Star Prof to become your advisor. Maybe Star Prof already has sufficient students and isn’t taking more, maybe Star Prof simply doesn’t like you personally and therefore decides not to take you, maybe Star Prof retires, maybe Star Prof jumps to another institution or even leaves academia entirely (for example, Joel Podolny left academia to work for Apple), maybe Star Prof dies (which has actually happened to people that I know). Now you’re stuck at Podunk U.

The difficulty of such an analysis is that (a) and (b) are themselves correlated: students at the top-ranked undergraduate programs are more likely to score highly in the factors comprising (b), relative to students at the low-ranked programs. That is, after all, why they were admitted to the top-ranked undergrad programs in the first place.

I suspect that the question you actually mean to ask is: what is the impact of the rank of the undergraduate program upon the rank of the graduate program while controlling for the other factors, and vice versa? In other words, this is effectively a comparison of the magnitudes of two regression coefficients. In that case, I would concur that the regression coefficient of the ‘other factors’ is likely to be larger than the coefficient for ‘undergrad program rank’. But the latter would still likely be a positive value.
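For concreteness, here is a minimal sketch of that comparison in Python. Everything here is invented for illustration (the variable names, the correlation between the two predictors, and the data-generating process), so the printed coefficients only demonstrate the mechanics, not any real admissions pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Invented data: a score for undergrad program rank and a composite of all
# other factors. The two predictors are deliberately correlated, per the
# point above that strong students cluster at top-ranked undergrad programs.
undergrad = rng.normal(size=n)
other_factors = 0.6 * undergrad + 0.8 * rng.normal(size=n)

# Assumed data-generating process in which the other factors dominate.
grad_rank_score = 0.2 * undergrad + 0.9 * other_factors + rng.normal(size=n)

def z(x):
    """Standardize so the two coefficients are directly comparable."""
    return (x - x.mean()) / x.std()

X = np.column_stack([z(undergrad), z(other_factors)])
beta, *_ = np.linalg.lstsq(X, z(grad_rank_score), rcond=None)
print(f"standardized coefficient, undergrad rank: {beta[0]:.2f}")
print(f"standardized coefficient, other factors:  {beta[1]:.2f}")
```

Run on this toy setup, the ‘other factors’ coefficient comes out several times larger, while the undergrad-rank coefficient remains positive, which is exactly the pattern conjectured above.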

The issue with your question, OP, is that there really aren’t any valid measures of “undergraduate rank,” and in particular undergrad departmental rank. Moreover, how do you separate a top-ranked grad Uni (in math) from its undergrad major? The usual suspects rank high in both.

Does Harvard have a high-ranked undergrad math program because its teaching is just that good, or because its grad program is top 10?

The answer to your question, really, is the same as what every high schooler asks: is it better to take an AP course and perhaps earn a B, or a college prep course and earn an A? The answer is to take the AP course and earn an A. In other words, both (a) and (b) in your query.

http://www.chronicle.com/article/NRC-Rankings-Overview-/124743/

http://www.collegefactual.com/majors/mathematics-and-statistics/mathematics/rankings/top-ranked/

While perhaps true, this borders on a straw man argument, since the “best professor in a sub-field” is highly unlikely to be found at Podunk U. The best professor in a sub-field is gonna naturally gravitate to a higher-ranked college, which just has more resources to support said best professor’s research. In other words, Best Professor might not be at Podunk U very long.

Me too.

Back when I was wasting time awaiting my D’s choice of grad programs, I surveyed where the faculty at the “top Unis” in her field obtained their doctorates, and the findings (which I never shared with her) were not surprising.

Harvard
Stanford
Michigan
Yale
UCLA
Cal
Penn
MIT
Princeton
Columbia
Cornell

In other words, the top programs in her field hire from Harvard first and foremost, with Stanford a close second (in this unscientific survey of what I could find on websites).

So yeah, prestige matters, particularly in academia.

@peterquill, I’m afraid the bias from my personal background caused me to group most graduate students into those seeking a PhD with an eye on a research-oriented career, whether it be in academia or not. Certainly, that is not the only case. I stand corrected.

@bluebayou, that depends completely on the field. Certainly, if it was for engineering, the list would be different, and it would be different depending on which field of engineering you were looking at. It might include several programs not viewed by the average CCer as prestigious, like A&M, Wisconsin and Illinois, to name but a few.

There will no doubt be lots of top professors in their fields at top programs in said fields. That’s how they become known as top programs. That said, if you have a particular interest at the doctoral level, especially in a STEM field, who you work with is more important than where you work. My Podunk U comment was a rhetorical attempt at making that point. The main point being, prestige is based on the hierarchy of each different field and not a simple USNWR or HYPS type of a formula.

I think that if a school has a high grad school ranking in a major/program, we can probably say that the undergrad version is at least decent.

But if a grad program is not highly ranked, we cannot say that the undergrad version sucks.

Some schools simply are more invested in undergrad program quality than grad, and vice-versa… so absolutely, no, there is not a 1:1 relationship between undergrad and grad program rankings. It’s hazy at best.

I couldn’t understand why folks were discussing a college’s graduate reputation vs. its UG reputation, so I went back and read the OP’s question. He or she starts out asking about the “graduate selection process” and whether the student’s UG college outweighed the sum of “all other criteria,” which I assume means grades, research or work experience, recommendations, GRE/LSAT/MCAT scores, etc. Some people responded to this question.

There have been a lot of recent threads on this topic on this forum. The consensus seems to be that the “other criteria” significantly outweigh where a student went to college. This factor seems at best a tiebreaker, although perhaps given more weight for graduate business school or law school.

The OP then asks a basically unrelated question: How does a college’s graduate program reputation correlate to the rank of the college’s UG program? Some people responded to this question.

To me the responses concerning question #2 are on the money. There should generally be a strong correlation between the quality of a university’s graduate program and that of its UG program. But this goes into the little-knowledge-is-dangerous category. Concerning the OP’s interest in a math PhD, he/she should focus entirely on the quality of the graduate program, and more specifically on the specific area he/she wants to research. USNews ranks seven different graduate mathematics specialties. Harvard is number one in Algebra and Geometry, but is unranked in Applied Math, Discrete Math, and Logic. NYU is number one in Applied Math. UC San Diego is number three in Discrete Math but unranked in any of the other six specialties and 23rd overall.

Michigan State is the 82nd-ranked college, and ranked 29th overall in graduate physics, but is ranked number one in nuclear physics. (MSU hosts the National Superconducting Cyclotron Laboratory and is building the Facility for Rare Isotope Beams.) A student interested in nuclear physics who looked only at MSU’s UG ranking would never even apply to MSU. And MSU has the number-one graduate program in six other fields: African history, elementary education, industrial and organizational psychology, rehabilitation counseling, secondary education, and supply chain/logistics management.

Maybe I’m confused, but it looks like people who responded to OP’s first question are arguing with people who responded to OP’s second question.

While I tried to answer the OP’s question as stated, I must admit that I’m uncomfortable with the way it is posed. The essential epistemological problem is that those ‘other criteria’ are themselves a function of the undergrad college. Better undergrad programs tend to offer better research opportunities, more challenging coursework, a more driven student body which then impacts your drive (let’s be honest: if others around you are unmotivated and uninspired, then you too will tend to become unmotivated and uninspired), and the like. Hence, I’m not sure that it truly makes sense to ‘control’ for those other criteria to examine impact of the undergrad program alone.

^Yes, I agree with the above. The OP’s original question is, in and of itself, a false dichotomy.

I feel like undergraduates or high school seniors often pose this question in a variety of different ways when what they’re really trying to ask is whether, and how much, their chosen undergraduate college will affect their graduate school admissions. The answer is - not really, maybe a bit, and if it does mostly indirectly. It’s mostly as @peterquill says - at some better colleges you may have more access to research opportunities, special opportunities, more challenging coursework, peers with similar goals, professors with more experience getting students into top grad programs, etc. But do note that the span of programs that can provide those things is a really, really wide span - you can get them at a place like Harvard or Yale or Amherst or Swarthmore but also at a place like Cal State Northridge or Loyola Marymount or CUNY Brooklyn.

I also agree with @bluebayou:

Plus I think this is setting up a false dichotomy, too. A lot of students believe that if they go to Harvard they’ll get lower grades and shine less in the student body but they’ll have the Harvard name, whereas if they go to let’s say Ohio State they’ll have better grades and will be near the top of their class and a superstar. That’s not necessarily the case for a whole host of reasons, not the least of which is that elite institutions engage in some pretty serious grade inflation anyway.

Actually, I’m not sure that that’s a fair characterization of what I said - at least, not yet. While I suspect (and indeed explicitly proposed in post #11) that the overall effect of one’s chosen undergrad college upon grad school admissions may be mostly indirect, to then suggest that the overall effect is therefore small (e.g. “not really”, “maybe a bit”) is taking things too far. The overall effect might very well be quite large (if still mostly indirect).

An analogy might be the fallacious argument that daily exercise is not really beneficial to your overall health conditioned on maintaining a reasonable body weight (that is, somebody who never exercises but remains thin anyway doesn’t really need to exercise.) The logical problem with that argument is that exercise helps you to maintain a reasonable bodyweight. Bodyweight is therefore the key mechanism by which exercise benefits your health. It therefore makes little sense to ‘condition’ upon bodyweight when measuring the overall effect of exercise upon health. Exercise therefore does indeed provide a strong overall health benefit through the indirect channel of your bodyweight.
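To make that concrete, here is a toy simulation (the variable names and all effect sizes are invented) showing how conditioning on the mediator hides the total effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy causal chain, all effect sizes invented: exercise lowers bodyweight,
# and bodyweight is the main channel through which exercise affects health.
exercise = rng.normal(size=n)
bodyweight = -0.8 * exercise + rng.normal(size=n)
health = 0.1 * exercise - 0.7 * bodyweight + rng.normal(size=n)

def slopes(y, *predictors):
    """OLS slope coefficients (intercept dropped)."""
    X = np.column_stack((np.ones(n),) + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = slopes(health, exercise)[0]               # ~0.66: direct + indirect
direct = slopes(health, exercise, bodyweight)[0]  # ~0.10: direct path only
print(f"total effect of exercise on health: {total:.2f}")
print(f"after 'controlling' for bodyweight: {direct:.2f}")
```

By the same logic, ‘controlling’ for the other criteria would hide whatever effect the undergrad program exerts through them.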

Nevertheless, I also freely admit that I don’t really know what the magnitude of the overall effect of one’s undergrad program is upon grad school admissions. Nor do I think anybody else knows with much precision. Indeed, this is one of the great unanswered empirical questions of our time. Perhaps one might attempt to answer this question through what has become known as an ‘audit study’ methodology: essentially, sending out a stack of fictitious grad-school applications with similarly matched credentials except for the prestige of the undergrad program - i.e. same GPA, GRE, same glowing recommendations, same research experience, but some fictitious candidates ostensibly come from HYSPM vs. others from NoNameU - to measure the difference in acceptances and/or invitations for interviews.
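If anybody ever did run such a study, the headline analysis would be straightforward. Here is a sketch with invented numbers (the 500-application group sizes and the acceptance counts are purely hypothetical):

```python
from scipy.stats import fisher_exact

# All counts below are invented, purely to illustrate the comparison.
prestige_accept, prestige_total = 85, 500   # fictitious HYPSM applicants
noname_accept, noname_total = 55, 500       # fictitious NoNameU applicants

# 2x2 contingency table: accepted vs. rejected, by ostensible undergrad.
table = [[prestige_accept, prestige_total - prestige_accept],
         [noname_accept, noname_total - noname_accept]]
odds_ratio, p_value = fisher_exact(table)

print(f"acceptance rates: {prestige_accept / prestige_total:.1%} "
      f"vs {noname_accept / noname_total:.1%}")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```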

{Note, lest anybody question the ethics of such an experiment, it bears mention that that’s precisely the research method used by Andras Tilcsik in examining the differential effects of being openly LGBT, and by Tilcsik and Lauren Rivera in examining the interactive effect of gender and social class signals upon job interview invitations. They sent out a stack of fictitious resumes with similar credentials. In the former study, some of the fictitious resumes listed leadership in a college LGBT organization vs. a control group of resumes listing leadership in a small left-wing political organization. In the latter study, some of the fictitious resumes included snippets indicative of high social class, such as membership on the college sailing team or a personal interest in polo and classical music, whereas a control group of resumes indicated membership on the college track team and a personal interest in soccer and country music. I am not aware of anybody accusing Tilcsik and Rivera of behaving unethically. If they are allowed to send out fictitious job applications, I don’t see why other researchers shouldn’t be allowed to send out fictitious grad-school applications. Heck, maybe I should go run this audit study.}

http://www-2.rotman.utoronto.ca/facbios/file/tilcsikajs.pdf
http://www-2.rotman.utoronto.ca/facbios/file/Class%20Advantage%20Commitment%20Penalty.pdf

My epistemological question would then be: what if somebody ran such an audit study and discovered that the effect of the name of the undergrad program upon grad school admissions is actually quite large? Would we be willing to accept that result?

Which is the prototypical and perennial adcom non-answer. The question of whether a B in an AP course is better than an A in a college prep course is a perfectly fair and legitimate question that deserves a legitimate answer rather than a flippant non-answer. Indeed, frankly, it seems to me that a major reason why a lot of kids don’t trust adcoms is because they pointedly persist in responding with non-answers to legitimate questions.

Perhaps an audit study would actually provide a legitimate answer to this question? Hmmm.

I disagree that it is a non-answer or flippant. The point, at least for highly selective colleges, is that your app will be weighed against the others in the app pool: hundreds, if not thousands, of apps from students who have taken numerous APs and earned an A and a 5. So, that is the competition.

Sure, the adcom could provide some truth-telling and say: hey, unless you are hooked, you better take the AP, and the vast majority of unhooked applicants that we accept will have an A/5. But, obviously, that would be bad for bidness, particularly since there are no hard and fast rules. The transcript is just two elements (out of 6+) in the admission decision. Thus, applicants with an AP-B/3 (or a CP-A) are admitted to an Ivy every year. The odds are just a whole lot lower absent some other compelling items in the folder.

For what it’s worth, when I was a Master’s degree student at Stanford, the other students in the same program were from universities all over the place. I recall students from multiple state universities, one from the UK, a few from Latin America (I played squash with one of them and was, well, squashed), and a few from Asia. I don’t recall there being any two students from the same undergraduate school (except maybe Rutgers), but I of course did not check on all of the students in the program (I didn’t know all of them). I am pretty sure that there was at least one student from UNC, one or two from Rutgers, one from Michigan (my girlfriend, I am sure about this one), and one (me) that had done undergrad at MIT. I don’t recall there being anyone who had done undergrad at any Ivy League school.

The students had pretty much two things in common: they were smart and serious students, and they had done very well as undergrads wherever they had been.

To me this supports the claim that the national rank of the program that the student attended doesn’t matter much, although the ones who had done undergrad in the US were probably all from top 200 schools.

I’m afraid that I must disagree with your disagreement: it is indeed a flippant non-answer. While I’m well aware of your point that the competition pool is strong at the top schools, that doesn’t take away from the point that if a question asks whether option (A) or (B) is better, it is a flippant non-answer to insist upon choice (C).

To give you an analogy: Let’s say that I have the following math question on an exam:

Which of the following numbers is composite: (A) 611 or (B) 613

If I were to respond with (C) 612 (which is indeed composite), I would have that question marked wrong, because my response is actually a non-answer, for it is not a member of the available choice set as written. We do not allow exam-takers to augment their answer choice sets whenever they find the original question to be too difficult. So why do we allow adcom members to do so? It is precisely that sort of behavior that generates cynicism amongst our youth: we require that they answer difficult questions on exams all the time, but then when they pose a difficult question to us, we pointedly refuse to answer.

(By the way, the correct answer is (A): 611 = 13 × 47.)
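For anybody who wants to verify such answers, a few lines of trial division suffice; this little helper is just an illustration:

```python
def smallest_factor(n: int):
    """Return the smallest nontrivial factor of n, or None if n is prime."""
    d = 2
    while d * d <= n:  # a composite n must have a factor <= sqrt(n)
        if n % d == 0:
            return d
        d += 1
    return None

print(smallest_factor(611))  # 13 -> 611 = 13 * 47, so 611 is composite
print(smallest_factor(613))  # None -> 613 is prime
```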

There may indeed be no hard and fast rules, but there are almost certainly statistical differences. Indeed, such statistical differences are exactly what Tilcsik and Rivera relied upon to discover the differential hiring behavior of elite law firms. For example, one would think that participation in a college LGBT club vs. participation in a generic college left-wing politics club, or the interplay between one’s gender and one’s interest in polo vs. soccer, would be only a tiny element in determining who is hired…but those researchers nevertheless found a statistically significant difference. The upshot of their research is that job applicants probably should not disclose that they are members of the LGBT club, and similarly that women who are interested in polo should not disclose that fact (but men who are interested in polo should disclose that fact).

This is a topic that seems ripe for an audit study.

Actually, I’m not sure that your evidence supports your claim that the national rank of the program doesn’t matter. Of the international students you discussed, how many of them came from bottom-ranked schools in their respective countries? It’s not clear from what you said, but I would venture that it would be few. Furthermore, you say that the Americans all came from top-200 schools. Given that there are nearly 2500 4-year colleges in the US, if anything, that evidence seems to indicate that national ranking matters quite a bit. If students were uniformly distributed amongst all 2500 schools (such that you indeed had fellow students from schools ranked below #2000), that would support your claim.
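A quick back-of-envelope calculation shows just how improbable that concentration would be under a uniform draw. The cohort size of 30 is my own guess, purely for illustration; the 200 and 2500 figures are from above:

```python
# If undergrad origins were drawn uniformly from ~2500 schools, the chance
# that every one of a 30-student cohort came from the top 200 is tiny.
p_top_200 = 200 / 2500
cohort = 30  # assumed cohort size, invented for illustration
print(f"P(all {cohort} from the top 200, uniform draw) = "
      f"{p_top_200 ** cohort:.1e}")  # ~1e-33
```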

@peterquill, @DadTwoGirls is referring to engineering. There are only about 500 ABET-accredited engineering programs in the nation across all specialties, including technology programs. Confining that to ME, one of the most broadly distributed specialties…314. So the analogy of top-200 schools feeding Stanford, as confirmation that prestige doesn’t really play a role, is apropos.

@eyemgh, I’m not sure about that. It seems to me that the assumption you’re making is that the ~500 ABET-accredited programs represent the overall top 500 schools in the nation. This is surely not so.

But in any case, I’ll allow DadTwoGirls to clarify his statement.