Blossom, you should talk to dfbdfb.
“Maybe students who are more focused on attending HYPSM… are also more likely to be focused on pursuing careers associated with higher incomes than other high achieving students.”
This was controlled for in the article posted in the OP.
“Or maybe students who prefer to stay in their immediate area for college are also more likely to want to stay in their immediate area after graduating, which more limits job options than students who are thinking nationally.”
I am not sure how this has anything to do with highly selective schools, which are spread all over the country.
“I believe the study linked in the OP only looks at students who have graduated with a specific major, so it does not consider things like students having differing chance of graduating or switching out of majors associated with higher incomes at different colleges, missing this type of effect.”
Don’t most kids graduate from college with one major, with only a small minority having more than one? The article in the OP found differences in wages within the same major.
At any rate, I have a more qualitative question. Why is there so much resistance to the idea that it may be true that some colleges prepare kids for a higher paying career than others, ceteris paribus? To me it’s surprising.
Because the evidence is murky, and it’s ultimately a question that should be answered from evidence, not gut feeling.
Some HYPSM… type colleges have acceptance rates as low as 5%, so only a tiny fraction of the people focused on attending HYPSM… type colleges can be identified by looking at those who graduated from HYPSM… Controlling for major is also not equivalent to controlling for a focus on pursuing higher-income careers. For example, a CS major who is focused on applying to elite colleges is probably also more likely to be the type who focuses on applying to companies he perceives as “elite” after graduating, such as Google, which has implications for salary.
As an example, consider students who grew up in Alabama. A large portion of high-achieving students will choose the University of Alabama, which is not an elite school, but is a good school that is likely to give a big scholarship to high achievers. I’d expect a much smaller portion of high-achieving students from Alabama to apply to Ivy-type elites. The kids who choose Alabama and don’t apply to Ivies are probably also more likely to prefer to work in Alabama, which has implications for salary.
You completely missed (or ignored) my point. Majors associated with higher salaries tend to have high attrition rates. For example, approximately half of students who start an engineering major switch to a different major, and this high drop-out rate occurs at highly selective colleges as well as at less selective ones. The study at http://public.econ.duke.edu/~psarcidi/grades_4.0.pdf found that the drop-out rate is correlated with HS academic achievement. The vast majority of planned engineering majors at Duke whose HS grades/course rigor/LORs/essays… were towards the lower end of the entering class ended up majoring in humanities or social sciences instead, while the vast majority of prospective engineering majors whose credentials were towards the top of the entering class graduated with the engineering major as planned. The author states that had the former group attended a less selective college than Duke, where they’d be better academically prepared relative to their peers, they’d have had a better chance of completing the engineering major, leading to higher career earnings at the less selective college. The study also found that women at Duke had a very high engineering drop-out rate compared to men that was not explained by academic preparation, and this female tech drop-out rate varies tremendously across colleges for a variety of reasons. My point was that the study in the OP won’t recognize such effects, since it looks at major after graduation, ignoring the chance of being in the half that doesn’t switch out of the engineering major, or of being in the ~58% (nationally) who graduate at all.
As I’ve said, what I resist is treating a brief news article about an unpublished study with unclear details as gospel instead of looking at the full picture. That full picture includes a variety of published, peer-reviewed studies, some of which come to very different conclusions depending on methodology and assumptions. I wouldn’t phrase it as “some colleges prepare kids for a higher paying career than others,” but college choice certainly does have an effect on career. What is more controversial is suggesting that the effect is largely driven by selectivity, which I’d disagree with.
I found it interesting that discussion of the article itself died down within about one page of this 50-page behemoth of a topic. In light of that, I felt it prudent to read the study itself rather than the article summarizing it. I’m going to say that I was not impressed with what I saw. The study had some rather glaring and low-level pitfalls that really do undermine its major premise and conclusions.
So to summarize, the study uses data from 11k college students, in 1993 at graduation and 10 years later in 2003, and creates a few model studies to try to explain correlations between standard variables people associate with predicting success (parental SES, SAT scores, race and gender, GPA, school of attendance, field of study, and so on).
The first bit of dubious dealing is that they note that only 7.7k of those 11k students are actually employed ten years later, but then barely mention it again. That seems like something worth explaining, and it leads into my major issue with the study: the opaqueness of their methodology and rationale. They basically just crunch numbers through a few statistical models and out pop results that nominally support their conclusions. Well, no dice, that’s not how statistics works. You can usually make the results you want pop out of a study if you try hard enough, but the real test of the validity of a statistical analysis is the rationale behind your choices and a meaningful interpretation of what your results actually prove. In fact, the only factors that are immediately obvious to have a substantial influence on earnings ten years later are… gender, graduate/professional education obtained after undergrad, and field of study. Well duh?
The second thing that really irks me about this study is the language they use, that seems to imply that the study authors believe themselves to have stumbled upon some profound truth. One that they aren’t able to adequately explain at that. My interpretation of the conclusion is something along the lines of “our work suggests something different than what everyone else has gotten, and we decline to elaborate any further on why that might be.” Not convinced.
And so, we come to the obvious conclusion that the effect of an elite school is not quite that easily quantified, and the result has been what followed: the same story-sharing and emotionally charged arguments that come up in any other thread about the benefit of elite schools. Same as always.
Because this is a “parents” forum and what’s at stake is potentially (a) a lot of money, and (b) our kids’ futures? So if you’re an upper middle class parent trying to balance college expenses against retirement savings, it’s reasonable to want a pretty strong justification before paying a big price premium for an elite college, especially if the alternative is a good state flagship at half the net price.
As others have pointed out, for many families the choice isn’t that stark because after aid the net costs may be much closer or even favor the elite college. There are many reasons to pay some price premium for a much more selective, higher-ranked school. Better earnings potential may be one of them, for some careers.
There’s a less personal reason to resist the idea. Elite, arts-and-science-focused colleges are in the knowledge business. They want and expect many alumni to choose careers in academia, the arts, or public service. They do have an interest in attracting some students with high earnings potential, in hopes they turn out to be big donors. They begin to worry when 20, 30, or 40 percent of their graduates start chasing high-paying careers in investment banking and business consulting (http://www.harvard.edu/president/speech/2008/2008-baccalaureate-service).
Still, some CC parents do believe a more selective college is worth a significant price premium mainly for the financial ROI. If good research supports that belief, fine, let’s see it. As far as I can tell there isn’t a very strong consensus either way.
“Because this is a “parents” forum and what’s at stake is potentially (a) a lot of money, and (b) our kids’ futures? So if you’re an upper middle class parent trying to balance college expenses against retirement savings, it’s reasonable to want a pretty strong justification before paying a big price premium for an elite college, especially if the alternative is a good state flagship at half the net price.”
Thank you. Finally!
So, to summarize, parents NEED to believe that there is no earnings boost from elite colleges and universities as that frees them from the guilt of choosing their own retirement over their kids’ future.
“You can usually make the results you want pop out of a study if you try hard enough, but the real test of the validity of a statistical analysis is the rationale behind your choices and a meaningful interpretation of what your results actually prove. In fact, the only factors that are immediately obvious to have a substantial influence on earnings ten years later are… gender, graduate/professional education obtained after undergrad, and field of study. Well duh?”
I thought the study showed that controlling for gender and field of study, elite grads still have an advantage. Point well taken about grad school education, though. A college degree is a dime a dozen these days.
This is why I feel an elite college education is beneficial. It’s really three things.
- Peer pressure. When everyone around you is a Type A person, you feel encouraged to do more as well. This includes applying to grad school.
- Reputation effects for the first job. It’s “uuuuuge” to quote Trump.
- Lifelong network effects.
Personally speaking, I know that these three things have been the main drivers behind my trajectory in life. Of course, at some point grad school took the place of college, and work place took the place of grad school, but it was always these three things - peer pressure, reputation of the last place I was at, and networking benefits.
@1Wife1Kid, if you claim that parents “need to believe” that they don’t need to pay extra for a high-prestige college education for their children to, apparently, assuage their guilt, well, don’t you realize that the same (really, really bad) logic can be used to say that parents on your side of the argument “need to believe” that they need to pay extra for a pointlessly overpriced education for their children?
dfbdfb, Absolutely. I was just trying to understand why this simple statistical exercise is so emotionally fraught. It cuts both ways.
PS: Speaking personally, financial reward aside, I would like my son to go to the most competitive place that he can get into. I believe in peer pressure.
The three factors I mentioned are ones that are immediately obvious from just briefly looking at the data. The correlation is notable and highly significant, but I’m sure we are all 0% surprised that gender, field of study, and graduate education are correlated with earnings.
As for the controlled benefit of elite education: nominally, yes, if you just take their word for it then that is what their results show. Though if you look into their models, they suffer from a mix of doctored statistics and tactical ambiguity on their methodology. As I mentioned, you can literally spin statistics any which way to support whatever conclusion you want, and what really matters is not the model, but the core assumptions and experimental design that justify doing the experiment as you did. That the authors either failed to think this through, or were tactically ambiguous about it, makes me extremely critical of their conclusion.
Statistics is pretty easy to lie with if you want. You always have to investigate the claims made in detail because you simply cannot take their word for it that their statistics are well-considered.
NeoD, My trouble is, you accuse them a lot, but don’t exactly back it up with anything concrete. Yes, statistics can be used to lie. But using that as a generic rebuttal of any statistical finding is weak, just as weak as saying that perhaps there was a variable that was not measured, and not particularly helpful.
So, what error did they make? Go ahead, use the statistical lingo if that helps explain. I will understand.
Here is what I found from them as far as the actual study, not just the article: https://www.gc.cuny.edu/CUNY_GC/media/LISCenter/Readings%20for%20workshop/Attewell.pdf
I explained my issue with them in full, because it’s really quite simple. They just simply don’t justify anything about their study. Why don’t they include the not-employed in their model, and what happened to them? What support do we have for thinking that this dataset is a good one? Why use Barron’s selectivity rankings for selectivity? What justifies the specific methods they decided to use, and the specific variables they decided to study? Why is it that this study contradicts the previous studies they cited on the matter? Nothing on all counts.
I mean, I could go into the statistics themselves, but to look at only the technical details is to miss the point entirely with regards to how you actually use them. The study itself is so poorly posed, as I have described, that we don’t even really need to go into detail on the other parts. It’s just a bad study that was made into an article that has the characteristic bad science of most such reporting, and as far as I’m concerned that’s all that needs to be said there.
The linked study was published in March 2015, so if it is the study from the article, then this is not new research. The goal of the study appears to be to look into how “class,” defined as parents’ income, influences future earnings. College selectivity is one of the many controls, rather than a focus of the study. The results seem a bit different from the article, such as a 3% difference in predicted earnings between the “most selective” control and the “very selective” control rather than the 8% listed in the article, but it did show that persons attending more selective colleges were predicted to have higher earnings in their model.
I think the key in this type of research is distinguishing between cause and correlation. For example, the control variable that had the most influence on career earnings in the model was gender. It’s highly unlikely that being female causes lower earnings, but it is almost certainly correlated with lower earnings. Reasons for that correlation might include things like women being more likely to enter fields associated with a lower salary, gender bias in the workplace for salary/promotions/…, and averaging fewer years worked. If they were investigating the cause of earnings differences for women, they might look into such questions and add related controls.
I’d ask similar questions for college selectivity. Does attending the “elite” college cause the increased earnings, or does it relate more to typical students at the “elite” college being different from typical students at less elite colleges in ways that aren’t encompassed by the controls? These controls have limited granularity, such as using the SAT quartile the student is in (across all 11k students), which is probably going to be the top quartile for nearly all students at “elite” colleges; or the choice of 4 major groupings the student completed (STEM, Business, Education, or Other), which does not capture things like CS majors having very different salary prospects than Biology majors (assuming no grad degree); or grouping all grad degrees together, when an MA and an MD likely have different salary prospects. The limitation of these controls is hinted at by the final table, in which they added controls for the niche in which students worked. With this niche-of-work control, the earnings disadvantage of being in the lowest “class” was cut by ~1/3. They didn’t specify how other controls in the model changed, like college selectivity, since those were not the focus of the study. However, I’d expect the decrease to be substantial, as it was for “class.” If they were investigating why grads of elite colleges earn more, they might look into such questions and add additional related controls.
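As a toy illustration of why such a control matters (all numbers here are hypothetical and invented for the sketch, not taken from the study): if “elite” grads simply sort into high-paying niches of work more often, the raw elite/non-elite earnings gap can be several times larger than the gap you see once you compare within the same niche.

```python
import random
from statistics import fmean

# Hypothetical simulation: elite grads pick a high-pay niche more often
# (60% vs 20%), but the within-niche "elite premium" is only $2k.
random.seed(0)

students = []
for _ in range(50_000):
    elite = random.random() < 0.2
    high_pay_niche = random.random() < (0.6 if elite else 0.2)
    base = 90_000 if high_pay_niche else 55_000
    earnings = base + (2_000 if elite else 0) + random.gauss(0, 8_000)
    students.append((elite, high_pay_niche, earnings))

def gap(pop):
    """Mean earnings of elite grads minus mean earnings of everyone else."""
    e = [s[2] for s in pop if s[0]]
    o = [s[2] for s in pop if not s[0]]
    return fmean(e) - fmean(o)

raw_gap = gap(students)  # no control for niche of work
within = fmean([gap([s for s in students if s[1] == niche])
                for niche in (True, False)])  # controlling for niche

print(f"raw elite gap:    ${raw_gap:,.0f}")
print(f"within-niche gap: ${within:,.0f}")
```

The raw gap comes out around $16k while the within-niche gap is near the built-in $2k, which is the same flavor of shrinkage the study showed for the lowest-“class” penalty once the niche-of-work control was added.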
I think the study adds a piece to the puzzle, but it is important to not view it in isolation or draw sweeping conclusions without considering the rest of the research on this topic.
The article I linked is, from what I can tell, the only related actual written copy of what they talked about at the conference. That the data and conclusions in the article seem to line up almost exactly with what that linked study talks about leads me to believe that they are one and the same, and that this was simply a conference presentation of somewhat older data. I would say that it’s likely that they used the conclusions in the study in the actual presentation, from which the article was made, but compressed them for the purpose of being more palatable as a presentation.
That the study didn’t really ask the tough questions and took many important things for granted, while also making sweeping claims, makes me very suspicious and I would venture to say that this study is some bad science.
Where is the link to the published article? I want to see how strong the relationships are, regardless of whether it’s correlation or causation, on a scale of 0-1, from no relation to nothing else matters.
I have relatives who went to top schools, got hired by the big techs, and make good money. But, I want to know how much the schools contributed to the outcome.
The discussed article is linked in the first post. The link to the 2015 study is in post #752. The scale in the study is based on how much each listed control influences the earnings prediction in their model. When using all controls, among the different selectivity groupings they found that “highly selective” colleges had the highest predicted earnings; this grouping includes colleges like University of Georgia, University of Florida, Texas A&M, and Clemson. “Most selective” colleges had 0.2% lower predicted earnings; this grouping includes colleges like Harvard, Stanford, and MIT. “Very selective” colleges had 3% lower predicted earnings, including colleges like Georgia State, Florida State, and Louisiana State. “Selective” colleges had 7% lower predicted earnings, including colleges like University of West Florida and University of West Alabama.
The college earnings-selectivity predictions varied with “class,” which is consistent with the other study I linked. Among parents who had incomes in the upper 1/3, there was a 1% difference in predicted earnings between “selective” colleges (University of West Florida/Alabama) and the “very selective” through “most selective” groupings. However, among parents who had incomes in the lower 1/3, there was a substantially larger 8% difference in predicted earnings.
Data10, Thanks for going into the details of the study. So, Most selective colleges have statistically significant lower earnings than highly selective colleges adjusted for all other factors like major, gender and socio-economic background? If true, this is huge.
Be careful of statistical significance! At a suitably large sample size, effectively any difference can be statistically significant. You have to look at effect sizes, which, at least according to my scan of the draft paper, Attewell and Witteveen don’t do (even though, as people who work in big-data data mining, they really ought to know better).
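To make that concrete with a toy simulation (hypothetical numbers, plain Python, assuming normally distributed earnings): give two huge groups a mean difference of just 0.2%, and the t statistic sails past the significance cutoff while the standardized effect size stays negligible.

```python
import random
from statistics import fmean

# Hypothetical data: group B earns only 0.2% more on average
# ($100 on a $50,000 mean), with the same $10,000 spread.
random.seed(42)
n = 1_000_000
a = [random.gauss(50_000, 10_000) for _ in range(n)]
b = [random.gauss(50_100, 10_000) for _ in range(n)]

mean_a, mean_b = fmean(a), fmean(b)
var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)

# Welch's t statistic for the difference in means: well past the ~1.96
# cutoff for p < 0.05, so the difference is "statistically significant"...
t = (mean_b - mean_a) / ((var_a / n + var_b / n) ** 0.5)
# ...yet the standardized effect size (Cohen's d) is minuscule.
d = (mean_b - mean_a) / (((var_a + var_b) / 2) ** 0.5)

print(f"t = {t:.1f}, Cohen's d = {d:.4f}")
```

Significance here only says the difference is unlikely to be zero, not that it is big enough to matter; that is the gap between a p-value and an effect size.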
I wouldn’t call a 0.2% difference in predicted earnings statistically significant with a sample of only ~200 students attending “most selective” colleges. The authors did not mark it as reaching the p<0.01 or p<0.05 significance levels.