@Data10. Your comment about your A+ brings to mind the only time I ever gave a student an A+ on a paper. (It wasn’t an option on the grade report or the official transcript.) After I handed back the papers, a student asked about the grade distribution. I described it, saying that one student received an A+. “What does an A+ mean?” the student asked. “It means,” I replied, “The paper is better than I could have written.” I recall the look on the students’ faces. They were impressed.
@Data10 That is not how we understand cognitive ability. The most accepted theory today is a three-tier pyramid with g at the top, broad aptitudes in the middle, and narrow, knowledge-based skills at the bottom. These attributes are all positively correlated.
Here is a graph showing how various majors perform on the GRE:
http://2.bp.blogspot.com/_otfwl2zc6Qc/SQ9Gpp-yuXI/AAAAAAAAHnc/nIEJ3uK0TkQ/s1600-h/gre.bmp
Notice that humanities students do better on Verbal and science majors do better on Quantitative? Nothing surprising there. I am much more interested in the Analytical section because I think it is a “purer” test of reasoning ability (it is heavy on neither verbal nor quantitative content). STEM students are clearly ahead in that regard.
Out of curiosity, I decided to look specifically at verbal reasoning. If your claim is correct, and the “pyramid” theory is wrong, I should expect the humanities majors to dominate. Here is what I found with LSAT scores:
http://www.people.vcu.edu/~emillner/Economics/lsat.htm
I tried to look for confirmation with the Miller Analogies Test, but I could not find any data broken down by major. Monson and Nelson’s paper, however, suggests the rank order holds for the GMAT as well.
Without looking any further, I can see why Bock claimed the young woman is making a serious mistake in switching out of electrical and computer engineering (although I disagree with him). It also shows why Bain and BCG (the Munzi article) want to see that applicants have taken “really tough quantitative or analytical classes” and “have done very well in them”, even if they choose to major in history or literature. My feeling is that these elite employers know what they are talking about.
I think there are some schools that have created gut majors (usually sports something or other) just for their athletes.
My post stated, “Different students have different strengths. … One is often not a universal A student or a universal B student. Instead Zhou may be an A student in her major and a B student in EECS. I might have been the reverse and be an A student in EECS and a B student in public policy.”
Nothing you wrote seems to contradict my statements. If anything, your link suggests that different students have different strengths, as I wrote, and that students tend to gravitate towards majors that fit their strengths (and likely their interests). For example, students with similar verbal and quantitative scores tend to gravitate towards the humanities, while students with a notably higher quantitative score tend to gravitate towards STEM. That said, this effect may have as much or more to do with the skills learned and practiced during college as with any pre-college skill set. For example, a humanities major might not take any math courses during college; he won’t be in good practice for the related GRE questions, so his score may suffer. Similarly, an electrical engineering major might do a lot less reading and writing than a humanities major and might have had less practice developing his vocabulary during college, so his score may suffer.
Different employers are looking for employees with different skill sets, since different positions require different skill sets. I work at a tech company that hires some new grad engineers. For related engineering positions at my company, they are looking for candidates who have solid engineering skills and experience, and the interview questions place a strong focus on such engineering knowledge… If you instead look at employers hiring new grads for therapist-related positions (I realize they generally hire students with grad degrees instead of undergrad), they’d be more interested in candidates with a psychology background like Zhou’s, and would want candidates with strong interpersonal skills. I’d expect Zhou would have both a more appropriate background and better skills for such a position than I did at that age, even though I was more successful in EECS than Zhou. I’m not familiar with Bain and BCG, but if they are hiring for analyst-type positions, it is not surprising that they would want to see analytical skills.
Your point is that a mathematician and a farmer have different skillsets; comparing them is like comparing apples and oranges. My point is that while a mathematician can become a farmer, a farmer is unlikely to make it as a mathematician given even the best of opportunities. Can’t we both be right?
Instead of having lay people argue over something technical, how about letting an expert have the last word on this topic? Here is Jonathan Wai of the Duke University Talent Identification Program:
http://qz.com/334926/your-college-major-is-a-pretty-good-indication-of-how-smart-you-are/
In the world of programming or engineering, a clear demonstration of one’s competence is possible. There are, however, a lot more jobs where true, complete proof of skills is not possible. Often the work does not even allow one to quantify his/her individual contribution. If all that is required is a degree, any degree, then how do you separate the applicants?
Based on the responses I have seen on CC, the answer is probably college prestige and/or grades. I personally think your major is a better filter. Not only does it tell me about your reasoning ability, it tells me a lot about your conscientiousness as well. The best method of all is to use standardized testing. That way I would not be fooled by a hooked grad from the elites, or miss the history major who is mathematically brilliant.
As long as we are governed more by our biases, emotions, and egos than by reason, a standardized exit exam is the best way to minimize the impact of our human foibles. Here is an article on how in hiring, algorithms beat instinct:
http://hbr.org/2014/05/in-hiring-algorithms-beat-instinct
For those who find this stuff as interesting as I do, here is an excellent summary of what we already know, and where further research is suggested:
http://www.annualreviews.org/doi/full/10.1146/annurev-orgpsych-031413-091255
Canuckguy, it seems to me that filtering by individual scores/records would be better than filtering based on major (unless the job requires specific background knowledge or skills acquired in the major). For example, if you look at the 2014 SAT average V+M scores from the study you cited, the top of the group is the prospective mathematics/statistics majors at 574, followed by the physical scientists at 571.5. This probably means that weaker students are somewhat less likely to choose those majors, but there is a whole lot of room above the average in both fields. It would not take much for an individual psychology major to outscore them, even though the typical psych major does not. Similarly, the data that you cited on average LSAT scores seemed to reach a high somewhere around 157 (as I recall) for the college major with the highest average score. Since the LSAT goes up to 180 (unlike the current GRE), that’s not exactly an unbeatable score.
Obviously, if specific coursework background is needed, it makes sense to filter by major. For example, I would not suggest hiring the top mathematician for a position as a civil engineer.
My point was that different students have different strengths. Zhou struggled more in EECS than in P/PP, while I was the reverse and struggled more in rote memorization or subjective writing classes than in EECS. Our society generally praises tech fields more than the humanities, so you are more likely to see students who could succeed equally well in both fields choosing a field like CS or EE instead of humanities. However, this does not mean that there are not a huge number of exceptions. Having taken a lot of classes in electrical engineering and having worked in electrical engineering related positions, it’s obvious to me that there are a lot of engineers with poor interpersonal skills. Such engineers are likely to do quite poorly in sales positions, therapy positions, or much of anything that requires a lot of interaction with customers, even if they were successful in a “tough” major. Similarly, there are many people who go into the humanities out of legitimate interest in the humanities, rather than a desire for money or meeting societal expectations. Many of this group could be successful in tech fields, but choose not to.
Duke did a study that looked at which components of the application had the highest correlation with switching out of a “tough” engineering or natural science major at http://public.econ.duke.edu/~psarcidi/grades_4.0.pdf . A summary of the regression coefficients is below, listed in order from most to least correlated with switching out of a “tough” major.
Being Female - 0.18
HS Curriculum - 0.17
HS Grades - 0.09
Application Essay - 0.07
LORs - 0.063
Being a URM - 0.059
Test Scores - 0.057
Negative Personal Qualities - 0.006
Note that ability as measured by test scores was one of the least influential components in switching out of the “tough” major. Instead, things like how prepared the student was via their HS curriculum were far more influential. Women were much more likely to switch out of a tech major than men after controlling for comparable curriculum, grades, essays, etc. This likely indicates that many women are switching out of tech majors for reasons unrelated to ability… perhaps things like lack of interest in working in a tech field or feeling uncomfortable in classes full of males with few female role models. This suggests you are going to find plenty of women who could work in a tech field, but choose to work in a humanities or soft science field instead.
Your post mentioned a farmer being unlikely to cut it as a mathematician. My grandfather initially worked as a farmer; then, after coming into some money, he used some of it to attend medical school and became a doctor. He worked as a doctor in the same farming community where he previously lived, and continued farming while practicing medicine. Many pursue farming for reasons other than a lack of innate ability in higher paying fields, particularly among those who grew up around farming. Of course, one needs a stronger educational background to become a mathematician than to be a farmer, and the farmer is unlikely to have that background.
@Data10 - The paper you attached was very interesting. Thanks for the link. I’ve never seen a study like it before.
Unfortunately, you’ve misinterpreted the study’s results. You seem to be quoting from Table 13, specification 3, on page 26.
The problem is that all the variables are on different scales. For example, the SAT scores are out of 1600, but items like HS Grades and LOR are on a 0-5 scale, so of course the Test Scores coefficient comes out much lower, since it is multiplying something much bigger. You need to normalize for the different standard deviations of the inputs. A quick and dirty way to do this is to divide each estimate by its standard error (see the sketch after the list below). The results, sorted from most important to least important, are
Female -3.75
HS Curriculum 2.96
Test Scores 2.11
HS Grades 1.71
LOR 1.37
URM 0.83
Personal Qualities -0.12
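Here is a minimal sketch of that quick-and-dirty calculation in Python. The coefficient and standard-error values are illustrative placeholders, not the exact numbers from the paper’s tables; the point is only the mechanics of dividing each estimate by its standard error and sorting by magnitude.

```python
# Quick-and-dirty rescaling: divide each regression coefficient by its
# standard error (i.e., form t-ratios) and sort by absolute magnitude.
# The numbers below are illustrative placeholders, not the exact values
# from the Duke paper's tables.

coefficients = {
    "Female": -0.18,
    "HS Curriculum": 0.17,
    "Test Scores": 0.057,
    "HS Grades": 0.09,
    "LOR": 0.063,
    "URM": 0.059,
    "Personal Qualities": -0.006,
}
standard_errors = {  # hypothetical standard errors, for illustration only
    "Female": 0.048,
    "HS Curriculum": 0.057,
    "Test Scores": 0.027,
    "HS Grades": 0.053,
    "LOR": 0.046,
    "URM": 0.071,
    "Personal Qualities": 0.050,
}

t_ratios = {name: coefficients[name] / standard_errors[name] for name in coefficients}

for name, t in sorted(t_ratios.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:<20s} {t:6.2f}")
```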
So you’re right that, all other things being equal, women are switching out of STEM majors; they have an 18% higher chance, which is pretty big. But curriculum rigor and SAT scores (as well as grades) were pretty important predictors of persistence in “hard” majors as well.
In fact, since rigor, grades, and SAT scores are all highly correlated with each other, it might be fairer to lump them into one aggregate “academic measure”. Then you could compare that academic measure to things like Letters of Recommendation and Personal Qualities (which is negatively correlated with persistence!). If you do that, this one academic measure becomes the single most important predictor of persistence in “hard” majors.
This is not a surprise. C+ high school students don’t typically become A students at Duke. If rigor, grades, and scores weren’t important predictors of academic success, then admissions officers should switch to using dartboards.
In fact, one of the whole points of the paper is that just looking at naive GPA, without controlling for grading rigor, can be misleading. I think it’s fair to say that if you want to judge newly graduating interviewees on merit, then in addition to looking at the grading rigor of one major vs. another you should also look at the grading rigor of one college vs. another. But this all depends on the type of jobs you are trying to fill and how important academic knowledge is to success. There are lots of great jobs like sales where academics simply aren’t that important. Most companies don’t need rooms full of nerds. But this paper doesn’t address that issue.
Many companies DO need rooms full of nerds: to develop currency hedging strategies; to use big data to identify who is taking your drug vs. a competitor’s, and what their outcomes have been, segmented by age, demographics, how ill they were when first diagnosed, and how many other treatments they were prescribed before using yours; or to figure out what your capital reserves need to be in order to comply with banking regulations AND not lose out on growth opportunities by being “recklessly conservative”.
But companies also need rooms full of people who are sensitive to design and visual arts, people who are gregarious extroverts with active listening skills, people who speak several critical languages, people who write well both for expert audiences and for the general public, etc.
Only on CC does one skill (good at math, or getting good grades in one type of subject) make for a golden ticket.
They mention normalizing the test scores to a 0-1 scale instead of the 1600 scale, so if some of the other application components are still on a 0-5 scale, then test scores are multiplying something much smaller, not something much bigger. So this effect would increase the printed regression coefficient for test scores, rather than decrease it. However, I see your point that one needs to consider, if SAT score increases the chance by 5.7%, how big a difference in SAT score that represents relative to a difference in course rigor, essays, etc.
@Data10 - They normalize the SAT to have standard deviation 1 (N(0,1) is a normal distribution with mean 0 and standard deviation 1, not a 0-1 scale). The standard deviations of the other variables are given in Table 1 and are all below 1, so the test scores are multiplying something bigger, not smaller. That’s why they’re showing up as significant at the 5% and 1% levels in Table 26 despite their lower magnitude.
The study doesn’t give us the statistics for the precise subsample that we need to fully correct for all this. But the calculation I did in the previous post should be a quick and dirty estimate of the effect sizes relative to a normalized one standard deviation change in the variables.
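For anyone who wants to see concretely what the N(0,1) scaling does, here is a small sketch with simulated numbers (not the study’s data): the same relationship fit against raw 1600-scale scores and against standardized scores gives a tiny per-point slope in the first case and a per-standard-deviation slope in the second.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated SAT totals on the 400-1600 scale (illustrative, not Duke data).
sat_raw = rng.normal(1400, 80, size=5000)

# N(0,1) standardization: subtract the mean, divide by the standard deviation.
sat_std = (sat_raw - sat_raw.mean()) / sat_raw.std()

# A toy outcome that depends on SAT; the relationship is identical, only the
# scaling of the predictor changes.
y = 0.002 * sat_raw + rng.normal(0, 0.5, size=5000)

slope_raw = np.polyfit(sat_raw, y, 1)[0]  # change in y per raw SAT point (tiny)
slope_std = np.polyfit(sat_std, y, 1)[0]  # change in y per standard deviation

print(slope_raw, slope_std, slope_std / slope_raw)  # ratio ≈ SD of the raw scores
```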
To be honest, the magnitude of the effect for the academic variables is probably even bigger than my last post would indicate. The reason is that we have a “restriction of range” problem, since Duke certainly selects on SAT scores, HS grades, etc. in its admissions process. I don’t think there is much of a selection effect on Female vs. Male, so the magnitude of the Female variable’s marginal effect is almost certainly overstated relative to the academic variables. But the study’s authors either overlooked this or ignored it because it didn’t help their point. A toy illustration of range restriction follows.
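Here is a minimal simulation of the restriction-of-range effect, with made-up numbers rather than anything from the study: the same score-outcome relationship looks much weaker once you only look at a subsample truncated on the score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulated applicant pool (illustrative only).
score = rng.normal(0, 1, size=n)                  # standardized test score
outcome = 0.5 * score + rng.normal(0, 1, size=n)  # toy persistence measure

# Correlation in the full pool vs. in an "admitted" subsample that is
# truncated on the score (here, roughly the top 20% of scorers).
full_r = np.corrcoef(score, outcome)[0, 1]
admitted = score > np.quantile(score, 0.8)
restricted_r = np.corrcoef(score[admitted], outcome[admitted])[0, 1]

print(f"full-range correlation:       {full_r:.2f}")
print(f"range-restricted correlation: {restricted_r:.2f}")  # noticeably smaller
```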
@QuantMech You are absolutely right. I think it is important to stress that we are talking about group averages and not individual aptitude. This is what statistics do. I should have done it earlier in my posts. Thanks for reminding me.
Mathematics in essence is abstract reasoning, problem solving, and pattern recognition. It is what psychologists call fluid intelligence, or critical thinking if you will. Learning math is learning to solve problems and to think creatively within logical and rational parameters. One can easily translate these crucial skills into other endeavours of one’s own choosing, and many have. That is the reason why disparate firms like Google, BCG, and Goldman all value such skills so highly.
Imho, what makes the “hard” subjects hard is the math that is embedded in those disciplines. The weaker students switch out of them because the material cannot be fudged. In the Duke study mentioned, the natural sciences, engineering, and economics were considered “more difficult, associated with higher study times, and are more harshly graded than their humanities and social science counterparts”. It is no coincidence that these are the same subjects that are associated with high standardized test scores as well.
@al2slmom I think the “behaviour” of the female variable can be explained by the fact that males have fatter tails at the right end of the normal distribution curve (actually, they are fatter at both ends):
http://professionals.collegeboard.com/profdownload/sat_percentile_ranks_2008_males_females_total_group_math.pdf
Some time ago I looked into the participants at the International Mathematical Olympiad for that year. Out of the top ten countries, only Japan, if my memory is correct, had a female participant. Remember that thread, @QuantMech? If the SAT data (the math section is too easy) were not so squeezed at the top, I am certain we would see the same pattern in sharper relief.
Please don’t shoot the messenger.
If the issue were that women tend to have lower test scores at the high end, then controlling for test scores should produce a large drop in the regression coefficient. The specific numbers for females leaving the “tough” majors with different controls are below:
Controlling only for ethnicity - 0.19
Controlling for ethnicity and test scores - 0.18
Controlling for ethnicity, test scores, grades, curriculum, and other application ratings - 0.18
Controlling all of the above + harshness of grading in individual Duke classes - 0.19
Test scores and the other additional controls did not have much effect on the coefficient for females leaving the “tough majors”. However, if you do the same comparison for African American students, the result is very different:
Controlling only for ethnicity - 0.26
Controlling for ethnicity and test scores - 0.12
Controlling for ethnicity, test scores, grades, curriculum, and other application ratings - 0.06
Controlling all of the above + harshness of grading in individual Duke classes - 0.02
This suggests the difference in average application stats and HS academic preparation between white and black students is closely tied to why black students are more likely to drop out of the “tough” majors at Duke, but the same is not true for female students. Instead, a good portion of female students are leaving for other reasons.
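To make the logic of this comparison concrete, here is a toy simulation (entirely made-up data, not the Duke sample) of the nested-specification approach: fit the same outcome with progressively more controls and watch whether the coefficient on each group indicator shrinks. In this toy world the Black indicator’s coefficient drops once preparation is controlled for, while the Female coefficient barely moves; the variable names are hypothetical stand-ins, not the study’s fields.

```python
# Toy simulation of the nested-controls comparison (all data simulated).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 20_000

female = rng.integers(0, 2, size=n)
black = rng.integers(0, 2, size=n)

# In this toy world, black students have lower average HS preparation,
# while female students do not; switching is driven by preparation plus an
# independent female-specific factor (interest, classroom climate, etc.).
hs_prep = rng.normal(0, 1, size=n) - 0.8 * black
test_score = 0.6 * hs_prep + rng.normal(0, 1, size=n)
switch = (0.2 * female - 0.3 * hs_prep + rng.normal(0, 1, size=n) > 0).astype(float)

def coefs(*controls):
    """Linear-probability fit of 'switch'; return (female, black) coefficients."""
    X = sm.add_constant(np.column_stack([female, black, *controls]))
    params = sm.OLS(switch, X).fit().params
    return params[1], params[2]

for label, ctrl in [("demographics only", ()),
                    ("+ test scores", (test_score,)),
                    ("+ scores and HS prep", (test_score, hs_prep))]:
    f, b = coefs(*ctrl)
    print(f"{label:<22s} female: {f:5.2f}   black: {b:5.2f}")
```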
Many have different theories about why this effect occurs. The Duke study linked another study as a reference on this issue, which found that women were notably more likely to stay in the “tough” majors if a good proportion of their professors were female than if their professors were male. This effect of professor gender was greatest among female students with high math SAT scores.
Several studies of female participation in the International Mathematical Olympiad have been reported in the Notices of the American Mathematical Society. These tend to suggest strong cultural influences on participation by young women. For instance, there are striking differences (pairwise) between East and West Germany (before reunification), the Czech Republic and Slovakia (after the split-up of Czechoslovakia), and Japan and Korea.
I agree with Canuckguy that the SAT math is too easy to show much of anything. However, it is interesting that the ratio of males to females in the 700+ range by age 12 has been dropping, from a value somewhere around 12-13 at the time of the first Stanley/Benbow study to something in the range of 3-4 now. I agree that at present in most countries the tails on the male performance distribution extend further out than the tails on the female performance distribution. The Republic of China (Taiwan) seems to be an exception at least in one study, from the AMS article.
I think that when the results are changing over time, as is the culture, it is premature to suggest that we are seeing biological differences, except perhaps insofar as hormonal differences affect one’s interaction with the culture as it now stands.
To add a personal comment: I am female, and I am not all that old. However, I know that at least two members of my STEM department were categorically opposed to hiring women, at the time that I was hired. I learned this later on, in one case directly from the senior faculty member, and in the other case from people who were at the faculty meetings, when I was hired. The cultural influences cast long shadows. As an undergrad, I had no female professors in any STEM field, and only a few in other areas (1 in history, 1 in philosophy + 2 TA’s in the introductory level of a language).
When the numbers stop changing over time, then they are worth interpreting.
@Data10 The numbers mean nothing. The study was never designed with male-female differences in mind. The SAT is too crude an instrument to utilize that way. Why, the researchers did not even separate the M from the V score, so if there were a difference between the sexes, the information would have been smoothed away.
You may find this study interesting though:
https://www.psychologytoday.com/files/attachments/56143/intellectual-outliers2012.pdf
@QuantMech The ratio has been very stable for over two decades now (p. 384). Is a ratio of 4:1 or 3:1 not enough to partially explain the behaviour of the Duke women? Is this the kind of stuff that got Larry Summers in hot water?
I think a more interesting question is what caused that “earth-shattering” drop from 13:1 to 4:1 in a decade. None of the reasons you gave can possibly do that; the drop is simply too big and too fast.
Since neither race nor gender politics holds that much interest for me, I will leave that to the rest of you. My interest is simple: how to judge a job candidate fairly and squarely. Outside of standardized testing, I know of no other way. The tests should be made harder, however, in order to do a better job of teasing out the differences in the right tail.
One last thing: “range restriction” goes a long way towards explaining Bock’s cavalier attitude towards hiring at Google. It is probably not wise for most companies to try to follow suit.
As I see it anyway.
Reality check from my D, who is a student at Carnegie Mellon and is home for spring break right now: GPA is extremely important for getting into grad school or getting a job, and the students at CMU are well aware that there is ZERO grade inflation and that they are competing against kids from schools that routinely grade-inflate. At a certain level, these kids feel that some companies don’t fully realize the difficulty of CMU and how a lower GPA may work against them when getting a job or applying to grad school. That’s why all these kids freak out about keeping their GPA as high as possible.
I don’t know of a single colleague in the recruiting community who is unaware of CMU’s grading policies and the tough curve. This cuts across tech, finance/banking, strategy consulting, consumer products, VC’s/Private Equity, aerospace/transportation, etc.
I can’t address grad school; that’s not what I do for a living. But corporate employers know what they are looking at when evaluating a transcript.
Not politics, it’s economics, I think. Since 75% of the population is women and minorities, gender and race come in. Encouraging an environment for them to do their best may be beneficial in the long run. You end up with a bigger pool of talent. If women/minorities are at least a third as strong as white males, the effort will pay off.
The Duke study focused on the reasons that African American students were leaving “tough” majors and applied those same analysis criteria to other races and genders, and it is valid to do so. You may choose to write off their analysis as not looking at scores in enough detail or not separating the top x% enough, but they also look at other academic criteria that are correlated with scores, like HS grades and HS curriculum. Test scores, along with HS grades and HS curriculum, all explained little about the higher rate of females switching out of “tough” majors. This all suggests that there is a notable non-academic/non-score component. They specifically discuss this effect in the study and give references to other studies for more reading, such as the one at http://www.econ.ucdavis.edu/faculty/scarrell/gender.pdf , which does separate by high SAT subscore. This Davis study found that there is a notable gender difference in the rate of dropping out of “tough” majors among students with high math SAT scores, and that having female professors greatly reduces that difference among this high-scoring group. A quote is below:
@blossom, glad you are aware of CMU’s grading policies. My D just said that in one of her higher level stats classes, her prof went off topic to speak to this, so it is a real thing.