“Sure it’s possible the authors are lying or don’t know how to do a regression control for major, but I certainly wouldn’t assume this”
Great, but most studies would show this data as it is critical to gauging the results. Where is it? This study starts with an opening salvo full of anecdotal shock value which feels more like a Gladwell effort than a standard academic paper, and then chooses to withhold critical data.
They make claims that MIT withers the confidence of women, but fail to establish a control group. Why? Where’s the rigor?
This was part of a student initiative to arrive at recommendations for change. I don’t think it merits use as an academic reference, as is being done here, at least in its present form.
“Women have a higher average GPA in high school”.
Great, but that’s not what the paper is about. If we want to discuss that, it would be more interesting to look at the actual distribution of GPA performance (e.g., show the full distributions, not merely trend lines of mean values). I don’t recall where I saw it, but there are claims of a bimodal distribution in boys’ academic performance in HS: the lower lobe would struggle in HS while the upper lobe would go on to college.
“MIT claims to norm by major. UCSD is simply “engineering” and F are highly overrepresented in Bioengineering. OSU has the breakdown you show. Obviously without having more detailed breakdown it’s impossible to know specific effect of major, but there would have to be a very significant and consistent difference in grading in majors within the same school to account for UCSD and OSU. And even then, it would unlikely to be enough to reflect the 30 difference”
Again, the best that can be said re: MIT is that, based on what they show in the paper, they haven’t substantiated their claim. Based on the college GPAs I’ve seen, Bioengineering tends to be a high-GPA major.
Does this mean that the papers are wrong? No! But it does mean that they are dancing around details that would add clarity to the situation, and their conclusions are hence muddy and incomplete. And I disagree that the sum of more difficult grading and a 30 pt difference can’t account for the GPA results seen.
Uh… ok. You disagree that the “sum of more difficult grading and 30 pt difference can’t account for GPA results seen,” so I expect you’ll show your work…
Your claim now is that:
The persistent difference in GPA at UCSD “could” be due to easier grading in Bioengineering than in the rest of the engineering subjects. (I didn’t go back and check the % numbers, but my guess is that, as a share of overall engineering enrollment, the number of F in Bioengineering would not be enough to cause this swing unless the grade inflation were shockingly obvious. And given that some in Bioengineering at UCSD are pre-med, I’m a bit dubious there is that level of grade inflation, but I’ll go do the math later when I have some time.)
As to MIT, your claim is that the persistently higher GPA for F is possibly due to, I guess, “easier majors,” even though the authors claim to have accounted for major and you have no evidence of grade inflation in the selected majors.
And your claim as far as OSU is also “easier majors” in engineering (and architecture, I’ll note.)
Ok, but still, we have the SAT claiming only a FYGPA boost (and I’ll also note that it would not be hard for the SAT to extract the ~15% of low-GPA/high-SAT students and see what their FYGPA is, as that is really the class of student most affected by the SAT).
So at worst I think we can conclude that F do “not significantly worse” on GPA (and you’re not addressing grad rates either, fwiw) and do as well if not better on graduation rates, despite their persistently ~30 point “lesser” Math SAT scores, on average…
@CaliDad2020 Pretty much any college is free to put into place any or all of the policies that you suggest. That any of them do not means they do not believe it’s in their best interests to do so. You can say you don’t think it’s worth it for them, but they have more info than you do. Ohio State has much more detail/background behind the numbers in its annual report than you do. Maybe at some point they will go test-optional. Maybe they won’t.
“Ok, but still, we have SAT only claiming FYGPA boost (and I’ll also note that it would not be hard for SAT to extract the ~15% of low GPA/high SAT students and see what their FYGPA is - as that is really the class of student most affected by SAT.”
So MIT’s FY is pass/no credit, not pass/fail, meaning that if you fail a class it won’t show up on the transcript. You can also take a lot of classes at MIT this way, so if you want to establish that females have a higher GPA than males at MIT, you have to dig into the classes they took for a grade. What this means is that if a female got a D in a class there, it would not show up; if a male got a B, it would show up and lower his average, since I think the averages for both are above a 3. (Actually MIT has a 5 pt scale, but the idea still holds.) Also, it might be that females do a better job of managing their GPAs than males, which could be a good skill to have. If a female knows she’s headed for a C after the first test, she could drop the class or change to pass/no-credit. A male might think he’s going to get As on the next two tests and end with a B, stick with the class, and get a C instead.
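The selective-withdrawal point can be illustrated with a toy simulation (all numbers are invented for illustration and are not MIT data): if one group is more likely to convert a below-B class to pass/no-record, its observed GPA rises even when both groups draw grades from an identical distribution.

```python
import random

# Toy illustration of the pass/no-record point above. All numbers are
# invented; this is NOT MIT data. Both groups draw grades from the SAME
# distribution (on a 4.0 scale for simplicity), but group B converts any
# class headed for a C or worse to pass/no-record 80% of the time, while
# group A does so only 20% of the time.

random.seed(0)
GRADES = [4.0, 3.0, 2.0, 1.0]
WEIGHTS = [0.45, 0.35, 0.15, 0.05]

def observed_gpa(drop_rate, n=100_000):
    """Average of the grades that actually reach the transcript."""
    kept = []
    for _ in range(n):
        g = random.choices(GRADES, WEIGHTS)[0]
        # Grades below 3.0 may be taken pass/no-record and never appear.
        if g < 3.0 and random.random() < drop_rate:
            continue
        kept.append(g)
    return sum(kept) / len(kept)

gpa_a = observed_gpa(drop_rate=0.2)   # rarely drops weak classes
gpa_b = observed_gpa(drop_rate=0.8)   # usually drops weak classes

print(f"Group A observed GPA: {gpa_a:.2f}")
print(f"Group B observed GPA: {gpa_b:.2f}")  # higher, despite identical underlying grades
```

With these made-up weights, the group that manages its transcript more aggressively shows a noticeably higher observed GPA even though nothing about underlying performance differs, which is exactly why transcript-GPA comparisons need the censoring behavior accounted for.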
Also, if the males are taking more rigorous math courses their first year, that would really throw off any conclusions you can make about the average GPA of males and females. Taking Calc AB the first semester and then the second half of BC the second semester is different than taking differential equations and linear algebra or discrete math. If that’s the typical case, then the MIT GPAs are pretty bogus.
There is definitely a difference between engineering majors, with aeronautical and nuclear being the most difficult, so again you have to dig into the details behind the averages.
As others have said or hinted at, the MIT article is a fluff piece and shouldn’t be taken at face value.
“Also if the males are taking more rigorous math courses their first year that would really throw out any conclusions you can make on average gpa of males and females.”
We know this is the case; we do not have to speculate. I already posted the stats, but to recap: the Appendix (p. 44) to the report shows that 9% of males took the hardest math sequence (Group 3), versus 1% of females; 28% of males took the intermediate (Group 2), versus 16% of females; but 83% of females took the easiest (Group 1), versus 63% of males. Also, despite this differential placement, 8% of the females thought their math placement was “too difficult,” versus 5% of the males; 16% of both males and females thought their placement was “too easy.”
The Putnam results (team composition, Putnam Fellows, and Honorable Mentions) show 0 females at MIT at this rarefied level over the last 14 years (I think; there is a chance I missed one or two), versus dozens of males. Many of these truly exceptional students will have started their mathematics work at MIT at a difficulty level that is literally not captured by the survey.
There is a lot of interesting information that can be mined by looking at the survey results (just concentrate on the significant differences because there are a lot of items). I have zero doubt that females are being granted at least moderate preferences in admissions at MIT. Ensuring that more females are admitted and succeed at MIT is obviously an institutional priority, and keep in mind there is always a subjective component to any grading system.
The model used in the referenced testing did not use the combined verbal and math SAT scores because the variance in verbal test scores did not play a statistically significant role in explaining the variance in the college GPA of engineering/science majors. In this case, the sample was 90% male with no breakdown of the male/female scores.
Please help me here: when making decisions in college admissions, what is the role of standardized test scores? What is the point in debating the number of questions behind a score difference if the scores do not have a demonstrably significant relationship with performance in college? This becomes a case of not seeing the forest because the observer is lost in the trees. Eighty points was used in my example only because it fell within the 600 to 750 math SAT score range of the testing sample.
"Or, perhpas, colleges are able to look at core subjects, level of grades, trend in grades, LOR, ECs, and determine: Well, here’s a 4 year 3 sport athlete with 4 consisent 4 hrs. a week of genuine volunteer work… we can assume some of this lower GPA is due to a full and rigorous schedule. "
Because it largely doesn’t work that way. My son made the mistake of sliding into freshman year like he was still in middle school, only the game had changed: HS was more rigorous and required more time. He spent sophomore, junior, and now first-semester senior year trying to make up for a weaker freshman-year GPA. His ACT is strong and reflects his ability, but his cumulative GPA is a little lower because when he was 14 he wasn’t as mature as he should have been. A couple of schools like what they see (merit money based on GPA and ACT) and understand the demands of varsity athletes and AP students, but two others took a pass. One of the two that took a pass is ranked 30 spots behind one of the ones that accepted him. They clearly didn’t care about his trajectory or his rigorous schedule. Thankfully, others do care about test scores. They help paint the total picture.
Some colleges consider grade trends for non-4.0 GPA students. Upward trend is generally favorable, while downward trend is generally unfavorable. Some colleges de-emphasize 9th grade grades/GPA with similar effect.
Whether standardized test scores have any correlation with students’ future college success is another discussion, as there are many other factors. The truth is, colleges need an objective measure, i.e., the SAT or ACT, to evaluate applicants. For example, there can be a big difference between a 4.0 GPA from a top-tier and a bottom-tier school.
Secondly, do you see a difference between an A and a B? I personally have seen many kids who received a B because they were less than 1% from the A cutoff. An 80-point (8 to 14 questions) difference on the new SAT translates into somewhere between 5% and 9% on a scale of 100. Especially now that the competition is fierce everywhere, if you don’t see that as a big difference, I will rest my case.
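For what it’s worth, the 5%–9% range quoted above is consistent with treating those 8 to 14 questions as a fraction of the redesigned SAT’s roughly 154 scored questions. The section breakdown here (52 reading + 44 writing + 58 math) and the decision to pool sections are my assumptions, and pooling ignores per-section scaling, so this is only an order-of-magnitude check:

```python
# Rough arithmetic behind the 5%-9% claim above.
# Assumption (mine, not the poster's): pool the redesigned SAT's scored
# questions across sections (52 reading + 44 writing + 58 math = 154) and
# express 8 and 14 questions as fractions of that total. This ignores
# per-section scaling, so treat it as a sanity check only.

total_questions = 52 + 44 + 58   # = 154
low = 8 / total_questions
high = 14 / total_questions

print(f"{low:.1%} to {high:.1%}")   # roughly 5% to 9%
```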
A lot of armchair quarterbacking here. And in large part based on 3 reports. Colleges have a lot more info than we have. Generally speaking when someone has more info than I do and a stronger vested interest in making the right decisions, I defer to them. Others differ.
Different colleges are different. They have different best interests. What works for one won’t necessarily work for all, though some on this site seem to struggle with that concept. Some colleges do X; I like X; therefore all colleges should do X. Seems to me you should go with the colleges that do it the way you like. It’s why there are different flavors of ice cream.
They explain their methods clearly. In the survey responses, a much higher percentage of women said yes to confidence-related questions; for example, responses differed by gender to the question, “I am a capable student, at least on an equal plane with others,” and there was a change in how women answered this question between freshman and senior year. They didn’t go into a lot of detail about GPAs in different majors because that wasn’t the point of the report.
If you don’t want to believe them when they say they controlled GPA for major, you are welcome not to, but it’s far from an isolated study. The pattern of women having a higher GPA than men occurs in numerous other studies. Another engineering example that lists explicit GPAs for different engineering majors is at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.668.5662&rep=rep1&type=pdf . The specific GPAs are below. Note that women’s GPA of 3.4 is above the combined GPA of all individual majors.
The point was that women having a higher average GPA than men is part of a pattern that occurs in all core HS subjects, as well as in college; it’s not an isolated fluke from one study. You also don’t seem to trust the college studies, since students have different majors, and/or you don’t believe the claims of controlling for major; at the HS level there is no major to control for. If you want to look at first-year GPA, before students choose a major, the same pattern occurs, with women getting higher average GPAs. We’ve been focusing on engineering, but the same pattern of women earning higher average GPAs appears in other majors as well.
I don’t think they are necessarily wrong (although I do think Princeton Pressure works to some extent, and, as I think you or someone else pointed out, as long as we/the students are willing to let them “outsource” their work to us, why should they change?). But 10% is a very small number to justify subjecting 1.6 million+ kids per year to hours of testing, test prep, stressing, etc.
Again, if it’s not all about the money, money…
Why don’t colleges allow students to self-report scores?
Why don’t the rest of the colleges, below even the 20% number, tell students explicitly that they don’t need tests unless they feel there is a compelling reason their GPA is artificially low and standardized testing would help show it?
Why don’t only the most competitive schools require it? Leave the average Common App kid to the single-file Common App that goes to the colleges he or she selects.
Again, it’s not a personal issue for me, but I do think it’s a big “make work” project and I’m glad some schools are pushing back.
It’s interesting to me that you think test-optional schools are doing so to cut off the money train or to relieve kids of the stress, hassle, etc. of taking the tests. If those schools determined it was in their own best interests to require testing, the next class of admits would be required to take the tests (the money train and the stress/hassle of the kids be damned).
Since UCSD #s are fairly easily available: 2013-14 numbers I randomly pulled up are nicely roundable -
Engineering UG
F = 222
M = 841
T = 1063
F GPA = 3.25
M GPA = 3.13
F 73.3% >3.0
M 64.5% >3.0
In 2017 UCSD conferred 91 ABET Bioengineering degrees. (Bioeng is capped, btw, meaning admission is highly competitive; I know of 3.8 UW / 750 math students who have been denied UCSD ABET Bioeng admission. There are 2 ABET programs with bioengineering.) The breakdown for Bioengineering is approx 32% F, 66% M, which is a very high F % for UCSD engineering but still far short of a number that could skew GPA in a meaningful way. There are no engineering programs at UCSD with majority F enrollment.
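Here is the promised back-of-envelope math, using only the numbers quoted above. The one assumption (mine, not from the report) is that Bioengineering enrollment by gender is roughly proportional to the 91 conferred degrees at the ~32%/66% split. The question: how much grade inflation would Bioengineering alone need to produce the observed 0.12 GPA gap?

```python
# Back-of-envelope check using the 2013-14 UCSD figures quoted above.
# Assumption (mine, not from the report): Bioengineering enrollment by
# gender is roughly proportional to the 91 degrees conferred, split
# ~32% F / ~66% M.

f_total, m_total = 222, 841          # engineering UG counts from the report
f_gpa, m_gpa = 3.25, 3.13            # observed average GPAs
bio_f, bio_m = 0.32 * 91, 0.66 * 91  # ~29 F, ~60 M in Bioengineering

# Share of each gender's engineering enrollment that sits in Bioengineering
share_f = bio_f / f_total            # roughly 13%
share_m = bio_m / m_total            # roughly 7%

# If Bioengineering grades ran `delta` points above the rest of engineering,
# the F-vs-M gap it would induce is delta * (share_f - share_m).
# Solve for the delta needed to explain the full observed gap:
observed_gap = f_gpa - m_gpa
delta_needed = observed_gap / (share_f - share_m)

print(f"F share in Bioeng: {share_f:.3f}, M share: {share_m:.3f}")
print(f"Grade inflation needed in Bioeng alone: {delta_needed:.2f} GPA points")
```

Under this assumption, Bioengineering would need to grade roughly two full GPA points easier than the rest of UCSD engineering to explain the gap by itself, which is not a plausible level of inflation.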
@ucbalumnus I understand that is what is supposed to happen when schools indicate they are taking a “holistic” approach. Having been through this with 3 kids, it’s not always the case. Some schools just make a hard line on GPA without even looking at how the student got there.
^^ It’s not a matter of believing or not; it’s a matter of assessing whether the data thoroughly and clearly supports the thesis. It is not clear, hence I do not know what to do with the results. And while you keep noting their methods are stated clearly, their data on GPA by major is not shown at all. (This is no different from a historian asking for original documents with which to judge the conclusions. Why must we rely purely on trust? This is research, not marriage.) Many of us read studies and reports like this in many fields all the time. This one does not cut the mustard, but here somehow it’s raised to gospel levels of credibility. And that’s the problem: not that we’re discussing an incomplete research report, but that we elevate its importance beyond its due.
To add some more balance to the equation: a Caltech report on engineering and gender. Of specific note is the degree completion percentage by gender: 86-90% female, 92-94% male.
Does this mean that men are better engineering students than women? In my book, NO. But people are arguing the opposite on the basis of research that is just as incomplete.
While I admire your intrepid and well-supported posts regarding test scores, intelligence and outcomes, I think you missed the boat on this comment…
Implicit in this statement is the notion that, if MIT were somehow blind to the sex of applicants, they would admit a class with a greater male/female imbalance (which is 54%/46% for the class of 2021). I’m not sure there is any evidence to suggest this is the case. After all, MIT doesn’t simply accept those with the highest mathematical aptitude. Rather, they accept those they think will contribute to a strong MIT class and be successful at MIT and thereafter. Those are not at all the same thing.