I guess one way to define the best college for math is to count how many of the top math contest winners each college recruits, but education isn’t the NBA, and unless you happen to be a top math contest winner, I imagine other criteria matter as well.
“An emphasis on access and outcomes for students from lower income families will tend to push the prestige privates (especially LACs) down and many public universities up.”
One way or the other, and even if all of these ranking lists should probably carry a government-mandated “For entertainment purposes only” label, it’s still a good day to be the owner of a handle like “ucbalumnus”, isn’t it? : ) I know Berkeley was a terrific school forty-five years ago when I was looking around, and I’m sure it’s only gotten better.
This just begs the question.
One of the things about the ranking discussions that I’ve learned over the years here is that they just amount to the same people making the same arguments as if it’s a de novo discussion. You could look back at prior discussions of the various rankings (what is “elite” and whether it’s worth it as well) and see the same people making the same statements. No one’s views are changed. Like much of the internet, it’s just a time waster. You could expect more from a site dedicated to higher education, but I guess not.
“I also have personal knowledge of smart but poor minority kids that were admitted to high ranking universities for engineering or physics and almost all of them switched major by their junior year.”
Here is perhaps a “baseline” (not-very-POC*) data point for you: My entering Engineering class at Brown (1975) had 200 people. By junior year, that was down to 100 and then pretty much stayed there. So 50% attrition.
One thing I noticed was that classmates who hadn’t had at least a year of Calculus in HS especially struggled (fair to say, I think, that we all struggled at first no matter how stellar our HS careers). Though the curriculum was set up to delay Calculus in the Engin courses long enough for them to catch up, trying to get a handle on both subjects at the same time seemed to be what drove some to say “enough” and switch to something else.
* Brown 2025 clocks in at 48% POC. Sorry, I don’t have any accurate POC or URM numbers to give you for back then. I do know our engineering class was about 15% female. Maybe 5% Black (just guessing). Not much in the way of Asians. Mostly White.
Thanks for your insight. I’m also an engineer and struggled mightily along with everyone else my first year of school. This was in Europe, and while we had socioeconomic diversity we didn’t have any racial diversity. I don’t know what the dropout rate was, but it was probably under 5%. It was much, much more difficult to switch majors, however.
I’m swayed by data. Like this awesome collection of student outcomes from CMU for each college:
But with a #25 ranking, are they twice as good as #49 Purdue or Ohio State? Neither of the latter provides such a robust list of student outcomes.
CMU provides darn good outcomes. So does #28 UNC which has its own placement lists.
I know a couple URMs who graduated from MIT in engineering. They weren’t rich prior to college but siblings went to top schools so they knew the path.
As far as I can tell, U.S. News doesn’t understand the concept of a bar graph (e.g., see “Full Admissions Details” → “SATs on 1600 scale” across several schools).
Even if MIT consistently gets 3-4 such students per year, they make up only about 5% of MIT’s math majors. It is not like MIT’s math undergraduates are all IMO medalist level. Nor does it mean that the rest of MIT’s math majors are all better at math than the math majors at some other school.
These students are more likely to be among the best students in that major than other students at the same school, but aren’t necessarily so. There’s an overlap, however small, between the distributions of the medalists and non-medalists in terms of their successes in their field/major.
It’s difficult to measure attrition from a major by just looking at the number of graduates, without knowing how many people were considering that major upon entering the school. For example, Harvard’s freshman survey indicates 28.9% of the class of 2024 were interested in a sciences major. Harvard’s class of 2024 senior survey indicates that 25% completed a science major, suggesting that some changed their minds. It’s unclear what the demographic characteristics of those who switched majors are, but at many colleges, students from weaker HS backgrounds are less likely to persist in math-heavy majors, and those students are more likely to be lower SES.
That said, it is true that only a small portion of Harvard physics majors are Black. Harvard’s website indicates that ~5% of physics concentrators are Black. And IPEDS indicates that ~3% of physics grads are typically Black, compared to 8% of the overall student body being Black (using IPEDS racial definitions). There are few Black students at Harvard, and the few who are admitted seem less likely to be physics majors than average.
I believe Gladwell was talking about Dillard in the podcast, which has a 40% graduation rate, so many prospective physics majors do not graduate. The racial demographics of Dillard’s physics majors seem to have a lot of fluctuation from year to year… more so than Harvard. Some specific numbers are below. Again, it’s not clear how many started out interested in physics, so it’s not clear how many switched out of the major or why. I don’t think we can draw many conclusions.
2017 – 2 of 2 were Black
2018 – 1 of 4 was Black
2019 – 13 of 14 were Black
Rather than looking at absolute numbers like this, I’m more impressed with colleges that surpass expectations. For example, colleges that admit average kids, yet still have far above average outcomes. This can include higher than expected graduation rate, STEM persistence, job outcomes, etc. Essentially a college that makes a difference… one at which a particular student is likely to have a better outcome than elsewhere, rather than a college that admits kids who are likely to have great outcomes and does not surpass expectations based on the students admitted.
Many HBCUs do well by such metrics… with higher graduation rates and other performance measures than expected based on average stats, income, and demographics. I suspect Harvard also does well by this metric, with financial aid that makes the college near $0 cost to parents with below-average income, a 97-98% grad rate for groups like Pell and Black students, special programs and support measures to assist students from weaker backgrounds who want to pursue STEM majors, a median senior survey GPA of ~3.85 supporting grad/professional school admission, etc.
I’ve been told by “experts” here that this is all fake, biased data.
I was going to put that in the original post, but I held off.
Along with plenty of “I know a guy” anecdotes to “prove” something and “that data’s not perfect so I discount it” posts. Like clockwork.
But carry on.
100% agree with you on this point. I suspect that many colleges try to make a difference in all their students lives but it’s not clear that we can successfully rank the ones that are successful. On the other hand I think that blaming a big portion of society’s ills on our higher education system is misguided. It’s an interesting question to ponder and nowhere in the US News rankings do I expect to find an answer.
A very small portion of the USNWR ranking is based on “graduation rate performance”, meaning how the college’s graduation rates compare to those expected from the admission characteristics of its students. However, USNWR gives it less weight than raw graduation rate.
In theory, a college that somehow gets more than the expected percentage of students to graduation through better academic support, better advising, and better financial aid should be better in terms of treatment effects, rather than just riding on the selection (of students) effects that raw graduation rate reflects.
However, a college can also game graduation rate performance in a negative way by making the courses and requirements too easy, at least in some areas (e.g. scandals where lists of “easy courses for recruited athletes” were discovered at some colleges). Having fewer or easier general education requirements or an open curriculum can be another way to allow students to avoid their weak subjects, which may raise graduation rates at the margins.
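USNWR’s actual “graduation rate performance” model isn’t public, but the general idea described above can be sketched as a regression residual: predict each college’s graduation rate from its admissions inputs, then score the college by actual minus predicted. Here is a minimal sketch using made-up data and a single hypothetical admissions proxy (all college names and numbers are invented for illustration):

```python
# Hedged sketch of a "graduation rate performance" metric: regress graduation
# rate on an admissions proxy (here, a made-up test-score figure), then score
# each college by its residual (actual minus expected graduation rate).
# USNWR's real model is not public; this only illustrates the concept.

# (admissions_proxy, actual_grad_rate) for hypothetical colleges A-D
colleges = {
    "A": (1500, 0.97),
    "B": (1200, 0.85),
    "C": (1200, 0.70),
    "D": (1000, 0.65),
}

xs = [sat for sat, _ in colleges.values()]
ys = [rate for _, rate in colleges.values()]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares slope and intercept for one predictor
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# "Performance" = actual - expected; positive means the college graduates
# more students than its admissions inputs would predict.
performance = {
    name: rate - (intercept + slope * sat)
    for name, (sat, rate) in colleges.items()
}

for name, resid in sorted(performance.items(), key=lambda kv: -kv[1]):
    print(f"College {name}: {resid:+.3f}")
```

In this toy data, colleges B and C admit identical classes, but B graduates more of them, so B gets a positive score and C a negative one. It also shows the gaming concern above: anything that inflates the raw graduation rate, including watered-down requirements, inflates this residual too.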
Pretty sure Georgia is requiring test scores.
Williams, Amherst and Swarthmore are the top 3 yet again, and I think they have held the top 3 for over 15 years now? I believe Swarthmore was number 1 at one point?
Well, in a recent year it was closer to 15% IMO participants when you counted the international students.
To put this in perspective, if you took a handful of math students at random from MIT, you wouldn’t be the least bit surprised if one of them was an IMO participant. The same is not true at Michigan.
I didn’t suggest that at all. There is a range of talent at every school, and there is significant overlap between the math majors at Michigan and those at MIT. For that matter, I expect that the math students in a state flagship Honors College have significant talent overlap with students at MIT.
But re Michigan specifically, it used to be pretty common for math students to apply EA simultaneously to MIT and Michigan (Michigan used to respond before the end of December, making it a great EA option). While admissions for any one person is imprecise, we can expect that MIT got its admissions right in the aggregate, meaning that they picked what they thought were the best students they believed would fit at MIT. And if a student was fortunate enough to be accepted to both Michigan and MIT, the vast majority would choose MIT, leading to a talent separation between the two.
This seems like such a basic concept–that student bodies can overlap and yet be different–that I don’t understand the resistance to accepting this.
I think it is erroneous to assume that students accepted at both MIT and UM who chose MIT are the “best”. Certainly it might be possible to point out that they have slightly higher test scores. It may even be provable that a few more of those enrolling at MIT won competitive math awards. However, for that to make them the “best”, we also have to assume all math students peak at 18 years old, then progress linearly at the same rates. One of the benefits of college is that students improve themselves in all areas and at different paces - a student who never took part in prestigious math competitions might end up being the best math mind of his MIT graduating class.
Keep in mind that the original objective of the USNWR rankings is to identify the best colleges for HS students to aspire to attend. While I respect and admire MIT as much as anyone, I don’t think we should judge its worthiness mainly on how many supposedly top math students choose to enroll there as 18-year-olds.
The estimated “quality” of incoming freshmen probably has a place somewhere in the big picture of this calculation. However, it seems to me there are many other qualities much more important in deciding MIT is a better university than University X. Two things I deem hugely more important than the metrics of the incoming freshmen are the experiences/education during the 4 years, and the outcomes of the graduates. Not saying the qualifications of the incoming freshmen should be ignored completely, but that they matter less than other factors.
Of course. For that matter, an unheralded Michigan student could end up being the Tom Brady of the math world (seemed apt since he was an unheralded Michigan quarterback).
How do you separate the two? I’d argue that it’s impossible to measure the first for all students. For example, graduation rate means nothing to many academically stronger students, who will graduate in four years at any college, barring some unexpected financial hardship. On the other hand, graduation rate may be more relevant to more marginal students. Ranking colleges based on criteria/parameters that may or may not be meaningful is the crux of the problem.
In some ways, a college is “better” if it has stronger students and faculty. People are, after all, the determining factor in making something better. But that may only be relevant to a small subset of students.
Again, I’d argue that the relative incremental improvements between two colleges are impossible to measure for a generic student. Some colleges may generate better outcomes than some other colleges for some students, but not for others. That’s why a good fit between a college and a student is so important.