The Everlovin' Undergraduate-Level University Rankings

Universities that are the closest mathematical match to the Ivy League, ordered by SAT scores

http://www.collegeview.com/search/ff8a1

  1. Vanderbilt
  2. Stanford
  3. Northwestern
  4. Rice
  5. Duke
  6. Notre Dame
  7. Georgetown
  8. Georgia Tech (public)
  9. Southern California
  10. Michigan (public)
  11. Berkeley (public)
  12. William & Mary (public)

Not that long ago, Vanderbilt had about a 50% admit rate and SAT scores lower than even the publics on this list. Amazing what a few years can do. Northeastern, which is not on here, went from being ranked about 200 in USNews to about 35 or so in the same time period.

@Greymeer, is there a reason schools like Caltech and Chicago are not on the list?

By my reckoning, based on CDS/USNWR/other SAT-CR+SAT-M numbers that are a couple years old, a more comprehensive version of @Greymeer’s list would look something like this:

SAT COLLEGE
1545 … Cal Tech
1515 … Chicago
1505 … Harvard (highest Ivy League scores)
1505 … Yale
1505 … Princeton
1500 … MIT
1490 … Vanderbilt
1485 … Washington U
1485 … Columbia
1480 … Harvey Mudd (LAC)
1475 … Stanford
1470 … Northwestern
1460 … Pomona (LAC)
1460 … Dartmouth
1460 … Rice
1455 … Duke
1450 … Penn
1445 … Tufts
1440 … Swarthmore (LAC)
1440 … Amherst (LAC)
1435 … Brown
1435 … Williams (LAC)
1435 … Bowdoin (LAC)
1435 … Carnegie Mellon
1430 … Notre Dame
1430 … JHU
1430 … Carleton (LAC)
1420 … Cornell (lowest Ivy League scores)
1410 … Georgetown
… DROP… [with at least 8 colleges intervening]
1385 … GA Tech (public)
1385 … Middlebury (LAC)
1385 … Hamilton (LAC)
1380 … USC
1380 … Michigan (public)
1375 … UC Berkeley (public)
1375 … Case Western
1366.5 … Scripps (LAC)
1365 … W&M (public)

(Same order as @Greymeer’s, but with LACs as well as more universities included)
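
If anyone is curious how a composite like the ones above gets estimated, here is a minimal Python sketch of one common approach: take the midpoint of the combined CR+M 25th/75th percentile range from a school’s CDS. That midpoint method is an assumption on my part (not necessarily how the list above was built), and the percentile numbers in the example are invented:

    # Hypothetical sketch: estimate a CR+M composite from CDS-style
    # 25th/75th percentile section scores via the interquartile midpoint.
    def estimate_composite(cr_25, cr_75, m_25, m_75):
        low = cr_25 + m_25     # combined 25th percentile (CR + Math)
        high = cr_75 + m_75    # combined 75th percentile (CR + Math)
        return (low + high) / 2

    # Example with made-up percentile numbers (not real CDS data):
    print(estimate_composite(cr_25=700, cr_75=780, m_25=710, m_75=800))  # 1495.0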

This is a valuable list if you view it in context. For example, the difference in scores between MIT and Caltech is 45 points, yet no reasonable person would claim that Caltech is any better than MIT. The difference between Case Western and Cornell is the same, and people would SWEAR up and down that Cornell is a lot better than Case Western and they’d cite the scores as evidence. The difference between Middlebury and Bowdoin is approximately the same, but no reasonable person would say that one is tangibly better than the other.

I don’t think we can use test scores to judge academic quality at a school or, if we do, it’s only a very small part.

In terms of comparing schools, maybe a gap of 100+ points would mean that the school with higher scores has a very slightly smarter and/or more knowledgeable student body, depending on how strong we think the correlations are between test scores and intelligence and between test scores and knowledge.

It might tell us which schools are relatively more and less holistic.

But can we look at the ~50ish-point differences noted above and conclude that the school with the higher scores has better academics? Nope.

And @Mastadon, thanks for all that work on the NHS award study. That was interesting. @tk21769 and @Greymeer, thanks for the SAT info.

I usually just wait for the annual Business Insider list of the top 20ish schools, ordered by average SAT, to come out.

In compiling his list, Greymeer required that the school have a Division I football team. That’s why his list excludes schools like UChicago, MIT, Johns Hopkins, Caltech, and WashU St Louis, as well as all LACs. Of all the characteristics of an Ivy League school, I would think that its football team is probably the least significant (unless, of course, you are a football player).

@IzzoOne “Not that long ago, Vanderbilt had about 50% admit rate and SAT scores lower than even the publics on this list. Amazing what a few years can do.”

My understanding (from our guidance counselor) is that Vanderbilt and a few other schools made the decision to climb the rankings by, in essence, admitting almost every high-SAT-score applicant, regardless of things like academic awards, extracurriculars, essays, etc. The counselor directs her “high test scores but no hook” kids to look at about a half-dozen schools, one of which is Vandy.

They aren’t the only one, but Vandy seems to have become more test score-conscious recently.

@IzzoOne Curious: What are the other “high test scores but no hook” schools she recommends?

Stanford could have a 1600 average if they wanted to; in fact, I believe a few schools on the list say that the average SAT scores of the applicants they rejected are higher than those of the students they accepted. Any list with Vanderbilt higher than Stanford is flawed. I’m biased because I live near the campus, but in what major would Vanderbilt be better than Stanford, a hardcore humanities field like English? STEM is all Stanford, social sciences are all Stanford, and pre-med, pre-law, and anything business-related are Stanford. So literature, history? Does anyone on this board think that a Stanford history major is in a worse spot than a Vanderbilt history major?

@pantha33m I assume that was directed at me, not IzzoOne.

I’m not absolutely sure anymore, but I know she mentioned Vanderbilt and Wash U. I think the others were Tufts, Notre Dame, and USC. It does help to be Catholic for Notre Dame as well.

I realize what the “high test scores, no hook” schools have done (or more broadly schools that manage their metrics to improve in USNews). And I have to say it seems to have worked. Vanderbilt and Northeastern, for instance, are compared to a completely different level of schools now.

This article on Northeastern is fascinating: http://www.bostonmagazine.com/news/article/2014/08/26/how-northeastern-gamed-the-college-rankings/

So @prezbucky , going back to your original intent on this thread, I’m not sure how you can figure out which schools (other than LACs) are really putting wood behind the undergraduate arrow. I saw someone cite IPEDS data earlier. You need to take all of that with a grain of salt. Here is a quote from a guy who was Provost at Hopkins and President of the University of Florida:

“Universities often report a number that appears to indicate how much the university spends on instruction. We might believe that this number accurately represents teaching expenses and even do some analysis based on that belief. We would be wrong to do so.” – John V. Lombardi, in How Universities Work

There was a faculty time study at the University of California a while back that came out close to 50% research, 25% graduate instruction, and 25% undergraduate instruction. Given that there are more undergraduates than graduate students, undergraduates are going to get shortchanged from a time perspective at similar schools. In essence, they are subsidizing graduate study and research. You can see immediately how different this time allocation is likely to be from that of an LAC.

If I recall correctly, Cornell and Weill Cornell Medical are separate entries in IPEDS, and spending per student at Weill Cornell is on the order of 8-10X higher than at Cornell. That gives you some idea of how uneven spending can be. The same applies to endowments.

So the problem is university finances are a black box. I have my views for some colleges that I think are doing something right and unique at the undergraduate level. But it is difficult to prove much. Perhaps Niche faculty ratings have some value.

It is unfortunate that rankings almost never take into account quality of teaching, which is what I expect from a college to which I will hand over every dime I save. I think college rankings are useful insofar as they identify the top 50 colleges and universities that high-achieving applicants ought to consider. Beyond that, each student should make his/her own list ranked by criteria important to him/her.

I can tell you from personal experience that while Harvard has many distinct advantages, undergraduate teaching is not one of them. This is probably true of many of the large, research universities that dominate the rankings. If there is one skill I developed at Harvard, it is the ability to teach myself any subject. And I feel sorry for the ignorant recruiters who have gone gaga over the name on my resume, thus passing over more qualified candidates for certain jobs. C’est la vie. Any current students or alumni who wish to engage in a debate on this, spare yourself the effort. I know plenty of alums who agree with me 100%, so we will have to agree to disagree.

One thing I have definitely learned from my daughter’s recent college search: repeat visits and overnight stays are eye-openers, if you can afford to travel. My D has 30+ unique college visits under her belt because we thought it would be a worthwhile investment (and it was, in her case). We are convinced that stacked ranking by magazines and websites within the top 50 is entirely groundless.

Given that there’s no such thing as a perfect ranking system, I think the current undergrad ranking methodology by the USNWR looks reasonable. For a detailed explanation of each indicator, see: https://www.usnews.com/education/best-colleges/articles/how-us-news-calculated-the-rankings

Graduation and retention rates (22.5 percent): The higher the proportion of first-year students who return to campus for sophomore year and eventually graduate, the better a school is apt to be at offering the classes and services that students need to succeed. This measure has two components: six-year graduation rate (80 percent of the score) and first-year retention rate (20 percent). The graduation rate indicates the average proportion of a graduating class earning a degree in six years or less; we consider first-year student classes that started from fall 2006 through fall 2009. First-year retention indicates the average proportion of first-year students who entered the school in the fall 2011 through fall 2014 and returned the following fall.

Undergraduate academic reputation (22.5 percent): The U.S. News ranking formula gives weight to the opinions of those in a position to judge a school’s undergraduate academic excellence. The academic peer assessment survey allows top academics – presidents, provosts and deans of admissions – to account for intangibles at peer institutions, such as faculty dedication to teaching. To get another set of important opinions on National Universities and National Liberal Arts Colleges, U.S. News also surveyed 2,200 counselors at public high schools, each of which was a gold, silver or bronze medal winner in a recent edition of the U.S. News Best High Schools rankings. The counselors surveyed represent every state and the District of Columbia.

Faculty resources (20 percent): Research shows that the more satisfied students are about their contact with professors, the more they will learn and the more likely they are to graduate. U.S. News uses five factors from the 2015-2016 academic year to assess a school’s commitment to instruction. Class size is 40 percent of this measure. Schools receive the most credit in this index for their proportion of undergraduate classes with fewer than 20 students. Classes with 20-29 students score second highest; those with 30-39 students, third highest; and those with 40-49 students, fourth highest. Classes that have 50 or more students receive no credit. Faculty salary (35 percent) is the average faculty pay, plus benefits, during the 2014-2015 and 2015-2016 academic years, adjusted for regional differences in the cost of living using indexes from the consulting firm Runzheimer International. U.S. News also weighs the proportion of professors with the highest degree in their fields (15 percent), the student-faculty ratio (5 percent) and the proportion of faculty who are full time (5 percent).

Student selectivity (12.5 percent): A school’s academic atmosphere is determined in part by students’ abilities and ambitions. This measure has three components. U.S. News factors in the admissions test scores for all enrollees who took the critical reading and math portions of the SAT and the composite ACT score (65 percent of the selectivity score).

U.S. News also considers the proportion of enrolled first-year students at National Universities and National Liberal Arts Colleges who graduated in the top 10 percent of their high school classes or the proportion of enrolled first-year students at Regional Universities and Regional Colleges who graduated in the top quarter of their classes (25 percent).

The third component is the acceptance rate, or the ratio of students admitted to applicants (10 percent).

Financial resources (10 percent): Generous per-student spending indicates that a college can offer a wide variety of programs and services. U.S. News measures financial resources by using the average spending per student on instruction, research, student services and related educational expenditures in the 2014 and 2015 fiscal years. Spending on sports, dorms and hospitals doesn’t count.

Graduation rate performance (7.5 percent): This indicator of added value shows the effect of the college’s programs and policies on the graduation rate of students after controlling for spending and student characteristics, such as test scores and the proportion receiving Pell Grants. U.S. News measures the difference between a school’s six-year graduation rate for the class that entered in 2009 and the rate U.S. News had predicted for the class. If the school’s actual graduation rate for the 2009 entering class is higher than the rate U.S. News predicted for that same class, then the college is enhancing achievement, or overperforming. If a school’s actual graduation rate is lower than the U.S. News prediction, then it is underperforming.

Alumni giving rate (5 percent): This reflects the average percentage of living alumni with bachelor’s degrees who gave to their school during 2013-2014 and 2014-2015, which is an indirect measure of student satisfaction.

To arrive at a school’s rank, U.S. News first calculated the weighted sum of its standardized scores. The final scores were rescaled so that the top school in each category received a value of 100, and the other schools’ weighted scores were calculated as a proportion of that top score. Final scores were rounded to the nearest whole number and ranked in descending order. Schools that are tied appear in alphabetical order and are marked as tied on all ranking tables.
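
If it helps to see that last step mechanically, here is a rough Python sketch of the computation described above: weight each standardized indicator, sum, rescale so the top school gets 100, round, and list in descending order with ties shown alphabetically. The school names, indicator values, and the simple min-max standardization are stand-ins of my own for illustration; this is not USNWR’s actual data or exact formula:

    # Toy sketch of the scoring step: weighted sum of standardized indicators,
    # rescaled so the top school scores 100, rounded, and sorted descending.
    WEIGHTS = {
        "grad_retention": 0.225,
        "reputation": 0.225,
        "faculty_resources": 0.20,
        "selectivity": 0.125,
        "financial_resources": 0.10,
        "grad_rate_performance": 0.075,
        "alumni_giving": 0.05,
    }

    # Invented raw indicator values for three hypothetical schools.
    schools = {
        "College A": {"grad_retention": 97, "reputation": 4.8, "faculty_resources": 90,
                      "selectivity": 95, "financial_resources": 85,
                      "grad_rate_performance": 3, "alumni_giving": 35},
        "College B": {"grad_retention": 93, "reputation": 4.2, "faculty_resources": 80,
                      "selectivity": 88, "financial_resources": 70,
                      "grad_rate_performance": -1, "alumni_giving": 20},
        "College C": {"grad_retention": 90, "reputation": 4.0, "faculty_resources": 85,
                      "selectivity": 82, "financial_resources": 75,
                      "grad_rate_performance": 2, "alumni_giving": 25},
    }

    names = sorted(schools)            # alphabetical, so tied scores display alphabetically
    weighted = dict.fromkeys(names, 0.0)

    for indicator, weight in WEIGHTS.items():
        values = [schools[n][indicator] for n in names]
        lo, hi = min(values), max(values)
        for n, v in zip(names, values):
            standardized = (v - lo) / (hi - lo) if hi != lo else 0.0  # min-max stand-in
            weighted[n] += weight * standardized

    # Rescale so the top school scores 100; others are a proportion of that.
    top = max(weighted.values())
    final = {n: round(100 * s / top) for n, s in weighted.items()}

    for name, score in sorted(final.items(), key=lambda kv: (-kv[1], kv[0])):
        print(f"{score:>3}  {name}")

In this toy run the two lower schools happen to round to the same score, so they print alphabetically, mirroring how ties are handled above.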

@TiggerDad- Your explanations are convincing. I have not invested as much thought or analysis into this topic as you have. However, I would like to explain why I do not give credence to this ranking.

If an institution is to be judged by the caliber of its student body or by prestige, I can see how the rankings make sense to some degree. However, if that student body were transferred to another college that is better at teaching (hard to measure, of course, so hard to base rankings on), their outcomes might be even better. So is it the institution itself, or the students, that is at the heart of the institution? Moreover, some would argue that the highest priority of a college is academic education. You may say there is a lot more to a college education, and I would agree with you. But these fundamentally different views are cause enough to question the relative rankings of U. Chicago’s student body vs. Harvard’s student body. I would argue that U. Chicago’s students are in a more intellectually rigorous environment than Harvard students, who tend to be better-rounded. Who judges which one should be ranked higher? I also believe some of the quantitative data can be interpreted differently; I do not see how the data indicate that one institution is “better.” Also, subjective criteria based on intangibles are a bogus foundation for rankings - no, they are downright deceptive.

For example, graduation & retention rates (22.5 percent) demonstrate that students were able to take all the credits necessary to graduate within 4-6 years, and that they do not think the college sucks. That is certainly a good thing; I would not want to waste my money on a college where my kid cannot graduate on time or does not want to return. However, it does not tell me anything about the rigor of the coursework or the demands of the graduation requirements. How did the mandate to take a rigid core curriculum affect students at certain colleges, for example? Nor does it account for the fact that even a miserable student at Harvard will typically stay put simply because it is Harvard. (Sure, transferring out happens, but how often?) I know someone close to me who hated MIT to the point of being suicidal but decided to push through anyway because he did not want to disappoint his parents. How do you determine whether a college should be #1 or #10 based on these criteria?

Undergraduate academic reputation (22.5 percent): My favorite. It is like choosing a film based on the Tomatometer on Rotten Tomatoes. I have seen some sh***y movies based on critics’ reviews. What does this even mean? I would love to corner any president, dean, professor, or CLUELESS guidance counselor and ask how they came up with their favorites. Why do they see Haverford as superior to Bryn Mawr, for example? The subjective biases inherent in their responses would be interesting to elicit.

I would like to see the predictive model & methodology that USNWR, the self-proclaimed pundits of higher education, uses to come up with these great predictions.

Alumni giving rate (5 percent): This raises the question: why do alumni donate to their alma mater? Is it always satisfaction? There are alums who give to Harvard so that their legacy kids might get a leg up in admissions years down the line. They want their kids to go to Harvard for various reasons, such as prestige, not necessarily because they had an awesome undergraduate experience. It seems we, as a society, cannot get away from historical reputation when judging the quality of the education an institution provides TODAY.

Finally, many would disagree with you that USNWR adequately weighs measures of outcomes. I think Forbes’ rankings are better in this regard. What about colleges whose students do not go into high-paying professions but contribute to society in meaningful, sometimes significant ways? Washington Monthly’s rankings are supposed to reflect the notion that some colleges are better than others at producing such students. Whether or not either ranking methodology successfully accomplishes its respective goals is open to debate.

USNWR seems reasonable on the surface, but most if not all of the metrics can be manipulated, and have been, to move up in the rankings. Schools that used to be early action are now ED; schools spend a lot of money right at the time the survey is done so they can say they spent a lot; the money spent on education at a university does not all go to undergrads; and there is bias in the reputation survey - schools put down their rivals or vote on schools they know nothing about. Higher-ed people don’t like it - even presidents of universities that do well speak out against it.

It’s liked by the college admissions industry - professionals, advisors, etc. - and, I have to concede, by a lot of students and parents who love ordinal rankings. And as a fan of Letterman’s top ten lists, I can see the attraction. But it’s trash, it really is.

Maybe a takeaway is that rating things like teaching quality and outcomes correctly is easier said than done.

  • Teaching quality, because to be certain a group of people would have to sit in on classes at every school. And it would have to be a representative sample. So, what, 100 people, spending a day or two at each of the top 100 U's and 100 LACs. But then, what about the rest of the schools? Since this isn't feasible, I suppose we could take satisfaction surveys, though those could depend on which students responded (and which didn't...); or keep surveying the college officials. IF they took it seriously and only rated -- fairly -- the schools they're familiar with, that might be better than nothing. Maybe a combination of the surveys of students and officials would be... not perfect, because perfect isn't feasible, but better than what they have now.
  • Outcomes are problematic because students operate according to preference and because pay differs for the same job across regions/cities. Even if we could control for major and location, we'd still be dealing with kids who don't always take the highest-paying path they could -- preferences aren't always monetarily rational. Or they go to grad school instead of into the work force. Maybe we could assign an arbitrary earnings figure to grad-school choosers. Of course, that would fall short of validity...

These are just two of the potholes we discuss. Probably the best ranking is the one each kid comes up with based on fit and cost.

@ThankYouforHelp -

Tufts is actually known for being on the holistic end of the spectrum on admissions. They had four supplemental essays and dropped back to three. They encouraged submission of supplemental material and videos long before video became mainstream. Tufts was the first school to be accused of yield protection because they rejected high-SAT-score kids who were admitted to Ivy League schools but did not meet the holistic criteria. That is where the term “Tufts Syndrome” came from. They created the “Kaleidoscope Project” to formalize methods for measuring aspects of intelligence (such as creativity and emotional intelligence) not measured by standardized testing. The results from that project were presented in academic circles, and Stanford was one of the only schools that expressed interest in the results (Sternberg, the originator of the project, received his PhD from Stanford). Stanford now has essays that look a lot like Tufts’. Last year, Dartmouth created a new Vice Provost-level position to lure away Tufts’ Dean of Admissions (who was the one who implemented the project), so it will be interesting to see if their admissions process changes.

https://www.insidehighered.com/news/2010/09/28/sternberg
http://archive.fortune.com/2010/10/20/pf/college/standardized_tests_college_admissions.fortune/index.htm
http://www.collegeconfidential.com/admit/tufts-admission-videos-on-youtube/

Vanderbilt has published research papers claiming that SAT scores are very good predictors of adult success. I don’t agree with the research, but it suggests that the high priority they place on standardized test scores for admission may be motivated by something other than climbing the rankings.

https://news.vanderbilt.edu/2016/06/02/study-confirms-link-between-early-test-scores-and-adult-achievement/

Regarding outcomes, I found this nugget from several years back. Parental advisory: explicit language.

https://youtu.be/YDhf9qwiA34

I thought we could benefit from some entertainment. hehe