New Ranking Based on Cross-admit Data

Using Parchment’s cross-admit data, I’ve re-sorted USNews’ top 20 colleges based on where students supposedly actually go. Note that there is a fairly substantial margin of error in Parchment’s data, so all of this should be taken with a grain of salt. Factors that inevitably skew this analysis include regional bias, Early Decision, and schools’ emphasis on demonstrated interest.

Which school students admitted to multiple colleges actually choose is an important metric. In the majority of these matchups, at least 3 in 10 students choose the “losing” school.

Here are the top 10 using the format where magnitude of victory is a factor:

  1. Harvard
  2. Stanford
  3. Yale
  4. MIT
  5. Princeton
  6. UPenn
  7. Duke
  8. UChicago
  9. WashU in St. Louis
  10. Caltech

When done in tournament format, Stanford replaces Harvard, MIT replaces Yale, Duke replaces UPenn, and WashU replaces UChicago*.

See this spreadsheet for the matchups**:
https://docs.google.com/spreadsheets/d/1N_UaXKbz_JsOdUBDUYVPgJC_rIk_zzHUXO7_lJQUaPk/edit?usp=sharing

*Scored 1 for a win, 0.5 for a tie, 0 for a loss, then tie-broken by degree of victory

**And the other 10 schools, which I left off the list here because using only 20 schools creates a lower bound.
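
To make the two formats concrete, here is a minimal sketch of the scoring, using made-up head-to-head percentages (not the spreadsheet’s real numbers):

```python
# Minimal sketch of the two scoring formats, with HYPOTHETICAL shares.
# pct[(a, b)] = fraction of cross-admits choosing school a over school b.
pct = {
    ("Harvard", "Stanford"): 0.55,
    ("Harvard", "Yale"): 0.65,
    ("Stanford", "Yale"): 0.60,
}
schools = {"Harvard", "Stanford", "Yale"}

def share(a, b):
    """Fraction choosing a over b, reading the table in either direction."""
    return pct[(a, b)] if (a, b) in pct else 1.0 - pct[(b, a)]

def tournament_key(s):
    """1 per win, 0.5 per tie, 0 per loss; total margin breaks ties."""
    points = margin = 0.0
    for t in schools - {s}:
        p = share(s, t)
        points += 1.0 if p > 0.5 else (0.5 if p == 0.5 else 0.0)
        margin += p - 0.5
    return (points, margin)

def percent_key(s):
    """Magnitude of victory counts directly: average head-to-head share."""
    others = schools - {s}
    return sum(share(s, t) for t in others) / len(others)

print(sorted(schools, key=tournament_key, reverse=True))  # tournament format
print(sorted(schools, key=percent_key, reverse=True))     # percent format
```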

It’s not loading.

You need to replace the **** with docs[dot]google[dot]com since it’s a Google Sheet

You forgot to mention that Parchment data is ENTIRELY self-reported and not vetted.

Last year, I created a Parchment account just for the purpose of seeing how it would predict admit probabilities. I used our dog’s name to register, and then I changed different variables in the dog’s profile/stats to test the impact.

GIGO: Garbage In, Garbage Out

woof!

The cross-admit data is not based on the public student-submitted data. Parchment is a transcript-sending service, and the cross-admit data comes from the high schools that subscribe to it. High schools that use it apparently offer their students the option to release that data. According to Parchment, about 30% of US high schools use it.

“GIGO”

Hehe! GMT, you must be a consultant!

This “ranking” is irrelevant. Harvard, MIT, Princeton, Stanford, and Yale compete against each other and appeal to the vast, vast majority of high school students more than any other university does. Beyond those 5, the other top universities each appeal to their own niche, mostly in equal measure.

It’s true of all rankings that they apply subjective weights to a mix of subjective and objective factors. Still, this one is fairly indicative of where people actually want to go. Looking at the 20 schools in this ranking’s order, if you pit almost any school against one ranked below it, most students who have the choice pick the higher-ranked school.

There are only two exceptions in the top 10: Caltech beats Duke (67-33) and WashU beats UPenn (53-47*). Across all 20 schools, over 95% of the matchups fit this ordering.

Yes, they satisfy niches, but students often fit multiple interests, and if all of those schools really were equal, you would expect all of the matchups to be ties or statistical ties.

*[This is why using many schools is important. While this particular matchup is statistically a tie, the law of large numbers gives us more confidence overall in a tournament or percent-based comparison across many matchups.]
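
For anyone wondering what “statistically a tie” means here: with the small per-pair samples involved, a 53-47 split is indistinguishable from a coin flip. A quick sketch, assuming ~30 recorded matchups for the pair (an assumed count, in line with the estimate later in this thread):

```python
from math import sqrt

# Is a 53-47 split distinguishable from a true 50-50 preference?
n = 30                            # assumed matchups for this pair
p_hat = 0.53                      # observed share choosing the "winner"
se = sqrt(0.5 * 0.5 / n)          # std. error under the 50-50 null (~0.091)
z = (p_hat - 0.5) / se            # ~0.33, nowhere near the ~1.96 cutoff
print(f"z = {z:.2f} -> statistically a tie at this sample size")
```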

collegerankera, it is actually not that indicative at all. It merely serves as a reminder that high school students are influenced by the rankings. But who cares about what high school kids think? Their opinion is not reflective of what academe, graduate school admissions committees or corporate recruiters think.

You’ll quickly see that these don’t match up with other rankings. Harvard is above Princeton, Stanford is above Yale, Columbia doesn’t even make the top 10, and MIT and UPenn are substantially higher. Compared to USNews, only Yale is in the same place.

But why do the preferences of high school kids matter?

  1. Students admitted to these schools probably consider many factors, largely unique to each student. This ranking represents an aggregate of those considerations.
  2. Someone admitted to both Stanford and Harvard is, on average, probably in a stronger position than someone admitted to only one, so Harvard losing some of its top admits to Stanford negatively affects its enrolled class.
  3. Where students want to go is just another factor to consider. Rankings are normally fairly opaque, and it’s hard to see where each school stands on everything. This is one cumulative and straightforward metric.

If a student got into Princeton and UPenn and was unsure of where to go, one factor most students would consider is the schools’ rank. Looking at USNews, one would be dissuaded immediately from going to UPenn. But based on where students actually go, Princeton and UPenn are quite close.

Maybe this is affected by Wharton, or by differences in merit scholarships or aid, or something else, but I think it’s a helpful, straightforward factor, and more useful than directly comparing acceptance rates, yield, or what employers might claim they prefer.

Consider reading:
http://www.nber.org/papers/w10803 (“A Revealed Preference Ranking of U.S. Colleges and Universities,” Christopher Avery, Mark Glickman, Caroline Hoxby, and Andrew Metrick) and

http://www.nber.org/papers/w15772 (“Students Choosing Colleges: Understanding the Matriculation Decision at a Highly Selective Private Institution,” Peter Nurnberg, Morton Schapiro, and David Zimmerman)

“If a student got into Princeton and UPenn and was unsure of where to go, one factor most students would consider is the schools’ rank. Looking at USNews, one would be dissuaded immediately from going to UPenn. But based on where students actually go, Princeton and UPenn are quite close.”

Any student getting into both left US News and World Report rankings in the dust long ago. Having watched tons of students make this kind of decision, I can tell you exactly what they do. They visit friends at each. They read the papers of faculty in areas they plan to pursue at each college. They look at available resources. They consider the two towns. They ask their teachers. etc.

I agree completely. My point is really that the choices of these students can be more reflective of the colleges than rankings alone, and they clearly look into more factors than just that. The first paper is interesting; I noticed that regional factors played a role, which it emphasizes.

Right. Also interesting is the decision-making on the part of the schools in terms of acceptances and “Tufts Syndrome” strategies.

I see a problem with any ranking system that ranks the University of Waterloo at #10, listing a total of zero enrolled students. I agree with GMTplus7, GIGO.

http://www.parchment.com/c/college/college-rankings.php

Any ranking where a school can move up or down 400 places in a single year is total garbage. There’s no way students are suddenly choosing Skidmore (#147 in the 2016 ranking) over Hamilton (#340) in droves. Nor did Knox College, ranked #175 in 2014, become so unpopular that it fell to #575 in 2015’s choice matches, then improve so vastly in 2016 that it deserved a 478-place jump to #97. According to these rankings, Bates, which improved in every measure of selectivity last year, including yield, fell in student matches from #22 to #284. I remember making the same objections last year, noting that no matter how much I love Bates, my kids’ school, I had a very hard time believing that students were choosing it over Amherst, which it bested by 5 places according to this ranking.

Adding: when your #54-ranked college (Lincoln) has a freshman retention rate of 43%, you have to know your cross-admit data is off.

I am not talking about Parchment’s ranking system. I completely agree that it sucks.

The methodology Parchment uses to build their “ranking” is badly flawed and quite different from what I do. Elo doesn’t make sense as a ranking system when every pairwise combination is already known, and that is exactly the case for the most-applied-to colleges. Everywhere else, trying to use Elo compounds the regional problems and is inherently broken because the matchups are unbalanced and there are few data points.
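
For reference, this is what a standard Elo update looks like; a generic sketch, since Parchment hasn’t published their exact implementation. The point is that a school with only a handful of recorded matchups barely moves from its starting rating, so noise decides where it lands:

```python
def elo_update(r_a, r_b, a_won, k=32):
    """One standard Elo update (generic formula, not Parchment's code)."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # predicted P(a wins)
    score_a = 1.0 if a_won else 0.0
    r_a += k * (score_a - expected_a)
    r_b += k * (expected_a - score_a)                 # zero-sum update
    return r_a, r_b

# With only a few matchups, a rating stays near the default no matter how
# lopsided those few results were:
r_school, r_rival = 1500.0, 1500.0
for _ in range(3):  # three recorded games, all losses
    r_school, r_rival = elo_update(r_school, r_rival, a_won=False)
print(round(r_school))  # ~1456: still close to the 1500 default
```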

Basically, what they’re doing is simulating a bunch of random tournaments and sorting colleges by how often they win. If they have only one or a few matchups between one college and another, you get absurd results like the Waterloo one. Their method actually adds extra uncertainty, which is obvious from Waterloo landing near the top. It also means that schools with more recorded matchups carry more weight. For example:

(Hypothetical)
Stanford vs Harvard (100 matchups): 90% Stanford
Harvard vs Princeton (1000 matchups): 40% Harvard
Princeton vs Stanford (10000 matchups): 60% Princeton

My method would say Stanford > Princeton > Harvard (or, by my win-based method, P=H)
Their method would say Princeton > Harvard > Stanford
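
To illustrate just the weighting effect with the numbers above (a toy calculation, not Parchment’s actual algorithm): pooling every individual matchup lets the 10,000-matchup pair dominate and already pushes Princeton to the top, and their tournament simulation then layers extra randomness on top of that.

```python
from collections import defaultdict

# (school_a, school_b, matchup count, share choosing school_a)
pairs = [
    ("Stanford", "Harvard", 100, 0.90),
    ("Harvard", "Princeton", 1000, 0.40),
    ("Princeton", "Stanford", 10000, 0.60),
]

per_pair = defaultdict(list)             # one win rate per pairing, unweighted
wins, games = defaultdict(float), defaultdict(int)

for a, b, n, p in pairs:
    per_pair[a].append(p); per_pair[b].append(1 - p)
    wins[a] += n * p;      wins[b] += n * (1 - p)
    games[a] += n;         games[b] += n

unweighted = {s: sum(v) / len(v) for s, v in per_pair.items()}
pooled = {s: wins[s] / games[s] for s in games}

print(sorted(unweighted, key=unweighted.get, reverse=True))
# ['Stanford', 'Princeton', 'Harvard']  <- averaging per pair (my approach)
print(sorted(pooled, key=pooled.get, reverse=True))
# ['Princeton', 'Stanford', 'Harvard']  <- count-weighting flips the top spot
```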

Parchment actually has no data for Skidmore vs. Hamilton. I have no idea how they put Waterloo at number 10 when they have almost no data on it and 60% choose Berkeley over it. They list Berkeley as a peer college to Stanford, which is funny considering 87% choose Stanford.

Basically, their methodology for the ranking is bad, but the underlying data is still quite usable if you’re comparing a group of colleges that all have matchups against each other.

That highlights my problem with their data. Parchment lists Hamilton as one of the three peer schools for Skidmore, yet they have no matchups for the two schools? Probably half the kids who tour one tour the other. Hamilton has printed directions to Skidmore at their admissions office.

The OP’s top ten may have enough matchups to make the results at least somewhat valid. I just find it very, very difficult to trust Parchment’s raw data, particularly when they don’t disclose the number of applicants they’re using to construct their matchups.

In 2014, they released the data with counts in the NYT:
http://www.nytimes.com/interactive/2014/09/04/upshot/college-picks.html

For 2016, they cite “757,491 acceptances,” versus “104,119 students” in 2014; since the average number of applications per student is about 4*, my guess is that the number of students has increased by roughly 50%.

Based on those figures, I’d guess there are about 20-30 data points per top-college matchup.

*http://nation.time.com/2013/05/01/as-college-applications-rise-so-does-indecision/

I would add that since the data is self-reported, schools can stuff the ballot box. I am pretty sure at least one of them does this.

You can easily figure out which one for yourself.

@Much2learn The data isn’t self-reported (this is separate from the online self-reported data*), although there could be a selection bias in whether students allow their high schools to share the data. The main bias is regional, though: the Great Lakes states are substantially overrepresented in their sample, which explains much of the apparent preference for WashU.

*http://talk.collegeconfidential.com/discussion/comment/19118943/#Comment_19118943