US News 2017 rankings

@marvin100
Haha. I had t-t-t too much time on my hands. Sans an adequate pacifying activity, sometimes I become restless. (i.e., work was slow so I was bored.) :wink:

@moooop , if Caltech were to expand their non-STEM offerings, I'd move them into my Tier 1, absolutely. They are great at what they do; they just don't do much outside of STEM.

@prezbucky : So to clarify, you would not have an issue within your system with a "fourth tier" school having higher entering standardized test scores than a "first tier" school?

No, I wouldn't. Admissions selectivity seems to be mostly a function of getting as many kids as possible to apply while keeping the number of spots static, so that a school can report a lower acceptance rate and a statistically more impressive class (test scores and GPA, primarily). And hopefully the admits aren't also accepted at a lot of prestigious competitors, so yield is protected. All of that has very little to do with the quality of teaching per se.
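To put rough numbers on the mechanics (these figures are entirely made up, just to illustrate the point):

```python
# Hypothetical numbers -- purely illustrative, not data for any real school.
def admissions_stats(applications, admits, enrolled):
    acceptance_rate = admits / applications   # the "selectivity" number that gets reported
    yield_rate = enrolled / admits            # fraction of admits who actually enroll
    return acceptance_rate, yield_rate

# Same 500 spots and the same admitted class, but a marketing push doubles applications:
before = admissions_stats(applications=10_000, admits=2_000, enrolled=500)
after = admissions_stats(applications=20_000, admits=2_000, enrolled=500)

print(f"before: {before[0]:.0%} accepted, {before[1]:.0%} yield")  # 20% accepted, 25% yield
print(f"after:  {after[0]:.0%} accepted, {after[1]:.0%} yield")    # 10% accepted, 25% yield
```

Nothing about the quality of teaching changed; only the denominator did, yet the school now looks twice as selective.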

We know that some schools are more test score-sensitive than others and that poor yield can really drop the average test scores of an admitted class. If the 4.0/1600 kids who got into that Tier 1 LAC were also admitted to, say, HYPSM, I imagine many of them would choose HYPSM, hurting the yield and dropping the average scores at the Tier 1 school. It's also possible that the Tier 1 school's test scores are lower because it rejects the highest-stat kids in an effort to protect yield. Either way, you end up with admit stats that are not in line with the school's massive academic reputation.

And maybe a lot of the kids applying to the Tier 4 school aren't also applying to HYPSM and their ilk, so they are more likely to accept the offer of the Tier 4 school. Maybe the Tier 4 school offers merit aid or other incentives to woo superstar applicants.

There could be any number of reasons why a Tier 4 boasts better class stats than a Tier 1, but none of those measures or considerations has much to do with the reputed quality of teaching, program strength, or opportunities at a school – at least not in the short term.

Over time, an increase in apps may inflate budgets, so a school can improve its operational quality. Maybe that, plus good marketing, lands the school in Tier 1 one day. Maybe that's how a bunch of schools became known as top-tier schools in the first place: a popularity contest. But I'd like to think it was based more on some original valuable output, some research breakthrough or great teaching method or innovative curricular feat that gave those schools the reps they enjoy. Or, as is probably the case, it comes from decades of performing (or being perceived to perform…) at a high level across multiple variables.

Well done @prezbucky . I fully agree with your National Privates tier rankings. Within a tier there's almost no difference, yet each tier is incrementally "better" in educational experience, IMO. I notice that USNWR agrees with you 100% except for swapping CMU <-> USC (I agree with you giving CMU the bump up) and the omission of RPI (which I would see as certainly Tier 6 and arguably 5).

I would put RPI in Tier 5 or 6 as well – thanks for reminding me. I forgot to include them. I owe RPI and William & Mary an apology. :slight_smile:

The rest are implied in the ellipses. hehe

@prezbucky RPI definitely in Tier 5 - on par with Lehigh, BU, NEU overall and better in engineering/CS

I also agree that tiers are a better way to do rankings than trying to decide whether Stanford or Harvard is better.

Honestly, more than once I took two smaller tiers and merged them, and I am still tempted to do it. I waffled so much on this that it's made me think I missed my calling and should have pursued politics. :slight_smile:

@prezbucky you could launch the Prez Rankings :slight_smile:

Arbitranks :wink:

@suzyQ7 I'm not sure about the other UCs, but I know that UCSD and UC Davis are very strong in certain fields. UCSD is very strong in biology and political science. UC Davis is strong in agriculture, animal science, and biology. Both universities are also very desirable and hard to get into (very high GPA and SAT averages).

I really don't know why UCI beat them. It's not particularly strong in any one field. It's not more desirable than UCSD and UC Davis, and it's the easiest school to get into of all three (lowest GPA and SAT). I call BS on USNWR.

@emory323 How dare you challenge the wise and all-knowing team at US News!!! :-B

All three schools are very close in the rankings. For example, UCI has a slightly better graduation rate (and slightly worse freshman retention) than the other two.

As another example, UCI does better than UCSD in the "Financial Resources" section, due to having a higher % of classes with 20 or fewer students and a lower % of classes with 50 or more.

You really have to dig into the weeds to figure out why UCI was rated slightly higher by US News.

@emory323 @Gator88NE We looked at the UCs for our son last year, and from speaking to Californians (past and present), we had heard that the typical tiering of the UCs was:

  1. UCB, UCLA, UCSD (this could also be in that order as 1 - 3)

  2. UCSB and UCD

  3. UCI and UCR

  4. UCM and UCSC

Is this not the conventional view of Californians?

I would like to have a conversation around this concept of gaming, now that we have had some time to process the new rankings.

Here are some questions that I have:

What do folks mean when they say a university is "gaming" the ranking?

Do they mean that a university that should obviously be ranked lower is successfully manipulating some ranking metric to rise in the ranking, but the data they provide on these metrics is nonetheless accurate?

Do they mean the school is cheating and is providing false data?

What if the school "feels" that it legitimately is a great school, is ranked inappropriately, accepts the importance of the ranking, and works to improve specific metrics important in the ranking? Is it then gaming the system?

Are they saying that any metric that can be worked on and improved is by definition "gameable" and thus illegitimate?

Given that any metric can be improved on, is the complaint centred around how quickly a certain metric can be improved? So if one metric takes twenty years to improve vs. one that takes five years, does that make the latter metric less legitimate?

If one institution decides that it will not put any effort into improving certain ranking metrics, because it is either unwilling or unable to do so, is it a more ethical institution than another institution that gets on the field, plays the game, and works to improve the metrics?

Or is "gaming" just a derogatory slight aimed at a school that an individual "feels" in their gut does not deserve the rank it received?

@CollegeAngst Great post that summarizes the "gaming" issue perfectly.

@londondad In my experience, yes, except that I'd switch UCSC and UCR.

On cross-admits, Stanford prevails against every school but Harvard – and that gap keeps closing.
In the academic world (and that holds both in the US and internationally), Harvard and Stanford are in their own top tier.
Much2Learn has it about right.

@collegeangst has raised several great points that deserve to be discussed here and are much more applicable than a discussion of the UCs, which belongs in a more specific thread.

I think it is beyond cavil that universities are working to improve their standing in the metrics that are evaluated by US News. For better or worse, the US News rankings are the pre-eminent rankings and schools cannot afford to ignore them. In my business career we used to say that people perform as they are measured – they pay close attention to measurement metrics, sometimes to the detriment of other matters.

If a university is measured by US News on its acceptance rate, its freshman retention, its four-year graduation rate, and its SAT scores, it is going to focus on improving those metrics. That is not a bad thing if everyone agrees these are valid metrics. And it is certainly fair to add or adjust certain metrics. However, I see three problems:

  1. SATs - This has been fertile ground for schools to game. Do they report SAT scores taken at a single sitting? Superscored? Do they report just students accepted in the initial round? Do they include students accepted off the wait list? Do they report just students enrolled? Do they employ other strategies to calculate? I suspect that this metric is driving all schools to seek self-serving presentation modes.

  2. Acceptance Rate - This metric appears to be driving an insane lust at some schools to drive up the number of applications regardless of quality. If a school gets lots of applications from students who are "reach" applicants or even totally unqualified, it actually benefits the school's selectivity score. Among the elites, the University of Chicago and Washington University seem to be the most flagrant practitioners of this technique.

  3. There are many unmeasurable intangibles that go into making a great school. We cannot lose sight of these in evaluating schools. They are also what make one school great for person A and totally wrong for person B.

I'm sure there are a lot of other considerations here, and I look forward to hearing from others.

As far as I can tell, Stanford has NO academic weakness. Even Harvard is a bit lacking in Engineering, given its lofty ranking.

@andlmink - they report superscored SATs or composite ACTs. Best scores. Schools that are test-optional or test-flexible get a nice bump in this metric because students will suppress their SAT/ACT scores from the school if they are not good, and the school doesn't have to report them. Therefore, schools are only showing the very best scorers of their applicant pool. Schools with deferred-admit policies (like spring admit, first-semester-abroad programs, or deferred admission) get to suppress all those scores, since those students are not enrolled as of the fall of their freshman year. Also, schools report statistics for **enrolled** freshmen on the CDS, which is what US News uses.
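For anyone unfamiliar with the term, superscoring just means taking the best score per section across all of a student's sittings. A minimal sketch (hypothetical scores, assuming the two-section 1600-scale SAT):

```python
# Superscoring: best score per section across all sittings (hypothetical scores).
sittings = [
    {"math": 700, "reading_writing": 650},   # first sitting  (1350 total)
    {"math": 680, "reading_writing": 720},   # second sitting (1400 total)
]

superscore = {
    section: max(sitting[section] for sitting in sittings)
    for section in sittings[0]
}

print(superscore)                # {'math': 700, 'reading_writing': 720}
print(sum(superscore.values()))  # 1420 -- higher than either single sitting
```

So a school reporting superscored numbers will always look at least as strong on this metric as one reporting single-sitting scores for the same students.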

@wandlmink

Acceptance rate is weighted at 1.5% of the total score. A big drop in acceptance rate will have little or no effect on the ranking.
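A rough sketch of why that weight matters so little (only the 1.5% figure comes from the post above; the composite below is a made-up illustration, not the actual USNWR formula):

```python
# Illustrative weighted composite on a 0-100 scale. Only the 1.5% weight for
# acceptance rate is taken from the discussion above; everything else is hypothetical.
def composite(acceptance_rate_score, everything_else_score):
    return 0.015 * acceptance_rate_score + 0.985 * everything_else_score

# Even if a school's acceptance-rate subscore jumps from 50 to 90 (out of 100),
# the overall composite moves by only 0.015 * 40 = 0.6 points.
print(composite(50, 80))   # 79.55
print(composite(90, 80))   # 80.15
```

So even an aggressive push to lower the acceptance rate buys almost nothing in the overall score.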