2022 USNews Rankings posted

And yet there are still brilliant, Ivy-level students at probably nearly every flagship in the country - and there are kids who don't even consider schools outside their state.

It almost dismisses them.

Why do we have to rank? It denigrates the majority who aren't going to the top schools.

My daughter is at a school Forbes rates in the 400s or 500s, and she got into top schools - and she's getting her a$$ kicked this first year - lots of adjustments. So I think you can get rigor anywhere.

Oh, I know why we rank. They do it to sell things. And we all jones for it :slight_smile:

3 Likes

I’m not sure to which of my posts you are responding. I don’t disagree with much of what you wrote.

According to the Common Data Set at https://www.williams.edu/institutional-research/files/2021/09/2020-2021_CDS_Williams_College_tuit-2.pdf , 77% of entering students at Williams did not submit class rank. I'd be reluctant to assume 99% of kids are from the top quarter of the class if 77% of entering students did not submit rank.

Among kids in the bottom ~40% of the class in the previously linked Harvard Westlake documents, the admit rate for Williams was 8/18 = 44%. I'm sure HW is far from the only selective HS for which Williams admits high-achieving kids from the bottom half of their competitive HS class, and these lower-ranked kids don't show up in the class rank stats, since selective private HSs almost never submit rank.
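To illustrate the rank-reporting point with a toy example (the class size and percentages below are hypothetical round numbers, not Williams' actual figures), here is a minimal sketch of why a CDS top-quarter figure only describes the minority of students who submitted rank:

```python
# Toy numbers only - illustrating why a CDS "top quarter" percentage says little
# when most entering students did not submit class rank.

class_size = 550                        # hypothetical entering class
share_submitting_rank = 0.23            # CDS-style: 77% did not submit rank
submitters = round(class_size * share_submitting_rank)

top_quarter_share_of_submitters = 0.99  # the CDS percentage applies only to submitters
top_quarter_reported = round(submitters * top_quarter_share_of_submitters)

print(f"Students whose rank the CDS reflects: {submitters} of {class_size}")
print(f"Reported in the top quarter: {top_quarter_reported}")
print(f"Students the rank statistic says nothing about: {class_size - submitters}")
```

In other words, the 99% figure can be literally true while saying nothing about the roughly three quarters of the class whose rank was never reported.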

2 Likes

If you saw that salary was weighted 20%, I'm surprised that you didn't also see the text about how they computed salary. It's a mix of Payscale and College Scorecard data from multiple years out of college.

Like most of the discussed stats, salary can be meaningful if carefully analyzed, but if taken in isolation, it primarily follows characteristics of the admitted students rather than something special about the particular college. Two such student characteristics that influence salary are desired major/career field and academic quality of students.

2 Likes

Note that I'd referred to the top quarter, for which comparisons are more readily available. In the examples you suggested, Williams reported 1% from outside the top quarter on its CDS and Bates reported 18% (Wesleyan did not report this information).

It should be clear that I've been referring to information as available from Common Data Sets. People can draw inferences beyond this, but in your post above you have misattributed an assumption to me.

The above is incorrect:

I also don't think Test Optional schools should have the SAT or ACT scores of their incoming class impact rankings, since those schools are blatantly gaming the test numbers of their admitted students by discouraging some students from submitting scores. The test scores of the incoming class cannot be representative of that class if 1/3 or more of the class never provided scores.

2 Likes

Just the previous one - but maybe I should have started the thread.

BTW - I'm in Gainesville for work - I try to stay near campuses - missed the $28 parking … oops.

I walked all over. My daughter preferred UF, but I largely preferred FSU, which was more campusy. But I'm looking all over for the top 5 banners - I see them online, but I walked a lot last night and saw none. Hmmm.

Rankings - lots of fun to debate.

And with USNews we can’t even compare Williams to a Florida - nor should we. Of course, with Forbes, Niche, etc. you can.

1 Like

Some schools, and maybe many LACs, ask all matriculants to provide a score upon enrolling (including those who applied TO). Schools still did this last year, but of course some students might legitimately have had no score to report.

The CDSs will reflect this full set of scores…Bowdoin, Amherst, Williams, Carleton, Swat, and Pomona do this, and probably a good many of their peers.

You will only see the scores of test submitters if you are looking at admitted student data (but that’s not the data the USNWR uses).

2 Likes

Well, it depends. Wesleyan (and, I believe, Bowdoin) does require everyone who has taken an aptitude test to submit the results upon matriculation - whether or not they were submitted during the admissions cycle.

But I agree with you that it's a little hypocritical for the same posters who seem to attach such importance to the fraction of first-years who arrive with ranks based on HS GPA not to apply the same standard when evaluating TO colleges.

2 Likes

A college needs to choose between:

  1. Enrolling as many students as capacity allows, although each student will have a more crowded experience.
  2. Enrolling fewer students than its capacity, so that each student will have a less crowded experience.

UCs obviously choose 1, due to having to stretch limited state and tuition money to fulfill the mission to serve the state’s capable students. Rich elite private colleges choose 2, because lower enrollment makes them more exclusive and desirable, and students and parents paying list price expect luxury class amenities.

Of course, any college can incorrectly estimate yield, resulting in more or fewer students than planned.

2 Likes

Harvard Westlake has fabulous detail about its student outcomes. Does any other highly competitive school give such detail? With 8/18 admitted from the bottom 40% of the class, it shows that the CDS top 10% figures are a bit overstated. Texas universities also skew the top 10% metric, because top-of-class students are admitted to the university automatically, without any test score.

Looking back at the US News rankings, I took the average of the last two years and compared it with the two-year average from 2014-15. At least three new universities (Villanova, Santa Clara and LMU) have been added to the top 70 national mix since then.

Excluding changes of less than five points, these ten had the largest percentage improvement in rankings (a rough sketch of this calculation follows the two lists below):

  1. Howard (82 vs. 144)
  2. Florida (29 vs. 49)
  3. Florida State (57 vs. 93)
  4. UCSB (29 vs. 41)
  5. UC Riverside (86 vs. 113)
  6. UT Austin (40 vs. 53)
  7. Georgia (48 vs. 61)
  8. UC Irvine (36 vs. 46)
  9. Purdue (51 vs. 65) – +6 in 2015, +4 in 2018 and +4 each of last two years
  10. UMass (67 vs. 84) – +15 in 2015, +5 in 2019 and +6 in 2020. -2 each of last two years.

Top 10 percentage decliners (2020 was the biggest collective drop for this group):

  1. Alabama (146 vs. 87) – -19 in 2019, -24 in 2020, +10 in 2021 and -5 in 2022
  2. Tulsa (140 vs. 87) – -19 in 2019, -15 in 2020, -22 in 2021 and +7 in 2022
  3. Yeshiva (72 vs. 48)
  4. Penn State (63 vs. 43)
  5. New Hampshire (140 vs. 98)
  6. Vermont (118 vs. 84)
  7. Miami OH (103 vs. 76)
  8. Clark (103 vs. 76)
  9. Nebraska (135 vs. 100)
  10. RPI (54 vs. 42)
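For anyone who wants to reproduce the percentage figures, here is a minimal sketch assuming the simple formula (old - new) / old applied to the two-year average ranks listed above; the actual calculation behind the lists may differ:

```python
# Sketch only - the rank pairs come from the lists above; the formula
# (old - new) / old is an assumption about how "percentage change" was derived.

rank_pairs = {
    # school: (2014-15 two-year average rank, latest two-year average rank)
    "Howard": (144, 82),
    "Florida": (49, 29),
    "Florida State": (93, 57),
    "Alabama": (87, 146),   # a decliner: the rank got worse
    "Tulsa": (87, 140),
}

for school, (old, new) in rank_pairs.items():
    pct_change = (old - new) / old * 100   # positive = moved up, negative = dropped
    print(f"{school}: {old} -> {new} ({pct_change:+.0f}%)")
```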

Is "social mobility" driving most of the increases for the UCs? UC Santa Cruz dropped to 100 from 86. Maybe something happened there relative to Santa Barbara? I had assumed these attract similar socioeconomic students, but apparently not.

The decliners are an eclectic mix, from Alabama and Nebraska with big sports (but not the Florida publics or UT), to Northeast publics like UVM and UNH (but not UMass), to smaller schools like Clark and Tulsa.

Many people will dismiss these changes as proof that the rankings are useless. I guarantee that the colleges are not dismissing the ranking changes - and if they dropped, they're privately trying to figure out how to improve.

This clearly was directed at me, but it's false on two levels. (1) I have extensively analyzed SAT data (by applying algebra with stated assumptions) based on the percentage reporting; I just haven't done it in this topic. (2) I haven't attached "such importance" to the fraction of available information on HS class standing. I used it here in response to information another poster offered on public universities, and I then suggested that class-standing information from Common Data Sets would be most useful for comparisons among similar schools, with the associated implication that the absolute numbers may not be fully reliable. I have not formed a meaningful opinion on that and have left it to others to analyze; nonetheless, those familiar with the data will note its consistency across years, which tends to indicate a statistic that offers valid information, even if it may require further analysis.

I didn't assert that "top" schools are watering down their curricula and creating less rigorous "majors" solely in order to enhance their graduation rates. They have other reasons too; higher graduation rates are just one of the desirable consequences from their perspective. This isn't restricted to "top" schools, of course - the vast majority of colleges do this. The trend of inflated grades and less rigorous majors and curricula has been here for a while; it has only accelerated more recently.

1 Like

While grade inflation has been a long term trend, what suggests that curricula are being made less rigorous?

It is the case that learning the same content takes less work now, due to technology. For example, looking up a library book used to mean going to the library and looking through the card catalog or something instead of doing a search on your computer or phone from home. But that does not mean that any less or less difficult content is being taught in class.

One recent example I noticed is in the CS curricula of some of the "top" CS schools. They have removed the more rigorous components of foundational CS theory from their CS requirements. These are certainly some of the more difficult elements of a CS curriculum, and their removal has made graduating with a CS degree at these colleges easier.

And, at least for CA residents and assuming no need-based aid, Cal is $40k all in and Harvard is $85k.

Just because the schools pay attention to them doesn't mean they are useful. Does anyone believe those schools, either the risers or the decliners, really changed that much in a material way?

1 Like

Which particular courses or topics are you referring to?