Does Anomalous Standardized Scoring Information Indicate a Significant Lack of Diligence by U.S. News?

Based on considerable experience, a good portion of which involves some of the posters in this thread, I can say unequivocally that the answer to your (presumably rhetorical) question is, unfortunately, a clear “yes”. There’s also a 10,000-post thread in the making on this very topic, which tends to support my response.

I do agree with your post, particularly your point that the schools can’t really improve or deteriorate that much from year to year. So following the rankings with the same zeal and interest that one follows one’s online health charts does seem to be a fool’s errand.

1 Like

As I wrote elsewhere - ranking systems are trying to create “One Rank To Rule Them All”.

That is an absolutely meaningless number. It is based on factors that are important to the person who created the system, or, more likely, to their bosses, and the weightings are those which they or their bosses find to be relevant.

Overall, the rankings are aimed at affluent families. The importance of reputation in college selection increases as the number of affordable colleges increases. More importantly, the need to determine “relative reputation” increases as the number of affordable colleges increases. Just as important is the perceived “added value” of differences in reputation.

The percent of the population from the bottom 80% by income who attend an “elite” college is tiny. We’re talking maybe 1%, and another 4% who attend a “highly selective” college. For the great majority of lower- and middle-income families (and I don’t mean the “I’m middle class with a $1.5 million+ home, two $50K+ cars, two annual vacations away from home, and a live-in nanny” kind), the colleges with a great reputation are the state or neighboring-state flagships, a couple of big football colleges, HYPSM, and a couple of local “elite” private colleges.

A middle-class family in Wisconsin probably has no idea about the reputations of Northeastern, Boston U, USC, Williams, Vanderbilt, or even Brown and Dartmouth, and, in many cases, is not even familiar with these names. The “college recruiters” from these places will not come to their high schools, and the GC will barely know much about them either. They really aren’t looking at USNews to decide between Wisconsin, UIUC, Purdue, UMN, Northwestern, Chicago, Notre Dame, or OSU.

Ranking systems are for families who can afford a wide range of colleges, who live in a community made up of families with similar SES who are also familiar with a wide range of colleges, and whose kids attend a high school which sends large numbers of students to colleges which rank highly on these systems.

Finally, let’s be honest, people: the only rankings which are important are the top 100. Families who care about rankings want to send their kids to a college which is ranked in the top 100, and would prefer to send their kid to a college which ranks higher than #50. Anything below #100 does not provide any status, and therefore the actual ranks down there do not matter.

Bottom line - rankings were created to establish “relative prestige” among a large number of colleges. They are aimed at students and families who can afford a large number of colleges, AND are looking for a college which is a status symbol, AND don’t already have their own notion of which colleges are “prestigious”.

1 Like

Agree with everything you’ve said, but there are two important sets of stakeholders you didn’t mention: alums (and the power of the purse) and the Board of Trustees, which is, after all, the “boss” of the university president.

These two groups really do care about the difference between being ranked 40 or 44 (as ludicrous as that sounds), and really, really, really care about who their peer group is designated to be if they are rated #125 or so.

Moving up or down really matters to them. Witness the endless gyrations of folks here claiming that their alma mater has “finally” been recognized as being X or Y. And trustees care. It’s how presidents of middling institutions get their contracts increased, with a raise, even if overall academic rigor is declining. A pop in the ratings due to whatever factor is heavily weighted that year? That’s money in the bank!

1 Like

Agree with almost all you wrote, but thought this was worth a call-out. I might add a few to the list everyone’s heard of, like say Duke and Columbia; but in the main you’re right - a lot of people don’t know (or care) that Rice and Wake Forest (purposefully excluding Vanderbilt here based on current behavior problems) are very, very, very good schools, and would never understand why you’d leave your local flagship to attend a Wisconsin or a Texas.

I think that families who hover over the rankings and argue about them incessantly are much more selective in their obsessions. I’d say Top 25. Those happy with Top 50 care, but are a little more practical and seemingly less obsessed about a particular rank. UW, my home state flagship and alma mater, is a good example of this. Until now it’s been stuck in the mid- to high-50s in US News, and yet kids in the PNW want to attend and don’t seem to care about #54. By the time you get to around 100, I think you’re dealing with people who want to go to a particular school for practical reasons, or it’s just “their” school based on some of the factors you mentioned. Movements within the Top 25 are what draw the people who contribute the most, um, energy, to the experience. And you don’t have to look very far from where you are to find good examples.

1 Like

You’re right - I forgot about these.

1 Like

Thanks for your contribution to the topic. Could you say whether you make a distinction between the concept of rankings and their execution? It seems that Wesleyan’s rank may be dependent on incorrect information that USN did not verify through Wesleyan’s CDS or IPEDS. Irrespective of the value to you of rankings generally (as understandably presented here and elsewhere), might you nonetheless expect — or, at least, hope for — reliable statistical underpinnings to the rankings (with respect to simple, widely available information)?

Good question.

Overall, I really don’t think that there is such a thing as an objectively “best” college. Therefore, I think that any ranking system which claims to create an objective ranking of “the best X colleges” has no actual validity.

I think that there is some more validity to “best” when talking about specific concrete features, but these are rarely useful, except for fun, since students and parents generally don’t make decisions based on single factors.

There are also rankings which are more meaningful and useful, which are based on how well colleges fulfill a specific need, but these rankings would be more amorphous and vague. So some colleges could be said to be doing this very well, and others less so. However, even on this measure, any supposedly “accurate” ranking from 1 to 1,000 is based on things like trying to precisely quantify qualitative factors, or comparing averages that have so much variance that any differences between them are meaningless.
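To put a toy number on that variance point, here is a minimal sketch (made-up figures, not any ranking’s actual data) of how two colleges with essentially identical underlying quality can swap “rank” from one noisy sample to the next:

```python
import random

# Toy illustration with invented numbers: two hypothetical colleges with nearly
# identical underlying "quality" trade places depending on sampling noise,
# so a #1 vs. #2 distinction built on such averages carries no real information.

def sampled_average(true_quality, spread=10.0, n=50):
    """Average of n noisy observations centered on a college's true quality."""
    return sum(random.gauss(true_quality, spread) for _ in range(n)) / n

for trial in range(5):
    college_a = sampled_average(70.0)
    college_b = sampled_average(69.5)  # almost the same underlying quality
    leader = "A" if college_a > college_b else "B"
    print(f"trial {trial}: A={college_a:.1f}  B={college_b:.1f}  -> '#1' is {leader}")
```

Run it a few times and the “#1” slot flips back and forth, which is about all a one-place difference in a noisy composite rank really tells you.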

This is really the sort of thing that is done here on CC. Students ask for a bunch of colleges which are, say, medium-sized, affordable, and have a reputable nursing program (the need being “affordably teaching nursing while being the preferred size for certain students”). Posters will give them a list of ones to consider. That is useful, and it takes into consideration that these colleges all serve the stated need, without trying to create some sort of meaningless and useless ranking within the group.

The type of rankings that I see as being really useful are those which can be customized to what the prospective students would like. It doesn’t actually give the “best”, but it will provide a group of colleges which, on paper, match what the student is looking for. These are, of course, only as good as the data used and depend on the student understanding the terms used in the list in the same way as the people who created the software.

Even in this case, the “ranking” will be inaccurate, meaning that #1 is likely not actually “better” than #4. However, any college within the range that the prospective student decides on is worth a look. That saves a lot of time and effort, makes college choice easier, and increases the likelihood that a student will find a college which will fulfill their needs.

3 Likes

Comments

  1. Since no compelling argument has been made otherwise, it seems that U.S. News reported inaccurate standardized scoring information for Wesleyan.

  2. This inaccurate information may or may not have been the same standardized scoring information used to compute Wesleyan’s rank.

  3. If inaccurate information was used to compute Wesleyan’s rank, then the “true” rank for Wesleyan plausibly could remain the same, at #11, or move to the next available slot, #15. See “overall scores” for why this would be the case.

  4. No suggestion that Wesleyan intentionally reported misinformation is intended.

  5. The reliability of U.S. News rankings, from a statistical standpoint, seems significantly diminished by this example.

1 Like

Seems to matter mainly for those who place a lot of importance on either or both of (a) the exact place a college has in the USNWR rankings and (b) SAT/ACT scores.

2 Likes

I’d say the topic’s main relevance connects to broader areas such as journalism and statistics. Regarding your comment on standardized testing profiles, it’s simply easier to see disparities in a characteristic such as this than disparities that may be present in more complex characteristics. Moreover, colleges that have intentionally fabricated aspects of their profiles in the past have commonly done so by inflating their standardized testing profiles. As an example, the 20 to 40 points by which Claremont McKenna inflated its SAT profile (see https://www.washingtonpost.com/blogs/college-inc/post/claremont-mckenna-sat-scandal-more-at-stake-than-rankings/2012/02/07/gIQAHImVwQ_blog.html#:~:text=The%20misdeed%20attributed%20to%20now,went%20on%20for%20several%20years) is actually substantially less than the 80-point disparity in Wesleyan’s (presumably unintentional) inflation.

Wesleyan asks test-optional matriculating students to report test scores, so they have 2 sets of scores – one for kids who submitted prior to being admitted and one that also includes kids who reported after having been admitted. Wesleyan has been test optional since 2014, yet if I look at Wesleyan’s 2019 CDS, it shows test scores for what appears to be ~100% of the matriculating class, in spite of many not submitting scores at the time of application. I suspect they report to USNWR the test scores of kids who submitted at the time of application (to not hurt their ranking compared to peers), while reporting all available scores in the CDS.
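To make that concrete, here is a minimal sketch with invented numbers (not Wesleyan’s or anyone’s actual data) showing how the same entering class can produce two different middle-50% ranges depending on which pool of scores gets counted:

```python
import numpy as np

# Hypothetical SAT scores, for illustration only (not any school's real data).
# One group submitted scores with their applications; the other, admitted
# test-optional, reported scores only after enrolling.
submitted_at_application = np.array([1440, 1470, 1490, 1510, 1530, 1550, 1560])
reported_after_enrolling = np.array([1250, 1290, 1310, 1340, 1380, 1400])

def middle_50(scores):
    """25th-75th percentile range, as a CDS or a USNWR profile would report it."""
    lo, hi = np.percentile(scores, [25, 75])
    return int(lo), int(hi)

# Counting only at-application submitters (the pool a school might send USNWR)
print("at-application only:   ", middle_50(submitted_at_application))

# Counting every matriculant with a score on file (CDS-style reporting)
all_scores = np.concatenate([submitted_at_application, reported_after_enrolling])
print("all matriculant scores:", middle_50(all_scores))
```

Nobody’s score changes between the two printouts; only the pool being summarized does, which is exactly the kind of gap being discussed here.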

Bowdoin does the same and shows a similar pattern. For example, Bowdoin’s CDS reports a range of 1340 to 1520, while Bowdoin’s US News page lists a much higher range of 1460 to 1560. Bowdoin’s CDS also lists a much larger portion of students reporting scores than Bowdoin’s website says submitted them. Bowdoin has been test optional for >50 years, but prior to COVID, Bowdoin’s website used to state that “all entering first-year students must submit scores over the summer prior to matriculating at Bowdoin” – in other words, test-optional admits were required to submit scores in the summer before attending.

This demonstrates the futility of focusing on small score differences between schools that have large and widely varying portions of students not submitting test scores.

Further confusing things, the USNWR methodology says they are using SAT scores of entering students, while the USNWR website says it is listing SAT scores of admitted applicants. The USNWR questionnaire sent to colleges says “new entrants,” so I expect the website has misleading wording.

1 Like

I guess it never occurred to anyone to check Bowdoin.

Based on the information on its site, it appears that Wesleyan has reported (available) scores to USN for applicants accepted to Wesleyan. If this is the case, then Wesleyan’s reported profile largely represents the scores of the 64% of its accepted applicants (class of 2026) who attended college elsewhere.

The Bowdoin example, which appears to show a significant departure from its reporting practices of just last year, seems to raise questions as well. The key aspects involved, however, may differ from those of Wesleyan.

A list of the sum of % reporting SAT + % reporting ACT, as listed in the CDS for the top 10 USNWR LACs, is below (a rough sketch of this tally follows the list). I doubt that it’s a coincidence that the 2 colleges that report scores for an abnormally large portion of students in the CDS are the 2 colleges for which there is a discrepancy between CDS ranges and USNWR ranges. How do you know that the difference instead relates to reporting for accepted rather than enrolled students? In the USNWR sample year (fall 2022), admitted students had an ACT range of 34 to 35, which doesn’t match USNWR totals.

% Reporting SAT + % Reporting ACT in CDS
Bowdoin – 88% (during test optional 2019, total was 104%)
Wesleyan – 76% (during test optional 2019, total was 101%)
– Gap –
Carleton – 63%
Williams – 62%
Amherst – 61%
Swarthmore – 61%
Wellesley – 61%
Pomona – 53%

1 Like

Wesleyan itself shows sets of (higher) scores contiguous with this heading: “Profile of Students Offered Admission for Fall 2023.”* It shows different sets of (lower) scores under this heading: “Profile of First Year Students.”

*It appears that Wesleyan has updated its profile page, now representing the class of 2027, subsequent to the opening of this topic several days ago.

https://www.wesleyan.edu/admission/apply/class-profile.html

Well, that explains it. The USNWR profile for Wesleyan is a closer match to the applicants admitted to the Class of 2027 than to those admitted to the Class of 2026, and the Class of 2027 figures would not have been known in time to have factored into the questionnaire that Wesleyan filled out for the methodology portion of the poll. This is what I suspected all along.

1 Like

Based on U.S. News guidelines, it appears that every school’s submitted information is intended to comport with the information on its CDS for the specific year in question:

While the information on standardized scoring for the vast majority of colleges matches exactly across data sets, in the examples of Wesleyan and, now, Bowdoin, this does not appear to be the case.

The USNWR school profiles are clearly prepared separately and apart from the “underlying data used to compute these measures”. The proof is in the footnote at the bottom of the USNWR profiles, which does not comport with anything used in the data sets:

  • These are the average scores of applications admitted to this school. Ranges represent admitted applicants who fell within the 25th and 75th percentile.

The Common Data Sets do not have a subsection for “admitted applicants”, so whoever it was at USNWR that was asking the question wasn’t using the CDS as a script.

1 Like

As stated in my earlier post, USNWR states that they use Fall 2022 scores, not Fall 2023. The Wesleyan class profile page for fall 2022 shows an ACT range of 34 to 35 for admitted students. USNWR’s range doesn’t match (see Class Profile, Admission & Aid - Wesleyan University).

However, the scores may match if Wesleyan is only listing scores for students who submitted them at the time of application, rather than including scores reported by test-optional matriculating students after being accepted.

Another way to look at it: if Wesleyan were reporting scores for admitted rather than enrolled students, then their scores should be far higher than peers’. This is not the case. The USNWR score ranges for the 4 schools that share the same ranking as Wesleyan are as follows. Wesleyan’s scores are exactly the median of the 5 schools that share the same ranking. However, Wesleyan’s CDS score range would put the school in a distant last place compared to peers. Wesleyan’s CDS score range is the outlier compared to peers, not the USNWR score range.

USNWR Score Range for LACs ranked #11
Barnard – 1450 to 1550
CMC – 1440 to 1550
Wesleyan – 1440 to 1550 (CDS reports 1310 to 1505, with a larger % reporting than peers)
Middlebury – 1410 to 1540
Grinnell – 1380 to 1530

Is it more likely that Wesleyan has by far the lowest test scores of all its peer colleges? And that Wesleyan lied to USNWR about its score ranges in an obvious way that nobody else has noticed?

Or does it seem more likely that Wesleyan has a similar score range to its peers and is simply reporting scores for a larger portion of students in the CDS, fitting with Wesleyan’s CDS showing a larger % of scores being reported than peers? And that Wesleyan is choosing to report to USNWR only the scores of students who submitted at the time of application, to maximize its rating – similar to the strategy used by Bowdoin and others that ask test-optional matriculating students to report scores before starting classes (for statistical and class-placement purposes)?

1 Like

Just to be clear, all the colleges you’ve listed (as well as Hamilton, btw) are test optional. The other way of stating your case is that only Wesleyan University and Bowdoin are reporting all their matriculants’ test results on their CDS. This is what @cquin85 was arguing: