Why aren't universities' SAT/ACT scores a bigger part of the US News ranking?

@privatebanker Well, hopefully it’s clearer now that I have provided you a link to ENROLLED statistics for Oxford.
Oxford is a small campus of 400 freshmen, so the midpoint will not be affected much, as the medians for both campuses are very close, as you can see.

Boy we are splitting protons here, forget about atoms. So it goes both ways…

@privatebanker From what I’m looking at, statistically, a 34 looks closer to a 33 than a 35. Maybe I didn’t get enough sleep last night and I’m doing my math wrong.

@emorynavy Agreed w/ privatebanker. Roughly 1/4 of the top 1% of test takers were those who scored a 34.

Score | # Students | % of All Test Takers
36 | 2,760 | 0.136%
35 | 12,386 | 0.610%
34 | 20,499 | 1.010%
33 | 26,920 | 1.326%

"In the class of 2017, 2,030,038 students took the ACT."

It would be great to get back to the original point of the thread, which is the relative value of test scores vs the “soft” piece (GC/peer ratings) when computing the USNWR ratings.

I think it’s been established that the top test scores are very rare. So IMO they are still useful in computing the total rating and should carry more weight. Yes, they are imperfect, but so is every component used to calculate the ranking, especially the peer ratings (for the reasons I touched on in my prior post). For every component used, one could say it captures something but misses something else, or captures something imperfectly.

I agree with @SoCalDad22 who said “Then why do many colleges list ‘standardized tests’ (along with GPA, course rigor) as the most important factors for admission on their common data sets? IMO, there certainly is some correlation between scoring high on the ACT/SAT and being smart enough to fare well in college. If colleges put some emphasis on test scores, the rankings should also.”

@emorynavy

Thx. That’s helpful. I was referencing 2022 in my earlier posts looking at this year’s stats.

@waitingmomla Thx. And agreed.

Back to the regular programming.

“And you must have attended one of the best schools in the country if 34 is a common score. Only 20k out of 2.1 mm test takers around the globe scored that last year. And there’s 37k high schools in the USA alone.”

You have to include SAT test takers as well, right, or actually remove them from the 37K high schools? There are close to 2M SAT test takers, maybe a little less. So the ACT would be serving around 20K high schools, with the remaining 17K being served by the SAT. And at the competitive high schools in the Bay Area, at least the public ones, a 34 would put you at the 88th-90th percentile.

Being rare does not mean they should carry more weight. The important part is whether that rareness is highly predictive of whatever USNWR is trying to measure. For example, the most recent math SAT had relatively easy questions, so they needed to apply a harsh curve. A single careless error dropped the score to 770; three careless errors dropped it to 720.

It’s rare for students to answer all the short and simple multiple choice algebra/geometry questions without making any careless errors, while also going quickly enough to finish in the time limit, making 800 scores somewhat rare overall (though it’s common for ~20% of students who apply to highly selective colleges to score an 800). Are those students who score an 800 much better prepared than the ones who make a careless error or two and get a mid-700s score? Is that one or two question difference important to whatever USNWR is trying to measure with SAT scores, which they claim to use as a measure of the college’s selectivity?

If I were trying to measure a college’s selectivity, I would think the SAT scores of the applicant pool would be more relevant than those of the entering class. You could use SAT scores as a way of putting the acceptance rate in context, so a 10% acceptance rate among an applicant pool of mostly self-selecting high-scoring students would indicate greater selectivity than a 10% acceptance rate among an applicant pool of more varied students. Unfortunately, this information is not published. Nevertheless, if I were trying to measure selectivity, my primary criterion would not be test scores, which is what USNWR uses as its primary criterion to estimate selectivity.
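To make that idea concrete, here is a toy Python sketch with made-up numbers; the effective_selectivity formula is purely my own illustration of folding applicant-pool strength into the comparison, not anything USNWR publishes:

```python
# Toy sketch (made-up numbers) of the point above: the same acceptance rate can
# reflect different selectivity depending on how self-selecting the applicant pool is.
# The formula below is purely illustrative, not a published USNWR metric.

def effective_selectivity(accept_rate: float, high_scorer_share_of_pool: float) -> float:
    """Scale the raw acceptance rate by the strength of the applicant pool.
    Lower values = effectively more selective (like acceptance rate itself)."""
    return accept_rate / (1 + high_scorer_share_of_pool)

# School A: 10% admit rate, 80% of applicants are already high scorers (self-selecting pool)
# School B: 10% admit rate, only 20% of applicants are high scorers (broader pool)
print(effective_selectivity(0.10, 0.80))   # ~0.056 -> harder to get in, given the pool
print(effective_selectivity(0.10, 0.20))   # ~0.083
```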

@theloniusmonk Good point. It’s still over 2mm test takers. And perhaps international too.

If 5,000 schools out of the 20K you referenced have 4 each, that’s all of them. So it’s not common in the other 15K. And I doubt the distribution is that concentrated. I think it’s rarer than we believe. And I’m talking composite, not superscore.

@Data10 “A single careless error dropped the score to 770. Are those students who score an 800 much better prepared than the ones who make a careless error or two and get a mid-700s score?” – Agree completely. This very thing happened to my D when she took the math years ago. One wrong answer that was a careless error, and she got a 770. She was furious, lol. She scored a 35 on the ACT and submitted only that with her apps.

“Unfortunately, this information is not published” – If you mean mid-50 ranges for accepted students (rather than enrolled students), I think for some schools it is available. I know it’s not in the CDS, but some schools publish accepted-student mid-50 ranges on their websites or in press releases, if someone wanted to find it for a select group of schools.

I agree with most of what you said; there are definitely shortcomings to the test score piece. My point was just that the same could be said for other components as well, especially the GC/peer review, which I think is deeply flawed for something that is so big a piece of the pie. (Outlined well in this article https://www.washingtonpost.com/news/answer-sheet/wp/2017/09/12/the-problem-with-the-2018-u-s-news-rankings-junk-in-junk-out/?utm_term=.642eb7d52dfb ) I personally just don’t understand why test scores (while flawed) are only ~8% of the total, while GC/peer review (also clearly flawed) is 22%. But that is just my opinion, one of many I know. And clearly not shared by USNWR.

@emorynavy and @privatebanker : The differences between ECAS and Oxford stats are no longer relevant, nor would Oxford count towards Emory’s in the rankings. Even if they were, maybe only the admit rate would go up slightly, but Emory’s overall admit rate has always been higher than its peers’ (minus CMU, I think), so it wouldn’t make much of a difference.

@bernie thx. This was about rankings and using the ACT to move up or down the list. The ship has sailed. Emory wasn’t the object of the discussion, just an example. But good info anyway.

@privatebanker :

I had more of an opinion on that.
Also, some schools do NOT put standardized test scores in the top category. The suggestion that rankings should follow what the schools value in their admissions is problematic, because many of the schools prioritize standardized testing above all else BECAUSE of the rankings, which value them directly AND indirectly. Do you seriously think administrator and counselor ratings come from some nuanced knowledge about the actual school, its curriculum, and performance? I bet many just look at upward mobility in selectivity and other surface-level factors that are easier to see and then make a judgment. So some of the metrics, including the “reputational” component (how do others feel, basically the peer and other ratings, which count for a good chunk), interact with other metrics.

@Data10 : My problem with even valuing incoming statistics (especially scores) when trying to measure the selectivity of already very selective schools is that some schools trying to keep jumping in the rankings deliberately cherry-pick higher scores, say VU and WUSTL, but then the post-grad performance and collection of prestigious post-grad awards does not measure up to other schools putting in the same statistics. Those schools were bringing in Harvard and MIT level scores but are, for example, putting out Emory level outcomes. One would think that these places, with so many more top scorers, would be CONSISTENTLY and badly beating somewhere like Emory, or elites significantly below them score-wise, in Fulbrights, Rhodes, etc. since the rise in scores, and this appears not to be the case. I guess it is just hard to figure out what else is being selected for other than the stats. I seriously doubt Emory is regarded as better by these agencies in terms of reputation (if anything, someone from one of the top 10 schools, especially HYP, would actually be more likely to impress by virtue of doing well there, in addition to other potential biases that may have to do with such schools traditionally dominating in placement to those fellowships). It is hard to look past those incoming stats because they are different. But when they are/were dramatically different (apparently as of 2021 Emory and VU are less different on the new SAT than on the old), I have to wonder what all those additional top scorers at the others are doing with their “talents”. I tend to like to look at the front end versus the back end: as in, who is getting a decent return on the high-achieving pool they admit. I just wonder if it is possible to get diminishing returns on constantly increasing selectivity beyond a certain point via these metrics (say a mid-1300s on the old SAT, or maybe a 31-32ish ACT, and perhaps a 1400+ on the new SAT with the same ACT). It just looks like increasing them is not doing much for some schools other than wowing people who read their admissions page.

And honestly, it looks like scores specifically may eventually converge at the top schools (new SAT scores will converge faster than the ACT, I guess). Admit rates will simply reflect the popularity of the schools, which is not purely governed by a perception of “quality”. The marketing tactics of these schools are not necessarily only selling academics and other professional development resources. Some are straight up selling a social scene, sports, raw prestige, tradition, and other things that, done well or not, will influence application volume. I would rather the rankings be less responsive to changes in selectivity, especially among the already selective schools. Maybe set a threshold whereby you weight changes less, or weight it based upon brackets. A school making a lot of changes, such that over a decade or less it goes from a median of 1200 to 1400-something, is maybe more indicative of institutional change and development than a school already hovering near 1400 going to 1500+. Those schools already have it made, and other changes in the quality of resources and curricula at those schools are less likely to be governed by upward mobility in selectivity. It would simply be “a good look”. At that point, endowment size and allocation are driving much of the change. I feel like the top 25-35 or so bracket should just kind of be treated differently when assessed.

For example, Chicago should never have had to switch to its “new” admissions scheme to get to where it is in the rankings (and now those employing a similar scheme look as if they may hit a ceiling). It was already substantially stronger in most respects than programs with higher stats and rank back before, I guess, 2007 or 2008. They clearly played a game to ensure that the rankings recognized that. However, today, other selectives are playing the same game hoping it yields the same results, but I bluntly do not believe that most of those places are near the level of the top 10 schools they have surpassed or caught in stats, whether academically, reputationally, or resource-wise. A USNWR ranking overly responsive to the ACT or SAT would allow them to create such an illusion on the surface.

Most of the top schools themselves say that above a certain threshold, small differences in standardized test scores just don’t matter very much to them—certainly much less than most applicants seem to believe. They may list test scores as a “very important” admissions criterion in their CDS, but most list many other factors, academic and otherwise, as equally important.

There’s a good reason they don’t fixate on small differences in test scores. Studies have shown standardized test scores to be at best a weak predictor of academic success. HS GPA alone is a slightly better predictor. HS GPA controlled for the rigor of the HS curriculum is probably an even better predictor, though I haven’t seen data on that. GPA plus test scores is a better predictor than either factor individually. But once you’re above a certain threshold in GPA and test scores, pretty much everyone is capable of doing the work. Fully one-quarter of Harvard’s entering class has an ACT score below 32 or an SAT score below 730 on at least one section of the test. Not because that’s the best Harvard could do if it really mattered to them; it’s because it doesn’t really matter to them. They admit students who they think will succeed and will add something to the life of the school while they’re there, and almost without exception their students do succeed, even if some have slightly lower ACT or SAT scores.

The schools that place the greatest weight on test scores generally fall into two groups. Schools that don’t use holistic admissions rely on test scores and GPA because they’re simple numerical tools that let them quickly and cheaply identify students who are likelier to succeed. Generally in that territory are schools that aren’t getting the highest-quality applicant pool, or are processing such large numbers of applications that they can’t afford to place great weight on other factors. The other group consists of schools that are trying to game their US News rankings. While it’s true that test scores count for less in the overall ranking than some other factors, it’s actually much easier to move the needle in the overall ranking by improving test scores than by changing GC and peer ratings, which is almost impossible. How? Change your admissions policy to emphasize test scores, especially those that will marginally improve your medians. Go test optional, so generally only those applicants with high scores will submit them. Superscore. Use merit scholarships aggressively to tweak your test score medians. Offer deferred admission to students with weaker test scores, enrolling them in the spring semester where their test scores won’t count in the reported medians, which reflect fall enrollment only. Enroll a smaller freshman class and make up for it by enrolling a larger number of transfer students, since their test scores aren’t counted, either. You can even tell some of the first-year applicants you deny that if they do reasonably well at another school in their first year, they have a good chance of being admitted as a transfer for their sophomore year. Do any of these things make the school better? Not really, but they could boost your reported test score medians, and in combination with other factors they might also boost your US News ranking.
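A toy Python illustration of the test-optional tactic above, with made-up scores: the enrolled class is identical in both cases, but the reported median rises when only high scorers submit.

```python
import statistics

# Hypothetical enrolled-class ACT scores (made-up numbers, purely illustrative)
enrolled = [28, 29, 30, 30, 31, 31, 32, 32, 33, 33, 34, 34, 35, 35, 36]

# If every enrolled student's score were reported
print(statistics.median(enrolled))       # 32

# Under test-optional, suppose only students scoring 32+ choose to submit
submitted = [s for s in enrolled if s >= 32]
print(statistics.median(submitted))      # 34 -- same class, higher reported median
```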

@bclintonk : Bingo! And again, people are acting as if some of the other metrics do not interact with the “selectivity” metrics. They act as if they are all just independent, which is weird. You’ve got to be kidding if you don’t think peer and counselor ratings budge when the raters notice an already highly selective school becoming more “selective”. Again, it is a shortcut around having to actually think about and investigate institutional health and the other things the schools are doing, which may be laborious and is too much brain power for a simple contribution to a rankings agency. The most superficial of optics can drive a lot of things with regard to these rankings, especially when one is trying desperately to use numbers to split hairs among very selective schools.

As for high application volume: I feel like that is a big issue for schools “on the come up” trying to rush to a much higher rank. They may not have the most robust UG admissions staff to deal with heavy volume, so being stats-based not only makes them look great and gets them far, but is indeed easier, as you say. To me, the top 25 or so schools really don’t have much of an excuse, because exorbitant amounts go into forming robust admissions, marketing, and communications teams. When I see an elite private suddenly start emphasizing stats so much that the scores are high for the relative caliber of the UG programs, I suspect that it is a choice and not a necessary evil (unless a further increase in an already high rank is necessary).

This is all reminiscent of a thread I started a while ago questioning the benefits of rankings. There were some convincing arguments made that rankings provide a good place to start looking, especially for people who are looking from out of the country or even from different parts of the country. I understand this, but I still think they can do a great deal of harm.

The problem, as others have mentioned, is that many of the metrics are arbitrary or unreliable. Some of them are probably self-reinforcing: the GC perceptions are influenced by the rankings and vice versa in a never-ending feedback loop. The weightings given to any particular section are also purely arbitrary. Who decides that GC perceptions are given 25% or 23% or 10%? Minor changes in the weightings could lead to major differences in the rankings.

I also dislike the entire concept of conflating all of these different metrics into a single number and using that number to decide what school is “best.” When my daughter fell in love with a couple of lower-ranked schools, I was worried. Being the obsessive person I am, I forked over the extra money to be able to dig further into the rankings. It turns out that her schools were ranked very highly in the elements she actually cared about and lower in things that were irrelevant to her. For example, the rankings assume that smaller class size is better. But my daughter, and a few others I know, thrive in a larger setting. What is better for others is not better for her. There are things that are not even part of the rankings that we care very much about, for example, tangible outcomes like acceptances to graduate programs and employment.

All of this information is available for people like me and my kids who are willing to look outside the rankings. However, for many others, all they can see is that number, and they are paralyzed by it. How many kids come here desperate to get into a “top 20” school or mortified to think about going to a school outside of the “top 50”? That arbitrary cutoff causes a lot of missed opportunity and a lot of emotional turmoil.

My suggestion is for each family to make their own set of rankings. If high test scores are important to you, then by all means rank by that. But don’t be afraid to use many other metrics, both published and not.

Yes, 22.5% of the USNWR ranking criteria is based on graduation and retention rates, which are mostly associated with student selectivity. So generally more selective schools will get a boost here. Note that graduation rate performance (i.e. graduation rate in relation to student characteristics) is a separate factor but only 7.5% of the total.

However, the USNWR student selectivity criteria, also 22.5%, is mostly based on SAT/ACT scores. So there is certainly some incentive for a ranking climber school to emphasize SAT/ACT scores over other admission criteria, even if the indirect effect on graduation and retention rates is less than if it managed to increase student selectivity in a more “balanced” way. But since there is still some correlation between higher SAT/ACT scores and graduation and retention rates, it is still advantageous in both ways for a ranking climber school to emphasize SAT/ACT scores in admissions, rather than attempt to increase student selectivity in a more “balanced” way.

USNWR ranking criteria: https://www.usnews.com/education/best-colleges/articles/ranking-criteria-and-weights

@ucbalumnus

It always puzzled me that this metric is not weighted considerably higher than overall graduation rate. It seems to me that schools should be more highly rewarded for doing significantly better than expected for the population they serve. Again, this is why I find the rankings themselves less than useful.

I just think asking educators, who are many times basing their view of “prestige” on hard-wired perceptions from 20 or 30 years ago, is useless.

And it does create the vacuum-sealed feedback loop mentioned in a previous post.

There will never be any movement in the top 15 schools with this weighted at 22.5 percent. I think it’s ludicrous and next to meaningless actually.

Most GCs at public schools aren’t even involved in the college process.

If you want to capture colleges’ treatment effects (i.e. how attending the college affects student success) rather than selection effects (i.e. how student success results from how the students are selected), a greater emphasis on graduation rate performance would be warranted.

However, if the rankings are designed to reflect “conventional wisdom” about college prestige, then actual treatment effects (as opposed to what people believe to be treatment effects but are actually mostly selection effects, like raw graduation and retention rates) may not be a priority when determining the ranking criteria.
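For what it’s worth, here is a minimal sketch of the general idea behind “graduation rate performance” (actual minus predicted graduation rate). The predictor below is a made-up linear stand-in with invented coefficients, not USNWR’s actual model:

```python
# Sketch of "graduation rate performance": actual graduation rate minus a rate
# predicted from the students a school enrolls. Coefficients are invented for
# illustration; USNWR's real model and inputs differ.
def predicted_grad_rate(median_sat: int, pell_share: float) -> float:
    return 0.35 + 0.0011 * (median_sat - 1000) - 0.20 * pell_share

def grad_rate_performance(actual: float, median_sat: int, pell_share: float) -> float:
    """Positive = school graduates more students than its inputs would predict."""
    return actual - predicted_grad_rate(median_sat, pell_share)

# Highly selective school: 95% actual vs. ~88% predicted
print(grad_rate_performance(0.95, median_sat=1500, pell_share=0.10))   # ~ +0.07
# Less selective school serving more low-income students: 80% actual vs. ~43% predicted
print(grad_rate_performance(0.80, median_sat=1150, pell_share=0.45))   # ~ +0.38
```

The second school does far better than its inputs would predict, which is exactly the kind of overperformance the 7.5% factor is meant to reward.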

@bernie12
“You’ve got to be kidding if you don’t think peer and counselor ratings budge when the raters notice an already highly selective school becoming more “selective”. Again, it is a shortcut around having to actually think about and investigate institutional health and the other things the schools are doing, which may be laborious and is too much brain power for a simple contribution to a rankings agency.”

Exactly. That’s the whole point re why the GC/peer ratings are also flawed and, IMO, should not be so large a share of the total (22.5%). The deans, presidents, etc. themselves admit they’re not the ones filling out the surveys because they don’t feel sufficiently knowledgeable, and they believe whoever is providing the info to USNWR is probably also not sufficiently informed to do so. As you said, it would be laborious to really offer an informed opinion on the programs of other schools. As for the GCs, they are more involved with “best fit” for the students they support, not “best schools”, and so very few of them actually submit the survey (I think it was 7%?).

@ucbalumnus
“However, the USNWR student selectivity criteria, also 22.5%, is mostly based on SAT/ACT scores”

I believe it’s 12.5%, not 22.5%. Test scores are 65% of the 12.5%, so 8.125% of the total score. A much smaller piece of the pie than the GC/peer component at 22.5%.
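Just to spell out that arithmetic in a quick sketch, using the weights as cited in this thread:

```python
# Weights as cited in this thread for the methodology being discussed
selectivity_weight = 0.125        # student selectivity
test_score_share   = 0.65         # portion of selectivity based on SAT/ACT
peer_gc_weight     = 0.225        # GC/peer assessment

test_scores_in_total = selectivity_weight * test_score_share
print(f"Test scores: {test_scores_in_total:.4%} of the total")      # 8.1250%
print(f"GC/peer:     {peer_gc_weight:.4%} of the total")            # 22.5000%
print(f"Ratio:       {peer_gc_weight / test_scores_in_total:.1f}x") # ~2.8x
```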

I find it interesting that most of the posts seem to focus on the issues with using test scores in the rankings, and not as much on what I personally feel are obvious flaws in the GC/peer component, which to me is even more problematic because of its large weight. That was really the point of the OP: why is so much weight given to one vs. the other? I agree that test scores are an imperfect measure of student ability, and GPA/rigor is probably a better measure (if only those things could be standardized for the purposes of a ranking service). But, just curious, are most people not troubled by the GC/peer component?

@waitingmomla Right on! =D>