@MojiMojo So what is your point? Yes if you use different methodologies, you will get different rankings. It doesn’t mean that one ranking system is better than another.
The reason that rankings aren’t typically done on test scores is that most people agree they aren’t the best criterion for ranking. Why do you think more and more schools are going test optional? It is because they are finding that test scores are not the best indicator of student success.
@ucbalumnus "Ranking only by SAT/ACT scores carries with it two assumptions:
That quality of a school is only based on the selectivity of admissions or quality of students.
That selectivity of admissions or quality of students is only based on SAT/ACT scores."
A third assumption would be that SAT/ACT scores are true indicators of the quality of students (intelligence, integrity, and being interesting), which they are not. High SAT/ACT scores can be achieved by repeatedly practicing past SAT/ACT exam questions.
I suspect they are determined experimentally using more-or-less common data modeling techniques.
IOW there is some method behind the apparent madness.
My guess is it involves trying many combinations of factors and weights that put ~HYPSM near the top without generating too many other ranks that look wildly implausible (say, from the perspective of the PA scores, which were the sole basis for the earliest rankings). If this sounds heavily biased, it is, but it may not be too different from the modeling techniques used to build translation software or to predict consumer purchasing decisions. I doubt it’s based heavily on normative judgements about the value of SAT tests etc. However, the modeling process may need to start from some shared “ground truth” understanding of what a grammatical sentence, a desirable purchase, or a “good” college looks like.
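To give a flavor of the kind of search I have in mind (and to be clear, this is purely my speculation about the process, not anything USNWR has published; the factor names, weights, and toy scores below are all made up), a crude sketch might look like this:

```python
from itertools import product

# All values below are hypothetical; USNWR's real factors and data are not public in this form.
schools = {
    "School A": {"grad_rate": 97, "test_scores": 99, "peer_assessment": 95},
    "School B": {"grad_rate": 96, "test_scores": 94, "peer_assessment": 99},
    "School C": {"grad_rate": 90, "test_scores": 97, "peer_assessment": 85},
    "School D": {"grad_rate": 93, "test_scores": 88, "peer_assessment": 92},
}

# "Ground truth" ordering the modeler wants to roughly reproduce,
# e.g. an ordering based on the old PA-only rankings (also hypothetical here).
reference_order = ["School B", "School A", "School D", "School C"]

factors = ["grad_rate", "test_scores", "peer_assessment"]

def rank(weights):
    """Order schools by a weighted sum of their factor scores, best first."""
    def score(s):
        return sum(w * schools[s][f] for f, w in zip(factors, weights))
    return sorted(schools, key=score, reverse=True)

def mismatches(order):
    """Count how many schools land in a different slot than the reference order."""
    return sum(a != b for a, b in zip(order, reference_order))

# Brute-force search over candidate weight combinations that sum to 1.
best = None
for weights in product([0.1, 0.2, 0.3, 0.4, 0.5], repeat=len(factors)):
    if abs(sum(weights) - 1.0) > 1e-9:
        continue
    d = mismatches(rank(weights))
    if best is None or d < best[0]:
        best = (d, weights)

print("best weights:", best[1], "slots off from reference:", best[0])
```

Obviously the real formula has far more inputs and constraints; the point is just that if you tune weights until a preconceived list comes out on top, it's no surprise that list keeps coming out on top.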
If lists started giving more weight to stats, most so-called elite colleges would rank much lower and wouldn’t be able to pull holistic cover to hide their dubious admission practices.
Another issue is the test-taking practices of different students. My kid aced the SAT in a single attempt after using workbooks and online practice tests; his friend took the SAT 3 times with a max of 1500, had been attending a test coaching center since 7th grade for CTY, and finally switched to the ACT and scored 35 after 2 attempts. I wonder how these lists would compare students like them? Anyone would guess that my kid is poor with fewer opportunities and the other kid affluent, but that’s not the case; both are from very similar income families. Testing was a struggle for the one using more resources and not even an issue for the other. In the end, colleges wouldn’t care how they got to their scores.
Places with higher SAT scores obviously don’t use holistic cover and have conventional merit-based admissions.
I am surprised no one has mentioned an obvious reason why USNWR would not weight enrolled SAT/ACT scores higher: That might very well influence colleges to give test scores more prominence than they already have, both in terms of admissions and in terms of awarding incoming students financial aid. It would be hard to find anyone knowledgeable who believes there should be more reliance on standardized testing. It’s just not that meaningful.
Also, while 9 places (for Caltech) or 22 places (for UVa) may seem like a lot, I think the quality rankings are so compressed that there is really very little difference. Caltech, given its limited focus and appeal, is never going to be #1, but no one who matters at all thinks, “Oh, Caltech is only ranked #10, it must be a crummy school.” Caltech is a fabulous school, and its #10 ranking is consistent with that.
I think the ACT and SAT should be used to help kids prepare for college… Period…
If a kid is great in math but his English subscore is poor, why not have the schools give him help to prepare for college… not to prepare for a test to get into college?
@waitingmomla,
Another issue is that only 7% of the 2,200 public school guidance counselors to whom USNWR sends the survey filled it out last year. Plus, they split that 2,200 in half, with 1,100 rating national universities and 1,100 rating LACs. That’s a tiny number to have so much power. They throw out the two highest and two lowest ratings in an effort to prevent voting for one’s favorite, and they use a 3-year total to prevent too much bounce up and down. We can assume many of the same counselors are filling out the form every year and the same ones are chucking it into the trash, so the 3-year average bit may not mean much.
Counselors who don’t know a school are asked to mark “don’t know”, so there is no rating for the school from them.
Doing the math*, (1,100 x .07 - 4) x 3 = 219 total counselor ratings over three years, assuming every counselor rates every school! Plus many of them are duplicates. I would think there would be a lot of “don’t knows” for LACs among counselors at schools where the vast majority of kids apply to state schools. How much does a GC from Kansas know about Hobart and William Smith or St. Anselm’s, and does a GC from a school in Massachusetts really know the differences between the University of Kentucky and the University of Tennessee beyond what they read in places like US News?
*Excuse my lack of proper notation of order of operations; that’s one thing I definitely have not retained from my time in school!
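For what it’s worth, here is that back-of-the-envelope estimate written out with the order of operations made explicit (same assumptions as above, nothing new added):

```python
surveys_sent = 1100        # counselors asked to rate national universities
response_rate = 0.07       # roughly 7% actually fill out the survey
dropped = 4                # two highest and two lowest ratings are thrown out
years = 3                  # ratings are pooled over a 3-year window

ratings_per_year = surveys_sent * response_rate - dropped   # 77 - 4 = 73
total_ratings = ratings_per_year * years                    # 73 * 3 = 219
print(total_ratings)
```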
@Sue22 Thanks for that information, very interesting. I agree, the low GC response rate is indeed a serious flaw, as is the notion that most GCs are very familiar with more than a certain bandwidth of schools. Equally problematic, to me, is the belief that the peer rating numbers aren’t also deeply flawed. I’m not even talking about any biases present, especially among peer schools. What bothers me more – if what was said in the Post article is true – is that these deans and other high-ranking members of academia admit they are not even the ones filling out the surveys (because they don’t feel knowledgeable enough to do so)! And that they doubt that whoever IS filling them out has a sufficient understanding of other schools either. So, while I understand the arguments of those who claim the test score component has its flaws, I feel the GC/peer piece is also seriously flawed. IMHO this piece should not account for such a large share of the total.
A comparison of how the rankings would change if solely based on HS counselor “distinguished”/“marginal” survey or test scores is below. Test Scores are based on 2016-17 IPEDS reporting, which occasionally has errors. Caltech has a particularly large disparity – by far the highest test scores, but barely making top 20 among HS counselors indicating Caltech is “distinguished.” Cornell has a particularly large disparity in the other direction – one small step below HYSM among HS counselor ratings, but 27th by test scores.
The problem is that the ACT/SAT are just too easy for top high school performers (who are what those top schools are going after). Even though the average ACT score has stayed the same over the years, the number of top scores has grown substantially every year. When there are nearly 3,000 ACT 36s, 13,000 ACT 35s, and 23,000 ACT 34s each year, people lose respect for high ACT scores (SAT as well). If you put the ACT on a 1,000-point scale with only a few top scorers over 900 each year, then you could not ignore the difference between a Caltech average of 850 and a Princeton average of 650. And then Princeton would likely have to raise its average score, and USNWR would respond accordingly by giving it more weight. It used to take Princeton a committee to reject an SAT 1600 when a 1600 was a rare feat.
@jzducol I think in percentage terms the number of students achieving those high ACT scores is still pretty small. The table below is from a Jan 2018 article (but I can’t link to it because it’s from another blog). As you’ll see, the percentage of students scoring in the 33-36 range is only 3.082%, and less than 1% score the coveted 35-36.
“How Many Test Takers Get Top 1% ACT Scores?”
Score | # of Students | % of All Test Takers
36    | 2,760         | 0.136%
35    | 12,386        | 0.610%
34    | 20,499        | 1.010%
33    | 26,920        | 1.326%
“In the class of 2017, 2,030,038 students took the ACT. The average composite score was 21.0 out of 36 (for more on how the ACT is scored, read our article). This means that a score of 22+ puts you above average.”
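A quick sanity check using the article’s own figures (nothing here beyond the numbers already quoted above):

```python
total_test_takers = 2_030_038   # ACT class of 2017, per the quoted article

top_scores = {36: 2_760, 35: 12_386, 34: 20_499, 33: 26_920}

for score, count in top_scores.items():
    print(score, f"{count / total_test_takers:.3%}")
# prints 0.136%, 0.610%, 1.010%, 1.326% -- matching the table

share_33_to_36 = sum(top_scores.values()) / total_test_takers
print(f"33-36 combined: {share_33_to_36:.3%}")   # about 3.082%
```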
The top colleges for mid-50 ACT scores (using Data10’s 2016-2017 IPEDS source) are then no surprise, since they are all very selective:
I think the USNWR rankings are pretty good as a research/shopping guide. Their formula isn’t magic though. It measures what it measures and USNWR is very transparent about their formula.
The single biggest USNWR component is retention and graduation rates. Which makes sense if you are trying to measure the “outputs” of a school rather than just the inputs (lots of smart kids enrolled). Although it turns out in practice that very HQ inputs (high stat kids) invariably produce HQ outputs (college graduates in 4-6 years). HYPS can be doing a horrible job with their students, but those smart, driven students will still graduate in high numbers regardless.
If you want to compare just SAT/ACT scores, USNWR presents that data for you to peruse. They also publish a “selectivity” ranking (combination of test scores, HS grades and admit rate) which you can peruse as well.
On that selectivity measure, YPS only rank #6. H is #3, Caltech and UChi are tied for #1. Vandy, ND and WUSTL (who all are known to dig high test scores) all move up several slots as compared to their overall ranking.
“My guess is it involves trying many combination of factors and weights that put ~HYPSM near the top without generating too many other ranks that look wildly implausible.”
According to USNWR, that is exactly what they did.
They kept tweaking their formula until Yale finally came out near the top. That’s not a bad sanity/quality check for a college ranking formula. But it means that the formula literally measures “Yale-ness.” So – surprise! – Yale does very well each year. TBD whether Yale-ness is a good fit for your kid or not.
It looks like the purpose of USNWR rankings is to match up to “conventional wisdom” about college rankings, but allow readers to “fill in the blanks” in terms of schools that they do not know about. Of course, it could very well be that it is now a part of a feedback loop that helps determine what the “conventional wisdom” about college rankings is.
Whether “conventional wisdom” about college rankings is actually an appropriate ranking for an individual student is another story. For many, it may not be, but for the prestige-seekers who regularly pop up on these forums, it may be a close match for their preferences.
Pure rankings and adjustments by ACT score, like the Northeastern +18 mentioned in a previous post, are very misleading. To do this correctly they should include all scores.
Northeastern is famous for adding students and keeping them out of its reported scores and admit rates. NU.in and spring admits don’t go into the numbers, and that inflates them.
There are too many variables. Also, both the SAT and ACT need to be represented in the ranking adjustments. Does a school have elite athletic programs, or none to speak of like NEU? Those also factor into the scores.
What year is the data from, and is it a midpoint or an average? The average can be misleading. And lastly, is there really a difference between a 34 and a 35 ACT? Are the kids so much brighter that it makes one a better school? Hogwash. A 1430 vs. a 1470 can be one or two questions.
What did the kid do in school over 4 years, not just on one day?
Who has better AP scores or rigor?
Who is disciplined? One kid may be bright but not quite as hard a worker, and not quite the same student as another.
What about character and being interesting? Leadership and other EQ variables? This nitpicking is mind-boggling.
Just some food for thought: here’s the list of the schools attended by the CEOs of the ten largest US companies from the 2016 Fortune 500. Interesting names, and they seem to have worked out just fine. Rankings are useful but don’t matter much. It’s up to the individual kid, and is it really worth all of this angst and unneeded competitiveness?
Doug McMillon (Wal-Mart Stores) - University of Arkansas (BS), University of Tulsa (MBA)
Rex Tillerson (Exxon Mobil) - University of Texas at Austin (BS)
John S. Watson (Chevron) - University of California, Davis (BA),
Warren E. Buffett (Berkshire Hathaway) - University of Nebraska (BS),
Tim Cook (Apple) - Auburn University (BS)
Greg C. Garland (Phillips 66) - Texas A&M University (BS)
Mary Barra (General Motors) - General Motors Institute/Kettering University (BS),
Mark Fields (Ford Motor) - Rutgers University (BA),
Jeff Immelt (General Electric) - Dartmouth College
Joe Gorder (Valero Energy) - University of Missouri-St. Louis (BA), Our Lady of the Lake University (MBA)
@jzducol
The population is larger now, so of course more students overall will score a top score; however, the percentile doesn’t change. The top 1 percent still only score around a 33/34.
Re post #1 and superscoring of test results: I strongly dislike the concept of picking and choosing among several test dates. Those who can do well in all areas on a given day are better than those who concentrate on one area one day and another the next time. Bravo to schools that do not accept superscoring (although I fear they are yielding to the practice).