22.5% of US News’ weighting is based on perception, which is a “soft” number. SAT/ACT scores, by contrast, are “hard” numbers that cannot be manipulated as easily through marketing and sheer repetition.
US News’ weighting rewards schools like BU, UCLA, UNC, Emory, NYU, UVA (where the difference is > +20).
At the same time, it penalizes schools like Caltech, Vanderbilt, Notre Dame, and Northeastern (where the difference is < -5).
For example, Northeastern is ranked 40 by US News but the average SAT/ACT rank is 22, a difference of [-18]. Caltech has a [-9] difference, despite the unquestionable caliber of its students.
I’m not even sure what you mean about “ACT Score Ranking.” What does the [-6] mean for Notre Dame?
If anything, I’d say the UC schools like UC Berkeley and UCLA are actually penalizing themselves on the testing data, as the UC system does not superscore the SAT or ACT.
This sounds like an interesting topic, if I could figure out what you’re getting at.
They weight the perceptions of GCs and people in academia more heavily than SAT/ACT scores because they think those opinions reveal more about a school’s academic quality than the scores do.
There are limits to how much a school can move any of the factors, but there are ways to raise average scores that aren’t necessarily tied to actually improving the academic quality of a school.
@GeronimoAlpaca It appears the order is by average ACT score, the next number is the USNWR rank, and the last is the difference between the two. So big negative numbers are schools being hurt by the fact that USNWR does not weight ACT/SAT scores that heavily in its rankings, while big positive numbers are schools that are helped by the USNWR criteria.
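To make that arithmetic concrete, here is a minimal Python sketch, assuming the convention used in this thread (difference = test-score rank minus USNWR rank); the school names and numbers below are placeholders for illustration, not actual data.

```python
# Minimal sketch of the rank-difference computation described above.
# Convention assumed: difference = (rank by average SAT/ACT) - (USNWR rank).
# Negative = the school scores better on tests than its USNWR rank suggests
# ("hurt" by the other criteria); positive = "helped."
# School names and numbers are placeholders, not actual data.

schools = {
    # name: (rank_by_test_scores, usnwr_rank)
    "School A": (22, 40),   # strong scores, lower USNWR rank
    "School B": (35, 15),   # weaker scores, higher USNWR rank
}

for name, (test_rank, usnwr_rank) in schools.items():
    difference = test_rank - usnwr_rank
    label = "hurt" if difference < 0 else "helped"
    print(f"{name}: test rank {test_rank}, USNWR rank {usnwr_rank}, "
          f"difference {difference:+d} ({label} by the USNWR criteria)")
```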
You should ask them; send them an email. Again, the weighting of the criteria and the criteria themselves are all subjective. However, I will say that the ranking would be deemed baseless if Vandy were ranked 5 and Stanford were ranked 14 instead of the other way around.
According to universityofcalifornia.edu, UCLA has slightly higher SAT scores than both Berkeley and UCSD for the admitted class of fall 2017. I don’t know where you got your data from.
The data is from cappex (I am unable to post the link here, but google “cappex highest act sat scores”). These are the scores reported by universities to the U.S. Department of Education.
I am not suggesting that rankings should literally follow the order of SAT/ACT scores, but a difference of +20 or more should at least raise questions regarding the validity of the weightings.
Rankings are based on the criteria that the ranking entity thinks are important. If you think that only SAT/ACT scores are important in ranking colleges, then you are free to publish your own ranking using only SAT/ACT scores and promote it in the marketplace of rankings.
Because there’s no correlation between test scores and a successful life, the top colleges implemented holistic admissions policies as a better method of gauging admitted students’ probability of post-graduation success. Test scores are over-rated and over-esteemed. As long as an applicant’s test scores demonstrate that the applicant can handle the rigor of the academic work, the adcoms turn to the applicant’s qualitative evidence of the probability of future success.

When my son’s test scores passed what I’d call the “threshold” for top colleges (1500 for SAT I, 700 for SAT II, and 33 for ACT), that was it for test taking. As for GPA, the “threshold” was 3.9 unweighted, 4.6 weighted, and top 5-10% of the class. When my son first expressed his academic ambition to target the valedictorian honor, I told him to forget it. The top colleges know that there’s no meaningful correlation between being a valedictorian and future success. For this reason, very little percentage weight is given to the applicant’s valedictorian status, just as with test scores. It’s meaningless to rank schools by test scores given the holistic admissions policies in effect.
The references above relate to students entering the class, not students who are admitted but enroll elsewhere. The numbers can be notably different, particularly at colleges where most admitted students enroll elsewhere, like UCLA and Berkeley. For example, UCLA’s student profile at http://www.admission.ucla.edu/Prospect/Adm_fr/Frosh_Prof17.htm indicates admitted students had ACT scores of 30/34, while enrolled students had scores of 27/33.
You’ll also find very different scores among specific programs at both UCs. The difference between majors is even more extreme at certain other California publics. For example, an out-of-county San Jose State applicant with a similar 3.95 GPA would require a near-perfect SAT score of 1570+ to meet the SAT eligibility index (EI) cutoff for a CS major, but would meet the SAT EI requirements for roughly half of the other majors with the minimum SAT score of 400.
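For anyone unfamiliar with how an eligibility index works, here is a minimal sketch, assuming the commonly cited CSU formula of EI = (GPA × 800) + SAT total; the per-major cutoffs below are hypothetical placeholders chosen only to mirror the 3.95/1570 example above, not SJSU’s actual published thresholds.

```python
# Minimal sketch of a CSU-style SAT eligibility index (EI) check.
# Assumes the commonly cited formula: EI = (GPA * 800) + SAT total.
# The per-major cutoffs are HYPOTHETICAL placeholders, not SJSU's
# actual published thresholds.

def eligibility_index(gpa: float, sat_total: int) -> float:
    return gpa * 800 + sat_total

# Hypothetical impaction cutoffs for out-of-area applicants
major_cutoffs = {
    "Computer Science": 4730,   # placeholder: implies ~1570 SAT at a 3.95 GPA
    "Less impacted major": 3560,  # placeholder: implies ~400 SAT at a 3.95 GPA
}

gpa, sat = 3.95, 1570
ei = eligibility_index(gpa, sat)
for major, cutoff in major_cutoffs.items():
    verdict = "meets" if ei >= cutoff else "does not meet"
    print(f"{major}: EI = {ei:.0f}, cutoff = {cutoff}, {verdict} the cutoff")
```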
The USNWR weightings are completely arbitrary. The USNWR engineering (and various other) rankings use only one criterion: the percentage of academics indicating the engineering college is “distinguished.” No other factors influence the ranking order. This strikes me as odd, but USNWR is entitled to use whatever weightings it chooses. However, I expect you were referring to the national rankings, for which USNWR weights the portion of academics/counselors indicating the college is “distinguished” at 22.5% and SAT scores at 8.1%. Again this strikes me as odd, but it is not inherently incorrect, nor is it inherently correct. The weightings are simply arbitrary and not necessarily the weightings that best reflect the criteria most important to you or anyone else. If a +20 point difference in SAT scores is really important to you, then apply to colleges that have the higher average SAT scores (as well as a safety).
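To illustrate why the choice of weights, rather than the underlying data, drives the order, here is a minimal sketch using only the two weights cited above (22.5% peer assessment, 8.1% SAT) out of the many USNWR factors; the school names and factor scores are made up for illustration.

```python
# Minimal sketch: how arbitrary weights change a composite ranking.
# Only two of the many USNWR factors are modeled; 0.225 and 0.081 are the
# peer-assessment and SAT weights cited above. School names and factor
# scores (0-100 scale) are made up for illustration.

schools = {
    "College X": {"peer_assessment": 95, "sat": 70},
    "College Y": {"peer_assessment": 75, "sat": 98},
}

def composite(factors, weights):
    return sum(weights[k] * factors[k] for k in weights)

usnwr_like = {"peer_assessment": 0.225, "sat": 0.081}
scores_only = {"peer_assessment": 0.0, "sat": 1.0}

for label, weights in [("USNWR-like weights", usnwr_like),
                       ("SAT-only weights", scores_only)]:
    ranked = sorted(schools, key=lambda s: composite(schools[s], weights),
                    reverse=True)
    print(f"{label}: {ranked}")   # the order flips between the two schemes
```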
Be careful if you use these ranking sites for anything meaningful. In the undergraduate engineering ranking, it’s like almost everyone in the top 30 programs is tied with each other. It just gets silly. I use it to see trends over a 5-year period or so… Realistically, they very seldom change much… It’s good for bragging rights, I guess: “my son goes to the engineering school tied for 6th in the United States.”
Scores only tell you about input, not output. Obviously having smarter students will allow profs to teach to a higher level but it does nothing to improve the quality of the teaching or the services available to students.
The peer and GC rankings are useful because they take into account other factors that aren’t quantified anywhere else, except perhaps to some extent in the $/student section. Faculty engagement with students, disciplinary factors, career counseling, availability of on-campus research, support for learning differences, and many other areas don’t have discrete rankings but are important for student growth.
For instance, the GC of one of my kids’ schools warned us away from a school because he’d seen a lot of kids develop drug and alcohol problems while there. The school tended not to kick kids out, meaning these issues weren’t reflected in graduation rates to the same extent they might be at another school. He would have given the school a lower rating than a peer school.
On the other hand, one of my relatives credits the fact that her professor brought her to a conference and encouraged her to present a paper for her ability to get a job in a very tight field. This is the kind of thing that doesn’t get formally ranked but which faculty at other institutions notice. Likewise faculty and school leadership are aware of strong support for faculty in non-financial ways and the scuttlebutt on what’s going on with the administration at peer institutions.
FWIW, this is an interesting analysis by WashPo of the different parts of the US News rankings. It criticizes both the test-score component and the “soft” component, i.e., the peer/GC opinion survey.
The critique of the test scores is more or less what you would expect – that some good students don’t test well, test scores don’t always tell the whole story, etc.
But I think the piece sheds an interesting light on the peer review component. It seems that even some of the very people providing that information to US News question how much it is worth:
“Some presidents, provosts and admissions deans have told me over the years that they don’t fill out the forms themselves because they don’t really have a deep understanding of other schools’ programs. And they doubt that many of those who do complete the survey possess a deep understanding. How many college leaders have time to investigate and then rank their competitors fairly? Counselors know about a lot of schools because they help students decide where to apply, but their jobs are to find the best student-college fit, not figure out which school is better than the other. Besides, the 2018 rankings include data on more than 1,800 colleges and universities, including nearly 1,400 that were ranked. Counselors generally have a group of schools with which they are familiar and can’t be expected to be able to rank the quality of a lot of schools. How valuable, then, is this important factor?”