Only if they smarten up and join common app.
Of course, one could argue that their individual app with 4 required essays actually limits the number of applications. They could still require the essays - but they would widen their reach if they were on Common App.
Not a free link but here it is:
In my opinion, that is still useless. These factors are not on remotely comparable scales, so no weighted addition of them will produce a meaningful number or ranking.
I encouraged my kid to rank colleges by thinking about head-to-head matchups based on where they would like to go, not by trying to engineer a numeric rating.
I agree with that too - that’s how mine ended up at the #16-ranked of their 17 acceptances.
In fact, when you start with “OK, let’s cull the list” and they say “it’s so hard,” etc., that’s exactly what we do: OK, if it’s UF versus U Tampa, who wins?
And you keep doing that and it really reduces the count.
Doing head-to-head could lead to A>B, B>C, and C>A which may be as confusing as ever to a student. Indeed this happens a lot when ranking college football teams and is a source of never-ending arguments among pundits/fans. For better or worse, a linear ranking of colleges is more acceptable to the general public, although perhaps multiple such rankings — private, public, and LACs — may be the way to go as some posters here suggested.
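To make the cycle problem concrete, here is a minimal Python sketch of the head-to-head culling described above. The school names, the preference table, and the cycle in it are all hypothetical, purely for illustration:

```python
# A minimal sketch of head-to-head culling, assuming a fixed pairwise
# preference table. Names and preferences are hypothetical.

def cull(colleges, prefer):
    """Repeatedly compare the first two schools and drop the loser."""
    pool = list(colleges)
    while len(pool) > 1:
        a, b = pool[0], pool[1]
        pool.remove(b if prefer(a, b) else a)
    return pool[0]

# Hypothetical preferences containing the cycle A > B, B > C, C > A.
PREFS = {("A", "B"): True, ("B", "C"): True, ("A", "C"): False}

def prefer(x, y):
    """True if x beats y head-to-head."""
    return PREFS[(x, y)] if (x, y) in PREFS else not PREFS[(y, x)]

# With a cycle, the survivor depends on the order of the matchups:
print(cull(["A", "B", "C"], prefer))  # C
print(cull(["B", "C", "A"], prefer))  # A
print(cull(["C", "A", "B"], prefer))  # B
```

With a cycle present, the survivor depends entirely on the order of the matchups - the college-football-poll problem in miniature. In practice, as noted above, a rough sense rather than a full ranking is usually all that’s needed.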
Depends on the college. For schools, such as LACs, where undergraduate teaching is at a premium, your concern would not be justified. At my kids’ schools, undergraduate TAs helped grade tests and were available during office hours to help with homework, but they never taught. And there are no professors who don’t teach. Student/teacher ratio was important to us.
It seems the issue is that large publics (and maybe even large privates, like USC) are just not for you.
Answered my own question, I think:
Bibliometric Indicators
The bibliometric indicators used in U.S. News’ ranking analysis are based on data from Clarivate’s Web of Science™ for the five-year period from 2016 to 2020. The Web of Science is a web-based research platform that covers more than 21,100 of the most influential and authoritative scholarly journals worldwide in the sciences, social sciences, and arts and humanities.
It can. But the point of the exercise was to choose colleges to apply to, so a full ranking wasn’t needed or desired, just a general sense. And later to decide where to enroll, where all that mattered was #1.
do any of the ranking lists factor in how happy the students are, quality of life…?
that seems kinda important to me.
maybe Niche rankings use that more? not sure.
it does feel a little weird to me to penalize schools that have wealthy kids. like it doesn’t mean they aren’t getting a great education and career opportunities, just because they are full pay.
Still is (here are the factor weights in the National Universities rankings for schools with meaningful standardized test data):
| Factor | Weight (%) |
|---|---|
| Peer assessment | 20 |
| Graduation rates | 16 (or 21) |
| Graduation rate performance | 10 |
| Financial resources per student | 8 |
| Faculty salaries | 6 |
| First-year retention rates | 5 |
| Borrower debt | 5 |
| College grads earning more than a high school grad | 5 |
| Standardized tests | 5 (where available) |
| Pell graduation rates | 3 |
| Pell graduation performance | 3 |
| Student-faculty ratio | 3 |
| First generation graduation rates | 2.5 |
| First generation graduation rate performance | 2.5 |
| Full-time faculty | 2 |
| Citations per publication | 1.25 |
| Field weighted citation impact | 1.25 |
| Publications cited in top 5% of journals | 1 |
| Publications cited in top 25% of journals | 0.5 |
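For concreteness, here is a minimal Python sketch of what a composite built from these weights looks like. The weights are the ones listed above; the 0-100 per-factor scores and that normalization itself are assumptions for illustration, not U.S. News’ actual scaling:

```python
# A sketch of a weighted composite using the weights listed above.
# The 0-100 normalization of each factor is assumed for illustration.

WEIGHTS = {
    "Peer assessment": 20,
    "Graduation rates": 16,  # listed as "16 (or 21)"; 21 presumably when test data is unavailable
    "Graduation rate performance": 10,
    "Financial resources per student": 8,
    "Faculty salaries": 6,
    "First-year retention rates": 5,
    "Borrower debt": 5,
    "College grads earning more than a high school grad": 5,
    "Standardized tests": 5,  # where available
    "Pell graduation rates": 3,
    "Pell graduation performance": 3,
    "Student-faculty ratio": 3,
    "First generation graduation rates": 2.5,
    "First generation graduation rate performance": 2.5,
    "Full-time faculty": 2,
    "Citations per publication": 1.25,
    "Field weighted citation impact": 1.25,
    "Publications cited in top 5% of journals": 1,
    "Publications cited in top 25% of journals": 0.5,
}
assert sum(WEIGHTS.values()) == 100  # the listed weights sum to 100

def composite(scores):
    """Weighted average of per-factor scores, each assumed on a 0-100 scale."""
    return sum(WEIGHTS[f] * scores.get(f, 0) for f in WEIGHTS) / 100

# Hypothetical school: strong everywhere (90) except peer assessment (60).
example = {f: 90 for f in WEIGHTS}
example["Peer assessment"] = 60
print(composite(example))  # 84.0 -- the 20% peer weight alone drags it down 6 points
```

The weights do sum to 100, so whatever is questionable about the final number comes from the subjective inputs and the normalization, not the arithmetic.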
For LACs, “the peer assessment response rate … was 28.6%”, and each respondent was asked to rate only the LACs they were “familiar with”. So for a single LAC that means n% of 28.6% - and that subset produces the most heavily weighted ranking factor.
It’s a popularity list, with some pseudo-science thrown in to make it seem less subjective.
And graduation rate performance (the third-highest-weighted factor) is essentially a measure of how badly USN mispredicted a school’s “target” graduation rate. So it rates USN’s prediction/grouping mechanism, not the school. A school could have a stellar graduation rate, but if it’s less stellar than what USN thinks it should be… down you go.
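A minimal sketch of that mechanism, assuming the metric is simply the gap between actual and predicted rates (the published formula may differ):

```python
# Sketch of "graduation rate performance" as described above: the score
# reflects the gap between a school's actual graduation rate and the
# rate USN's model predicted for it. The plain subtraction and the
# numbers are illustrative assumptions, not the published formula.

def grad_rate_performance(actual_pct, predicted_pct):
    """Positive if the school beats its predicted rate, negative otherwise."""
    return actual_pct - predicted_pct

# A stellar 92% actual rate still scores negatively if the model
# "expected" 96% -- the metric grades the prediction gap, not the rate.
print(grad_rate_performance(actual_pct=92, predicted_pct=96))  # -4
```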
But small differences in graduation rates by themselves are of extremely limited value in terms of the quality of education offered to undergraduates, as are most of the other supposedly objective factors. And the subjective viewpoint of peers (“popularity list”) seems at least as relevant as the rest of the factors, which isn’t saying much.
IMO any numerical ranking is pretty useless.
The question asked is:
“rate the academic quality of peer institutions with which they are familiar on a scale of 1 (marginal) to 5 (distinguished).”
I’m sorry - but even if you are a “president, provost or dean of admissions”, how much do you objectively know, personally, about the academic quality of another college/university unless you have worked there in the past 10 or 15 years? Or how much of your “knowledge” is really just a feedback loop of public perception, which now gains a veneer of authority because of who is spewing it out?
What’s really sad is that prior to 2017, UChicago had only EA and RD and still managed a yield rate around 65%, matching Princeton at the time. It would naturally have risen into the 70s by now.
I always think of UChicago’s admissions director, Nondorf, as being a combination of genius and used car salesman.
Some of their major rankings are peer based.
So I rate a school #1 or #10 or whatever.
Why should that matter?
Subjectivity diminishes any rating - in my opinion.
Good point. All the SLACs you mention above (#26-30) are indeed good schools, and what I have seen is that kids interested in SLACs are looking at many in the top 30 and maybe even a few others past 30 (if geography or another factor is important). For example, if you are looking at SLACs in the Northeast you might also look at other NESCACs, such as Trinity and Conn College, or you might consider Skidmore in NY. For West Coast kids, you might also add Oxy, Pitzer, Scripps, etc. Ranking, for many, is often not the deciding factor, as so many posters have mentioned.
The part of my sentence you cut off clarifies that I don’t see this as particularly relevant. The whole thing is flawed, as are the other attempts at ranking.
In other words, I’m not defending the ranking. I think it is a joke. It’s just that acting like the peer review portion is the problem creates the false impression that one could come up with something better, and I don’t think that is the case. Like I said, any numerical ranking is pretty useless.
I noticed Kenyon really fell: #39. They are universally loved on here, with more love than some schools now rated above them, like Bucknell, etc.
I’ve also heard tell that the “president, provost or dean of admissions” will have an underling actually fill out the survey.
This! Emory gets a -4 for that performance metric, for an overall score of 84. It would otherwise be tied with Vandy at #18, with an overall score of 88.
Exactly.
The questionnaire response rate isn’t great either, as most presidents/provosts/deans throw the survey into the trash.
From this year’s ranking methodology, the peer assessment explanatory section:
Of the 4,734 academics who were sent questionnaires on the overall rankings in 2023, 30.8% responded compared with 34.1% in 2022. The peer assessment response rate for the National Universities category was 44% and the National Liberal Arts category was 28.6%.
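Some back-of-the-envelope arithmetic on those figures; the familiarity fraction in the last step is a made-up placeholder, since it isn’t reported:

```python
# Back-of-the-envelope math on the response rates quoted above.
surveys_sent = 4734          # questionnaires sent in 2023 (overall rankings)
overall_rate = 0.308         # 30.8% overall response rate
print(round(surveys_sent * overall_rate))  # 1458 responses overall

# For any single LAC the effective sample shrinks further: only the
# respondents "familiar with" that school rate it. That fraction is
# not reported; 0.5 below is a purely hypothetical placeholder.
lac_rate = 0.286
familiar = 0.5
print(f"{lac_rate * familiar:.1%}")  # 14.3% of surveyed academics rate a given LAC
```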