The Ohio State kid with an internship will likely stand out more than the Penn kid without one, who was just a student.
Obviously, there are certain areas where Penn will give an instant lift - but not all.
In the end, employers want to see activity - that's why job descriptions often say "Bachelor's degree or equivalent experience."
I also think when you talk about Ivy, etc., you are talking about a small percentage of society.
Do I think the person at UGA #49 or whatever it is starts off any worse than the UCLA kid at #15 or Rutgers at #40? No.
I think they may end up living in a different area with different costs, different taxes, etc., but they will have access to similar jobs - short of an industry or two where one school stands out against the other.
Some schools already do this, and for admissions, not just pre-processing. The CSUs are an example: humans do not look at the vast majority of CSU apps.
The issue with AI evaluating academic credentials is that it has to be done within the context of the high school. For example, if a HS doesn't offer AP classes, an applicant from that school with zero APs can't be disadvantaged relative to an applicant who took 15 APs at a HS that offers many APs and allows students to take them starting in 9th grade.
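To make that concrete, a context-aware read is essentially a normalization problem. Here is a minimal sketch of what that might look like, where the function, the cap of 8, and the fallback rule are all invented for illustration and are not any school's actual method:

```python
# Hypothetical sketch: score course rigor relative to what the high school offers,
# so an applicant is judged against their own school's ceiling, not an absolute AP count.

def contextual_rigor_score(aps_taken: int, aps_offered: int) -> float:
    """Return a 0-1 rigor score normalized by the school's AP availability."""
    if aps_offered == 0:
        # No APs offered: the applicant cannot be penalized for taking none.
        # A real system would fall back to other rigor signals (honors, dual enrollment).
        return 1.0
    # Cap the denominator so schools offering 25+ APs don't demand superhuman loads.
    ceiling = min(aps_offered, 8)
    return min(aps_taken / ceiling, 1.0)

# The zero-AP applicant at a no-AP school is not disadvantaged...
print(contextual_rigor_score(aps_taken=0, aps_offered=0))    # 1.0
# ...relative to the 15-AP applicant at an AP-heavy school.
print(contextual_rigor_score(aps_taken=15, aps_offered=25))  # 1.0
print(contextual_rigor_score(aps_taken=2, aps_offered=25))   # 0.25
```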
I do agree we will see more schools using AI, starting with the stats-based schools. I expect the holistic schools will not be first movers here. For example, Yale just changed how they read apps (a senior AO now takes a first pass at all apps and tosses out those that won't be competitive before the regional AO reads the apps from their territory). Yale could have taken the step of using AI to do this first pass, but didn't.
I'm not one to put great stock in rankings, but neither do I believe that all colleges provide an equal level of education. I seem to be on a perpetual search for measures that help identify schools that are providing a quality education.
There have been comments weighing the quality of peers against the educational quality of the institution itself - that is, how much the institution improves a student's education, skills, etc. The ideal for many is obviously high peer quality along with an educationally effective institution.
My question is: if a choice had to be made, would one pick a school with "higher peer quality" where the institution provides little (or even negative) academic progress, or a school whose stats don't reflect "higher peer quality" but that does an excellent job of helping students really grow and advance?
It seems to me WSJ/College Pulse is partially the Payscale rankings, except they include net price, students' ratings of their own experience, plus some social mobility measures. Net cost (which WSJ includes, along with years to pay off the cost of college) is something USNWR ignores.
I know lots of schools don't offer APs, but it's weird that AP scores seem to be a non-consideration for any of the rankings, since in theory they could serve as a nationwide standardization.
I know some will get mad at bringing up the 2014 exposé on how Northeastern gamed the system, but since "reputation" is still 20% of the USNWR weighting, it seems that gaming is still effective.
E.g., relative to undergraduate population size, how many go to top med schools, law schools, or grad schools. The WSJ ranking comes a lot closer than USNWR to matching the top 10 producers of top law or med students per undergraduate population.
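For concreteness, the per-capita comparison being suggested is just this (the school names and all figures below are made-up placeholders, not real feeder data):

```python
# Hypothetical sketch of a per-capita "feeder" rate: normalize placements into
# top med/law/grad programs by undergraduate enrollment, so big schools don't
# win simply by being big. All figures are invented placeholders.

schools = {
    # name: (placements_into_top_programs, undergrad_enrollment)
    "College A": (120, 5_000),
    "College B": (300, 30_000),
}

for name, (placements, enrollment) in schools.items():
    rate = placements / enrollment
    print(f"{name}: {rate:.2%} of undergrads placed")

# College A: 2.40% of undergrads placed
# College B: 1.00% of undergrads placed
# Raw counts favor College B (300 > 120); the per-capita rate favors College A.
```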
Med schools have hard GPA cutoffs. If you go to Caltech, with little grade inflation, vs. Harvard, with a lot (the most common grade is an A-), an otherwise equal kid will have a tougher time coming from Caltech with the median grade than from Harvard with the median grade. It's why almost all the top 15 schools in all the rankings have grade inflation/high curves.
MCATs and LSATs are hard numbers (and when applying for the next thing, reputation matters little compared to those scores). I would get rid of "reputation" and replace it with the median MCAT and LSAT scores of graduates… (if it were possible).
The problem with “feeder” rankings boils down to correlation vs causation. “Feeder” is a misnomer for law and med school. In addition, only a small subset of college students take LSAT or MCAT. Now, if you are suggesting a standardized test for all college grads, that’s not going to happen, but an interesting idea nonetheless.
10-15% of the class at most of those top 20 schools applies to med school. Not sure what the number is for law school, but combined, that's a decent chunk of the class. Unless your kid is in engineering or business, that's as close to a standardized test as we can get (short of colleges getting the actual MCAT/LSAT/GMAT scores). And many of the "top" schools have neither undergrad business degrees nor engineering.
I think this assumes the US News criteria are a good measure of outcomes. I don't think they are. They reward schools based on geography: schools sending kids into hcol areas get a bump for higher salaries, but with a high cost of living, the real value of those salaries may be lower than in mcol or lcol areas.
Secondly, I took a closer look at the four schools ranked either 39 or 40. Here are their respective 6-year overall graduation rates: Tufts (94 percent), Boston College (90 percent), Rutgers (84 percent), and U Washington (80 percent). I'd rank these schools differently on this factor alone.
Indeed, but that is why you would need an AI approach in the first place. If you could just mechanically use GPAs, that would be easy and would not require AI. But AI might be able to do that contextual work, or at least well enough for preliminary purposes.
So one of the schools that has confirmed doing this is Dartmouth, where the Dean of Admissions said they are using AI for the purpose of fast-tracking. Basically this is a model where applications deemed not up to their usual academic standards are only given a quick look to see if something else extraordinary stands out. If not, then it will be a relatively fast path to a rejection.
As you note, Yale is also now fast-tracking, but so far has only indicated that a human will do that first step. With Dartmouth developing AI to do it, though, I am not sure how long Yale and the like will hold out. It may depend on what happens with volumes of applications going forward.
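Neither school has published its process, but the fast-track model described above boils down to a triage step like the following sketch. The thresholds, field names, and "extraordinary" check are all invented for illustration; nothing here is Dartmouth's or Yale's actual logic:

```python
# Purely illustrative triage sketch of the fast-track model described above.
# Thresholds and fields are invented; no school has published its criteria.

from dataclasses import dataclass

@dataclass
class Application:
    gpa: float                 # recalculated, context-adjusted GPA
    rigor_score: float         # 0-1 course-rigor score from the HS profile
    extraordinary_flags: list  # e.g., national awards, recruited athlete

def triage(app: Application) -> str:
    meets_academic_bar = app.gpa >= 3.7 and app.rigor_score >= 0.6
    if meets_academic_bar:
        return "full holistic read"   # regional AO reads the whole file
    if app.extraordinary_flags:
        return "full holistic read"   # below the bar, but something stands out
    return "fast-track review"        # quick look, likely rejection

print(triage(Application(gpa=3.9, rigor_score=0.8, extraordinary_flags=[])))
print(triage(Application(gpa=3.2, rigor_score=0.4, extraordinary_flags=["national award"])))
print(triage(Application(gpa=3.2, rigor_score=0.4, extraordinary_flags=[])))
```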
I still think percent employed or in grad school within six months is a better criterion and easier to measure. For some fields, including medicine, the prestige of the graduate school is not particularly important.
AI can only do the contextual work if there is an adequate HS profile, which is not the case for thousands of high schools. It will take time for AOs to "program" AI with their specific HS knowledge.
Regardless, I do think there will be much opportunity for schools to use AI in admissions. But some of the rack and stack schools have already moved on this, and the holistic schools are the laggards.
That is their de facto “academic/teaching quality” score. Obviously it is an imperfect way to measure academic quality. Maybe they figure that however they arrive at it, deans of colleges know how to judge academic strength.
But then, how would we judge teaching quality, rigor, etc.? Student surveys? But weaker students will find things more rigorous than stronger students do. Books published and awards won by faculty? But are they really teaching undergrads, or researching with them? Class sizes? Maybe so, but some kids prefer anonymity. There's no simple way to evaluate the academic/teaching quality of a school. And that's the nut we really need to crack, as it is so important: the most important thing, really, when comparing schools.
Edit: Maybe include the % of lecturers with a terminal degree. But even that is fraught with caveats; as has been mentioned here many times, sometimes great teachers don't have the terminal degree.
As usual when the USNWR rankings come out, this particular list gets almost zero attention despite its explicit attempt to get at what is being discussed here (as noted by @prezbucky in the comment before this one):
Prepare for some surprises. But also note the following!
" The rankings for Best Undergraduate Teaching, as part of the 2024 Best Colleges rankings, focus on schools whose faculty and administrators are committed to teaching undergraduate students in a high-quality manner. College presidents, provosts and admissions deans who participated in the annual U.S. News peer assessment survey were asked to nominate up to 15 schools in their Best Colleges ranking category that have strength in undergraduate teaching.
The Best Undergraduate Teaching rankings are based solely on the responses to this separate section of the 2023 peer assessment survey."
I think class size and terminal degree of faculty are more relevant than another survey that likely also had a less than 50 percent response rate. Why would we assume that a leader at one school has any clue about the quality of teaching at another?