Admission Officers Name the Most Important Application Factors

I don’t think anyone knows that. I work in an environment where, if you hit the 750s, you satisfy the bar. In the end, they may choose more applicants with higher scores, but that’s the bar. If a legacy presents with, say, 700s, nothing assures they get a bye. In fact, during the working reviews (several layers of them), legacies, URMs, etc., are not put in separate cohorts or piles. (Yes, they are coded as such, but not reviewed in separate efforts. And sure, in the end they may lighten up a little on legacies. But underqualified is underqualified, and the goal is kids who can thrive.)

But I find CC info often confusing, sometimes misleading. When Brown, P, Dart, and Stanford, at various points, showed the number of applicants in stats tiers, the vast majority were rejected: 90%+. A 1600 and/or 4.0 kid is not rejected simply because he’s not hooked and his bar shifts upward. The non-stats portions are crucial.

So when a kid does have the stats strengths and rigor, they are still subject to the rest of the holistic review.

When one says, re: grades and strength of curriculum, that “It’s the most important, there’s probably not even a close second,” so much of holistic review is missed. You have to have the range and rigor of ECs, the quality of thinking, awareness of the college and of your match to all they look for, and much more.

Would it even be legal to establish these thresholds? How are these cohorts defined?

Behind all this, I don’t think a kid should entertain applying to highly selectives with no idea what matters. Or what the college is about. That’s elementary research. Assuming can be deadly. And CC thrives on the stories of random kids who do get in, despite some lacking. But the reality is, you want to better your own chances, not leave this to chance, not go with the simplistic, “If you don’t apply, you won’t know.”

High school has one scheme for being a Top Dawg. Do well, be popular, lead some things, win some awards. The colleges are a different game, an admissions contest.


The formula for the Academic Index (AI) is public knowledge: essentially 1/3 converted GPA, 1/3 SAT/ACT, and 1/3 SAT subject tests. AI is used by the Ivy League athletic conference to set minimum stat standards for athletes. The basic idea is that athletes must average no more than 1 standard deviation below the rest of the class. Individual key-contributor athletes may be more than 1 SD below, but the average for athletes can be no more than 1 SD below the average for non-athletes. As such, this minimum AI requirement for athletes requires calculating AI for all students.
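The athlete rule described above can be sketched in a few lines of Python. The component scaling here (each third assumed pre-converted to a 20–80 subscore, so totals land on the traditional 60–240 AI scale) and all the sample numbers are illustrative assumptions, not the conference’s published conversion tables:

```python
from statistics import mean, stdev

def academic_index(gpa_part, test_part, subject_part):
    # Equal thirds per the description above. Each part is assumed
    # already converted to a 20-80 subscore (hypothetical scaling),
    # so the total lands on the traditional 60-240 AI scale.
    return gpa_part + test_part + subject_part

def athletes_meet_floor(athlete_ais, class_ais):
    # Ivy rule as described: the athletes' average AI may be no more
    # than one standard deviation below the class average. Individual
    # recruits may fall below the floor; only the average is checked.
    floor = mean(class_ais) - stdev(class_ais)
    return mean(athlete_ais) >= floor
```

That last point is why a single key contributor well below the floor is permissible, so long as higher-AI recruits pull the team average back above it.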

Non-athletes go through a significantly different process that I expect does not emphasize AI. Harvard (and likely others) does rate applicants on a 1–5 type scale in a number of categories, including academics. And the academic rating is well correlated with AI, just as it is correlated with the individual components of the AI calculation (grades and scores).

I believe thibault is more talking about stats that are highly likely to result in rejection, rather than automatically being admitted based on just stats. For example, during the lawsuit period, the majority of non-ALDC Black applicants to Harvard had a 4 or worse academic rating. Non-ALDC applicants with a 4 academic rating had only a 0.02% admit rate; 99.98% were rejected – virtually impossible to be admitted. The admit rate dropped to exactly 0 for non-ALDCs who received worse than a 4 academic rating – nobody was admitted over the multiyear sample. A 4 or worse academic rating is very well correlated with stats (more so than the better academic ratings), so the majority of non-ALDC Black applicants to Harvard appear to have stats that, for all practical purposes, guarantee rejection. Most non-ALDC Black applicants to Harvard appear to be wasting their time and the cost of application fees. With better knowledge of what stat range has a shot at admission for non-ALDC Black applicants, this could be avoided.
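To make the scale of that 0.02% concrete, here is a quick expected-value sketch (the pool size below is hypothetical, not a figure from the lawsuit record):

```python
def expected_admits(n_applicants, admit_rate):
    # Expected number of admits for a pool of a given size at a
    # given admit rate: a plain expected-value calculation.
    return n_applicants * admit_rate

# At a 0.02% admit rate, even a hypothetical pool of 10,000 such
# applicants would be expected to produce only about 2 admits.
print(expected_admits(10_000, 2 / 10_000))
```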

Regarding “demonstrated interest,” it matters significantly … in early decision! Binding early decision, or other early actions like applying early for scholarships, are indicators to the school that a student is seriously considering attending. Really, what they are trying to determine is the student’s likelihood of matriculating. Obviously, binding ED is the ultimate indicator. While not “demonstrated interest,” having a sibling attending or a parent alum has a positive correlation with matriculation. Anything that gives the reader increased confidence in the likelihood of matriculating is considered, as long as the applicant is qualified. Visiting the school? Probably not much, unless maybe it’s referenced and leveraged in an essay.

The implication was that the bar is set higher, “very likely set far north of 1500.”

And it’s not all about stats ranges.

For highly selective private colleges, the most important factor is possessing a hook or two.

What we do know from the lawsuit is that unhooked RD applicants at Harvard have a near-zero chance of admission if they get an academic rating worse than 2. A 2 corresponds roughly to a 33+ ACT or equivalent SAT score.

MODERATOR’S NOTE:
This thread is not meant to be a Harvard Lawsuit Thread Redux, so let’s not turn it into one. And let’s not engage in debate on data analysis.

While superselective college admissions is not all about stats ranges, there are likely stats thresholds below which an applicant is an effectively automatic reject (thresholds that may differ for hooked applicants). But there is no stats threshold that is an automatic admit, since something more than sufficiently high stats is needed for admission.

But applicants below the effective automatic reject stats threshold are wasting their time and application fee.

Data10: "I believe thibault is more talking about stats that are highly likely to result in rejection, rather than automatically being admitted based on just stats. "

Correct. The key here is TRANSPARENCY. If the de facto threshold is actually a 1550 SAT score for unhooked middle- and upper-middle class applicants, then be honest about it and just say so. And if the AI for football players is 180 or 190, then say so. If it’s 220 for fencers or squash players, then say so. And so on, for each of the other cohorts.

An excellent example of how this ought to be done comes from our military, which publishes, and strictly adheres to, minimum-SAT thresholds when it comes to awarding ROTC scholarships to high school seniors.

Beyond that initial SAT screen (plus the results of a physical aptitude test scored using very standardized criteria), the ROTC high school scholarship awards process becomes subjective. It’s heavily dependent on the results of an exhaustive, formal interview that assesses leadership potential, motivation to serve, management ability, bearing and poise, and other factors that the military considers directly relevant to one’s likelihood of finishing, and succeeding in, a demanding 4-year pre-officer training program.

But even that second stage of the military’s scholarship decision process is obligated-- by law and by military policy-- to follow very strict guidelines and to use detailed scoring rubrics that, while not publicized actively by the military, are nonetheless available in the public domain, hence are TRANSPARENT.

In other words, the military’s scholarship award process is “holistic,” just like the elite college admission process. Unlike the elite colleges, the military’s scholarship-officer candidate selection process is extremely transparent. If you fail to win a ROTC scholarship, you know why, and if you want to win one, you know exactly what is required by way of preparation.

The result is much greater trust and confidence in the process and in the military generally. For the elite colleges, we’re heading in the opposite direction. Graduates of those institutions will likely find their achievements and talent called into question, as in the common snarky online response by Ivy Leaguers when they wish to insult a fellow student on a discussion thread: “Legacy, athlete, or URM?”

If the elite colleges would just follow our own military’s example and make their admissions criteria & admissions decision process transparent to the applicants, a host of evils would be extinguished.

“But I find CC info often confusing, sometimes misleading. When Brown, P, Dart and Stanford, at various points, showed the number of applicants in stats tiers, the vast majority were rejected. 90%+.”

It was not 90%. Before they pulled the info, Dartmouth reported that 30–35% of valedictorians got in; Brown was 24% for vals, 16% for sals. That’s a long way from 10%.

Yale says, “…academic strength is our first consideration in evaluating any candidate.” It doesn’t say it’s the be-all and end-all.

Sure, I don’t think anyone disagrees with that, and review of transcript is not the same as stats.

TM, I think the confusion- and my issue with the thread link- is that kids do focus on stats, it’s the basis for top hs standing, their frame of reference. But many who read a report like the one cited then think stats are the primary factor in college admit decisions. Other than rack-and-stack, it’s a very important element, a ‘first consideration,’ sure. They want kids who show, via their stats and rigor, that they can, indeed, manage the challenges of a tough college. But it’s not the whole, not a predictor, not a sole or primary basis to use when matching yourself. No guarantee, not a make-up for some other issues. Not an indication that, with top stats, you can glide in.

I no longer have access to those old BPDS tables. But again, e.g., being val is no special hook, in light of all the bullets they will look at and for. Correlation, not causation.

Thibault, I don’t agree the “de facto threshold is actually a 1550 SAT score for unhooked middle- and upper-middle class applicants.” What I see, is: hit 750 and fine. Then attention turns to the rest- which is where those top performers often miss. This is the vital part.

If you have 1500 and, in all the other ways, are compelling, it’s still possible that, when deciding actual admits, after all the culling, they lean toward higher scores. Sure. They can, there are thousands of contenders and they’re then cherry-picking. That doesn’t make it de facto. Not when choosing to apply.

And this “bar” is flexible. Maybe you have lopsided scores, but maybe the rest of what you present (not talking hooks) is so valid that they’re willing to take a chance on you. They can see from course choices, results, activities, the approach one takes in the app/supps, whether a slightly lower score is not a reason to dismiss. Therein lies the problem, for too many: they don’t understand the holistic nature. They don’t have a sense of what a college does look for, can be submitting blindly. They struggle to pass full muster.

This isn’t just about tippy tops. Any college (looking for more than just warm bodies) wants to know you get them.

lf: “Thibault, I don’t agree the “de facto threshold is actually a 1550 SAT score for unhooked middle- and upper-middle class applicants.” What I see, is: hit 750 and fine. Then attention turns to the rest- which is where those top performers often miss. This is the vital part.”

Err… that’s exactly what the term “threshold” describes: an initial gating mechanism, or screen if you like, after which the non-academic criteria are applied to the resulting shortlist of applicants. You’re restating my point.

But the problem with the non-academic criteria is that there is no attempt to explain how those criteria are weighted or applied.

What is almost certainly going on is that the adcoms are bending their application of those criteria to achieve predetermined outcomes for certain targets related to “building the class”-- what data scientists would call “optimizing” for those desired class-wide outcomes.

The only question is at what stage of the process this bending occurs: Is it happening only at the last stage, via intervention by the head of the adcom, or does it occur subtly at each step of the admissions process, via the lower-level readers’ revisions to their scoring of the subjective non-academic criteria?

This latter approach btw is precisely what used to happen at UCLA back in the day, according to Prof. Tim Groseclose-- see his book, “Cheating.” Prof. Groseclose also dumped the (masked) data for several years of UCLA undergrad admissions onto the internet for you to download.

“What is almost certainly going on is that the adcoms are bending their application of those criteria to achieve predetermined outcomes…”
You know this how? It’s a common refrain.

But figuring out what else matters is not that hard. It’s certainly much more than some survey that says academic stats are number one. And if a kid can’t try, what says his stats merit a place at a highly selective?

“You know this how? It’s a common refrain.”

See Tim Groseclose’s tell-all account, with the actual raw admissions data (the scores and demographics across dozens of dimensions on each of, iirc, ~70,000 individual UCLA applicants), of how the UCLA adcom, of which he was a participating member as a professor, fudged the scores for large numbers of applications to achieve desired race-based outcomes.

The name of Groseclose’s book is “Cheating.” He posted the data files to his blog. You can download the data and see for yourself the “before” and “after.”

“TM, I think the confusion- and my issue with the thread link- is that kids do focus on stats, it’s the basis for top hs standing, their frame of reference. But many who read a report like the one cited then think stats are the primary factor in college admit decisions”

I don’t know how anyone can read that and think stats (sat/act/sat2s/aps/psat) are the primary factor when they’re #4, unless you’re saying grades are stats, which I don’t generally agree with. The question is, unless the survey went mostly to rack-and-stack schools, why aren’t adcoms putting holistic factors - essay, recs, ECs, interviews - higher? I think the colleges that responded are not that selective, but you could have a higher acceptance rate and still be holistic.