"Race" in College Applications FAQ & Discussion 12

This is surely just a coincidence and is not related at all to the lawsuit. It's not like the number of Asian American applicants has been increasing year over year while being consistently held to a ceiling of 20% of the class over the last 25 years.
https://www.nytimes.com/roomfordebate/2012/12/19/fears-of-an-asian-quota-in-the-ivy-league/statistics-indicate-an-ivy-league-asian-quota

This comes right after the university officially issued guidance on the use of race (race is now banned from consideration in everything but the overall rating), after decades of giving zero guidance on the use of race and just letting admissions officers use race as they saw fit, wink wink.

https://www.thecrimson.com/article/2018/10/29/reading-procedures/

All of this happening when they are being sued is just a coincidence of course.

The number and percentage of Asian applicants during the lawsuit sample years are below (using the lawsuit analysis's racial definitions, which differ from IPEDS and other federal reporting). I also listed how the Plaintiff's expert estimates the Asian admit share would change without the “Asian penalty,” assuming the full sample and full controls.

It is not a steady increase each year. All captured numbers vary from year to year for largely unclear reasons. The gap between % admits with and without the “Asian penalty” seems to shrink over the sample, although the direction was not consistent in the class of 2018, so that may be a coincidence. In 2018, there appears to be an abnormally strong Hispanic preference for unclear reasons. The White share also increased. Only the Asian share had a substantial decrease below expectations from previous years. Note that the primary reason the Asian % of admits is consistently below the Asian % of applicants under these controls is that I am including hook groups, such as athletes and legacies, in which Asians are underrepresented.

Asian Applicants and Admits: Harvard Lawsuit
2014 Class – 6402 applied, 26.2% of applicants, 21.7% of admits (23.7% without penalty – 2.0% gap)
2015 Class – 7316 applied, 25.9% of applicants, 22.0% of admits (23.2% without penalty – 1.2% gap)
2016 Class – 6586 applied, 22.5% of applicants, 22.9% of admits (23.6% without penalty – 0.7% gap)
2017 Class – 6574 applied, 23.7% of applicants, 21.9% of admits (22.4% without penalty – 0.5% gap)
2018 Class – 7231 applied, 26.4% of applicants, 21.3% of admits (23.2% without penalty – 1.9% gap)
2019 Class – 7260 applied, 24.4% of applicants, 23.8% of admits (23.9% without penalty — 0.1% gap)
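
For anyone who wants to sanity-check the gap column, here is a minimal sketch (plain Python, using only the figures quoted above) of how each gap is derived: the estimated admit share without the modeled penalty minus the actual admit share.

```
# Derive the "gap" column from the admit shares listed above: modeled admit
# share without the penalty minus the actual admit share. All numbers are the
# ones quoted above from the lawsuit analysis.
classes = {
    2014: (21.7, 23.7),
    2015: (22.0, 23.2),
    2016: (22.9, 23.6),
    2017: (21.9, 22.4),
    2018: (21.3, 23.2),
    2019: (23.8, 23.9),
}

for year, (actual, no_penalty) in sorted(classes.items()):
    gap = no_penalty - actual
    print(f"Class of {year}: {actual}% actual vs {no_penalty}% modeled -> {gap:.1f}% gap")
```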

@tpike12 FWIW, I do think your daughter would have gotten into one of the Ivies with her stats, assuming she wrote decent essays, but not necessarily HYPSM. But I have to say I have seen many Asian American kids with almost the exact same stats as your daughter often denied from Berkeley and UCLA.

My recommendation for your daughter is to make sure she applies to Honors Colleges with merit aid. With her NMF status and high stats, she would be able to attend an Honors College for free. Personally, I would rather go to Univ of South Carolina or UCF or Univ of Arizona for free than to UVA at significant cost.

I looked at some of the studies posted by @Data10. My conclusion from reading between the lines is that there is a tremendous effort to devalue standardized testing by the powers-that-be. To get a more unbiased perspective, I decided to look into a related field, industrial psychology, and quickly got to the work of Frank Schmidt of Iowa:

http://people.tamu.edu/%7Ew-arthur/611/Journals/Schmidt%20&%20Hunter%20%281998%29%20PB.pdf

I think the evidence is overwhelming, don't you? For those who still think job performance and academic performance are unrelated, have a look at this study:

https://www.researchgate.net/publication/8922929_Academic_Performance_Career_Potential_Creativity_and_Job_Performance_Can_One_Construct_Predict_Them_All

I’ll summarize the gist of the two papers linked above: the single biggest predictor of academic and job performance is intelligence. The second most important predictor is conscientiousness. Perhaps fortunately, the second is essentially uncorrelated with the first.

Conscientiousness is a factor for all academic and job performance tasks. However, although those papers don’t address the issue squarely, intelligence becomes increasingly important as you move across the spectrum of positions. Not surprisingly, the desirability and remuneration associated with a job tends to be highly correlated with the intelligence required.

If you are going to read the above papers, you might as well add Gottfredson's seminal paper from the personnel selection world. Unlike the relatively narrow sample population in the second Kuncel, Hezlett & Ones paper, the paper below also surveys some of the conclusions of the US Armed Forces, based on millions of data points collected over decades from all points in the cognitive distribution:

https://www1.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf

That's nice, but your post has little direct relation to the quote you posted, recent posts in this thread, or the topic of this forum; so if I were to reply to the content, I expect it would likely trigger another series of post deletions.

Sticking to a more direct relationship with recent posts in this thread and the topic of this forum: some colleges have gone test optional as part of an effort to increase URM enrollment. Does this change in testing, made in an effort to increase URM enrollment, cause employers as a whole to frown on graduates from such colleges compared to comparable colleges? For example, do employers avoid Bowdoin graduates compared to graduates of other selective LACs? Will employers start avoiding Chicago graduates now that the school is test optional?

Unfortunately Bowdoin has little detail in their public employment surveys at https://www.bowdoin.edu/ir/data/student-outcomes.html . The survey reports that Bowdoin grads are for the most part either employed or in graduate school, but there is little more detail. A variety of employers that many of us think highly of are listed (Google, Apple, Goldman Sachs, Bain, …), but it's unclear whether they are among the most common employers or whether they recruit on campus. At least Google has had special events that specifically target Bowdoin.

One of the few pieces of information that is available is average salary 10 years out for students who claimed federal aid, which is listed below. I compared Bowdoin to the other NESCAC colleges, several of which have a similar distribution of majors. Bowdoin has the highest reported salary and only ~2% of grads report seeking employment, so this suggests many quality employers have no problem with the school being test optional, although again this isn’t very conclusive, with a biased and small sample and varied distribution of majors.

Bowdoin – $66k
Trinity College – $66k
Amherst – $65k
Hamilton – $60k
Bates – $59k
Williams – $59k
Colby – $58k
Middlebury – $58k
Connecticut College – $55k
Wesleyan – $55k

The employer survey I listed earlier suggests employers as a whole emphasize relevant work experience, such as internships (particularly past internships at their company) and employment during college. College major is also one of the more influential criteria. College reputation was the least influential surveyed criterion. Nothing in the survey suggested that whether the college was test optional would have much influence on typical employers.

The Schmidt study is important because it shows what the best practice is. They examined 85 years of research and, through meta-analysis, came to their conclusions. This is a far cry from the advocacy research that I am seeing in education today. Any study that assumes all GPAs are the same cannot be taken seriously.

The main goal of the elites is not to increase URM enrollment, but to facilitate the transmission of privilege through the generations and, in doing so, grow the institutions' wealth and influence. By using different admission standards for different groups, they are using an age-old strategy, divide and conquer, to conceal their true intent.

I am interested in the “truth,” however one chooses to define it. They may do away with standardized testing, but the students' majors will give it away, imho.

We cannot assume salary and rate of employment are a good gauge of employer satisfaction at this time. Much more data are needed.

I agree that employers are most concerned about work experience, then the major, and finally college reputation. I have seen it played out here in the Great White North. Absolutely.

“College reputation was the least influential surveyed criterion.”

I posted this on another thread on college outcomes: the response to this will vary depending on who in the company responded, the recruiter/HR or the hiring manager. Agreed, in Silicon Valley high tech, undergrad college is not a factor at all once you get to the interview stage. Also, very few are going to admit they use college in selecting a candidate, even if the survey is indicated as being private. And these hiring managers probably think reputation means having undergrads who went to Ivies, Stanford, or MIT. And of course they're going to see a whole lot more people from local colleges and state flagships. I think the irony is that some of them may have been hired, or at least given interviews, because they're from Berkeley, i.e. just based on its reputation, but, because UCB is not an Ivy, will say that reputation mattered little.

“2019 Class – 7260 applied, 24.4% of applicants, 23.8% of admits (23.9% without penalty — 0.1% gap)”

Again, the Asian penalty is not the gap per se; rather, because there's a soft quota at Harvard, Asians are being compared to other Asians, and the quality of that applicant pool is better, at least with respect to GPA, scores, AP classes, and even ECs.

Ignoring the limitations of the study, colleges as a whole do not focus on maximizing the future employee performance ratings of their students. Colleges do have a variety of other metrics they emphasize or estimate. One of the metrics colleges are most concerned about is graduation rate. Graduation rate is one of the most influential components of almost any college ranking, including USNWR. In addition to influencing the college's reputation, graduation rate is publicly reported on nearly any college website, interests both current and potential future students, matters to alumni and employers, and has implications for internal planning and resources. Graduating on time is also by no means a given at typical colleges. Across all 4-year colleges in the US, the NCES reports that only 41% of students graduate. Even at highly selective private colleges where >90% graduate, colleges are still concerned with how their graduation rate stacks up against peers.

Studies generally use the GPA information they have available, which often does not include things like information about the rigor of classes. HS GPA is no doubt a flawed and imperfect metric, so if the SAT were far superior, then I'd expect it to be far more influential than in studies using GPA without controls for rigor. Instead, GPA generally predicts the metrics colleges most care about better than the SAT I or similar. For example, the study at https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=7306&context=etd compares how various simple metrics predict 6-year graduation rate at a large public (probably Utah State). HS GPA was a poor predictor that only explained 9% of variance, yet that 9% for HS GPA was far more than the 4% predicted by ACT score.

Large Public Graduation Rate
AP Credits – Explains 2% of variance in graduation rate
ACT Score – Explains 4% of variance in graduation rate
HS GPA – Explains 9% of variance in graduation rate
HS Class Rank – Explains 9.5% of variance in graduation rate
Combination of All of the Above – Explains 10% of variance in graduation rate
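
For anyone unfamiliar with the “explains X% of variance” phrasing: for a single predictor it is just the squared correlation (R^2) between the predictor and the outcome. The toy sketch below, on simulated data (not the study's data, with made-up coefficients), just illustrates the calculation.

```
# Toy illustration of "% of variance explained" (R^2) for a single predictor.
# The data here are simulated, NOT from the study linked above.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
hs_gpa = rng.normal(3.3, 0.4, n)
act = rng.normal(25, 4, n)

# Simulate a graduation outcome weakly related to both predictors.
latent = 0.6 * (hs_gpa - 3.3) / 0.4 + 0.4 * (act - 25) / 4 + rng.normal(0, 2, n)
graduated = (latent > 0).astype(float)

for name, x in [("HS GPA", hs_gpa), ("ACT", act)]:
    r = np.corrcoef(x, graduated)[0, 1]
    print(f"{name}: explains {100 * r**2:.1f}% of variance in graduation")
```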

The previously referenced study of ~80,000 students in the University of California system came to a similar conclusion. HS GPA + background only explained 7% of variance, but that 7% was nearly double the 4% predicted by SAT I + background. The UC study also came to a similar conclusion about the SAT II as the previously linked Harvard Dean of Admissions quote, with SAT II subject tests being a much better predictor than the SAT I. Other studies I am aware of with similar controls came to reasonably similar conclusions. The specific numbers differ from college to college, but they always involve only a small portion of graduation rate being predicted by stats. Perhaps the bigger issue is what predicts the other ~90% of variance in graduation rate that does not appear to depend on GPA/SAT stats. This partially relates to why selective colleges generally use a holistic admission system and consider more than just stats, as well as why they try to make the college affordable for nearly all attending students.

University of California Graduation Rate
SAT I + Background – Explains 4% of variance in graduation rate
SAT II + Background – Explains 6% of variance in graduation rate
HS GPA + Background – Explains 7% of variance in graduation rate
HS GPA + SAT I + Background – Explains 8% of variance in graduation rate
HS GPA + SAT II + Background – Explains 9% of variance in graduation rate
HS GPA + SAT I + SAT II + Background – Explains 9% of variance in graduation rate

While colleges care about graduation rate, colleges also care about what fields students major in, rather than just graduating in anything. This doesn’t mean that colleges are trying to maximize the number of students in “hard” majors, with a goal of ~100% of students in engineering, CS, or similar and none in “easy” humanities majors. However, colleges also don’t want to admit a huge portion of students who switch out of their planned major and/or are unsatisfied/unhappy with the major they end up in. This is particularly relevant when lessening admission standards for URMs.

Again, the reasons why students switch out of majors are complex and go far beyond stats. One review of >50 studies about why students switch out of engineering is at https://www.rise.hs.iastate.edu/projects/CBiRC/IJEE-WhyTheyLeave.pdf . Some of the more common reasons were high school preparation, in-classroom climate, and race- and gender-related issues. One survey of Notre Dame engineering students that goes into more specific numbers is at https://engineering.nd.edu/resources/publications/PredictingSophomoreRetention.pdf . ~30% of students switched out during sophomore year. SAT scores were not a statistically significant predictor of who switched out at a 95% level with any of the available combinations of controls, including ones that excluded college GPA. Some of the variables that were highly significant at a 99.9% level are below.

99.9% Significant Predictors of NOT Switching Out of Engineering at Notre Dame
Intro to Engineering Class Grade
Has Aspirations for Future MBA, Eng. Grad School, or Eng. Job
Overall Semester GPA
Motivations Include “Makes Use of My Skills” (99% significant with no control for grades)
Motivations Include “Serve Community / Help Others” (95% significant with no control for grades)
Number of HS AP Credits (Only if no control for grades, no influence with control for grades)
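
A rough sketch of the kind of check described above, using statsmodels on simulated data (not the Notre Dame data): a predictor that is significant on its own can stop being significant once a downstream variable such as an intro-class grade is included as a control.

```
# Simulated illustration only, NOT the Notre Dame data: SAT looks significant
# alone, but loses significance once a grade variable it drives is controlled for.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
sat = rng.normal(0, 1, n)                       # standardized SAT (made up)
intro_grade = 0.5 * sat + rng.normal(0, 1, n)   # intro class grade, partly driven by SAT
stay = (1.0 * intro_grade + rng.normal(0, 1, n) > 0).astype(int)  # retention driven by the grade

X1 = sm.add_constant(np.column_stack([sat]))
X2 = sm.add_constant(np.column_stack([sat, intro_grade]))

m1 = sm.Logit(stay, X1).fit(disp=0)
m2 = sm.Logit(stay, X2).fit(disp=0)
print("SAT p-value, no grade control:  ", round(m1.pvalues[1], 4))
print("SAT p-value, with grade control:", round(m2.pvalues[1], 4))
```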

The well-known Arcidiacono study (the same Arcidiacono from the Harvard lawsuit) about switching out of quantitative majors at Duke found the following with full controls. URMs were far more likely to switch out of engineering and other quantitative majors at Duke, and Arcidiacono could almost fully explain that difference by the weaker admission criteria for URMs. SAT score was significant at the 99% level when looking at score alone, without considering the rest of the application. However, when considering the admissions ratings for other portions of the application and the year-1 harshness-of-grading effect, SAT score was no longer a statistically significant predictor at the 90% level and was less significant than a variety of application components, particularly the HS curriculum rating. Arcidiacono could nearly fully explain the URM switching behavior only when considering these additional application criteria beyond scores.

Most Significant Predictors of NOT Switching Out of Engineering at Duke
Being Male (99% significant)
High Admissions Reader HS Curriculum Rating (95% significant)

Colleges certainly also care about employment and post-graduate success, even if the focus is not employee performance ratings. I'd expect the primary focus to be more on the things colleges report on their websites, which vary from college to college. This includes graduates successfully finding quality work that is related to their degree, where they are not underemployed. I'd expect colleges also care about a good portion of grads being hired by companies that many think of as desirable, and about graduates generally having a good median salary. They also would like to see some famous grads who are incredibly successful, as well as grads who financially give back to the university. Colleges also care about success in graduate degrees, as well as related careers. These metrics are often more difficult to analyze, but I'm sure their prediction goes well beyond just looking at standardized test scores.

The two studies mentioned in the previous post were both performed at large public colleges. The majority of the students at those colleges would also be from those states. This is not the same population as the students matriculating at top selective private universities, which take students from multiple states and often have a more rigorous course offering than do large public colleges. IOW, there's no reason to believe that the results of the studies (that GPA is more predictive of graduation than standardized testing) apply to the results at selective private colleges.

The linked survey actually does separate responses by HR, hiring manager, and executive. For all 3 groups, internships & work experience were ranked as most important; and college reputation was ranked as least important.

They also specifically ask about public flagships. Surveyed employers ranked flagships higher than any other surveyed college type by a small margin. The margin was larger for tech than most other industries.

Public Flagship – Executives rated 3.91, Hiring Managers rated 3.94, HR rated 3.82
Nationally Known College – Executives rated 3.83, Hiring Managers rated 3.75, HR rated 3.77

In any case, I’m sure reputation can be influential in certain situations. If a particular SV employer has a strong Berkeley network and has hired a lot of Berkeley grads who did well, then it may positively impact future hiring decisions for Berkeley grads, as well as decisions about how to allocate recruiting efforts. However, there are likely to be many other more important factors in the hiring decision.

The gap between the expected Asian admission rate and the actual Asian admission rate is one of the key points of the lawsuit and the crux of whether there is a “soft quota” at Harvard. If the gap were 0, that would indicate that Arcidiacono's model calculated that White and Asian applicants who had the same GPA, APs, ECs, and dozens of other application variables had an equal chance of admission. In short, that there is no bias against Asians beyond Harvard's specified admission policy that favors athletes, favors legacies, and emphasizes a variety of non-stat criteria.

The specific differences in rating by race are below. This is for the full sample, including hooks, which are more common among White applicants. Asian applicants had higher stats and a substantially higher academic rating. However, the average across all ratings was higher for White applicants than for Asian applicants. This higher White average rating largely relates to the Athletic rating, which influences admission for non-athletes as well as recruited athletes (particularly for non-recruited applicants with an athletic rating of 2).

Academic Stats as Reported by Plaintiff
SAT II Subject: Asian +0.31 SD higher than White
SAT Math: Asian +0.28 SD higher than White
Average AP Score: Asian 0.08 higher than White
HS GPA: Asian +0.06 SD higher than White
SAT Verbal: Asian +0.01 SD higher than White

Average Reader Ratings On Scale of 1 to ~5, as reported by Harvard OIR
Academic Rating: Asian +0.23 higher than White
Interview Overall Rating: Asian +0.11 higher than White
EC Rating: Asian +0.05 higher than White
GC Rating: Asian -0.02 lower than White
Interview Personal Rating: Asian -0.02 lower than White
LOR Rating: Asian -0.03 lower than White
Personal Rating: Asian -0.13 lower than White
Athletic Rating: Asian ~0.3 lower than White
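
For reference, here is a quick sketch of how a “+X SD” group difference like the ones above is typically computed: the gap in group means divided by a pooled standard deviation. The scores below are simulated purely to show the arithmetic, not the actual applicant data.

```
# Standardized mean difference between two groups (gap in means / pooled SD).
# Simulated scores only, NOT the actual lawsuit data.
import numpy as np

rng = np.random.default_rng(2)
group_a = rng.normal(760, 60, 10000)   # e.g. SAT Math for one group (made up)
group_b = rng.normal(743, 60, 12000)   # e.g. SAT Math for another group (made up)

pooled_sd = np.sqrt(
    ((len(group_a) - 1) * group_a.var(ddof=1) + (len(group_b) - 1) * group_b.var(ddof=1))
    / (len(group_a) + len(group_b) - 2)
)
d = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"Standardized difference: {d:+.2f} SD")
```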

The previously referenced Cornell study found that a combination of SAT, rank, and other variables explained only 4% of variance in 6-year graduation rate. Range restriction may relate to why so much less of the variance in graduation was explained at Cornell than at the less selective publics. The statistically significant predictors within that small 4% included both HS class rank and math SAT (verbal was not statistically significant), as well as things like being female, not being Black, not claiming FA, and not attending a private HS. I emphasized the less selective colleges above to avoid this severe range restriction.

If you want a larger sample with more states, the DARCU review at https://heri.ucla.edu/DARCU/CompletingCollege2011.pdf includes 210,000 college students attending 356 colleges in a wide variety of states. They found the following. They do find that SAT score has a significant correlation in isolation (they don't give specific variance-explained or correlation numbers), but that information seems to be largely duplicated by GPA and almost fully duplicated by GPA + survey information.

GPA Alone – Explains 14% of variance in 6 year graduation
GPA + SAT – Explains 17% of variance in 6 year graduation
Full Model Including Survey Questions – Explains 26.9% of variance in 6 year graduation
Full Model with SAT Excluded – Explains 26.8% of variance in 6 year graduation
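
The “does SAT add anything beyond GPA?” comparison above is essentially a nested-model R^2 comparison. Below is a minimal sketch on simulated data (not the DARCU data); the correlations are made up for illustration.

```
# Nested-model R^2 comparison on simulated data, NOT the DARCU data:
# how much variance does SAT explain on top of GPA?
import numpy as np

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(3)
n = 10000
gpa = rng.normal(0, 1, n)
sat = 0.6 * gpa + 0.8 * rng.normal(0, 1, n)        # SAT correlated with GPA
grad = 0.4 * gpa + 0.05 * sat + rng.normal(0, 1, n)  # outcome (think graduation propensity)

r2_gpa = r_squared(gpa.reshape(-1, 1), grad)
r2_both = r_squared(np.column_stack([gpa, sat]), grad)
print(f"GPA alone: R^2 = {r2_gpa:.3f}")
print(f"GPA + SAT: R^2 = {r2_both:.3f}  (incremental: {r2_both - r2_gpa:.3f})")
```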

But again, private selective colleges don’t tend to struggle in the area of 6 year graduation. The vast majority of kids who make it into a private selective college will graduate. So it’s not terribly interesting or relevant how GPA or SAT scores plays into graduation unless you’re looking at applicants on the fringe where their overall qualifications are so much lower they might not hack it.

I never understood, and still don't, why 6-year graduation rates, or even 4-year graduation rates, are good measures of the quality of colleges or their student bodies. The only conclusion one can draw from a low graduation rate is that the fit between the college and its student body isn't as good. No conclusion can be drawn on whether the college is too rigorous or demanding for its students, or whether its students, on average, just aren't as capable or qualified.

Actually, college graduation rates are strongly correlated to student selectivity measures (such as the usual HS GPA, SAT/ACT scores). It should not be surprising that colleges with the highest academic measurements (like highest HS GPA and SAT/ACT scores) tend to have students who are best capable of handling full course loads without failing anything.

In terms of other fit factors, the main one affecting graduation rates is probably cost and financial aid, since those reasons are prominent in causes of dropping out of college. Also, needing to work part time to afford school may require taking reduced course loads or semesters off, resulting in delayed graduation.

There’s no denying that higher graduation rate has some positive correlation with some other factors that measure the quality of a college, but that positive correlation in itself means little. For example, higher graduation rate could be a result of lower academic standards, in addition to the factors you listed. It’s the effect, not the cause, of those other underlying factors. Only by measuring the underlying factors can we truly understand what’s driving the quality of a college.

If you are referring to the quality of the college, rather than the quality of the students, then measuring that becomes much less connected to admission selectivity, and tends to be viewed much more subjectively (although lots of people and many college rankings tend to think of the quality of the college as mostly based on the quality of the students).

Obviously, quality of the college can influence graduation rates, but that is at the relative margins compared to how quality of students influences them.

The point was that graduation rate is a metric colleges as a whole care about, as do USNWR rankings and many others involved in comparing colleges. Colleges are usually far more concerned with maximizing graduation rate than with having classes with the highest possible average GPA (prior to curving and grade inflation/deflation effects) or with estimating future employee performance ratings. Yes, graduation rate depends on far more than just stats or the “quality of colleges' student bodies,” and colleges are well aware of that. However, other metrics of college performance besides graduation rate, such as cumulative GPA, generally show a similar pattern.

In any case, I agree that students at Ivies and similar colleges are rarely failing out. It is far more common to switch out of a major when struggling (some Ivy-type students consider getting a B+ struggle enough to switch majors) than to fail to graduate. The former is actually quite common at many highly selective private colleges. For example, the referenced Duke study found that 54% of Black students who indicated that they expected to major in a quantitative field instead majored in humanities or social sciences – the majority. Arcidiacono was able to nearly fully explain the larger switch-out rate among Black students based on a combination of admission differences for URMs and gender differences (72% of Duke Black students were female, and being female was the strongest analyzed predictor of switching out of engineering). The referenced Notre Dame study found a similarly high engineering attrition rate.

SAT score was not a statistically significant predictor of engineering attrition in the Notre Dame study. The author writes, “Although SAT scores do not reliably predict retention, the number of credits earned due to AP does.” The Duke study found a stronger SAT correlation in isolation than did the Notre Dame study. However, after controlling for other aspects of the application, including the reader curriculum rating, SAT score was no longer a statistically significant predictor with full controls. Instead, the strongest academic predictor of engineering attrition was the reader HS curriculum rating. This fits with the Notre Dame study finding the stronger correlation with number of AP credits. Higher AP credits likely correspond to a better HS curriculum rating and to generally being better prepared for the intro engineering classes. Note that this is not the same as having a higher HS GPA. HS achievement was a notably weaker predictor than HS curriculum.

@Data10 When grouped by race, what were the actual differences seen in the reader HS curriculum ratings for the studies referenced?

We have talked about the current policies on race in college admissions, but I have been thinking about how the race/gender combination may give African American males an even larger advantage in elite admissions due to the large gender gap present (64% of all African American college graduates are female, while the share of master's and professional degrees obtained by Black women approaches 70%). Duke is the highest-rated institution (USNWR) in the Southeast and still has a large Black gender gap (even more striking when you consider Duke has a D1 football team with no female equivalent, which would boost the percentage of Black male students). Besides being an athlete, a high-achieving Native American student, or maybe the top student in a geographically unique state for elite college admissions (the Dakotas, Idaho, Montana, etc.), being a high-achieving African American male might be as strong an admissions “subgroup” as exists.

An interesting article on medical school admissions in arguably the most prestigious medical journal released today: https://www.nejm.org/doi/full/10.1056/NEJMp1808582?query=TOC

The premise of the article is that, (1) URMs face persistent inequalities in health care quality and access, (2) there is growing evidence that minority patients have greater satisfaction and better adherence to medical treatment when they are cared for by “racially and linguistically concordant physicians”, and (3) the same populations with worse health care are underrepresented in the US physician workforce and in medical schools.

The authors basically argue that there should be at least the same percentage of URMs in medical schools as in the general population and that medical school admissions need to fix this.

Of the 3 points above, (1) and (3) are factually true, but I wonder if the only or best solution to (2) is to match the proportion of URMs in medical school to that of the general population. It’s interesting that the article does not discuss any other possible solutions to this.

Are medical school admissions somehow similar to elite undergraduate college admissions, where it's “obvious” that there is something wrong if the proportion of URMs in the class doesn't match that of the general population? It doesn't seem that there is as much outcry for matching URM proportions in graduate school in physics, math, etc., but I'm sure that some would argue that there should be.