“These aren’t just elite institutions, they’re elitist institutions”

Not sure what the types of cars in one unnamed U’s parking lot can prove. Is this based on anything like scientific sampling, or just whatever random lot you happen to be near?

Lol, around here, not everyone buys a parking pass. Plenty take the bus or walk. In fact, there isn’t a lot of undergrad student parking, to begin with. More opps for grad students.

I kinda think this thread has gone off track.

Agree, not sure what point there is regarding cars at an unnamed public school. Most top publics will have plenty of wealthy kids, internationals, as well as kids from other backgrounds.

“Most kids don’t ace their apps. Whether or not you have legacy”

It’s still easier to have a better app if you’re legacy, because the parents know what to emphasize or not emphasize. Even though the parents applied a generation ago, the values of the school tend to be the same. Maybe the parents are hands-off with the app.

"So, I note that you’d need to see apps, to begin to understand ratings, overall. "

I wouldn’t need to see the apps if Harvard’s adcoms (or any other college’s) told us what the ratings are for each applicant, which is what they provided in the case. The adcom’s job is to rate relative to other applicants, and if the applicant deserves a 1 or a 2, I’ll take their word for it. That said, it’s hard not to see patterns suggesting preferences, maybe strong preferences.

The group that got the highest share of 1s and 2s for personal was, not surprisingly, white LDCs (41%); the group with the lowest share was, again not surprisingly, Asian non-ALDCs (18%). It’s not totally about race, since black LDCs also came in at 41%. All LDCs did well there. Now, you could say there are legitimate reasons for that, and that’s a fair point, but you have to know going in that non-ALDCs are at a disadvantage.

“Class of 2001 – 1039 Legacy+Athlete applicants, 13,242 Non-LA applicants
Class of 2018 – 1094 Legacy+Athlete applicants, 27,512 Non-LA applicants”

I saw this as well and thought it was a little odd, given that about 38,000 new Harvard College alums were added in those years, yet the number of L+A applicants stayed about the same.

“How can any one of those 19 say it was the legacy did them in if there are 18 (or more accurately 18,000) other students who could be in line in front of them, too?”

Again, using Harvard’s own data: assume 40,000 apply and Harvard accepts 2,400. About 1,390 applicants are LDCs, of whom 468 get in; athletes are another 15% of the class, roughly 360 admits; that leaves 1,572 non-ALDC admits from a pool of about 38,350 non-ALDC applicants.

So non-ALDCs are 96% of the applicants and 65% of the class; put another way, 4% of the applicants make up 35% of the class. I’ve concluded from this that if a legacy gets rejected, they’re replaced by another legacy. You have these dual admission tracks, as an alum of the Harvard Legacy Project pointed out.
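For anyone who wants to check that arithmetic, here is a minimal sketch in Python using the approximate figures quoted above; the pool sizes and admit counts are rough estimates from this thread, not official Harvard numbers.

```python
# Rough share arithmetic using the approximate figures quoted above
# (illustrative estimates from this thread, not official Harvard data).
applicants_total = 40_000
admits_total = 2_400

ldc_admits = 468
athlete_admits = int(admits_total * 0.15)                      # ~360, per the 15% figure above
non_aldc_admits = admits_total - ldc_admits - athlete_admits   # ~1,572
non_aldc_applicants = 38_350                                   # pool remaining after ALDC applicants

print(f"non-ALDC share of applicants: {non_aldc_applicants / applicants_total:.1%}")      # ~96%
print(f"non-ALDC share of admits:     {non_aldc_admits / admits_total:.1%}")              # ~65%
print(f"ALDC share of applicants:     {1 - non_aldc_applicants / applicants_total:.1%}")  # ~4%
print(f"ALDC share of admits:         {1 - non_aldc_admits / admits_total:.1%}")          # ~35%
```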

Not all legacy parents are an asset. I can’t figure out what sort of engine some think drives admissions. Some on this thread look at results and stop there. Many even advocate that confused kids look at results threads. That’s half the story.

“I wouldn’t need to see the apps if Harvard’s adcoms (or any other college) told us what the ratings are for each applicant, which is what they provided in the case.” If they told you? I don’t even get that. On one hand, speaking of the Holy Grail, but waiting for the instructions? Not to mention that whenever anyone does spill a few beans, many argue it can’t be so.

“The adcom’s job is to rate relative to other applicants, and if the applicant deserves a 1 or a 2, I’ll take their word for it.” But you aren’t taking their word for it. And the job, first, is to review each kid as an individual, your app, your context, your thinking skills or not.

To clarify, lookingforward, I believe that the results of the car survey will illuminate the level of socio-economic privilege at a (random) large public university, since I have an informal, though not numerical, basis for assessing it. The level of affluence at large public universities is relevant to any reasonable assessment of how elitist the elite schools are. I am sorry I didn’t have a chance to get to it in the window of opportunity today.

We do happen to have a lot of undergrad student parking. The undergrads don’t get parking passes and the grad students do (as well as faculty and staff). This makes it easy to figure out who owns the various cars, in categories (not individually).

So you think more expensive cars will show something about the income divide? Or just the cars you do see, a random event?

If 90% are $$$, you’ll assume what?

C’mon. This isn’t nearly scientific. You aren’t controlling for anything.

There was a wonderful article in the WSJ a few years back showing the impact of Georgia’s scheme to subsidize public education for middle- and upper-middle-class families (at the expense of the needy), with quotes from professors who pass the student parking lot filled with BMWs as they trek to the faculty lot filled with beat-up Hondas and Toyotas.

Carry on.

I assume this is because, if Harvard’s class size has been broadly constant over recent decades, there would be a broadly constant number of alumni of childbearing age at any time, having a broadly constant number of children, a broadly constant number of whom would be minded to apply to Harvard.

Meanwhile, of course, many of these places are beating the bushes to get ever more applicants and doing things like buying student data from the College Board at 47 cents a name, as described in the WSJ: https://www.wsj.com/articles/for-sale-sat-takers-names-colleges-buy-student-data-and-boost-exclusivity-11572976621

blossom is absolutely right, in post #506. I don’t pretend to be showing what % of the students are very well off by providing my list of the student vehicles. Many of the students are on foot, on skateboards, or on scooters. But it seems to me that the income divide in the country is a very serious issue, and it also seems to me that there are elements of socio-economic elitism fairly far down the college rankings lists. So while my list will not be scientific, it will be illustrative of the long shadow that SES elitism casts, in my view. But you can form your own opinions. I will try sincerely to post the list tomorrow.

I should add that I have seen a student parking and getting out of a Maserati on campus, though not in the lot near my building. That lot primarily serves students going to classes in Physics, Chemistry, Microbiology, Physiology, and Biochemistry.

Note that admitted ALDCs had lower average personal qualities ratings than other groups, including non-ALDC Asians. It’s only ALDC applicants who had higher ratings, not admits.

During the lawsuit period, the reader guidelines for what the personal rating was supposed to entail were almost non-existent. The lawsuit docs say the personal qualities rating was supposed to reflect the reader’s “assessment of the applicant’s humor, sensitivity, grit, leadership, integrity, helpfulness, courage, kindness and many other qualities,” but it’s not clear how much of that is passed to readers giving the rating and how much is the unknown “many other qualities.” I suspect it was largely up to individual readers’ personal opinions about what good personal qualities entail and which applicants have them. Similarly, the rating scale was especially vague, with little detail about what specific criteria applicants are rated on, as summarized below…

*Class of 2018 Personal Qualities Ratings

  1. Outstanding.
  2. Very strong.
  3. Generally positive.
  4. Bland or somewhat negative or immature.
  5. Questionable personal qualities.*

This changed in the most recent class, with a far more detailed explanation of how readers are supposed to rate applicants on personal qualities, including comments like:

“It is important to keep in mind that characteristics not always synonymous with extroversion are similarly valued. Applicants who seem to be particularly reflective, insightful and/or dedicated should receive higher personal ratings as well. As noted above, though, an applicant’s race or ethnicity should not be considered.”

They also added far more detailed and specific descriptions of the ratings scale, as quoted below.

*Class of 2023 Personal Qualities Ratings

  1. Truly outstanding qualities of character; student may display enormous courage in the face of seemingly insurmountable obstacles in life. Student may demonstrate a singular ability to lead or inspire those around them. Student may exhibit extraordinary concern or compassion for others. Student receives unqualified and unwavering support from their recommenders.
  2. Very strong qualities of character; student may demonstrate strong leadership. Student may exhibit a level of maturity beyond their years. Student may exhibit uncommon genuineness, selflessness or humility in their dealings with others. Students may possess strong resiliency. Student receives very strong support from their recommenders.
  3+. Above average qualities of character; student may demonstrate leadership. Student may exhibit commitment, good judgment, and positive citizenship. Student may exercise a spirit and camaraderie with peers. Student receives positive support from their recommenders.
  3. Generally positive, perhaps somewhat neutral qualities of character.
  4. Questionable or worrisome qualities of character.*

In my earlier comments about the referenced studies, I’ve often emphasized that the models could explain the majority of variance in admissions decisions at Harvard. They aren’t perfect, but they are good at predicting decisions and even better at predicting average admit rates for various groups.

However, the same is not true of the personal rating. Things like the alumni interview personal rating are correlated with the reader personal rating, as one would expect. Obviously the inputs to the personal rating, such as the “recommenders” mentioned above, are also correlated, as are ratings from various other parts of the application. However, with all controls including ratings and boosts/penalties for various groups, the models still couldn’t explain even 30% of the variance in the personal rating.

It appears that the vast majority of the personal rating depends on something that does not appear to be duplicated in other parts of the application. This missing part is also not simply boosts/penalties for specific groups, since the model that explains less than 30% of the variance already includes weights for such groups. This makes it difficult to say with certainty whether there is bias in the rating. My personal opinion is that there was likely some unintentional bias, which is expected given how vague and subjective the description of the personal qualities rating was prior to the class of 2023.
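To make the “less than 30% of variance” point concrete, here is a hedged sketch with synthetic data (not the lawsuit dataset): generate a rating that depends partly on observable inputs and partly on an unobserved component, fit ordinary least squares on the observables, and look at the R², i.e. the share of variance explained.

```python
# Synthetic illustration of "explaining less than 30% of variance".
# The data, weights, and noise scale below are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Stand-ins for observable inputs (other ratings, interview, etc.).
observables = rng.normal(size=(n, 3))
# A large unobserved component: whatever readers react to that isn't
# captured elsewhere in the application.
unobserved = rng.normal(scale=1.0, size=n)

personal_rating = observables @ np.array([0.5, 0.3, 0.2]) + unobserved

# Ordinary least squares on the observables only.
X = np.column_stack([np.ones(n), observables])
beta, *_ = np.linalg.lstsq(X, personal_rating, rcond=None)
resid = personal_rating - X @ beta
r_squared = 1 - resid.var() / personal_rating.var()
print(f"R^2 on observables only: {r_squared:.2f}")  # just under 0.3 with these made-up scales
```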

That said, the best fit for the predicted personal rating occurred with the following boosts or penalties for different groups. Positive = boost, negative = penalty. This could also be read as: the positive groups tend to have slightly higher personal ratings than one would expect based on the rest of the application (including the interview personal rating), while the negative groups have slightly lower.

Black + Low SES: +1.0
Recruited Athlete: +0.9
Black: +0.7 (0.05)
White + Low SES: +0.6 (0.05)
Special Interest List: +0.5 (0.02)
Double Legacy: +0.4
White + Legacy: +0.3 (0.07)

Female + Plans to Study CS: -0.1
Female + Asian: -0.2
Male + Plans to Study Eng./Physics: -0.3 (0.05)
Male + Plans to Study Math: -0.4 (0.05)
Male + Asian: -0.4 (0.03)
Male + Plans to Study CS: -0.5 (0.06)

The combinations of 2 variables above are specifically modeled. Combinations of larger numbers of variables are not modeled, but one would expect the personal rating to be the furthest above expectations for the following combination: Male + Black + In All Hook Groups including SES “Disadvantaged” + Applies Early + Plans to Study Social Sciences or Humanities. And one would expect the personal rating to be the furthest below expectations for this combination: Male + Asian or Unknown Race + Unhooked + Applies RD + Plans to Study CS.

Being a low SES kid appears to be one of the better predictors of having a higher personal rating than expected… more so than “elitist” hooks. A similar statement could be said about URMs. As such, it’s not clear that the net effect of this rating is making Harvard “elitist.”
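For readers who want to see how numbers like the boosts/penalties above are typically produced, below is a minimal sketch with synthetic data. It is not the actual model from the lawsuit filings (which had many more controls); the column names and effect sizes are made up. It just shows how a boost or penalty such as “Male + Asian” shows up as the coefficient on an interaction term in a regression on the personal rating.

```python
# Synthetic sketch of estimating group boosts/penalties on the personal rating
# via interaction terms. All variable names and effect sizes here are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "asian": rng.integers(0, 2, n),
    "low_ses": rng.integers(0, 2, n),
    "interview_personal": rng.normal(size=n),  # stand-in for other application controls
})
# Build an outcome with a -0.4 "penalty" on the male*asian interaction
# and a +0.3 "boost" for low SES, plus noise.
df["personal_rating"] = (
    0.6 * df["interview_personal"]
    + 0.3 * df["low_ses"]
    - 0.4 * df["male"] * df["asian"]
    + rng.normal(scale=1.0, size=n)
)

# OLS with an interaction term; the male:asian coefficient is the
# estimated boost/penalty after controlling for the other inputs.
model = smf.ols("personal_rating ~ interview_personal + low_ses + male * asian", data=df).fit()
print(model.params[["male:asian", "low_ses"]])  # recovers roughly -0.4 and +0.3
```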

Valid points, Data10. My comment about the effect of the personal rating on Asian-American applicants was based on a report I read at the time of the initial Harvard case: The data showed that the personal rating was consistent with the rest of the application, including the interviewer’s report, for the Asian-Americans in the top academic category, but the personal rating was discordantly low for Asian-Americans in academic categories 2 through 5, including being low relative to the interviewer’s report. This is something that the aggregated data could not reveal. It is also a type of elitism, in my opinion.

Again, you have to ask the “Why?” questions. And be open to various explanations.
And remember that the qualities described per the ratings are not just about an applicant’s greatness (or not) as an individual, but within the context of building the class.

“So non-ALDCs are 96% of the applicants and 65% of the class; put another way, 4% of the applicants make up 35% of the class. I’ve concluded from this that if a legacy gets rejected, they’re replaced by another legacy. You have these dual admission tracks, as an alum of the Harvard Legacy Project pointed out.”

Makes sense. That’s consistent with the bucket theory – there are a certain number of legacy spots, and that number doesn’t change. My point is that even getting rid of the legacy spots entirely doesn’t move the needle very much for a kid’s chances of getting in.

Any rejected applicant was rejected more because of the competition in the unhooked lake than in the legacy bucket.

@Data10, you rock.

I haven’t seen the report, but I suspect it is referring to AI deciles, not the academic rating, since that’s the way the lawsuit groups it. AI decile is a purely stat-based computation, of which test scores compose 2/3 and GPA/rank composes 1/3. Asian applicants as a whole had higher stats, but didn’t have correspondingly higher personal qualities ratings to match the higher test scores. So if you group by AI stat decile, it appears that Asians have notably lower personal qualities scores at a given AI score. This effect occurred at all AI deciles, including the top one. I didn’t see anything to suggest academic 1 differs.
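To be clear about the mechanics being described, here is a simplified sketch with synthetic data (no built-in group differences, so it only shows the bookkeeping, not the finding): build an academic index from roughly 2/3 test scores and 1/3 GPA/rank, cut it into deciles, and compare average personal ratings by group within each decile.

```python
# Simplified sketch of grouping personal rating by academic-index decile.
# All data below is synthetic; column names and weights are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 20_000
df = pd.DataFrame({
    "test_z": rng.normal(size=n),        # standardized test-score component
    "gpa_z": rng.normal(size=n),         # standardized GPA/rank component
    "group": rng.choice(["A", "B"], size=n),
    "personal_rating": rng.normal(size=n),
})
df["academic_index"] = (2 / 3) * df["test_z"] + (1 / 3) * df["gpa_z"]
df["ai_decile"] = pd.qcut(df["academic_index"], 10, labels=False) + 1  # deciles 1..10

# Average personal rating by decile and group; with real data, a gap within
# deciles is what this style of grouping highlights.
summary = df.groupby(["ai_decile", "group"])["personal_rating"].mean().unstack()
print(summary)
```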

This type of grouping of the personal rating by test score does not make much sense to me. If you are going to group the personal rating by other components of the application, why not use the ones that are supposed to contribute to the personal rating, instead of using test scores? For example, the reader instructions above mention “support from their recommenders” as a key contributing factor to the personal qualities rating, so why not group by applicants who have top marks from their recommenders and see if they also get higher personal ratings? I suspect the answer is that grouping by AI/test scores instead of components relevant to the rating makes the difference appear more dramatic, which better supports the Plaintiff’s argument.

If you look at average ratings, rather than grouping by test scores, the differences are much more mild. The Harvard OIR report at http://samv91khoyt2i553a2t1s05i-wpengine.netdna-ssl.com/wp-content/uploads/2018/06/Doc-421-145-Admissions-Part-II-Report.pdf mentions only a 0.1 difference in average rating between White and Asian applicants, far less than the difference between LDC and non-LDC applicants that was mentioned above. White students also have higher average ratings on the Alumni Interview Personal rating than Asian students (including if you exclude ALDCs), but the magnitude is negligible.

This larger difference in ratings between LDCs and non-LDCs is also far more relevant to this thread. Specific numbers for White LDCs and non-LDCs are below. I listed both the contributing recommender inputs and the personal rating output, as well as the alumni personal rating for comparison. The applicant personal qualities rating does seem to have more of a gap than the other components, but I think there is too little information to say much with certainty about whether the personal rating was deserved. What is clearer is that in spite of LDC applicants getting better ratings, LDC admits had worse ratings. This is the pattern one would expect if LDCs get a strong boost in chance of admission (a quick simulation after the numbers below illustrates this).

White Applicants
LOR #1 – LDCs Average ~0.16 better
LOR #2 – LDCs Average ~0.12 better
Counselor – LDCs Average ~0.21 better
Alumni Personal – LDCs Average ~0.16* better
Reader Personal – LDCs Average ~0.39 better
*Large missing component among non-LDCs, likely due to students not interviewing. Difference would likely be larger if everyone interviewed.

White Admits
LOR #1 – LDCs Average ~0.3 worse
LOR #2 – LDCs Average ~0.31 worse
Counselor – LDCs Average ~0.28 worse
Alumni Personal – LDCs Average ~0.17 worse
Reader Personal – LDCs Average ~0.27 worse
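Here is the quick simulation mentioned above, purely synthetic: give LDC applicants a slightly higher average rating but a large (made-up) admissions boost, and the LDC admits come out with lower average ratings than non-LDC admits, matching the applicant/admit reversal in the numbers above.

```python
# Purely synthetic illustration of the applicant/admit reversal: a group with
# slightly better ratings but a big admissions boost ends up with weaker admits.
import numpy as np

rng = np.random.default_rng(3)

ldc_ratings = rng.normal(loc=0.2, size=2_000)       # LDC applicants: slightly better on average
non_ldc_ratings = rng.normal(loc=0.0, size=38_000)  # non-LDC applicants

# Admit on rating plus a large boost for LDCs (the boost and cutoff are made up).
cutoff, ldc_boost = 2.5, 1.5
ldc_admits = ldc_ratings[ldc_ratings + ldc_boost > cutoff]
non_ldc_admits = non_ldc_ratings[non_ldc_ratings > cutoff]

print(f"applicant means: LDC {ldc_ratings.mean():.2f} vs non-LDC {non_ldc_ratings.mean():.2f}")
print(f"admit means:     LDC {ldc_admits.mean():.2f} vs non-LDC {non_ldc_admits.mean():.2f}")
# LDC applicants rate higher on average, but LDC admits rate lower than non-LDC admits.
```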

Thanks, Data10. I will see if I can track down the report I read. It clearly indicated that Asian-Americans in the top academic group were rated no worse on the personal scale than any other racial or ethnic group, but that did not hold for any academic rankings lower down. There, the Asian-Americans were rated lower. My recollection is that this referred to the academic rating of 1 (probably summa) by the admissions committee members and not to the stats deciles. It would not make any sense for the entire top decile of the applicants to be probable summas. It takes more than that. Maybe the top decile of the admits.

@Data10 What is going on with Asian admissions in the South/Texas and Massachusetts?

I find it interesting how closely the demographics model matches the projected model, but Asians don’t do so well compared to whites in those geographic areas (and do better in other areas). What goes into the demographics model?

An article today about one college’s efforts to diversify its recruited athletes specifically:

https://www.nytimes.com/2019/11/07/sports/college-sports-diversity-amherst.html

In the main example (men’s soccer), the college’s team has won the D3 national championship and is generally near the top. So it would seem quality is not suffering as a result of recruiting more athletes of color and more lower-SES athletes.

If they all did that, it could make some impact… though it would perhaps defeat one purpose of having these sports, which for some colleges is to ensure that higher-SES kids attend.

"Thanks, Data10. I will see if I can track down the report "

I think this is the report you’re talking about, that mwolf already linked to:

http://public.econ.duke.edu/~psarcidi/legacyathlete.pdf

The information starts around page 43, but I didn’t see anything breaking out personal rating by race and academic decile in one table, which I think is what you’re looking for.

Just saying. It’s not just Asian Americans in the South or MA. It’s anywhere kids apply from in droves, with similar profiles and a very narrow set of majors. The issue can be more pronounced in the Bay Area, Chicago, Houston, the NVa/DC area, and NYC. It helps to consider just how many STEM/premed kids are in the pool.

And bullets like courage, humor, and likeability don’t express the holistic nature of forming impressions. I do think it would help some here to quit looking mostly at and/or for absolutes, or looking at ratings as forming a hierarchical list.