To determine whether requiring the test “adds something” beyond a duplicate confirmation, you need to compare outcomes when the test is part of the application against outcomes when it is not. This is different from looking at whether the test is correlated with outcome in isolation, without considering the rest of the application.
For example, some colleges support an optional 3rd LOR. How would you determine whether that optional 3rd LOR “adds something” and should be required? If you reviewed whether the 3rd LOR is correlated with outcome in isolation, you’d probably find that the 3rd LOR has a statistically significant correlation with outcome, so you might conclude a 3rd LOR should be required. Using the same logic, you might conclude a 4th, 5th, and 6th LOR should also be required, since each LOR has a statistically significant correlation with outcome in isolation, without considering anything else in the application. However, if you looked at whether the 3rd LOR adds something beyond the other 2 LORs and the rest of the application, I expect you’d come to a very different conclusion about how much the 3rd LOR adds to predicting outcome, and instead would be fine with the 3rd LOR remaining optional.
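The difference between the two questions can be sketched with a toy simulation (all numbers hypothetical): model each LOR as a noisy read on the same underlying student quality, then compare the 3rd LOR’s stand-alone correlation with outcome against the incremental R² it adds once the first two are already in the model.

```python
# Toy simulation (hypothetical numbers): each LOR is a noisy measurement
# of the same underlying student quality, so any single LOR correlates
# with outcome, but a 3rd one adds little beyond the first two.
import random

random.seed(0)
N = 5000
ability = [random.gauss(0, 1) for _ in range(N)]
lors = [[a + random.gauss(0, 1) for a in ability] for _ in range(3)]
outcome = [a + random.gauss(0, 1) for a in ability]

def r2(x, y):
    """Squared Pearson correlation = R^2 of a one-predictor model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

def mean_of(k):
    """Average the first k LORs; since the simulated LORs are
    exchangeable, their mean is (approximately) the best linear
    combination, so r2 of the mean approximates the k-LOR model's R^2."""
    return [sum(l[i] for l in lors[:k]) / k for i in range(N)]

solo = r2(lors[2], outcome)       # 3rd LOR alone: sizeable correlation
two = r2(mean_of(2), outcome)     # model using LORs 1-2
three = r2(mean_of(3), outcome)   # model using LORs 1-3

print(f"3rd LOR alone, R^2 = {solo:.2f}")          # looks "significant"
print(f"Gain from adding it = {three - two:.3f}")  # small increment
```

In isolation the 3rd LOR looks predictive, but its incremental contribution over the first two is a fraction of that, which is why “correlated with outcome” and “adds something to the application” are different questions.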
NYC tales: My child took the SAT and submitted it: it was given at their large public NYC specialized high school in early March, ahead of school closure. The score was high, so there was no plan to retake. I am a tutor, but did not prep my own kid - both of my kids just read my prep books on their own and take practice tests. They do not want me to intervene! It was the same for the SHSAT (the specialized high school test in NYC). However, we had signed up for the December 2020 subject tests for MIT (before they were eliminated), and our car (street parked) was rammed the night prior. When we went to the car in the AM to get to the exam, it was totaled and we had to postpone. That test was rescheduled 5x and never happened; in most cases, it was listed as taking place until 24 hours prior to the exam, then cancelled. When MIT said it was not looking at them anyway, we gave up.
I have been a classroom teacher in NYC high schools, and during the pandemic I opted out to tutor. I tutor all ELA, so no math. I do not only do test prep. I coach college essays, help students with assignments, learning disabilities, enrichment, etc. The students I tutor are a mix: some public school, some parochial, some charter, some private. A significant portion of my practice is pro bono (my kids’ school, while one of the top public schools in the US, is 45% free or reduced lunch). I volunteer because I think the game is rigged against kids who do not have the resources to shell out for quality prep and do not have families who understand how this all works. Many kids who are high achievers but poor do not have families who speak English proficiently; many qualify for College Board fee waivers and, in NYC, cannot afford cars, Amtrak tickets, plane tickets, etc. So, yes, many of my students could not go to Connecticut for tests. Their parents are often essential workers (one of my kids has a Dad who is a security guard and a Mom who is disabled). People were sick. People died. They were caring for siblings, just trying to stay afloat at a very demanding school during remote learning. Many students in our school needed devices delivered because they did not have ways to log in or reliable wi-fi. So, if you have the means to make this work, please consider that yes, many of my students who can and do score 1500+ with minimal prep did not have a way to take this test.
Women average higher grades than men at all levels from elementary school to college. Women also average higher graduation rates from both HS and college than men. The fact that males average slightly higher combined college admission test scores than women, in spite of males averaging lower college GPAs and lower graduation rates, may be suggestive of a problem. It certainly does not suggest that we need a standardized test on which men average higher scores than women to compensate for males’ lower average GPA, in order to make things “fair.”
Note that the overwhelming majority of selective colleges are test optional, not test blind. In such a system, males who have higher scores than expected for their HS grades can submit their high scores, and those scores will be considered. And women with lower scores than expected from their HS grades can apply test optional. We see this effect in which groups choose to apply test optional. For example, in my earlier post I mentioned that the Bates study found submitters were notably more likely to be male than non-submitters. Specific numbers are below. Other larger studies have found the same pattern, such as the study of 24 test optional colleges at https://www.nacacnet.org/globalassets/documents/publications/research/defining-access-report-2018.pdf .
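That self-selection mechanism can be illustrated with a small simulation (purely hypothetical numbers, not taken from any of the cited studies): if applicants submit only when their score beats what their grades alone would predict, the submitted scores overstate the applicant pool as a whole.

```python
# Toy self-selection model (hypothetical numbers): score is correlated
# with GPA plus noise; under test optional, an applicant submits only
# if the score beats the GPA-based prediction.
import random

random.seed(1)
N = 10000
gpa = [random.gauss(0, 1) for _ in range(N)]
score = [0.6 * g + random.gauss(0, 0.8) for g in gpa]

# Submit only when the score exceeds what GPA alone predicts (0.6 * gpa)
submitted = [s for g, s in zip(gpa, score) if s > 0.6 * g]

mean_all = sum(score) / N
mean_sub = sum(submitted) / len(submitted)
print(f"share submitting:       {len(submitted) / N:.0%}")  # about half
print(f"mean score, everyone:   {mean_all:+.2f}")
print(f"mean score, submitters: {mean_sub:+.2f}")           # clearly higher
```

Even though the underlying pool is identical, the average submitted score is well above the pool average, which is why any group that tends to under-score relative to its grades is overrepresented among non-submitters.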
Other groups that tend to be overrepresented among non-submitters, and are more likely to benefit from a test optional policy, include lower-income, URM, and first generation students. Some specific numbers from the Ithaca study are below:
Ithaca Applicants
Test Submitter – 10% Pell, 26% URM*
Test Optional – 17% Pell, 40% URM*
Ithaca Admits
Test Submitter – 15% Pell, 22% URM*
Test Optional – 29% Pell, 35% URM*
Ithaca Enrolled
Test Submitter – 18% Pell, 19% URM*
Test Optional – 30% Pell, 31% URM*
*Ithaca includes Asian students as part of their URM category. Only ~4% of Ithaca kids are Asian.
This didn’t happen at Ithaca. Note that the test optional admits above skewed to lower SES. It also didn’t happen at any of the other 24 test optional colleges in the NACAC study linked above. At all 24 colleges, test optional admits averaged lower income than test submitter admits.
More applications mean the colleges require more resources to evaluate them, such as hiring more readers. Some of the colleges with especially holistic evaluations prior to COVID also received tens of thousands of applications prior to COVID. An especially holistic evaluation seems to track selectivity more than it tracks having a small number of applications. Have we seen any evidence that colleges had less opportunity to conduct holistic reviews this year? We saw that some of the colleges with the largest application increases said they needed an extra week to finish going through the larger than expected number of applications, but I am not aware of any that said they had to take shortcuts in their usual review.
That was what the UC study you linked suggested. It did not recommend keeping the SAT. It recommended that UC develop a new test to replace it, and UC has mentioned plans to do so in the future. However, it’s unclear whether that will ever happen.
One thing I’m sure of is that this change that competitive college admissions is going through is not done. Perhaps someone will come up with a better test. If I had to guess, though, I would say that ten years from now many more competitive schools will be test-blind – taking your arguments to their logical conclusion, if test scores add little to nothing to the review that can’t otherwise be gained from other sources, why should admissions departments allow a biased* test to be submitted at all?
Well, California is the leader here, not allowing test scores at all for its public universities.
Read in the paper today that there is a bill before the Colorado legislature to allow schools to decide whether they want to consider test scores (right now they are required, though of course schools can assign little weight to them).
Another example: schools in the San Diego Unified School District have had no testing since March of 2020, and there will be no testing for the rest of the current school year. Schools have also been remote since March of 2020; students are still not back on campus. PSATs were even cancelled for some students.
@socowonder’s comment about being willing to drive a long distance for testing (and not fly) is not an outlier.
And now letters of recommendation are in the cross-hairs:
Of course, privileged groups tend to do better in high school, too – and wealthy parents can afford tutors for their kids – so it’s unclear why high school grades are an appropriate standard for evaluating applicants.
LoRs do give an advantage to some applicants. Consider these high school environments:
Elite prep or magnet school with teachers who are coached on writing LoRs and not generally overburdened, and a dedicated well connected college admissions counseling staff who can direct students to best match colleges and best match teachers for LoRs.
Ordinary public school where teachers and counselors are somewhat overburdened, and students compete for rationed LoRs because the burden on teachers and counselors would be too high if they accepted all LoR requests.
Low SES public school where teachers and counselors never have any practice writing LoRs because the few who go to college go to the local community college or minimally selective local state university.
Yep. The only thing I have read to counter that (from the cite above) is the format of the LoR questions. In an elite prep school it is almost impossible to say “this kid is one of the best of my career”. A super strong student is more likely to stand out at a public, underserved school.
Doesn’t make up for the advantages of a prep school overall, of course. But I do think the AOs take the high school into consideration and try not to compare apples to oranges. It is just a lot easier to pick apples.
Can’t remember the exact CC thread, but a few years ago an admissions officer posted that she had received a LoR from a teacher at an elite prep school which IIRC stated that the applicant was “the finest student I have ever taught”. It was the fourth time in five years she had gotten an application from someone who was “the finest student I have ever taught”. LOL LoR’s.
In following A2C on Reddit, I also see a surprisingly large number of students who are being tasked with crafting their own LORs. The teacher then goes in, tweaks it a bit, signs it and sends it off. Totally unacceptable practice IMO.
Overburdened teachers who may not know the students well enough to give much more than generic recommendations anyway? Does not make the described practice a desirable one, though…
Seriously? I mean I know people are busy, but still. Of course, my husband has to write his own performance review these days so I guess I shouldn’t be surprised this practice has made its way down to high school (as my own boss I have dispensed with performance reviews).
In my first job, we would get performance reviews written by our supervisor after most large projects. I then moved to a competitor and you had to self-evaluate and go over it with your supervisor.
On my first engagement at the new company I was modest about my performance, even though I thought I had done very well. I did not want to “toot my own horn” too much. The supervisor basically nodded and agreed with everything.
I quickly learned to take credit for everything I did. In fact, I was the sole person responsible for the sun rising every morning. I learned that most people want to avoid conflict so as long as I had tangible things to speak of, I was seldom challenged. I am sure this one change I made resulted in a significant impact on my annual earnings and bonus.
For my government agency, we had to answer the questions for our reviews and give it to our supervisor, who then created the performance review. It was on a scale of 1-5 and yes, you had to ‘up’ your performance if you wanted to get an above average one. Most supervisors had 8-10 people, but he/she may not have been the supervisor to all those people for the whole year. Most supervisors gave 4s to good workers and there were few 5s.
I was friends with my final supervisor, but there was one person in our group who was terrible. They actually checked the surveillance camera in the lobby to prove she was hours late for work every day. She came to a meeting one day, set at 10 for her convenience, about 40 minutes late and said she had a ‘brain fungus.’ Anyway, my friend had to do the evaluation. I told my friend that if I were writing it, I’d give her two “2s” (basically failing) and two “3s” (average), because that would average to a “3” and avoid the hell (and union complaint) that failing her would cause, and then I’d say absolutely NOTHING in any of the comments. Anyone seeing the review would know exactly what that meant: that this woman was a horrible employee but that the supervisor couldn’t say so in the review. My friend said, “You know me too well. That’s exactly what I did.”
With tests being optional now, a lot of students are taking a shot, a very long one, at top schools. It just creates a lot more noise than usual.
“I know kids whose parents do not speak English, have an elementary school education, are working almost a full time job, participate in sports and extracurricular activities, volunteer in the community and get 1290 or in the 1300s in the SAT with zero prep, no Khan, no doing practice test.”
I think those kids would have gotten in even before test optional; it’s not like scores in the 1300s are bad. However, they probably wouldn’t have applied, so it does help them, I agree.
“I think people are upset because TO is changing the landscape at some elite schools and making admissions even more competitive - especially for unhooked (not legacy, athlete, or child of major donors) students. For most kids, their chances are small to begin with - even with a high score - and this makes it tougher.”
I don’t think it’s just elite schools, however you want to define that, that are tougher to get into; it’s many schools outside the top 20 as well. These are places where, say, the admit rate was 40% (a match) and is now 15-20% (a reach).
"That was what the UC study you linked suggested. It did not recommend keeping the SAT. It recommended that UC develop a new test to replace it, "
None of these studies include test optional results for STEM majors, right - especially engineering or pre-med at public universities like Berkeley? Until you include public schools and majors that have lower graduation rates and more weed-out, it’s really only a partial analysis of test non-submitters vs. test submitters. You’d have to find similar graduation rates between test non-submitters and submitters across engineering majors at, say, UCB or Purdue to claim there’s no difference, especially if you’re assuming the test submitters have considerably higher scores.
This might be somewhat off-topic (I did not read the whole discussion above) but, in my opinion, preparing for the SAT/ACT allows the student to fill some gaps in basic grammar and math which our otherwise highly-ranked public HS somehow forgot to teach them. Maybe the private schools teach grammar, but in our public school district I see papers graded for grammar for the first time in the AP Lit class taken senior year.
My D18 was editor-in-chief of the school’s award-winning newspaper, a voracious reader, and a really talented creative writer. Turned out there were some grammar rules that she did not know! After prepping on those rules for a couple of weeks, she got the max score on the reading/English sections of the ACT. I am sure prepping for the math section helped even more to solidify her quantitative skills.
My S21 who reads far less than his sister benefitted even more. There is nothing wrong with prepping for these tests as this ensures a minimum baseline in math and English. So, regardless of who these tests helped or not in the college admission, I think they are kind of useful.
I agree. I attended some of the best public schools in our state from 6th-12th grade, and I never had a grammar or vocabulary test. On the flip side, my kids went to one of the worst high schools in the same state, and they learned how to diagram sentences. I thought the BC Calc AP exam was the easiest test ever and had an excellent trig foundation, yet I never saw matrices until college. There are gaps in every education.