True for 2020, but very unlikely to be true for subsequent years.
I think my son will do fine and he's got plenty of accolades on his resume, but he got a perfect 36 in every section on his "just give it a shot without much test prep" March junior-year ACT. This was right before the pandemic hit, and right before a summer he thought he'd spend studying for the ACT and SAT. Normally that score would mean a lot, but this year it seems to mean very little (and at test-blind schools it means absolutely nothing). We'll find out in early April, and he's got some great options already, but we have little sense of how competitive he'll be at the top schools he applied to.
Fair enough. On a personal level, I will fully admit that I think it hurt my daughter this year; she was only able to take the ACT once. She did prep, for sure. In any other year, her 99th-percentile score would have helped her at lots of schools, not least UCLA and Cal, both of which of course went test blind this year.
That said, this was a crazy year. I'm not sure it makes sense going forward.
P.S. Also on a personal level, I was one of those financial-aid kids who got into an Ivy for undergrad and another for law school, and (given my background) I don't think I would have gotten in without, as Dean Coffin put it, "seeing strong scores [so that] his team feels more confident." P.P.S. No test prep back in my day, lol.
It is also worth noting that the SES-elite prep/boarding schools were not so academically elite many decades ago (first half of the 20th century), so HYPetc. needed to find actual academically elite students from other places back then.
However, they apparently did not intend for such a "talent search" by SAT to bring in as many Jewish students as it actually did.
That's not an accurate summary of the history of standardized tests. The SAT was created by the eugenicist Carl Brigham, at least partially to uphold a "racial caste system" and objectively separate groups he believed to be intellectually inferior, such as women, non-Whites, and poor persons. While the SAT has changed to the point where it barely resembles the original version, women, URMs, and lower-SES persons all tend to have lower average SAT scores than would be predicted from other sections of the application, and tend to be overrepresented among test-optional admits and at test-optional colleges.
Harvard did create an SAT-based scholarship in the 1930s, which may be what you are referring to. Some say the scholarship was added to identify students who came from outside the usual eastern boarding-school group, and some say the test was added in an effort to limit Jewish students, who many incorrectly assumed would test poorly.
Well, the standardized testing idea pre-dates the SAT. Here's an article from the Washington Post (https://www.washingtonpost.com/news/morning-mix/wp/2015/07/28/how-the-sat-came-to-rule-college-admissions/) and here's an excerpt:
"At the end of the 19th century, higher education was still very much the purview of the elite. Aristocratic admissions standards required applicants to know Latin, Greek and algebra â subjects rarely taught in the nationâs public schools. Young men (it was always, only men) effortlessly transitioned from elite academies like Andover and Groton to the campuses of Harvard, Princeton and Yale. There they were trained not just in academics but also in the customs of the establishment: playing sports, hobnobbing with their peers, wearing waistcoats.
The introduction of the very first admissions test, the College Board Entrance Exam, in 1901 began to change that, granting any gifted high school senior a shot at a private university education so long as he had the resources to pay for it. The test would still have been beyond the abilities of most Americans (the original 1901 exam included a section asking students to translate a paragraph into Latin, then rewrite it in indirect discourse), but it was marginally more meritocratic than anything that had existed before. In 1908, three years after adopting the exam as its main standard for admission, Harvard saw the composition of its student body shift dramatically: 7 percent was Jewish, 9 percent Catholic and 45 percent from public schools, according to the New Yorker."
Regarding Harvard and its President from 1933 - 1953:
"Initially, Conant only used the SAT to select students for scholarships. But the more he learned about the aptitude test, the more convinced he became that it was the mechanism by which he would create 'education for a classless society.' He believed that the kinds of people who excelled on the test would be intelligent, hardworking and privilege-averse, that they would be better public servants than their WASP predecessors and would make America more democratic."
And the ironic thing is, this prejudice led to the sacrosanct "holistic review" in college applications. From the Washington Post article quoted above: "Alarmed at the increasing enrollment of Jews and other 'undesirables,' schools quickly added other requirements intended to weed out these applicants: letters of reference, assessments of 'manliness,' personal essays, evidence of extracurriculars."
History aside, maybe it would be worth discussing the current SAT, which was redesigned in 2016 and purports to measure academic skills against the Common Core standards. Good idea, bad implementation? There may be numerous factors to address for that question. Might a different test, or set of tests, improve upon the current test's weaknesses, and provide a standardized measurement?
You persist in saying on various threads that "there weren't that many students who really could not take a test by early January."
It simply is not true.
My National Merit Scholar daughter would have knocked the SAT out of the park. She studied very hard for the SAT. And it was cancelled repeatedly for her, as it was for all her friends.
So please desist from repeating this flat-out wrong statement.
And as for Dartmouth's "moral" dilemma, a string of 5s on AP tests, in conjunction with a rigorous course schedule, should tell them much of what they need to know.
Absolutely. Couldn't agree more. And I know that many students couldn't take the tests this year. It was a weird year.
Looking ahead, however, if all we do is replace standardized ACT/SAT tests with standardized AP tests, we haven't really gotten around the problem of SES privilege influencing the measurements used for college admissions. Does anyone want to hazard a guess at the relative affluence of the students who take, and the school districts that offer, a string of APs by the end of junior year?
Selection via standardized testing has been around for more than 2,000 years, so the standardized-testing idea also pre-dates the CBEE. However, if you were referring to the CBEE of 1901, it was originally created to get eastern boarding schools to use a more standardized curriculum, so it emphasized questions drawn from the "elite" boarding-school curriculum.
Some example CBEE questions are at https://www.pbs.org/wgbh/pages/frontline/shows/sats/where/1901.html. I listed the first 3 questions below. If a college were designing a test to "identify talented students from high schools the Ivies didn't know well," these aren't the type of questions I'd expect would best achieve that goal. Instead the questions favor students attending high schools with a particular curriculum… a curriculum that I assume was more common at the usual "elite" boarding schools than elsewhere. I imagine many of the questions are far removed from the questions that would be most useful in predicting who will be successful at Harvard/Ivies. It may be a more merit-related admission system than was present previously, but that doesn't mean the goal of the test is to give everyone the same fair chance, regardless of background.
Current standardized tests used in college admissions are obviously not this bad today, but they still generally add little to the prediction of success in college beyond other existing components of the application, particularly at colleges as selective as the Ivies. Instead, modern tests tend to emphasize different content, different styles of questions, different time restrictions, etc. For example, if Harvard wanted to predict who would be most successful in a challenging engineering major that emphasizes calculus-and-beyond math, I doubt their first thought would be to see who can rapidly answer the most simple (for Harvard students) algebra/geometry-type multiple-choice questions while making few careless errors. Or if they wanted to predict who would write the best multi-page critical analysis of a complex topic, I doubt their first thought would be a multiple-choice test about reading a paragraph or marking grammar correctly.
Well, the open secret is of course that kids from private or top-notch/affluent public schools don't really need the SAT or the APs, since the longstanding relationships between the AOs of those schools and the top colleges mean that the schools are really vouching for particular students regardless of those tests.
And I didn't say a string of APs. I said a string of 5s on whatever APs they were able to take at their school. For my daughter at a mediocre public school, the max was 4 by the end of her junior year. Still sufficient information for an AO, I think, in conjunction with rigor and (possibly) rank.
Lee Coffin and his Admissions team once again proving that Dartmouth is delusional.
In any case, there's nothing stopping Dartmouth and other elite schools from taking students from "unknown high schools" without any test scores. It's their (often unfounded) prerogative to believe prep-school academics are more rigorous than their public-school counterparts, especially as grade inflation has been shown statistically to increase in wealthy suburban public high schools and private schools.
"The quote from Dean Coffin, above, highlights that point, when he says that 'seeing strong scores helps his team feel more confident that admitted students could cut it.'"
This statement doesn't reflect reality. There are many academically rigorous colleges that are test optional. Many studies have shown conclusively that students not submitting test scores perform effectively the same in college as those who do. Admitted students can cut it, regardless of test scores, because grades are almost always a better indicator of ability than test scores.
I've already mentioned I am a test-prep tutor, and I can attest that high test scores are not the same as high intelligence. Work ethic counts for a LOT.
Lee Coffin saying they still need test scores to be sure an applicant can succeed isn't consistent with existing data, and it sure is a bad look on the same day that WPI went test blind.
How does Lee explain the success of recruited athletes and URMs (and others), some portion of whom had lower than average enrolled student test scores? Honestly, he sounds like a dinosaur.
My father was one of those Jews attending the Ivies, much to their alarm. LOL.
I think we are in 100% agreement: a test administered in the same way, and graded the same way, across the country/world (like your daughter's four AP tests) can provide additional useful information for admissions officers.
Certainly, but the test doesn't have to be their "first thought" (that is, it doesn't have to be the primary factor) in order to provide valuable information. In Feb. 2020 the University of California's Standardized Testing Task Force issued a lengthy report on standardized testing (https://senate.universityofcalifornia.edu/_files/underreview/sttf-report.pdf).
Excerpt:
"The STTF found that standardized test scores aid in predicting important aspects of student success, including undergraduate grade point average (UGPA), retention, and completion. At UC, test scores are currently better predictors of first-year GPA than high school grade point average (HSGPA), and about as good at predicting first-year retention, UGPA, and graduation. For students within any given (HSGPA) band, higher standardized test scores correlate with a higher freshman UGPA, a higher graduation UGPA, and higher likelihood of graduating within either four years (for transfers) or seven years (for freshmen). Further, the amount of variance in student outcomes explained by test scores has increased since 2007, while variance explained by high school grades has decreased."
For the purposes of evaluating test-optional admission, the question is not whether test scores add to prediction beyond HS GPA in isolation, without considering course rigor, the grade distribution in the high school profile, which courses had higher/lower grades and how relevant they are to the planned field of study, essays, LORs, ECs/awards, interview, … The more relevant question is what is lost when test scores are not considered but the many other remaining admissions factors that would be used in selecting test-optional admits still are. All studies I am aware of that have reviewed anything close to this question have found that very little predictive ability is lost when test scores are removed and the many other factors beyond HSGPA in isolation remain.
For example, in the study at https://web.archive.org/web/20181012020332/https://www.ithaca.edu/ir/docs/testoptionalpaper.pdf, Ithaca compared how much predictive ability is lost when the SAT is removed from a set of other factors that includes both a measure of GPA and a good measure of course rigor. They found that only 1% of the variance in predicted GPA was lost when removing the SAT, as summarized below.
Gender + Race + First Gen: explains 8% of the variance in cumulative GPA
All of the above + GPA + Strength of HS Schedule + Num AP Credits: explains 43% of the variance in cumulative GPA
All of the above + SAT: explains 44% of the variance in cumulative GPA
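The comparison being made here is incremental R-squared between nested regression models: fit the outcome on the baseline predictors, then refit with the SAT added, and see how much the variance explained goes up. Here is a minimal sketch of that calculation on synthetic data of my own invention (the variable names and numbers are illustrative assumptions, not the Ithaca data); because the fake SAT variable is built to be largely redundant with GPA, adding it moves R-squared very little, which is the shape of the Ithaca result.

```python
import numpy as np

# Synthetic, illustrative data -- NOT the Ithaca dataset.
rng = np.random.default_rng(0)
n = 5000
gpa = rng.normal(3.3, 0.4, n)      # high school GPA
rigor = rng.normal(0.0, 1.0, n)    # strength-of-schedule index
# SAT constructed to correlate strongly with GPA, so it is mostly redundant
sat = 500 + 180 * (gpa - 3.3) / 0.4 + rng.normal(0, 60, n)
college_gpa = 0.6 * gpa + 0.15 * rigor + 0.0004 * sat + rng.normal(0, 0.35, n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_without = r_squared(np.column_stack([gpa, rigor]), college_gpa)
r2_with = r_squared(np.column_stack([gpa, rigor, sat]), college_gpa)
print(f"R^2 without SAT: {r2_without:.3f}")
print(f"R^2 with SAT:    {r2_with:.3f}")
print(f"Incremental R^2: {r2_with - r2_without:.3f}")
```

Note that on training data the fuller model's R-squared can never be lower, so the question is only how small the increment is.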
Based on the results of this study and other factors, Ithaca chose to go test optional and has remained test optional since then. Other, more selective colleges that have gone test optional show little difference in measures of success between test-optional and test-submitter admits. For example, I posted some stats from the Bates 25-year test-optional study earlier in the thread, some of which are repeated below.
Comparing Bates Test Submitter and Test Non-Submitter Students
Mean SAT Score: Submitters = ~620/~620, Non-Submitters = ~540/~535
Gender: Submitters = 48% female, Non-Submitters = 59% female
Race: Submitters = 3% URM, Non-Submitters = 5% URM
Mean Graduation Rate: Submitters = 89%, Non-Submitters = 89%
Mean College GPA: Submitters = 3.16, Non-Submitters = 3.12
Natural Science Major: Submitters = 23%, Non-Submitters = 17%
Humanities Major: Submitters = 31%, Non-Submitters = 28%
Thanks. I read the Ithaca study; interesting, though I will note that they did find SAT scores to be statistically significant predictors, albeit minor ones.
The UC report is long, but it's a very comprehensive analysis of tens of thousands of students. In addition to the summary above, I thought this was on point: "College success measures correlate with admissions test scores. At UC, each of the primary success measures is clearly correlated with, and predicted by, scores on standard college admissions tests . . . In data from 2010-2012, 21% of variance in first-year GPA was predicted by SAT score. This number may sound small, but it translates into large differences in student outcomes."
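For a sense of scale (my own arithmetic, not a figure from the report): when a single predictor explains a share of variance, that share is the square of its correlation with the outcome, so 21% of variance corresponds to a correlation of roughly 0.46, which most social-science fields would call a moderately strong relationship for one variable.

```python
import math

# Variance explained by one predictor is the square of its correlation
# with the outcome, so the implied correlation is the square root of R^2.
r_squared_sat = 0.21
r = math.sqrt(r_squared_sat)
print(f"implied correlation r = {r:.2f}")  # prints "implied correlation r = 0.46"
```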
We can agree to disagree.