One thing to keep in mind is that U.S. colleges tend to be unique in that many take a “holistic” approach to admissions. In the United Kingdom, for example (and in many other countries), getting into college comes down to basically two things: standardized tests and high school grades.
While I don’t think the relationship between ACT/SAT scores and doing well in college is causal, the students I know who have scored in the top 1% on these tests also tend to have great GPAs in rigorous courses. In fact, the smartest high school kid I know got a perfect 36 in every section of the ACT on her first try. She is now a junior at Harvard. The test score helped “validate” that this kid is a top performer, along with all of the other factors (e.g., all 5’s on AP exams).
Lastly, many colleges give full rides for a 36 ACT or the SAT equivalent. There has to be some value in colleges wanting students who test well, right? These will also be the same kids who test well on the MCAT, GRE, and LSAT later in their careers. So in my mind, there is a correlation between doing well on standardized tests and doing well on the other tests required for specialization in one’s profession.
I don’t think anyone claims standardized tests on their own are predictors of college academic success. Their greatest value is in the word “standardized,” not in the word “test.” A transcript and school profile by themselves are even worse predictors. Yes, I know there were studies that claimed the opposite. But those studies were done poorly (my opinion) and at a time when grade inflation wasn’t as rampant (a fact). For most colleges, a combination of the two (grades and test scores) is better. The primary reason for the gradual erosion in the importance of these tests in college admissions (difficulty of the tests, superscoring, score choice, etc.) isn’t that it helps some students get into college, but that it helps the colleges increase the number of applications.
If the tests weren’t useful, why did the colleges cling to them until the pandemic made it impossible for many students to take them? If they can evaluate applicants without the tests, why didn’t they all make admissions test blind?
Given the difference in test scores that you suggest are highly significant in correlating to the academic level of the courses, would you claim that Princeton (whose physics courses you extolled previously in post #68) and Vanderbilt (with similar ACT ranges) are comparable in academic level of courses?
Actually, I do not, as evidenced by the examples I gave. However, my examples show that differences in academic level do not necessarily correlate with student test scores, since we have examples where the school with the lower student test scores is teaching at a higher academic level (at least in some subjects) than the school with higher student test scores.
Doesn’t the UK system effectively combine the two by having standardized final exams for high school courses? Basically like taking subject matter standardized tests (like SAT subject tests and AP tests) for your final exams?
At the extremes you have places like China and India (standardized tests only) versus Canada (high school courses and grades only for domestic students), presumably based on how much the universities trust the consistency of courses and grades in the high schools.
Prior to COVID-19-related SAT/ACT availability problems, USNWR rankings valued SAT/ACT 2.5 times more than the other direct measure of student selectivity (which is class rank). So USNWR rank conscious schools have incentive to emphasize SAT/ACT more than high school record so that they can boost that component of their ranking.
Of course Princeton and Vanderbilt are peer schools, with many applicants overlapping and likely similar academics at both. Princeton and Elon? Not so much, but the average grades of admitted students at both schools are 3.9. Since this is a tangent, I will drop the topic.
Princeton and Elon have completely different average GPAs and GPA ranges for their entering classes. I suspect the confusion comes from comparing Princeton’s unweighted GPA to Elon’s weighted GPA. Elon uses a unique weighting system in which GPAs above 5.0 are possible. For example, the class profile page at First-year Class Profile | Undergraduate Admissions | Elon University mentions that Elon honors fellows have an average GPA of 5.01. I’m not that familiar with Elon, but it appears that they have very limited honors courses. You’ll generally find more honors-type courses at larger colleges, such as public flagships.
Getting more back on point, yes, some courses have different levels of rigor at different colleges. Some colleges offer different course sequence options, with different levels of rigor, within the same college. However, that does not mean that SAT score is the driving force behind the different levels of rigor, or that going test optional means you need to change the level of rigor. Hundreds of colleges went test optional in past years, and I am not aware of any of them changing curriculum in response, such as watering down classes or switching to teaching at a slower pace.
In a unique position here. We are in Florida, the only state that did not go test optional. My DD is only applying to in-state schools because of the Bright Futures scholarship. She was lucky enough to get a very strong score in December before the pandemic. We are glad her score matters and hope that it gives her an advantage at UF, but we still feel for the kids that had a hard time testing.
These look like subject matter tests that are used as a percentage (varies by province) of the final course grade in high school courses. I.e. quite a bit different from the concept and use of the SAT/ACT as external standardized tests that are not supposed to be closely tied to specific high school courses, though perhaps more like a “light” version of the British O-level/A-level system where standardized subject matter tests are the entire grade.
A theoretical US equivalent would be like using SAT subject tests or similar for part of the grade in ordinary high school level courses, and AP tests or similar for part of the grade in advanced high school courses (or the entire grade if you wanted an analogy to the British system).
I’ve never seen a study that considered the predictive ability of transcript + school profile. I think transcript + school profile could be far more predictive than GPA or SAT score. The studies I have seen instead looked at GPA in isolation, which is far less predictive, yet they still generally find GPA in isolation is more predictive than SAT in isolation. The ones done by CollegeBoard usually not only look at GPA in isolation, but do so based on a self-reported survey — for example, asking the student to check a box saying whether they are an A student or a B student, then seeing whether the ability to predict first-year college GPA from that self-reported HS GPA checkbox improves if you also consider SAT score. They consistently find that self-reported GPA + SAT is somewhat more predictive than GPA alone, yet often still find that self-reported GPA alone is more predictive than SAT alone.
That’s nice, but that’s not how the selective holistic colleges that have been the focus of this thread admit students. It’s not a choice between admitting by GPA in isolation or admitting by GPA in isolation + SAT score. Instead they consider the full transcript. Did the student take rigorous AP/honors/… courses? Which courses had lower grades, and how relevant are they to the planned field of study? Upward/downward trend? How harshly does the HS grade? They also consider many factors besides just stats – LORs, essays, ECs, awards, personal strengths/drive, personal background context, interview, etc. The more factors you consider, the less predictive ability you lose if any one of those factors is removed.
There can be a lot of complications with going test blind. For example, recall what happened to Sarah Lawrence after USNWR dropped them from the rankings for being test blind. They eventually gave in to USNWR and switched from test blind to test optional. There is far less penalty for going test optional, and many colleges did so before COVID-19. Prior to COVID-19, there were over 1,000 test optional colleges, including some highly selective ones, such as Chicago and Bowdoin. If the tests are essential, how were so many colleges able to remain successful while test optional, prior to COVID-19?
I don’t think it’s a simple choice of either tests have zero value or tests are essential. Instead testing generally adds a small, but non-zero benefit. The degree of that non-zero benefit varies based on a number of criteria, including what you are trying to predict (first year GPA, final GPA, graduation rate, …) or accomplish with the entering class, and what other factors besides test scores are considered in the admission process.
–Avoid the predicted decrease in applications among public HSs in NE
–Increase applications from more “racially diverse communities”
–Continue to adequately predict successful academic achievement among applicants
The first study prior to going test optional found the following predictive abilities:
GPA + SAT + Rigor + AP hours + … – Explains 44% of variance in cumulative GPA
All of Above with SAT Removed – Explains 43% of variance in cumulative GPA
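The 44% vs. 43% comparison above is just the incremental R² of adding SAT score to a regression. A minimal sketch of that calculation on synthetic data (all numbers and variable relationships below are made up for illustration; they are not from the actual study):

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by a least-squares fit on X."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 500
hs_gpa = rng.normal(3.5, 0.3, n)
sat = 0.8 * hs_gpa + rng.normal(0, 0.5, n)        # SAT correlates with HS GPA
college_gpa = hs_gpa + 0.1 * sat + rng.normal(0, 0.4, n)

full = r_squared(np.column_stack([hs_gpa, sat]), college_gpa)
without_sat = r_squared(hs_gpa.reshape(-1, 1), college_gpa)
print(f"R² with SAT: {full:.3f}, without SAT: {without_sat:.3f}")
```

Because SAT overlaps so heavily with the other predictors, dropping it barely moves the variance explained, which is the pattern the study reports.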
Before COVID-19, in California and probably many other states, a student can enroll in a community college, take the first two years of general education and major preparation courses, transfer to a state university (college courses and grades are the primary or only consideration for transfer admission; standardized tests not required), and complete a bachelor’s degree without needing to take any of the typical standardized tests like the SAT or ACT. There may be college-specific placement tests used to determine initial placement in courses like English, math, and foreign languages, although these are not necessarily standardized beyond the specific college.
Of course, before COVID-19, there were also selective (for frosh admission) colleges that were test optional or test blind, so a student could get admitted to such a college and later complete a bachelor’s degree without taking any of the typical standardized tests like the SAT or ACT.
In my opinion, only standardized tests are comparable. GPA/rank cannot be compared apples-to-apples because of the differences between teachers, schools, regions, cities, and states. Also, people can hire someone to write essays for them, or get help from parents/relatives/siblings, etc. However, 99.999% of people cannot cheat on standardized tests.
Supposedly, about 2 million students take the SAT each year, and about 2,000 (0.1%) have their scores cancelled over cheating accusations. But that probably means many more cheat and do not get caught.
That does not include some other marginal practices, such as high-SES areas having a huge percentage of students with extra-time disability accommodations, some of which are probably very questionable (meanwhile, low-SES areas probably have the opposite problem, where many students with legitimate disabilities never get accommodations for them).
As I understand it, the elites evaluate students in the context of their own school, with other students from that same school - so GPA, transcripts, rank, school profile all would be very helpful. The inability to compare grade inflation among schools becomes less of a problem. Standardized testing would matter less. So long as the school profile lays out the GPA spread clearly enough, anyway.
Standardized tests add information only if there is a mismatch between grades and scores. Isn’t the old wives’ tale that high scores/low grades = lazy and low scores/high grades = grit?
If no scores are provided, the college has to assume the scores match the grades. A kid with high grades from a school with high rigor will be fine applying test optional (TO) no matter what. A kid with low grades won’t be saved by high scores no matter what school they come from. As said above, the kids from unknown schools and less rigorous schools need scores to help evaluate whether they can do the work. High scores/high grades can illuminate uncertain rigor, but only so much. All things considered, coming from a familiar high school must be an advantage in a TO world, so long as the student has high grades, rigorous classes, and an otherwise compelling application.
I agree with this. My D is at a private HS that doesn’t really send kids to Ivies but has been fantastic for her (all girls, and the confidence it has given her has been wonderful). The ones she has access to with that Ivy-feeder reputation are 40k-50k per year. She has performed well in school (which has been rigorous) and at a state and national level in science. Her SAT was canceled 4x and ACT was canceled 3x. Luckily, she still has AP exam results, but studying for a test that is repeatedly canceled has its own set of issues, and at a certain point I felt it was my job to step in and halt the madness of that.
I have to believe the schools when they say that TO won’t affect the high-performing students, but whether that’s true remains to be seen.
Kids who benefit most from test optional tend to be kids whose test score is lower than one would expect based on the rest of the application. This rest of the application includes far more than just grades.
As a general rule of thumb, students who attend top-quality high schools with post-AP level courses available (often in wealthy areas) are more likely to test well and, as a whole, are less likely to benefit from test optional. In previous years, this group was consistently underrepresented among non-submitter admits at test optional colleges. In contrast, the groups that tend to be overrepresented among non-submitter admits at test optional colleges include lower SES, first generation, URMs… all groups that tend to be underrepresented at top-quality HSs.
For example, the Ithaca test optional study that was linked in my earlier post found the following distribution between test submitter admits and non-submitter admits:
Test Submitter Admits – 15% Pell, 21% URM
Test Optional Admits – 29% Pell, 35% URM
The same pattern occurred in the Bates 25 years of test optional report linked in an earlier post:
Test Submitter Students – 34% received FA, 3% URM
Test Optional Students – 42% received FA, 5% URM
A similar pattern seems to be occurring in some of the recent admission reports. For example, the Dartmouth one that was linked earlier states the ED admitted class is as follows.
-Record high 26% from low income
-Large increase in Pell eligible to estimated 18%
-Record high 16% first gen
-Increase in URM to 36%
The numbers above are certainly not suggestive of test optional hurting kids from HSs without “high rigor.” It’s also worth noting that rigor tends to be more associated with selection of courses (AP/IB/honors/…) than with the name of the HS attended. Most students attend HSs with multiple course options and a wide variety of possible levels of rigor. There are also numerous ways to show rigor and academic preparedness when attending a HS that does not offer a large number of high-rigor courses. For example, I attended a basic public HS that only offered 3 AP classes. I took higher-level courses than my HS offered at a local university and received all A’s, showing I was well prepared for college-level classes. With online learning, this type of approach is much easier today.
Regardless of HS attended, it’s also desirable to show academic preparedness and fit through activities and accomplishments outside of the classroom, rather than just through good grades. For example, a kid who wants to pursue CS might demonstrate CS-related skills by captaining a robotics team, winning a hackathon, or just doing some impressive coding projects. The influence of out-of-classroom activities and other non-stat components of the application on admission tends to be underestimated.
And the influence of SAT score on college success tends to be overestimated. Even studies funded by the CollegeBoard show SAT score in isolation explains only a small fraction of the variance in first-year GPA… and it is less predictive past the first year. The incremental benefit beyond the combined rest of the application (more than just grades) can be far smaller, as in the earlier Ithaca study, which found 44% of variance in cumulative GPA explained with all metrics vs. 43% with everything except SAT – a near-negligible loss in predictive ability when removing SAT score.
That’s right - “high school grades” reflect performance on standardized A Level exams. Note that applying for certain “courses” (or “majors” in US parlance) might require taking other standardized tests (e.g., LNAT for students wanting to study law).
IMO, college success can’t be measured by college GPA. If it could, why do almost none of the private “elite” colleges evaluate applicants on their high school GPAs (instead of grades in individual courses)? There are granularities, including course rigor, that GPA simply can’t capture. Majors also make a huge difference. There are majors/concentrations at most colleges (including “elites”) that allow practically anyone to graduate, so using graduation rate (6-year?) is an even poorer choice for measuring college success. Does anyone know any accomplished scientist, mathematician, or even economist who couldn’t score in the upper range of a standardized test? If the bar is set low enough, everything smells like a success.
Sarah Lawrence is nobody, at least in the eyes of USNWR (sorry, Sarah Lawrence fans). Had Harvard gone test blind, you can bet USNWR would have changed its methodology tomorrow. With the pandemic, Harvard certainly has good reasons to go test blind, but it didn’t. Why? Caltech is the only elite school that has gone test blind, at least temporarily during this pandemic. Caltech is one of the few colleges that cares less about its rankings than its mission. It was last ranked No. 1 by USNWR twenty years ago. We’ll see what happens to its USNWR ranking next year.