That’s a good question. However, most of the kerfuffle about test-optional (TO) admissions isn’t about kids going to state schools (apart from marquee names like UNC-CH, UVA, Michigan, etc.). Most state schools admit more than half of applicants (many admit far more than that) and, prior to TO, many of those students had very average SAT scores. At better state flagships, the graduation rate is 75% or more (and 90%+ at the elite state schools) - a lot of others struggle to graduate 60% of students, and that’s at the flagship - graduation rates are much lower at directionals/satellite campuses. Suffice it to say that many students are not prepared for college - that was true before TO, remains true now, and will continue to be true regardless of how testing shakes out.
Oftentimes, low graduation rates are due to a lack of affordability, not a lack of college preparedness/readiness.
With that said, there are students who can’t cut the academics without adequate supports in place. Some state schools/systems have invested significant dollars in remedial classes for students who aren’t ready for college…for example UT Austin, which sees its share of students who aren’t college-ready due to the top 6% automatic-admission rule.
Note that many state universities had automatic admission for meeting a certain GPA or rank, regardless of SAT/ACT score (whether or not an SAT/ACT score was required), even before COVID-19 related test-optional policies proliferated.
Yes. That’s correct. I think bringing the TO debate into a discussion about typical state schools (not the elite ones) obscures the fact that what bothers people about TO is that it makes admission to prestigious schools more competitive (or at least more unpredictable).
Thank you for posting the link to the 2018 study.
I only read the section on Academic Outcomes (pages 41-50), but I didn’t see any statement with respect to statistical significance (or statistically meaningful differences).
Comparing the applicant data pre- and post-policy change shows that the average HS GPAs and SAT scores of applicants were higher after the change to TO. As the authors acknowledge, grade inflation complicates interpretation of the GPA data, and selection bias complicates interpretation of the SAT data (in a TO environment, those with high SAT scores are more likely to submit, so of course the average SAT went up after the policy change).
Comparing the academic outcomes of those who submitted vs. those who didn’t, the study shows that non-submitters had lower college GPAs (see page 45). This was true for both the 2014 study and the 2018 study, for both first-year GPA and cumulative GPA (for graduates and for non-graduates).
In the 2018 study, the average first-year GPA was 3.21 for submitters vs. 3.03 for non-submitters. The average cumulative GPA among kids who graduated was 3.40 for submitters vs. 3.23 for non-submitters.
I did not see any discussion of whether these findings were statistically significant (but I may have missed it).
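For what it’s worth, whether a gap like 3.21 vs. 3.03 is statistically significant depends mostly on the group sizes and spreads, which is exactly the information one would need from the study. A minimal sketch of the check (Welch’s two-sample t-statistic computed from summary statistics; the standard deviations and cohort sizes below are made-up assumptions, not figures from the study):

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t-statistic from summary statistics of two groups."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Hypothetical SDs and cohort sizes -- NOT from the 2018 study.
t_large = welch_t(3.21, 0.5, 600, 3.03, 0.5, 300)  # large cohorts
t_small = welch_t(3.21, 0.5, 30, 3.03, 0.5, 30)    # small cohorts

print(round(t_large, 2))  # ~5.09, well above ~1.96, so significant at p < .05
print(round(t_small, 2))  # ~1.39, below ~1.96, so not significant
```

Same 0.18-point GPA gap in both cases; only the assumed sample sizes differ. That’s why the study’s raw averages alone can’t settle the significance question.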
I realize a lot of kids drop out for financial reasons - college is expensive and the cost can be overwhelming, especially if a lot of loans are involved. Also, a lot of our kids are getting a subpar K-12 education, which leaves them unprepared for college - it’s not their fault and it has nothing to do with their intelligence or aptitude, but it leaves them at a disadvantage. It’s one of the reasons I get frustrated by some of the discussions here on CC - most of the kids here will be just fine wherever they attend college. Meanwhile, kids from under-resourced communities have to settle for crumbling public schools and, often, a less-than-high-quality education.
Hmm…I can’t look at all the studies right now, but I may have oversold the 2018 data as being statistically significant.
What often gets lost in these conversations is that most enrollment management/admissions leaders, as well as college access proponents, support test optional admissions (some support/prefer test blind). They have formed these opinions based on their experience as well as data.
I always want to ask those who don’t work in the industry why they think they know better than those with decades of industry experience - the people who led the implementation of test-optional policies well before it was in vogue (e.g., Bowdoin, Bates, WFU, DePaul, Ithaca). All these schools went test optional, tracked students, and decided to stick with not requiring tests. That tells me all I need to know.
I agree with all of that. The far greater problem in the US is the state of K-12 education, not the college admissions process. Most colleges accept most students…maybe I’ll get concerned when that’s not true anymore.
Exactly. From reading CC, you’d be under the impression there is a crisis in terms of availability of spots at universities, when that isn’t true in the least. In reality, a lot of schools struggle to fill their classes, and that is a problem that is going to get worse in a few years as we face a decline in the number of college-aged kids.
I think there is convincing evidence that the main issue is not disparities in the quality of K-12 education, but disparities in what happens before kids even start school. See How Schools Really Matter, by Downey.
Yes, colleges have supported test optional admissions. But this does not mean that tests don’t have predictive power or that non-submitting students don’t have worse outcomes than submitting students. Oftentimes other factors may be more relevant.
UC’s own study found that:
- test scores were better predictors of first-year UC GPA than high school GPA
- the predictive power of test scores was increasing
- the predictive power of high school GPA was decreasing
- test scores contribute a statistically significant increment of prediction when added to regression analyses that already include high school grades as predictors. This improved prediction can translate to fairly large differences in predicted student outcomes (e.g., fourfold changes in non-retention rates, even for students with similar high school GPAs [HSGPAs], depending on test scores).
- although there are large test score gaps between applicants from different demographic groups, UC does not use test scores in a way that prevents low-scoring students from disadvantaged groups from being admitted to UC as long as their applications show academic achievement or promise in other ways
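The “increment of prediction” in that bullet is just the gain in explained variance from a nested-model comparison: fit college GPA on HSGPA alone, then on HSGPA plus test scores, and look at the change in R². A toy sketch of the idea (the data is synthetic and the coefficients are invented - this is not the UC analysis itself):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic applicants: SAT correlates with HSGPA but also carries
# some independent signal about college GPA (an invented assumption).
hsgpa = rng.normal(3.5, 0.4, n)
sat = rng.normal(1200, 150, n) + 100 * (hsgpa - 3.5)
college_gpa = 0.5 * hsgpa + 0.0015 * sat + rng.normal(0, 0.3, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa_only = r_squared(hsgpa[:, None], college_gpa)
r2_gpa_plus_sat = r_squared(np.column_stack([hsgpa, sat]), college_gpa)

# The "increment": how much adding test scores improves prediction.
print(round(r2_gpa_plus_sat - r2_gpa_only, 3))
```

On real data the size of that increment (and its significance) is the empirical question; here it is positive by construction, since the synthetic SAT was given independent signal.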
Yes, UC ignored the findings and recommendations of its own study and opted to go test blind and not to pursue developing its own test (to replace the SAT/ACT).
Thank you for posting the 2018 study. Very interesting. The obvious takeaway: applications rise, but not necessarily from the groups who might have been underrepresented. Just as fascinating is what happens to the groups in college. Academically they are mostly similar, but those who chose to submit tests (and obviously did well on them) appear to gravitate toward majors and paths that stress this type of evaluation (the sciences), while those who did not submit gravitate more toward avenues that stress this type of testing less (liberal arts and more creative, subjective areas).
For those who feel the awkward science geek who excels on these types of tests would be disadvantaged if tests weren’t available, the argument is that they have plenty of chances to excel with their coursework or ECs, and that having this type of testing (and the way society has emphasized it as a measure of success) does more harm to a greater number of kids than it does to help them.
An interesting line in the paper (page 45, footnote 14; also on page 47) notes that “it is unlikely that a submitter with high scores would disproportionally choose not to submit” - yes, someone with a 36 ACT will submit, but now we have kids with a 33 ACT or a 1520 SAT (25th-percentile scores at some top schools) questioning submission, and being advised by some GCs not to submit. That is crazy.
I don’t think it is crazy to have only high test scores represented if only those with good test scores want them considered. Only a student with aptitude for art is likely to submit an art portfolio. Those without a great art portfolio presumably have some other strength to emphasize.
I just don’t agree with prohibiting a student from sharing a strength.
Bench players on the JV basketball team can opt not to list that EC in favor of stronger ones. But that shouldn’t mean all applicants should be prohibited from submitting their basketball team accomplishments because some applicants were poor players or didn’t have that opportunity.
Same goes for tests. Particularly when grade inflation at many high schools makes it difficult to differentiate between 4.0 students.
One 4.0 student might stand out because of their part-time job. Another because of leadership. Or musical talent. And, yes, one might have a knack for timed exams — why not be able to throw it out there for consideration?
The point is that having a knack for a timed test that asks hundreds of questions rapidly is not necessarily a skill that comes in handy in any aspect of life (except to answer more of these tests or go on Jeopardy). Even scientists, engineers, etc. who have this aptitude never need to use it in real life (except if they get tested for certification). If you want to be an artist, you should be good at art. If you want to be an athlete, you should be good at your sport. If you want to be a scientist, you don’t need to know how to answer hundreds of questions, many not within your expertise, in a short period of time.
Yes, school disparities and grade inflation are issues, and it’s hard to compare 4.0s, but when a student is told their near-perfect score isn’t good enough and it’s better not to submit it - to a school whose score ranges are already inflated because a percentage of students have not submitted scores - then there is a problem.
Why is it a problem if there is no requirement to submit? I don’t see the issue. Who is hurt? Someone with a 1400 who “looks bad” if only those with 1500+ submit?
I don’t see that as any different than any number of other criteria — the regional Olympiad finalist vs the national Olympiad finalist, the district champion sprinter vs. the national champion sprinter, etc.
The only difference is that test scores are published so comparisons can be made.
I also disagree that test-taking skills have no connection to real life. As noted earlier in this thread, there is a correlation between high scorers and college success. You can argue about why this correlation exists (and certainly it is not the ONLY predictor of success), but it isn’t completely meaningless or useless. Besides, if your talent is juggling, I would argue that you should be able to highlight that also, regardless of its relationship to your future career success.
Given that even at college we have mid-terms, finals, and other interim tests that contribute to various degrees to the final grade, I’ve wondered why “great test-takers” would not have a greater likelihood to continue to do well in college?
If that is the case, one would expect that AP scores and other standardized tests (especially those measuring content comprehension/analysis) would not be a “guarantee,” but would indeed be somewhat predictive of how well a student might do in college - thus a meaningful tool in the admissions “arsenal”?
I have no stake in this argument - but observing my daughter’s 4 years of high school, and comparing them with her almost 4 years of college, I can’t help but see many parallels - even if the material is more advanced now, and independent self-study has become even more important.
College tests are not necessarily like the SAT and ACT. Indeed, they are often far from standardized, so some of the techniques based on the structure (as opposed to content) of the SAT and ACT may not be as generally applicable to college tests (although they may be more applicable to other similar format tests like the GRE, GMAT, LSAT, etc.).
Naturally!
My point was meant to deal with great vs. not-so-great “test takers”, which incidentally is then raised as an argument against standardized tests.
But seeing the proportion and weighting of multiple-choice tests in my daughter’s courses, and the obvious stress of “high-stakes” mid-terms and finals - and how some people cope with testing stress more easily than others - I have to wonder whether the critical factor is really that a test is standardized across the state/nation, versus merely “standardized” for that one professor (i.e., for everyone taking the same course).
Great test-takers might have an advantage regardless of whether a test happens to be standardized - in which case having a standardized test could, again, be predictive?
Obviously, issues like test anxiety will affect performance on any test, standardized or not.
However, some test prep for the SAT and ACT focuses on the structure of the tests rather than their content. That they are standardized and well known facilitates this type of test prep. Individual college instructors’ tests vary more, so this type of test prep (or application of this type of test-taking technique) is less common.
Understood. I appreciate that thought!
Makes sense that there is a way to “influence” the outcome of standardized tests by (essentially) “gaming the system,” for those who can afford the resources to do that - which would not apply to tests given in a college course.
So a standardized test could make someone appear to be a “great test-taker” when they might just have been more “resourceful”.