Yes, regardless of what you believe with respect to the following questions:
Will colleges of interest be SAT/ACT required, optional, or blind in the future?
Will scholarships of interest be dependent on SAT/ACT scores in the future?
Is use of the SAT/ACT desirable for college admission?
The way for current high school students (other than seniors) to maximize future options and be ready for any colleges’ SAT/ACT related decisions is to take the SAT/ACT when they can (and PSAT for juniors who have a chance to be in National Merit range). That way, all contingencies are covered:
College or scholarship is SAT/ACT required: can apply.
College or scholarship is SAT/ACT optional: can apply and choose whether to send scores depending on if they are good relative to the college or scholarship.
College or scholarship is SAT/ACT blind: can apply, just not using the scores.
In contrast, someone who does not take the SAT/ACT cannot apply to anything that requires them, and could potentially be disadvantaged at an SAT/ACT optional college or scholarship if the SAT/ACT score that they would have gotten would have helped the application.
CC posters often seem to overestimate how predictive test scores are of college performance. For example, the previously linked discrepant HS GPA / SAT study found that among students with high HS GPA / low SAT, HS GPA in isolation explained only 13% of the variance in first-year college GPA. 13% of variance explained is indeed a very poor prediction. However, the combination of HS GPA + SAT only improved it from 13% of variance explained to 15%. The combination of HS GPA + SAT also yields a very poor prediction of first-year college grades for this group. You really need to look at a lot of other factors besides just GPA + SAT stats in isolation to estimate whether a particular student is well prepared for a particular college or not. If you have a good understanding of the HS preparation and the student, SAT score shouldn’t add much to that prediction.
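The idea that a second, correlated predictor adds little incremental variance explained is easy to illustrate with synthetic data. This is a minimal sketch, not the study’s data: it assumes a hypothetical latent “preparation” factor that drives HS GPA, SAT, and first-year GPA, so the two predictors are largely redundant, and the numbers it produces are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic (hypothetical) data: one latent "preparation" factor
# drives all three variables, making HS GPA and SAT correlated.
prep = rng.normal(size=n)
hs_gpa = 0.6 * prep + rng.normal(scale=1.0, size=n)
sat = 0.6 * prep + rng.normal(scale=1.0, size=n)
fy_gpa = 0.5 * prep + rng.normal(scale=1.0, size=n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared(hs_gpa.reshape(-1, 1), fy_gpa)       # HS GPA alone
r2_both = r_squared(np.column_stack([hs_gpa, sat]), fy_gpa)  # GPA + SAT

print(f"HS GPA alone:  R^2 = {r2_gpa:.3f}")
print(f"HS GPA + SAT:  R^2 = {r2_both:.3f}")
print(f"Increment:     {r2_both - r2_gpa:.3f}")
```

Because the predictors share the same underlying factor, adding SAT on top of HS GPA raises the variance explained only modestly, mirroring the 13% to 15% pattern the study reports (though with different absolute numbers).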
The reason most often cited on forums is that the A grade does not reflect true understanding of the material, leaving those students unprepared for college. However, there are many other possible contributing factors. Not all tests are the same. Many tests cover very different content from HS coursework, and some HSs have classes that better target the college admission test question types, in some cases to the point of resembling a prep class. Students who speak English as a second language often read slowly enough for that to interfere with the EBRW/CR type tests. Similarly, students who think carefully about math questions may do relatively poorly on math tests that require racing through simple questions at one question per minute, like the ACT. Many students get stressed over high stakes tests or get little sleep the previous night, which can make a single test unrepresentative of HS performance. This is particularly influential on tests where a few careless errors will knock you below the 25th percentile at highly selective colleges, as often occurs on the math sections. I could list many others.
“What I have never understood is, how can a student who normally gets A’s in class and understands all the material not be able to do well on a high stakes test? The pressure and stress is there on high school tests in most schools so why can they get a 4.0 but be unable to follow through on standardized tests?”
I don’t know but this is my '22 daughter. She has a 3.98 UW. Currently has a 100 in AP Psych, 97 in AP Bio, 99 in Honors Pre Calc, and her PSAT in October was an 1100. So, selfishly, we’re on the TO bandwagon.
For some, those who have excellent grades and lousy tests, I would spend time on the other aspects of the application process. If I was sensitive to the COVID situation, I would refuse to test at this point. At a minimum, colleges will show a level of respect for those individuals.
Don’t underestimate the power of grit. Some of the most successful people are hardworking and resilient. What they got on their SAT’s - who knows and who cares?
Well, I would certainly agree that kids for whom English is a second language would/might have an issue. Likewise for the timing effect. (I’ve always thought it should be: take as much time as you need; you know it or you don’t.)
But my general assessment is that most high schools are not the best learning labs for highly selective colleges. When kids land in top colleges they find themselves among the highly able (aptitude-wise) and the highly trained (kids who have attended excellent schools all their lives). So, kids who have learned to memorize information will inevitably find themselves at a disadvantage.
There are lots of issues. I think more data is more information. There is a ranking of high schools across the US (actually there are several). Some include factors in their formulas that aren’t related to academics. Others use other combinations. Unless or until we have a baseline of high schools with an agreed-upon basis of value, we have no national score. The ACT/SAT fills in here.
Is it perfect? Nope. Is it close to perfect? Nope. But it gives a piece of data.
Let’s face it, there is no magic formula for figuring out who is going to be the next zillionaire and give the college a perpetual endowment. What the schools need to do, and what they have done, is evaluate individuals on what they have done and what they are likely to do. Using the Harvard data over and over only tells you about Harvard applicants. It’s a small piece of the college puzzle.
So again, I’d say: use the scores, and if folks don’t like them, they can likely explain why their scores are low.
TO is certainly understandable during Covid, but once the tests are available again, unless the schools go test blind they will look at the scores. The absence of scores may have an effect on admissions, depending on the school. We have friends whose kids do very well but aren’t exceptional SAT/ACT test takers. They did well on their AP tests, however, so those test scores will help and should be submitted.
Yes, that’s true. Grit is hyper-important to life success. Tough to measure at age 18, but usually it’s beginning to show or has shown itself. Ever see a group of XC runners in rain and snow? I saw a group of kids running in the state championships across a packed sheet of ice, into the woods, through 6-10" deep ice water. A few lost shoes, a few were bloody. Every. Single. Kid. Finished. That’s grit.
I think where you might want to think more is whether you truly believe “A” students who don’t test well are in that bucket because they are memorizers. Memorizing, in fact, can really help you on the SAT. I think multiple choice tests are truly not the way to find the best and the brightest. Besides true/false, however, they are known to be the easiest to grade. Wonder if that matters?
Registered my S for the May SAT because there were no seats within 50 miles of our house for the June date. Either test centers are closing or more students than usual are registering for the exam. My guess is the latter. AFAIK, our district is still planning to administer the state-sponsored SAT this spring.
Given the large volume of SAT/ACT, the multiple choice format certainly does speed up grading, even if it may not be optimal for actually assessing what is meant to be assessed and may encourage the development of multiple-choice-test-specific skills that are less applicable elsewhere.
I don’t necessarily think that low scorers are memorizers, at all. I think that many schools fail to develop the skills needed based on standards, and many also fail to help students retain information that was learned. There are so many factors in why kids do well or don’t.
I agree with you that SAT’s are not the best means to find the best and the brightest. There are other tests for that.
But they are useful as a baseline.
I think, given what happened last year, that many families have signed up for more tests than they would in a normal year. Once things start getting back to normal (we can hope), more seats will become available.
It was sad to see so many kids driving really far in order to get a test score.
There is no question that looking at HS GPA in isolation without considering anything else is imperfect, and different students have different degrees of course rigor and HS preparation. However, it’s also important to consider how much value SAT/ACT has towards reducing these imperfections. Just because something is standardized, doesn’t mean it adds a lot of value beyond the other available evaluated criteria.
For example, suppose the CollegeBoard decided to make a multiple choice standardized test for use at music conservatories, so they came up with a test that evaluated things like correctly identifying musical notes without careless errors rapidly, in a multiple choice format. I expect there would be a non-zero correlation between scores on that test and being successful at a music conservatory, but that doesn’t mean that the test adds a lot of value beyond their existing approach since being successful at a music conservatory requires a completely different skill set from just being able to identify musical notes correctly on a multiple choice test. The test could theoretically catch kids who applied to a music conservatory in spite of not being able to read music well, but those kids would likely be flagged by other components of the application (if applying to field that requires reading music), under their existing admission system, so the test would not add much value beyond the existing admission system.
It’s a similar idea for college admission at highly selective colleges that consider a variety of factors beyond just HS GPA in isolation. Yes, SAT/ACT could theoretically catch some kids who apply to a math-intensive field in spite of not knowing basic algebra/geometry well. But those kids would most likely be flagged in numerous other areas of the application besides just score or HS GPA in isolation, so the score itself usually doesn’t add much value beyond the combination of other existing factors.
The post you replied to did not mention Harvard or the Harvard data. It instead mentioned the “discrepant” high HS GPA / low SAT study, which involved tens of thousands of students who attended 23 colleges, most of which were not highly selective. That CollegeBoard study found that SAT score only added a 2% incremental improvement in the % of variance in FYGPA explained beyond looking at HS GPA in isolation, among all 3 discrepant/non-discrepant HS GPA / SAT combinations. I expect the incremental improvement would be far smaller than 2% had they considered other criteria beyond HS GPA in isolation, such as course rigor.
The musical explanation does not apply. One knows that musical talent is a combination of knowledge and ear. It can, however, be discerned by listening without seeing the applicant. This is now done for major symphonies, with the result that more women than ever have gotten spots. So I am not sure why you are bringing that up.
For academics, there is only knowledge. You know the answer or you don’t. Given the short time frame of the standardized test (what, 5 hours?), one cannot determine who can write a classic novel or do any of 1,000 other things which may make them valuable to the academic community when they actually get accepted. Then for holistic admissions, which is a given for top schools, they add in all the rest. Everything is also important to the final decision. There are students in every category with near-perfect everything. But there are fewer with the high-level combo. From these, the class is chosen.
Let me change the sentence regarding all of these cited reports, then. All are written by people. Some have built-in bias. Some have formulas which cannot be applied across multiple instances. What I was saying is that continuing to repeat the same notions doesn’t move the conversation forward. I like seeing information that is factual in nature, but I’m not going to go and read every report every time something is cited. Wouldn’t it just be easier for all to make a point and cite a simple statistic to support it?
And I have seen the Harvard report cited ad nauseam on CC. So I am a little tired of that one in particular. (Though I did find it fascinating reading when it came out a while ago.)
In everything I’ve seen, HS GPA is the best single numeric predictor of first-year college success (with an important caveat, mentioned below).
Perhaps aside from someone coming from an imagined high school where everyone is automatically awarded an A, it does seem that the rigor of the high school curriculum doesn’t matter, as long as the rigor met the minimum necessary to be at a college preparatory level (e.g., a decent amount of algebra).
The caveat: Students who end up in remedial math or writing courses tend to not succeed, no matter their HS GPA. Thus, it seems that there is a necessary minimum amount of HS preparation needed for success in college, and that may equal rigor, but it’s an open question (and a difficult one, especially given the inequities involved in placing students in remedial coursework even when they might not need it).
Being successful at an “academic” college is far more complex than “you either know the answer or you don’t,” which is the type of evaluation on a multiple choice SAT/ACT type test. For example, in a large portion of majors, college grades are primarily based on writing long papers or reports, sometimes as part of a group project. Evaluation of a paper is not simply a multiple choice question where “you either know the answer or you don’t.” Among the majors that do emphasize objective answers, the questions are rarely simple SAT/ACT multiple choice style questions. In many cases, it is not practical to know the answer without a complex evaluation. And just listing the correct answer without any work may result in a poor score.
Being successful in college also requires more than just knowledge, and certainly more than the extremely limited subset of basic knowledge covered on the SAT/ACT. It requires things like choosing to attend all classes, do all assignments, and prioritize time spent studying the material and preparing adequately… often instead of other options for spending time, such as partying with friends. I also believe that being “successful” in college involves more than just getting the highest possible first-year grades, prior to effects of a curve. Out-of-classroom activities can be as important or more important than in-classroom ones for what I’d consider “success” in college.
In any case, the main point of my earlier post was not that music conservatories were the same as academic colleges. It was that SAT/ACT adds little to the evaluation beyond the combination of other available information used by highly selective colleges, as occurred in the other hypothetical example.
Already responded to above by a couple people, but I don’t think this prediction would actually pan out on somewhat different grounds.
No matter how much a certain slice of upper- and upper-middle-class Americans salivate over Harvard, there are a sizable number of HS graduates—in fact, the majority, given what we know about the population—who self-select out of applying to Harvard, and would even if there were a random-draw lottery.
Their reasons are myriad—geographic considerations, family responsibilities, choice of major, weather, allegiance to the working class, whatever—but no matter what hanging around on CC might make you think, not everyone lusts after Harvard.
Wrenching this back to the OP’s question, I’ll also note that many of the students who don’t want to or can’t go to Harvard (phrased here as two separate groups, but better thought of as two somewhat overlapping groups) are also in groups that have historically done less well on standardized tests. It does make me wonder if the SAT/ACT, having been designed to reward a certain type of student (whether that’s in terms of social background or academic background or cognitive style or whatever, it doesn’t matter) in the name of that sort of student being desirable, then becomes a self-fulfilling prophecy by presenting those “desirable” students as numerically better. And now colleges, realizing by virtue of the scores not being available that they don’t really need that sort of feedback loop to build desirable entering classes, can just cut it out and have one less thing to worry about.
And one less thing to worry about is valuable when you’re evaluating tons of applications.
Oh, dear, please tell me you don’t actually believe this.
I am an academic. I teach and conduct research in a quantitative field. The idea that you either know the answer (let’s be charitable: or can figure out the answer) or not? No. Just no.
I mean, yeah, maybe in an introductory class, but even there students are being trained to think a lot more creatively than that, or at least they should be.
And perhaps even more so in fields like, say, literary analysis.
So no, I strongly reject the value of a standardized test as something that tests knowledge in any ultimately useful sort of way for academic study, because knowledge isn’t simply knowing something or not, it’s being able to apply principles in creative ways. (And it’s often possible to test that even in a multiple choice format, but it isn’t easy!)
And that brings to mind something that everybody on this thread (including me) has been ignoring: College success beyond the first year.
SAT/ACT scores are—as @Data10 has been repeatedly pointing out, and as everyone who works with college student success metrics knows—very minimally predictive of first-year college success. What gets less attention is the degree to which SAT/ACT scores are utterly nonpredictive of college success beyond the first year.
I would suggest that part of the reason for that is that it isn’t testing the sorts of knowledge or use of knowledge that are useful beyond very introductory levels of college study.
So perhaps that’s another principled reason for jettisoning SAT/ACT scores? I mean, is it any use for a student to be admitted based on a metric that doesn’t actually predict their ultimate success? (My suggested answer: No.)