SAT and ACT concordance tables -- A revision might be necessary

<p>xiggi, I actually agree with the sentiments in this last post. I personally hold little value for bubble tests. Our kids do them because it is a hoop they are expected to jump through. I do not believe that questions that can be answered in seconds have much value. I homeschool our kids b/c critical thinking is what I want their educations to revolve around. Math questions that take hours to solve, yeah, those I like. ;)</p>

<p>I’ve shared this story on this forum before, but here it is again anyway. When our oldest ds was a high school sr, a woman at church asked him where he was planning on going to college and what he wanted to major in. He told her where and chemE. She looked him straight in the eye and told him he needed to plan on a different major b/c her ds scored a 36 on the ACT and had to change majors b/c the dept was way too difficult. Our ds, who scored much lower, graduated from that school with a 3.65 gpa in chemE.</p>

<p>The tests are an indicator. When they are raised to the level of “THE” indicator, their actual value is reduced.</p>


<p>No doubt, and that is because spatial-temporal reasoning comes to you naturally (as it does to many future engineering types). However, a large % of students need to learn spatial reasoning; nothing even close is taught in school. Geometry proofs used to be a good start, but some/many schools seem to have gotten away from making students do proofs. Regardless, that is only one class out of four years of HS.</p>

<p>Interesting discussion, thanks all! </p>

<p>I was struck by – and totally agree with – this comment by xiggi: “All in all, it all boils down to one’s prior “preparation” and familiarity with the type of questions posed. Students who have solved puzzles for fun since they were toddlers find the math test unchallenging. Voracious readers who learned to read critically find the verbal parts easy to navigate. And the list can go on all the way to the people who freeze when seeing a multiple choice. For many, the adequate repetition of the same type of questions is beneficial.”</p>

<p>For my son, it was just like a game, and he loved to play games. He did take the SAT three times spread over several years, i.e., for the talent search in 8th grade, then PSAT and SAT. But he never prepped for the tests (and scored 3,910/4,000 in the 5 SAT I and SAT II scores that he sent to colleges). I’m convinced that one of the main reasons for his improvement after 8th grade is that in the talent search test he didn’t guess at all if he didn’t know an answer for certain. For the later tests, he knew the rules but also was a few years older. When he came home after taking the Math 2, I asked him, “How did you do?” “Oh, I got 800.” “How do you know that already?” “I answered all the questions, and I had time to check my work.” (He didn’t get any answers wrong, so he had a fair margin for error in getting an 800.)</p>

<p>My daughter was more the artist than the logician. She was never a game player. She didn’t prep either but did well enough on her SAT’s (600’s) for her art school intentions (she did attend art school). But when she decided to apply to MBA programs several years later, that was a different story. She prepped (self-directed using PR’s software); she took an additional college math course (hadn’t had math since high school); and she got an excellent GMAT score (720) that put her credentials in line for her admission to a top business school despite her “artsy” background. Had she not practiced and prepped, she’d have had very different options for continuing her training and career.</p>

<p>There seems to be consensus that the two tests cater to different skill sets and test-taking abilities. Has there been any research (“peer-reviewed journal” type stuff) comparing the predictive power of the two tests on college success (as defined by GPA or something similar)? Even more interesting would be to compare how students did who showed a large difference in percentile rank between the two tests (xiggi’s 1900/34 examples and the like).</p>

<p>There would be some challenges to normalizing the data (different schools, majors, test prep, number of times taking test, which year taken, etc.), but that’s probably manageable. Unfortunately, the stakeholders who control the data probably have zero interest in having such a study conducted, so I’m not holding my breath.</p>


<p>There was a lot of discussion about this issue when California’s Atkinson harpooned the College Board and gave an incendiary (by academic standards) speech at a conference. Based on a recent insight from observing his granddaughter, he opined that changes were needed. Of course, the opinion of one of the largest (real) customers of TCB tends to make tsunami waves. To follow up, the UC system ordered a number of (what I called) mercenary studies to demonstrate the poor correlation between test scores and success in college. In answer, Uncle Gaston made sure to dig out studies (including a metastudy) about the predictive power of the SAT. A reasonable summary was that the SAT alone is not a better predictor than GPA, but that GPA + SAT is a better predictor than GPA alone. </p>

<p>Most of the debate died as soon as UC offered a compromise that became the 2005 SAT. TCB uncorked the champagne in the Gulfstream as their obscure SAT Writing test was about to be imposed on millions. The UC was none the wiser as they continued to make Kerry look like a mild flip-flopper, maintaining their policy of supporting the Subject Tests before abandoning them in attempts to shore up their dysfunctional existence. </p>

<p>All in all, it took a decade to undo the damage of the “new” SAT that brought absolutely nothing beyond an asinine short essay and a cursory review of basic grammar. It also set back the prospects for reasonable studies on why standardized tests are helpful in predicting college success. </p>

<p>One can bet a few dollars that in 2025 we will still be hoping for such comprehensive studies. With the changes to the SAT, it is safe to assume that all the data between 2015 and today will be considered unworthy of analysis. </p>

<p>I don’t get the issue. I thought Modius911 laid it out correctly early in this thread when he stated:</p>

<p>“Since >1.5M kids take each test, the concordance just comes from score percentile matching. That presupposes that one test population isn’t “smarter” than the other, which at those numbers is a pretty safe bet.” </p>

<p>This is simple stats - anecdotes are meaningless.</p>
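
<p>For what it’s worth, here is a minimal sketch of what that score-percentile matching could look like, assuming the simplest equipercentile approach. The score pools and the function name are made up for illustration; this is not how TCB or ACT actually build their tables, just the mechanics of pegging each ACT score to the SAT score at the same percentile rank.</p>

```python
# Minimal sketch of score-percentile (equipercentile) matching.
# The two score pools below are random placeholders, not real data.
import numpy as np

def concordance_table(sat_scores, act_scores, act_scale=range(11, 37)):
    """Map each ACT score to the SAT score at the same percentile rank."""
    sat_scores = np.asarray(sat_scores)
    act_scores = np.asarray(act_scores)
    table = {}
    for act in act_scale:
        pct = (act_scores <= act).mean() * 100   # percentile rank in the ACT pool
        sat = np.percentile(sat_scores, pct)     # same percentile in the SAT pool
        table[act] = int(round(sat / 10) * 10)   # old SAT reported in 10-point steps
    return table

# Toy pools of ~1.5M takers each (old 600-2400 composite, 1-36 ACT)
rng = np.random.default_rng(0)
sat_pool = np.clip(rng.normal(1500, 300, 1_500_000), 600, 2400)
act_pool = np.clip(rng.normal(21, 5, 1_500_000), 1, 36).round()

print(concordance_table(sat_pool, act_pool)[34])  # SAT score pegged to an ACT 34
```

<p>With pools that large, the sample percentiles are very stable, which is the “pretty safe bet” in the quote above - provided the two populations really are comparable.</p>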

<p>I think D told me she got 1 wrong in the Math SAT and got 770.</p>

<p>

[quote]

I don’t get the issue. I thought Modius911 laid it out correctly early in this thread when he stated:</p>

<p>“Since >1.5M kids take each test, the concordance just comes from score percentile matching. That presupposes that one test population isn’t “smarter” than the other, which at those numbers is a pretty safe bet.”</p>

<p>This is simple stats - anecdotes are meaningless.
[/quote]
I agree. </p>


<p>Depends on the test but not the case for November 2014.</p>


<p>The plural of anecdote might not be data, but statistics based on older numbers tend to become stale. Much has happened in terms of test taking since the concordance tables were compiled. For instance, how do we compare the SAT math test of November 2014 to a related ACT test? What is the equivalent of 1/54 error on the ACT? </p>

<p>One can look at statistics to explore how the mean and median move, and others can look at anecdotes to challenge foregone conclusions. To each his own! </p>

<p>Look at the percentile of the score distribution in both tests. There is no need to make up a theory based on a few outlier observations. Obviously one may do better in one test or the other. The percentile distribution is telling you what fraction of test takers are getting that score or higher. I am sure the percentiles for SAT 1800 and ACT 34 do not match up at all.
For the adcom, they are not relying on this table, as each school may put emphasis on different section scores. Some schools mainly consider CR+M in the SAT, for instance. Some engineering schools would look more closely at the Math score. So there is no simple conversion chart that is universal for all schools anyway.</p>

<p>I doubt that the concordance would ever be based on anything but the percentiles that correlate, not on what one wrong answer gets you, etc. Although if you get only one wrong in a section on any of these tests, it will get you a very high score, that’s a given. I am wondering if the concordance tables really mean much anyway. Isn’t it the percentile that is the important factor?</p>

<p>If we want to ask why a student would be in the 99% range on one test and below 80% on a different test, it would have value to that student. There may be learning issues that were never identified. Whether one wrong answer shows that a student is less qualified than a perfect score is also a philosophical debate. While the SAT folks talk about score ranges, it seems that quite a few of the tippy top colleges do not find this to really be the case. When fully 25% of the admitted students have 800’s on a section, it seems like it is important to be perfect - for them.</p>

<p>Some high scorers do not match their college performance to the high score on the test. Many others do. Some who didn’t score so high go on to do great things anyway. </p>

<p>I am curious how the concordance tables are generated. There are many tests given each year, and the percentiles move around. Maybe not much, but enough that test-to-test results could change the table. Do the two groups (College Board and ACT) base the table on the previous year’s average percentiles, a 12-month rolling average, or what? How often are the tables updated? Or have the two groups agreed on target percentiles for each score (1400 is 98th, 33 is 98th, etc.), and it is up to the two groups to ensure that those percentiles stay more or less constant?</p>

<p>I know that ACT publishes an ACT-to-SAT conversion. Does The College Board provide an SAT-to-ACT table? Or do they take the stand that they are the granddaddy and they don’t compare themselves to any other test?</p>

<p>@mobius911‌ You asked “Does The College Board provide an SAT-to-ACT table?” Answer is Yes: <a href="http://research.collegeboard.org/publications/content/2012/05/act-and-sat-concordance-tables">http://research.collegeboard.org/publications/content/2012/05/act-and-sat-concordance-tables</a></p>

<p>Some states require all public school students to take the ACT. I’m not aware of any that require all students to take the SAT. Doesn’t this create a real difference in the populations being tested and make percentile comparisons rather meaningless? </p>

<p>IDK but our HS, and many others, offers/requires the PSAT in school and I think that pushes a lot of kids towards the SAT as well.</p>

<p>It seems obvious to me that the basis for the concordance tables is flawed.</p>

<p>Those tables tend to peg the 99th-percentile SAT score to the 99th-percentile ACT score, the 80th-percentile SAT score to the 80th-percentile ACT score, etc. However, this equivalence only holds if you assume that the two pools of test takers have exactly the same ability distribution, an assumption for which there is no evidence.</p>
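
<p>A toy simulation can make that objection concrete. Everything here is invented (the linear score models, the 0.2 standard deviation gap between the pools), so it only shows the direction of the distortion, not its real size:</p>

```python
# Toy illustration: equipercentile matching is only "fair" if the two
# test-taking pools have the same ability distribution. All numbers invented.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

for gap in (0.0, 0.2):  # hypothetical ability gap (in SDs) favoring the SAT pool
    sat = 1500 + 280 * rng.normal(gap, 1, n)                      # made-up SAT model
    act = np.clip(21 + 4.7 * rng.normal(0, 1, n), 1, 36).round()  # made-up ACT model
    pct = np.mean(act <= 34) * 100                                # percentile rank of a 34
    print(f"SAT pool {gap} SD stronger: ACT 34 pegged to SAT ~{np.percentile(sat, pct):.0f}")
```

<p>With identical pools the pegged value reflects equal ability; with the SAT pool modestly stronger, the same percentile match lands roughly 50-60 points higher, i.e., the table would flatter the ACT score.</p>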

<p>Like xiggi, I have noticed that posters on this board who’ve taken both tests tend to have ACT scores too high for their SAT scores more often than the opposite. I don’t know if this is actually a statistical trend, but the point brought up above about some states forcing all public school pupils to take the ACT may suggest it is. If it is, it would suggest more able students take the SAT than the ACT.</p>

<p>I think the only way to come up with a reliable concordance table, barring forcing every student in the country to take the SAT and the ACT, would be to look at the scores of the students who’ve taken both tests and see how they correlate. I don’t think ACT would want that to happen, however, because of the potential for embarrassment.</p>
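
<p>If such a matched sample ever surfaced, the analysis itself would be simple. A rough sketch using simulated paired scores (a made-up latent-ability model, since real matched data is exactly what we don’t have):</p>

```python
# Sketch of the "students who took both tests" idea: correlate the paired
# scores and read a concordance directly off the pairs. Simulated data only.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
ability = rng.normal(0, 1, n)                                  # stand-in for ability
sat = np.clip(1500 + 280 * ability + rng.normal(0, 120, n), 600, 2400)
act = np.clip(21 + 4.7 * ability + rng.normal(0, 2.0, n), 1, 36).round()

print("correlation r =", round(np.corrcoef(sat, act)[0, 1], 3))

# Paired-data concordance: median SAT among students at each ACT score
for score in (30, 32, 34, 36):
    print(score, int(np.median(sat[act == score])))
```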

<p>Concordance tables go out the window next year or the following anyway, when the SAT does its big change, no? It’s going to be a totally different test (from what I’ve read, more ACT-like).</p>

<p>OHM, the new SAT is still a WIP, but it is not inching toward becoming an ACT clone. In many ways, it leapfrogs the ACT to align itself with the maligned Common Core. It might LOOK like a cousin of the ACT, but it appears to be aiming to be harder and even more complex than its current version. The questions will be given more time and will require it. </p>

<p>The play by TCB is clear, and it amounts to regaining absolute popularity among its true customers, namely the colleges and their adcoms. That is very different from seeking popularity among students. </p>

<p>TCB is gambling by tying its fortune to the success of the Common Core. My money is on the Princeton boys. </p>

<p>Actually, from that link in post 53, the College Board says “Both tables are based on scores from students who took both tests between September 2004 (for the ACT) or March 2005 (for the SAT) and June 2006”</p>