<p>^Umm, just the entire point of the thread. </p>
<p>@xiggi, Hmm…You’re using self-reported student scores to prove your theory? The lack of concordance in the posted scores is probably because the students did poorly on their first SAT (maybe taken in the fall of junior year), then bailed and switched to the ACT. The ACT scores they’re reporting could indeed be the result of their third try.</p>
<p>The ACT used to be the SAT’s poor midwestern cousin, but now is a favorite of urbanites on the coasts because it’s a kinder, gentler test. No vocab, no penalty for wrong guesses, a more desirable test schedule and a more convenient score choice procedure (all of the above inspiring the SAT revamp). Plus, if you submit the ACT, many top schools don’t require subject tests.</p>
<p>There is no theory to prove as I was candid about the source of the observation, as well as its inherent limitation. The circumstances could have a wide range but there is one that is inescapable, namely that it represents the best effort of a student presenting an application to a college that is among the best in the country. </p>
<p>Fwiw, if I were to look for additional support for the perceived lack of concordance, I might very well borrow a few elements of your description of a kinder and gentler test with “No vocab, no penalty for wrong guesses, a more desirable test schedule and a more convenient score choice procedure.” In a way, this would not hurt a modest proposal that a 36 ain’t a 2400 and that the difficulty of scoring very high might not be the same on both tests. Oops, that might support my faulty logic! :)</p>
<p>Except it is the direction that the SAT will be moving in spring 2016. </p>
<p>Well @mobius911 , thanks for that enlightening comment. Very helpful.</p>
<p>I only have the experience of my own kids, whose scores are very close to the tables as posted. I personally didn’t take the ACT. </p>
<p>“There is no theory to prove as I was candid about the source of the observation, as well as its inherent limitation. The circumstances could have a wide range but there is one that is inescapable, namely that it represents the best effort of a student presenting an application to a college that is among the best in the country.”</p>
<p>What does this even mean? I am unable to decipher.</p>
<p>My 2 kids who took both were also very close to the tables, although this last one had a 28 in math vs. almost all 32s in the other sections, while math was her highest score on the SAT. Science was her highest ACT score, which makes sense (and isn’t an SAT section), but the CR and M scores were reversed, and pretty significantly. All in all, though, the tables were spot on. </p>
<p>^ interesting @tv4caster - my D also scored higher on ACT English and lower on ACT math (well, the first time; the second time they were the same). SAT was the reverse (PSATs also). Still averaged out to be about the same total per the tables, but an interesting switch.</p>
<p>Umm, since you made the post, I assume that you assumed it was important to discuss.</p>
<p>But regardless, even if I get your point, I don’t agree. The fact is that the SAT-ACT concordance (such as it is) is used by USNews for rankings points. 34+ is still extremely valuable, even if the anecdotes show it is becoming inflated.</p>
<p>Few schools publish a table of admissions by test score, but the two tests seem interchangeable to me based on what is available. But maybe I’m interpolating incorrectly. I learned at a public school back in the dark ages.</p>
<p><a href=“http://www.brown.edu/admission/undergraduate/explore/admission-facts”>http://www.brown.edu/admission/undergraduate/explore/admission-facts</a></p>
<p><a href=“https://www.amherst.edu/media/view/583079”>https://www.amherst.edu/media/view/583079</a></p>
<p>ADCOM 1
This kid has a perfect score, but it’s on the ACT, the slacker exam of the flyover states. </p>
<p>ADCOM 2
According to our new concordance chart, that’s only equivalent to a 2380…Reject!</p>
<p>Xiggi - there are students who do well on one test but not the other. Good guidance departments know this, and in the crowded suburb where I live they urge students to try both tests if they do worse than expected on one or the other. The tests are different and, in my opinion, require different skills. The science test is not a test of substantive science but a reading test involving the ability to interpret data. If you are not a fast reader, you cannot do well, even if you are otherwise getting 5s on AP science (which tests substantive material). If you are not a fast reader, the reading section can also be a chore on the ACT. Although the SAT is a longer test in time, the reading skills required are different from the ACT’s (I thought the ACT was changing to have a comparative section like the SAT’s). </p>
<p>As to SAT scoring, one skipped is usually a 790 on math, but with deductions for wrong answers, one wrong is a 750.<br>
Does this make any sense? You are penalized for guessing on the SAT. On the ACT, one wrong or one skipped would produce the same score. On the AP, I believe they stopped deducting for wrong answers on multiple choice.</p>
<p>As to any multiple choice exam, some people who really don’t know the material can do well if they can guess correctly. This is the inherent problem with this type of test, whether it is a standardized test or your college bio class (or psych or econ or any other multiple choice test). I am not a fan of such tests at all. Perhaps I am not really “smart” in the way of standardized tests. I almost always have a problem with those tricky answers meant to fool you, whereas if you just gave me the math problem or asked me to write a short answer, I would get it right. To me, many of the multiple choice questions seem to raise issues of semantics or other wording problems that are not specific enough for me.</p>
<p>I had a real test in college where the question was “bread comes from a store, T or F.” I dropped the course, and I never found out what the prof meant by that. This is not the type of academic work that I laud or find admirable.
I think we have no better answer than standardized testing for college admissions, but it is worth more than it should be in the whole puzzle at least at the very top colleges. </p>
<p>@anothermom2 I agree with you. I suspect that those who have never dealt with dyslexics or those with slow processing speeds cannot really appreciate the nuanced differences between the 2 tests in terms of how reading speed impacts the test.</p>
<p>I have one daughter who scored better on her SATs than the ACTs and another who scored better on the ACT. The only thing that makes it confusing is that both studied for the SAT and took lots of practice tests, while they walked in cold to sit for the ACT just to see how they would do. I think the ACT is tied into the school curriculum more. The SAT (the old one) is also a bit of a game of logic and not just straightforward questions. Essentially the tests are testing different things. Perhaps a concordance table does not make sense and shouldn’t even be published, because it’s not apples to apples.</p>
<p>Presumably it exists because each test has an incumbency / default advantage in a large part of the US. Colleges which accept one but not the other risk losing significant numbers of potential applicants who do not bother to take the “other” test, so they accept either, despite the fact that they test different skills.</p>
<p>I agree. I was not suggesting they eliminate one test; I’m simply saying they test different things. Clearly, I have two different learners- both excellent students- who performed differently on each exam.</p>
<p>Just wait until the new SAT comes out, which is supposed to be more like the ACT. New concordance tables? Glad my kids are done.</p>
<p>If the point of standardized testing is to be a predictor of academic success, then an apples to apples comparison is unnecessary. The predictive value of each test and its own statistical curve should be the only required measure. Really, the number score is meaningless by itself. What does the number supposedly represent? What is its correlation? The statistically valid ranges should be the only thing of value. Does it matter if a 36 is the equivalent of a 2400, or does it matter that both have ranges which are supposedly valid predictors of an equal likelihood of success?</p>
<p>exactly</p>
<p>I am not sure if it interests many, but would you mind clarifying the difference in scoring between ONE omission and ONE error in the grading scale … as you see it? </p>
<p>As far as I know, this is how it works:</p>
<p>Your SAT Score Report Explained – SAT Suite | College Board</p>
<p>53 Correct 1 omitted = 53 points (54 - 1)
53 correct - 1 wrong = 53 points (54-1.25 is rounded to 53)</p>
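The arithmetic above can be sketched in a few lines of Python (a minimal sketch: the function name, the 54-question section size in the examples, and the round-half-up rule are my assumptions, based on the quarter-point guessing penalty the posts describe; omitted answers cost nothing):

```python
import math

def raw_score(correct, wrong, omitted=0):
    """Old-SAT raw score: each wrong answer costs 1/4 point.

    Omitted questions neither add nor subtract anything (the parameter
    is accepted only to mirror the score report). The raw total is
    rounded to the nearest whole number, halves rounding up.
    """
    raw = correct - 0.25 * wrong
    return math.floor(raw + 0.5)

# The two cases from the post, on a 54-question math section:
print(raw_score(correct=53, wrong=0, omitted=1))  # 53 (54 - 1)
print(raw_score(correct=53, wrong=1))             # 53 (52.75 rounds to 53)
```

This is why one omission and one error can land on the same raw score: the quarter-point deduction for the wrong answer disappears in the rounding.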
<p>PS For the record, I used to think exactly as you did, but I was proven wrong while debating, via PM, a 750 score on the last November test. I wrote that it was impossible for one mistake to cost 50 points. I was wrong! </p>
<p>Oh, I have participated in many discussions about the technical differences between the ACT and SAT. My starting point is that the perception is directly related to the individual. I abandoned (a long time ago) the notion of trying to demonstrate how certain parts of the tests work. The best I could come up with is that the tests are challenging different types of ability, mental agility, and comprehension. Add a sprinkle of concentration and you have a wide range of reactions. People are smart in different ways! Yet the problem is to create a test that consistently delivers a 500 median or average score, and can do so while maintaining its historical integrity for year-to-year comparisons all the way back to its implementation. </p>
<p>People perceive the trickiness of certain tests in different ways. I am in the group that finds NO trickiness (to use that term) in the SAT but found the ACT to be more vague in its approach. People in “my” group also advocate approaching the SAT without guessing, save and except in very precise circumstances. Having to guess usually means that the student did not set up the proper inquiry or equation.</p>
<p>All in all, it all boils down to one’s prior “preparation” and familiarity with the type of questions posed. Students who have solved puzzles for fun since they were toddlers find the math test unchallenging. Voracious readers who learned to read critically find the verbal parts easy to navigate. And the list can go on all the way to the people who freeze when seeing a multiple choice. For many, the adequate repetition of the same type of questions is beneficial. </p>
<p>On a personal note, I have often opined that there is room for a better test. That test should be an extended SAT (or ACT) offered once a year over several days, taking place in the window currently dedicated to the AP exams. In my perfect world, most of the APs would be relegated to Saturday trips to a test center. In so many words, a flipping of a large part of the SAT and AP. The problem, of course, is that the AP represents a financial boon for our educators and the companies that target them – and a huge boondoggle for most everyone else. </p>
<p>But all of this is far away from my OP. </p>