<p>Although score discrepancies may well be attributable to basic measurement error, that cannot always be the default assumption. Discrimination among distinct performances must extend across the entire scoring scale.</p>
<p>These are inconsistent claims.</p>
<p>Absolutely.</p>
<p>Correlation does not imply causation.</p>
<p>^ That reminds me of my recent post with that exact statement in the “Be as unhelpful as possible” game on the High School Life forum. Seriously, though, whom are you directing that at? I did not notice anyone committing that fallacy.</p>
<p>Interestingly, in the various stats cited by Mifune, we find that at Stanford, transfer students with an 800 in critical reading were admitted at a lower rate than transfer students with a 700-799. (The 799 is actually rather silly, as you cannot score a 799; section scores come only in 10-point increments.)</p>
<p>No, they aren’t.</p>
<p>Have any of you guys ever heard of “significant figures?” It’s an interesting concept; maybe you should look it up.</p>
<p>That comes across as rather condescending.</p>
<p>Re: Posting 182
No. These are not inconsistent. A correlation can be quite meaningless; in particular, a correlation that arises from a common cause is often meaningless.</p>
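<p>For anyone who wants to see what “correlation through a common cause” looks like concretely, here is a minimal sketch; the variable names and numbers are purely illustrative and are not drawn from any data cited in this thread.</p>

```python
# Minimal sketch (illustrative only): two variables that share a common cause
# end up strongly correlated even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

ability = rng.normal(size=n)                  # hypothetical common cause
score_a = ability + 0.5 * rng.normal(size=n)  # outcome driven by the common cause
score_b = ability + 0.5 * rng.normal(size=n)  # another outcome driven by the same cause

r = np.corrcoef(score_a, score_b)[0, 1]
print(f"correlation between the two outcomes: {r:.2f}")  # roughly 0.8, with no direct causal link
```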
<p>Interesting until one realizes how few people the data correspond to: only one or two people were accepted with a CR score of 800.</p>
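<p>To make the small-numbers point concrete, here is a rough sketch using a Wilson confidence interval; the applicant and admit counts below are invented for illustration and are not the actual Stanford figures.</p>

```python
# Sketch: how wide a 95% confidence interval on an admit rate is when the
# group contains only a handful of people. All counts are hypothetical.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 2 admits out of 8 applicants with an 800,
# versus 30 admits out of 120 applicants in the 700-799 band.
print(wilson_ci(2, 8))     # roughly (0.07, 0.59): extremely wide
print(wilson_ci(30, 120))  # roughly (0.18, 0.33): much narrower
```

<p>With only a handful of applicants, the interval is so wide that it cannot distinguish the 800 group from the 700-799 band in either direction.</p>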
<p>Claims that SAT scores are uncorrelated with admission probability are not quite correct.</p>
<p>From a portion of a previous post:</p>
<p>
</p>
<p>
</p>
<p>Actually, it’s impossible in our context.</p>
<p>Frankly, I felt that your consistent refusal to acknowledge the point deserved a hint of condescension.</p>
<p>The idea that a weak correlation justifies minute differentiation is, again, absurd.</p>
<p>As silverturtle correctly points out, transfer statistics at the upper score levels do not quite reach a level of statistical significance.</p>
<p>The unavoidable implication of your assertion is absurd: at what point should a college start discriminating among different scores, if not at the smallest difference the metric itself reports?</p>
<p>Re: Posting 177 (Mensa/SAT/IQ)
You have it right, bovertine. I considered fooling with Mensa back in my misspent youth. My old SATs were good enough, but I couldn’t dig up the official paperwork, so I think I took their test (or maybe I took the test and then eventually found the paperwork). I never ended up going to any meetings because the stuff I got was just unappealing.</p>
<p>I think we can all agree that the SATs are no longer strongly linked to IQ (and may never have been). The way SAT scores and IQ scores are calculated is similar: test data and statistical analysis are used to produce a numeric scale. But I think trying to talk about the math and statistics of the SAT through IQ analogies just muddies the waters.</p>
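<p>For what it’s worth, the “similar calculation” point can be illustrated with a heavily simplified sketch of norm-referenced scaling. The raw scores and norming statistics below are invented, and the real SAT equating and IQ norming procedures are considerably more involved than this (and, in the SAT’s case, not public).</p>

```python
# Heavily simplified sketch of norm-referenced scaling: convert raw scores to
# z-scores against a norming sample, then map them onto an arbitrary reporting
# scale. Illustration of the general idea only; not the actual SAT or IQ procedure.
import numpy as np

def scale_scores(raw, norm_mean, norm_sd, scale_mean, scale_sd, lo, hi):
    z = (np.asarray(raw, dtype=float) - norm_mean) / norm_sd
    return np.clip(scale_mean + scale_sd * z, lo, hi)

raw = [38, 45, 52]              # hypothetical raw scores out of 54
norm_mean, norm_sd = 40.0, 8.0  # hypothetical norming-sample statistics

print(scale_scores(raw, norm_mean, norm_sd, 500, 110, 200, 800))  # SAT-style scale (real reporting also rounds to 10s)
print(scale_scores(raw, norm_mean, norm_sd, 100, 15, 40, 160))    # IQ-style scale (mean 100, SD 15)
```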
<p>Silverturtle, as nemom and momofthreeboys correctly pointed out, my post re IQ was using an example, or an analogy, and was never intended to be “literal.” One of the difficulties I have had with this train of discussion online is that you seem to respond to my posts over-literally, and I do not know why. Maybe it’s a limitation of online posting; maybe it’s a reflection of the fact that you are a teenager trying to keep up with the flow of conversation on the parents’ thread, and people with middle-aged brains tend to engage in discussion on a different level. (“We are better at getting the gist of arguments,” she says. “We are better at recognizing categories. And we’re much better at sizing up situations.” See [Fresh Air Interviews: Writer Barbara Strauch : NPR](http://www.npr.org/templates/story/story.php?storyId=125902095), quoting Barbara Strauch, author of The Secret Life of the Grown-up Brain: The Surprising Talents of the Middle-Aged Mind.)</p>
<p>However, one irony of this situation is that I feel the SAT format tends to reward students who are very literal and prosaic in their thinking. It’s all about getting the “right” answer under time pressure, kind of like a game of Jeopardy, so it tends to trip up highly imaginative or nuanced thinkers, who are also likely to be bored by any sort of rote learning or study and thus averse to extensive test prep. So there is a rather significant subset of very bright students who flub the test because they overthink the questions or run out of time… and then don’t bother with a retake because they don’t have the patience for the process.</p>
<p>And that’s another aspect of the “blunt tool”: not only does the test over-predict the potential of students whose high-end scores are obtained through manipulation (extensive prep and repeat testing), but it under-predicts the potential of students who are averse to the standardized, multiple-choice test format. (How many fit this group is anyone’s guess, but that’s why colleges stick with a holistic admission process. Essays and recs can tell an entirely different story than the test scores, and it’s a story the ad coms want to know.)</p>
<p>Perhaps worth considering:
[Study on Accuracy of SAT Prompts Schools To Accept Other Tests - The Tech](http://tech.mit.edu/V128/N44/sat.html)</p>
<p>Re: Post 198
calmom has some good points. I know my child had to work on the writing section because he often found all the choices for sentence improvement to be bad.</p>
<p>The point at which the difference in scores is larger than the natural variance between different sittings. At the top end of the scale, a question or two per section is a normal degree of difference. That can result in a difference on the order of 100 points overall. I don’t think that’s an absurd conclusion at all. And again, with the flawed structure of the test, it is far easier to reach this tail than it should be.</p>
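<p>A back-of-the-envelope version of that arithmetic, with invented per-question penalties (real raw-to-scaled conversion tables vary by test form), just to show how a question or two per section can add up.</p>

```python
# Sketch only: hypothetical scaled-score cost of missed questions near the top
# of the scale. The per-question penalties are assumptions, not published values.
points_per_miss_low, points_per_miss_high = 20, 30  # assumed penalty near 800
sections = 3                                        # CR, Math, Writing

low_swing = points_per_miss_low * 1 * sections      # one missed question per section
high_swing = points_per_miss_high * 2 * sections    # two missed questions per section
print(f"plausible swing between sittings: {low_swing}-{high_swing} points")  # brackets "on the order of 100"
```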
<p>I am sure that there is a strong link, though probably not as strong as it once was (prior to 2005). The study I alluded to earlier demonstrated quite clearly that there was a strong link prior to 2005; because the test is not radically different now, I assume the link remains similar.</p>