Curve on the SAT - which test date is better?

<p>Either way, a 10 point difference won't make or break you. Seriously. I think people sweat too much over unnecessary stuff like this. Oh well, this is CC...</p>

<p>For once, I agree with tetrisfan. Just get something above 750 and you'll be fine, unless you want to see an 800/790 in order not to hurt your ego.</p>

<p>In my opinion, College Board adjusts for the varying difficulty among tests, but it doesn't adjust for the varying capability among test-taking populations, and therein lies the rub. On some test dates it is harder to get 700s because proportionately more smart students are taking the test.</p>

<p>
[quote]
In my opinion, College Board adjusts for the varying difficulty among tests, but it doesn't adjust for the varying capability among test-taking populations...

[/quote]
</p>

<p>Correct! The CB does not adjust for how strong or weak the test takers are. Why should it? If a bunch of very smart people do well on a test which has already been adjusted for difficulty, then they should and will all get high scores.</p>

<p>
[quote]
On some test dates it is harder to get 700s because proportionately more smart students...

[/quote]
</p>

<p>No, that just doesn't follow. The person getting a 700 would get that 700 whether or not a bunch of smart students happen to take that test. The curve is not calculated "after the fact" just to ensure that, say, only a certain number of people score 750 and up.</p>

<p>My understanding of the way CB curves all of their tests is by fitting them to a normal distribution, so that a certain percentage of the population receives an 800 (or a 5 if it is an AP test), a larger percentage gets a 700 (or a 4 for AP), etc. So the curve is in fact set after the test, and depends on the population of test takers. The real question is whether the Chinese, or whoever, exert a great enough influence on the curve to skew it in a certain direction (up for CR and W, down for M).</p>

<p>No. Here is how it is done:</p>

<p><a href="http://www.collegeboard.com/research/pdf/rn14_11427.pdf%5B/url%5D"&gt;http://www.collegeboard.com/research/pdf/rn14_11427.pdf&lt;/a&gt;&lt;/p>

<p>The percentile ranks DO change from one year to another (see above). I.e., a score of 700 may be the 93rd percentile in one year but the 91st percentile the next.</p>
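<p>A quick sketch of that point, with invented score lists for two "years" of test takers: the scaled score is pinned by the form's curve; only its percentile rank floats with the cohort:</p>

[code]
# Invented numbers, for illustration only: the same scaled score can
# land at different percentiles in different years.
def percentile(score, cohort):
    below = sum(1 for s in cohort if s < score)
    return 100 * below / len(cohort)

year1 = [520, 580, 610, 640, 660, 690, 700, 710, 730, 760]
year2 = [540, 600, 630, 660, 690, 700, 710, 720, 740, 780]

print(f"700 in year 1: {percentile(700, year1):.0f}th percentile")
print(f"700 in year 2: {percentile(700, year2):.0f}th percentile")
[/code]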

<p>Here is a hypothetical:</p>

<p>An SAT is given in March with some curve. The exact same test is given again in June. Suppose a bunch of very good math students take only the June test. Two very similar average students take the test, one in March and one in June, and both get the same number of questions right.</p>

<p>In the actual scoring method, each person would get the same score (say, 500), since the curve is unchanged by the appearance of the good math students.</p>

<p>In your method, the March person gets 500, and the June person scores, say, 480. But the ability of each student is identical. This would be very bad from the CB point of view: they want a given score to mean a fixed ability level from one test to another, and from one year to another, etc. Otherwise, the SAT is pretty useless for college admission purposes.</p>
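<p>In code form, a minimal sketch of the actual method (the conversion values here are invented, not a real CB table; the point is only that the table is fixed per form):</p>

[code]
# The raw-to-scaled conversion for a given form is fixed before scores
# are reported. These values are made up for illustration.
FORM_CURVE = {54: 800, 53: 780, 52: 760, 45: 640, 30: 500}

def scaled_score(raw, curve=FORM_CURVE):
    """A raw score maps to the same scaled score on every administration
    of this form, regardless of who else takes it."""
    return curve[raw]

# The average March taker and the average June taker each get 30 right;
# both score 500, even if a busload of math stars also sat the June test.
print(scaled_score(30))  # March taker -> 500
print(scaled_score(30))  # June taker  -> 500
[/code]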

<p>I suppose you could believe the above CB document is fakery. But why would they make their own test scores mean one thing one month, and another thing a different month?</p>

<p>I don't think College Board has any way of adjusting their scoring for different testing populations. They can only make adjustments for differing raw score bell curves. Let's assume that students who take the SAT Subject Tests (a very high achieving test population) often take the May SAT I so that they can take the Subject Tests in June. Let's assume that as a group, they score very well in terms of their raw scores. College Board will subsequently determine that the test was easy and impose an artificially harsh curve. The curve would be artificially harsh, not because the average question was easier, but rather because the average test-taker was more capable. Can anyone explain what is wrong with my analysis?</p>

<p>To answer the initial question- I don't believe there is much difference between the Oct. and Nov. curves. There is, however, a difference between these two fall curves (easier) and the spring ones (harder).</p>

<p>@Damaris: in this case, the CB will also look at the raw scores of these students on the equating section (the section on the SAT that doesn't count). Remember, the questions on the equating section are OLD questions: they were used before and the CB knows how well previous students did on them. If these new students are truly good, they will also have unusually high raw scores on the old questions as well. So, the CB will not be fooled into thinking that the test was too easy.</p>
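<p>Here is a toy version of that equating logic, with invented numbers: the cohort's measured advantage on the reused anchor questions is subtracted out before judging how easy the new questions were:</p>

[code]
from statistics import mean

# Invented numbers illustrating the check described above. The anchor
# (equating) questions are OLD items with a known historical average.
historical_anchor_mean = 12.0

new_anchor_scores = [16, 15, 17, 14, 16]        # strong cohort, old items
new_operational_scores = [48, 46, 49, 45, 47]   # same cohort, new items

# The cohort's measured advantage on the old items...
ability_shift = mean(new_anchor_scores) - historical_anchor_mean
# ...is subtracted before judging how easy the NEW questions were.
adjusted_mean = mean(new_operational_scores) - ability_shift

print(f"cohort is {ability_shift:+.1f} above the historical anchor norm")
print(f"difficulty judged from {adjusted_mean:.1f}, not {mean(new_operational_scores):.1f}")
[/code]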

<p>We may as well quantify the curves question. Here is how well you would have scored on SAT math with a raw score of 52 (that's two wrong):</p>

<p>03/2005: 760
10/2005: 760
01/2006: 780
05/2006: 760
10/2006: 770
01/2007: 760
05/2007: 770
10/2007: 750
01/2008: 760</p>

<p>I really don't see one season being harder or easier than another.</p>
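<p>For what it's worth, grouping those nine curves by season makes the point numerically:</p>

[code]
from statistics import mean

# The curves quoted above, as a quick check on the seasonal claim.
scores_for_raw_52 = {
    "03/2005": 760, "10/2005": 760, "01/2006": 780,
    "05/2006": 760, "10/2006": 770, "01/2007": 760,
    "05/2007": 770, "10/2007": 750, "01/2008": 760,
}

by_season = {"spring": [], "fall": [], "winter": []}
for date, score in scores_for_raw_52.items():
    month = date[:2]
    season = "fall" if month == "10" else "winter" if month == "01" else "spring"
    by_season[season].append(score)

for season, scores in by_season.items():
    print(season, round(mean(scores), 1))
# spring ~763, fall 760, winter ~767 -- all within about one
# scaled-score step of each other, i.e., no real seasonal pattern.
[/code]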

<p>Fwiw, Bluebayou's explanations are right on the money. On the other hand, Echelon32 is incorrect. </p>

<p>The bottom line is that there is no difference among the dates and that the population that takes one particular test has NO influence. One could take the test with 1,000,000 hard-drinking and hard-guessing monkeys, and it would not make any difference to the score. Of course, the use of the experimental and equating section would be ... quite minimal. :D</p>

<p>^What's cool about this monkey-wrenching scenario is that if one takes a test with that insouciant crowd, his percentile will hit 99% even with a score of 1200. :D
BigIs and xiggi must be sick and tired of debunking the "best test date" theory for the umpteenth time.
Here's a good old post:
<a href="http://talk.collegeconfidential.com/sat-preparation/419749-question-about-test-date.html?#post4931901">http://talk.collegeconfidential.com/sat-preparation/419749-question-about-test-date.html?#post4931901</a></p>

<p>. . . at least for some. I skimmed the report, linked by fignewton. The "correct" operation of the equating procedure (to ensure that the same level of performance yields the same score on different test dates) is based on the assumption that, for the students testing on a given date, performance on the equating section is accurately representative of their "true" performance on the test as a whole.</p>

<p>We know that the equating sections must contain questions that have been used multiple times. This will tend to create some level of familiarity with these questions in the test-prep industry. Therefore, I'd hypothesize that students who have been extensively prepped might actually perform better on these questions than they do on the test as a whole (since presumably the whole test will contain some questions of a less familiar type). Because of the equating procedure, this will tend to make the testing population look "smarter" than they really are, thus shifting the mean up and giving people a small boost.</p>

<p>So, suppose that you are generally talented at test-taking, but you have not prepped. If my hypothesis is correct, I believe you should aim for dates when the largest numbers of highly prepped students are taking the test. Here's my thinking: The highly prepped students will perform better relative to you on the equating section, but they will not perform as well on the remainder of the test--by this, I do NOT mean that they will outperform you on the equating section, just that the ratio of their score to yours is likely to be higher on that section than on the rest. Your "underperformance" on the equating section does not matter to your score. Meanwhile, the "overperformance" on the equating section by the prepped students will boost the score on the 200-800 scale that is assigned to the mean, and this will tend to yield a boost for you, wherever you fall on the scale. In addition, the pattern of performance by the prepped students might actually reduce the apparent standard deviation within the testing population, thus giving you an extra boost for scoring near the top.</p>
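<p>To make the hypothesis concrete, here's a toy simulation. Every parameter (the prep boost, the fraction prepped, the score units) is invented; the point is just that a boost confined to the equating items inflates the cohort's apparent ability, which would be misread as a harder test and folded into everyone's curve:</p>

[code]
import random

# Toy model of the hypothesis. All parameters are invented; units are
# raw-score points. Prepped students get a small boost on the equating
# (anchor) items only, not on the fresh operational questions.
random.seed(1)

PREP_BOOST = 1.5      # hypothesized extra anchor points from prep
FRAC_PREPPED = 0.4    # hypothesized share of heavily prepped testers
HISTORICAL_ANCHOR_MEAN = 30.0

ability = [random.gauss(30, 4) for _ in range(10_000)]
prepped = [random.random() < FRAC_PREPPED for _ in ability]

anchor_scores = [a + (PREP_BOOST if p else 0.0) for a, p in zip(ability, prepped)]
operational_scores = ability[:]  # no boost on less-familiar new questions

# Equating credits the cohort with the shift seen on the anchor items,
# so the apparent shift is inflated relative to the true one.
apparent_shift = sum(anchor_scores) / len(anchor_scores) - HISTORICAL_ANCHOR_MEAN
true_shift = sum(operational_scores) / len(operational_scores) - HISTORICAL_ANCHOR_MEAN

print(f"apparent ability shift: {apparent_shift:+.2f}")
print(f"true ability shift:     {true_shift:+.2f}")
# The gap (about PREP_BOOST * FRAC_PREPPED = 0.6 points) would be
# misread as the new test being harder, nudging every curve up a bit.
print(f"spurious boost folded into the curve: {apparent_shift - true_shift:+.2f}")
[/code]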

<p>If CB could obtain accurate data on the level of test-prep for individual testers, they could determine whether or not my hypothesis is correct.</p>

<p>^^Last sentence, first paragraph, should be "accurately representative of . . . "
(The computer I'm using now is fire-walled to the teeth, and I don't have the "Edit" option available.)</p>

<p>I don't really see how someone could attain a "better level of familiarity" with questions from the equating section, whether extensively prepped or not. The questions are never released (except when being retired). Conceivably you could raise your familiarity by taking the SAT many times, but the CB surely has a large pile of equating-section questions, and you can be sure that the equating questions that show up on a given test haven't been used for a while. You would have to cook up a scenario like this: someone with a photographic memory writes down all the questions on her test and gives them to a brother or sister for use in a couple of years.</p>

<p>March 2008 - easiest</p>

<p>It's a hypothesis, potentially testable by CB, though not by me. </p>

<p>But I'd surmise that the people writing test-prep questions for the prep companies have gained some familiarity with actual SAT questions by taking the test multiple times. Simply quoting the questions would be a copyright violation (I believe); but setting up questions that are similar in feel, or that have similar tricky elements, is presumably permissible. For example, I think that in certain specific contexts, test-takers often overlook the possibility that a variable, x, might be negative; so the prep companies alert their students to that via their practice questions. Then their students would probably have better-than-typical odds on related questions on the real SAT. Also, I'm continuing to advance the hypothesis that the advantage winds up being slightly greater on the equating section than on the test as a whole. (For example, I'm not sure that Algebra II is featured in the equating section yet.)</p>

<p>Also, there have been reports on this forum of tests being repeated from a Sunday test date to a later Saturday test date. This suggests to me that the CB's stock of questions is not so huge. Similarly, posts in this forum have reported rumors of overseas students being "assigned" to memorize a small number of specific questions on the exams.</p>

<p>Wild hunch--we know that CB dropped the analogies and the quantitative comparison questions. Is it possible that there were not enough of them to ensure the integrity of the tests--because it became possible for people with good prep to anticipate some of them? </p>

<p>I took the SAT 38 years ago, but I can still recall a few of the questions. Also, I believe that some of the questions on a third-grade standardized achievement test my daughter took (quite a few years ago now) were substantively identical to the questions on a third-grade achievement test I took, many, many years ago. I would not have guessed this in advance!</p>

<p>An example of a geometric property that is often overlooked, where a prep course could give a student better odds of success with related questions: Circles can be co-tangent with one inside the other, as well as being externally co-tangent.</p>
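<p>Concretely: for circles with radii r1 and r2 whose centers are a distance d apart, external tangency means d = r1 + r2, while the often-overlooked internal case means d = |r1 - r2|. A quick check with made-up circles:</p>

[code]
from math import hypot, isclose

# The two tangency conditions referred to above; the circles below are
# invented for illustration.
def tangency(c1, r1, c2, r2):
    d = hypot(c2[0] - c1[0], c2[1] - c1[1])
    if isclose(d, r1 + r2):
        return "externally tangent"
    if isclose(d, abs(r1 - r2)):
        return "internally tangent (one inside the other)"
    return "not tangent"

print(tangency((0, 0), 3, (5, 0), 2))  # d = 5 = 3 + 2 -> externally tangent
print(tangency((0, 0), 3, (1, 0), 2))  # d = 1 = 3 - 2 -> internally tangent
[/code]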

<p>I took the June test this year and it seemed a lot easier than the January test. Whether or not the scales will normalize the two administrations to give me the same score, a test that seems easier does inspire more confidence and gave me more time at the end of each section to look over my answers. I don't think anyone has considered this factor.</p>

<p>Also, I would agree with QuantMech that more practice on SAT sections does benefit people who take prep courses. They are familiar with the format and time pressure, and there are definitely questions and concepts that repeat themselves.</p>

<p>Finally, I have always been under the assumption that each test date is normalized with one standard deviation being the equivalent of 100 points, which would explain the variability in the mapping from raw scores to scores out of 800, but would also mean that scores depend on how smart everyone else taking the test is.</p>
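<p>To spell out the scheme I'm assuming (which may well not be what CB actually does, per the earlier posts): raw scores are converted to z-scores within the administration and mapped to mean 500, SD 100. All numbers below are invented:</p>

[code]
from statistics import mean, pstdev

# Sketch of the assumed norm-referenced scheme: each raw score becomes a
# z-score within the administration, then maps to a 200-800 scale with
# mean 500 and SD 100 -- so your scaled score would depend on how strong
# the rest of the cohort is.
def assumed_scaled_score(raw, cohort_raws):
    z = (raw - mean(cohort_raws)) / pstdev(cohort_raws)
    return max(200, min(800, round(500 + 100 * z, -1)))

cohort = [28, 35, 41, 30, 45, 52, 38, 33, 47, 25]  # invented raw scores
print(assumed_scaled_score(52, cohort))  # -> 670 with these made-up numbers
[/code]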