<p>I think it's because the class of '09 is the largest; they have to make the curves ridiculous to keep a good bell curve.</p>
<p>Please illuminate. I must admit that I had absolutely NO IDEA that each test administration is curved. Is this true??? I'm a parent, so maybe this is something they explain to you students. Does the fact that the tests are curved incentivize students to "game" the test by timing which ones to take? I'm actually trying to envision which tests would be most popular with which types of students...if someone has analyzed this, I'm all ears (or eyes, I guess).</p>
<p>It is heartbreaking to read the sad stories on this thread. Poor OP got only 2 wrong and was punished with a 780 CR, thereby crushing any hope of getting into a decent college.</p>
<p>The conversion scales are constructed objectively. They are built so that the same scaled score indicates the same level of ability across different test administrations and different years. In other words, a 650 in Math on the March 2008 exam means exactly the same thing as a 650 in Math on the January 2008 exam or a 650 in Math on the October 1995 exam.</p>
<p>ETS (the organization that writes the test) knows exactly how difficult each question on a scored section is before the question is included in an official administration, since each question has been pretested at least once on a previous exam's experimental or equating section. Some forms of each administration also contain a true equating section (whereas most experimental sections test out new questions), which is a section taken from a previously administered test, so ETS already knows how its questions perform. ETS compares the results from this section with the scores produced by the conversion scale for the scored sections to ensure that the scale is accurate, and to make further adjustments to the scale if necessary.</p>
<p>The bottom line is that the conversion tables for each test are not "curves" in the traditional sense. All students, rather, are measured against objective standards that ARE preset (all students since 1995, for example, have been measured against a national population of students back in about 2002). This is why the SAT is known as a standardized exam. Hence, there is no advantage whatsoever to taking the SAT in one month as opposed to another.</p>
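<p>A toy sketch of what equating means in practice. The conversion tables below are made-up numbers for illustration only, not actual College Board tables: an easier form demands a higher raw score for a given scaled score than a harder form does, so the same scaled score carries the same meaning on both administrations.</p>

```python
# Hypothetical conversion tables (raw score -> scaled score) for two
# forms of different difficulty. The numbers are invented for illustration.

# Easier form: more students reach high raw scores, so a raw 52 is "only" a 650.
easy_form = {54: 680, 53: 660, 52: 650, 51: 640}

# Harder form: a raw score of just 49 earns the same 650.
hard_form = {51: 680, 50: 660, 49: 650, 48: 640}

def scaled(table, raw):
    """Look up the scaled score for a raw score in a conversion table."""
    return table[raw]

# Different raw scores on different forms, identical meaning:
assert scaled(easy_form, 52) == scaled(hard_form, 49) == 650
```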
<p>Wow, soulside journey, you did get cheated. o.O</p>
<p>I got 2 wrong and an 11 on the essay (which is basically the same as your score) and got a 790 in Writing.</p>
<p>I was really mad at myself for the 770 in math though.
1...stupid...question...</p>
<p>I got 4 wrong (no omits) and a 10 on the essay = 750 Writing.
6 wrong on CR = 730.</p>
<p>...I don't think the curve was that bad considering how many I got wrong; let me know if you disagree.</p>
<p>Wow! I'm glad I took the Jan. SAT instead of the March test! The March SAT curve definitely looked much harsher than the Jan. curve. I'll tell you what I got:</p>
<p>I got a 2160 total (CR: 720, M: 760, W: 680).
I got 6 wrong on CR, 2 wrong on Math (I think both were MC), and 7 wrong on the grammar part, with a total score of 10 on the essay.</p>
<p>So I guess the Jan. test was harder than the March one, so the curve got easier. I thought it was the other way around before, so I took the Jan. test. But anyway, I'm going to retake it this fall to try to get a higher score.</p>
<p>About Godot's post... if I got 6 wrong on CR, then why didn't I get a lower score?</p>
<p>Adaman,</p>
<p>For the March test, 6 wrong on CR was a 700; 4 wrong was a 730.</p>
<p>wildchartermage,</p>
<p>The CR and Math curves did seem especially harsh for the March exam. The CR and Math sections must have been exceptionally easy overall. You received a 720 in CR on the January test because 6 wrong (and no omits) means a raw score of 60, and it's not unusual for a 60 raw score to equate to a 720 on the CR scale.</p>
<p>Easier exams and harsher curves may benefit certain students (e.g., students who are extremely careful), but they can definitely hurt students who are careless. They also don't do a very good job of assigning precise scores to students at the very top end of the scale (710-800). In other words, a student earning a 740 on such a test perhaps actually deserves a 750 or 760, but the scale was simply not fine enough (and the test was not hard enough) to discriminate between those three scores. Hence, for a student of very high ability who tends to be just a little careless, the best test may be one that is harder and, therefore, distinguishes better between different scores at the very top end of the scale and is also a little more forgiving of careless errors. Soulside Journey: If you think you deserved higher scores but were just a little careless on the March SAT, I would definitely recommend taking the test again.</p>
<p>Hey Godot, thanks for the information and explanation. I think I was pretty careful on the Jan. test, though it seemed harder than I expected.</p>
<p>How do you know so much about SAT scoring and that kind of stuff I wouldn't normally find in an SAT book?</p>
<p>I got 0 wrong and omitted 1 on the math and got a 770. It seems like I should have gotten a 770 only if I got 1 wrong. Should I protest this?</p>
<p>
[quote]
How did you know so much about the SAT scoring and that kind of stuff I wouldn't normally find in a SAT book?
[/quote]
Well, it's a basic understanding of human nature and test curves. And he also articulated his thoughts well.</p>
<p>
[quote]
I got 0 wrong and omitted 1 on the math and got a 770. It seems like I should have gotten a 770 only if I got 1 wrong. Should I protest this?
[/quote]
No, it's a 770. Deductions of -1 (one omit) and -1.25 (one wrong) round to the same raw score, so they give the same scaled score. Seriously, a math genius should know this stuff.</p>
<p>I called College Board, and they said the "scaled scores" are based on the level of difficulty of the questions. So I guess if the test was easy (like the March test), the curve will be harsh, and vice versa.</p>
<p>Yeah, I said that way back in post #8.</p>
<p>rockclimber, 1 wrong is the same as 1 omitted. You have a raw score: you get -0.25 for every wrong answer and 0 for every omitted answer, and the fractional total is rounded to the nearest whole number (halves round up).</p>
<p>Therefore 1 wrong = 1 omit, 2 wrong = 2 omit, 3 wrong = 4 omit, etc.</p>
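<p>That penalty-and-rounding arithmetic can be sketched as a small function. A minimal sketch, assuming the classic SAT rule of -1/4 point per wrong multiple-choice answer, zero for omits, and rounding to the nearest whole number with halves rounding up (the 54-question section size in the examples is just an illustrative assumption):</p>

```python
import math

def raw_score(correct, wrong, omitted=0):
    """Classic SAT multiple-choice raw score: +1 per correct answer,
    -1/4 per wrong answer, 0 for omits (the `omitted` count neither adds
    nor subtracts); the fractional total is rounded to the nearest
    whole number, with halves rounding up."""
    score = correct - 0.25 * wrong
    return math.floor(score + 0.5)  # round half up (built-in round() rounds halves to even)

# On a hypothetical 54-question section:
# 1 wrong (53 correct) -> 53 - 0.25 = 52.75 -> rounds to 53,
# the same raw score as 1 omit (53 correct, no penalty).
assert raw_score(53, 1) == raw_score(53, 0, omitted=1) == 53
# 3 wrong -> 51 - 0.75 = 50.25 -> rounds to 50, the same as 4 omits.
assert raw_score(51, 3) == raw_score(50, 0, omitted=4) == 50
```

<p>This is why 1 wrong and 1 omit often land on the same scaled score: the quarter-point penalty vanishes in the rounding until the wrong answers pile up.</p>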
<p>sk9, yeah. Unfortunately, the curve is predetermined, so it's what THEY think is easy or not, not what performance dictates (because they already have projected statistics from their past experimental questions). That politics passage was quite difficult, which they didn't realize.</p>
<p>wildchartermage,</p>
<p>It's my job to know everything about the SAT (check my profile). And it comes from an understanding of the definition of standardized tests. By the way, I meant to write, in my first post, that students are compared to a reference group from 1992 (or so), not 2002.</p>
<p>This site should seriously have a FAQ. The same questions seem to spring up all the time.</p>
<p>Noitaraperp, Thanks for your explanation. I really didn't understand how the scoring works and sure didn't appreciate Panic's sarcasm.</p>