<p>I recently looked at all of my kids' SAT scores, since the youngest just got scores from the May test and I was curious about how they compared to siblings' tests. I hadn't looked at all of the scores at once before. I was surprised to discover that for each kid, their scores in October as Seniors went down a little from the Junior year scores.</p>
<p>I asked them to do an informal survey of their high school and college friends. Nearly everyone said that their Senior SATs went down. Some reported going down a lot, and some only went down by 10 points. Yes, a few did report scores that went up. Out of 30 kids, however, more than 2/3 reported that their Senior year October scores went down.</p>
<p>Of course, this is purely anecdotal. I'm sure many of you will report scores that went significantly up -- still, I was wondering whether this could be a trend? Perhaps more competitive students take the October SAT, so that the curve is higher and negatively affects other kids? If so, then perhaps taking a later SAT as Seniors might help those kids curve into higher scores? A stretch from my limited sampling, but I was wondering....</p>
<p>I’m wondering whether there are other factors in play here.</p>
<p>The spring tests come at the end of their respective coursework, so it’s all fresh (relatively) in the students’ minds. Whatever tests they take in October, they’ve pretty much self-studied.</p>
<p>I just realized I’m talking about subject tests and you may be referring to the reasoning test.</p>
<p>This is just a theory. The October test is full of seniors who have probably taken the SAT once or twice already. Many of them study or take a tutoring course in the summer, and they are gunning to improve their scores. The bell curve shifts slightly because of this. That said, some of the really good testers probably won’t have to take it again in October because they already have their good scores. So maybe there is an opportunity for second-tier people close to the top to improve their scores. After January, all the juniors who take it cold for the first time probably shift the curve back to normal.</p>
<p>No, I don’t think it has anything to do with any harsher curves or harder tests. To the best of my knowledge, the curves are determined by the tests’ difficulty level, and they don’t adjust based on how well most people did on a given administration.</p>
<p>The SAT curve (which calculates a 200-800 score from the raw score, i.e., the number of questions answered correctly minus 0.25 per multiple-choice error) does not depend on who is taking the test, whether an infinite line of monkeys or a group of math professors.</p>
<p>Using the (NOT pre-determined) performance of the test takers on the equating sections (also referred to as “experimental” sections), which contain questions used previously, the difficulty of the test is determined. The test difficulty alone determines the curve, which is purely meant to make a 700, say, mean the same no matter which test was taken. The scores (and the corresponding percentiles) of the test takers are not adjusted to match a bell curve.</p>
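<p>As a toy illustration of the two posts above (all numbers invented, not actual College Board tables), the raw-score formula and a form-specific raw-to-scaled conversion might look like this, where a harder form maps the same raw score to a higher scaled score:</p>

```python
# Toy illustration of SAT scoring: a raw score is computed from
# right/wrong counts, then mapped to a 200-800 scaled score by a
# conversion table fixed for that test form. The table values below
# are invented for illustration, not real College Board data.

def raw_score(correct: int, wrong: int) -> float:
    """Raw score: +1 per correct answer, -0.25 per wrong
    multiple-choice answer (omitted questions cost nothing)."""
    return correct - 0.25 * wrong

# Hypothetical conversion tables for an easier form and a harder form:
# the harder form awards a higher scaled score for the same raw score,
# so that a given scaled score means the same on either form.
easy_form = {50: 700, 45: 660, 40: 620}
hard_form = {50: 720, 45: 690, 40: 650}

raw = round(raw_score(correct=47, wrong=8))  # 47 - 2 = 45
print(raw)             # 45
print(easy_form[raw])  # 660 on the easier form
print(hard_form[raw])  # 690 on the harder form
```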
<p>If in fact seniors do less well in October, a reasonable explanation would be Highland Mom’s …</p>
<p>My October score in my senior year went up. Unlike my May test from junior year, I didn’t prep at all. I didn’t even get to bed early or eat a good breakfast. Go figure.</p>
<p>fig, thanks for the explanation, I guess it was a bad theory from me. Anyway, what you are saying is that the curve is set based on other previous tests from you don’t know when. So you can randomly get a bad or good curve depending on who took the experimental tests in the past. Although I think this is a fairly decent way for the CB to go about it.</p>
<p>The basic idea is this: suppose the raw scores are lower than average for a given test date. Is this because the test was harder than average and the students were typical, or is it because the test was average but the bunch of students taking the test this time were not very good?</p>
<p>In the first case (hard test), the curve must account for that by giving a higher scaled score than usual for a given raw score, in order for the test to be fair. In the second case (poor students), the curve should simply be typical, and many students will get poor scores.</p>
<p>The CB sorts these two cases out (in general, there will be a mixture of the two) by using questions in the experimental sections that have been used before (i.e., verbatim repeats), and comparing the performance of the current students on those repeated questions with the performance of students on the same questions from a previous test date. This allows for a separate determination of the quality of the current students. All this can be done only after the test has been administered.</p>
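<p>A minimal sketch of that equating logic (hypothetical numbers and a simple ratio model for illustration; the real procedure uses far more sophisticated statistics):</p>

```python
# Sketch of the equating idea: compare the current cohort's performance
# on repeated (anchor/equating) questions against a reference cohort's
# performance on those same questions, to separate "harder test" from
# "weaker cohort". All numbers are invented for illustration.

def cohort_ability(anchor_pct_now: float, anchor_pct_ref: float) -> float:
    """Ratio > 1 means the current cohort is stronger than the
    reference cohort on the repeated questions; < 1 means weaker."""
    return anchor_pct_now / anchor_pct_ref

def form_difficulty(raw_mean_now: float, raw_mean_ref: float,
                    ability: float) -> float:
    """After removing the cohort-ability effect, the remaining raw-score
    gap is attributed to the test form itself.
    Ratio < 1 means the current form is harder."""
    return (raw_mean_now / raw_mean_ref) / ability

# Raw-score means are down 10% this month...
ability = cohort_ability(anchor_pct_now=0.62, anchor_pct_ref=0.62)
# ...but the cohort did equally well on the anchor questions (ability 1.0),
# so the whole gap is attributed to a harder form, and the curve is then
# made more generous to compensate.
difficulty = form_difficulty(raw_mean_now=36.0, raw_mean_ref=40.0,
                             ability=ability)
print(ability)     # 1.0
print(difficulty)  # 0.9 -> current form roughly 10% harder
```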
<p>Hmm, this is interesting and a little different from what you said before. It seems that you did not explain how the “hard” test is determined; you just explained how the “quality” of the current students is determined compared to some population in the past. Even with what you just said, it is a comparison of the current set of students with some previous set of students, and no matter how you slice it, there is a possibility of things getting skewed slightly one way or another.</p>
<p>^Conceivably, yes, these methods can fail. For example, if only a bunch of math professors were taking the test, there would be a problem. They would all do so well on the repeated (equating) questions that determining the difficulty of the test would be very hard. The test makers would realize that a bunch of sophisticated math people had taken the test, but the distribution of their equating scores would be so skewed as to be useless. The great advantage of the SAT from a test maker’s point of view is that many thousands or hundreds of thousands of people from a reasonably stable “cohort” (i.e., high school students of a narrow age range) take the test in any given month, with a tremendous variety of study habits, backgrounds, etc. Statistics is primarily useful and powerful when the numbers become large.</p>
<p>Anyone interested in this stuff should read the College Board white paper on how SAT scoring is done, as opposed to reading just what I have to say :)</p>