<p>Our ACT (I think it was the ACT, S took both) was administered at our local community college, which is most definitely open in summer. I don’t see why the SAT couldn’t use colleges in summer.</p>
<p>Aren’t students’ scores measured against other test takers from that sitting? It seems like it would be difficult to score well at this particular sitting, given that it’s such a small group of smart and motivated students taking the exam.</p>
<p>You know, it’s funny: I emailed the College Board a year ago to ask why they put the May SAT in the middle of AP testing, and whether they could move it to a better weekend. They responded that they realized this presented a conflict but weren’t sure how best to move it.</p>
<p>I guess they were more willing to put their energy into figuring out how to run a summer SAT for 50 students than into moving the SAT to benefit hundreds of students.</p>
<p>That’s a really good point, Lenny. How are they going to normalize the test if so few students are taking it at that one time?</p>
<p>If they were smart they could have added a URM angle to this “pilot” program. All would be forgiven – lauded, even.</p>
<p>I don’t fully understand how the normalization works, but as I understand it, it is done through the experimental section hidden in one of the sections of the test. That section contains problems that have appeared in the past, and how well people do on it is compared with past results and used to center the curve for the test’s scores.</p>
<p>So the students in the sitting are not compared against each other; they are compared against past results. And depending on how the centering works, it seems that if a small group of students does really well, that could benefit their scores, for the following reason: the strong performance signifies that the group is above average compared to the past population, so their scores should be scaled up to reflect that they are a superior group. Again, this is a conjecture; the actual methodology CB uses is probably proprietary.</p>
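<p>To make that conjecture concrete, here is a toy Python sketch of how shared "anchor" items can separate how hard a form is from how strong the group taking it happens to be. The numbers, the function name, and the simple one-to-one anchor-to-total assumption are all invented for illustration; the College Board’s actual equating procedure is far more elaborate and not public.</p>
<pre>
# Toy sketch: using anchor (common) items to separate "harder form" from
# "stronger group". All numbers are hypothetical.

def estimate_form_difficulty(new_total_mean, ref_total_mean,
                             new_anchor_mean, ref_anchor_mean):
    """Estimate how much harder (positive) or easier (negative) the new
    form is than the reference form, after removing the ability gap
    between the two groups as measured on the shared anchor items.
    (A one-to-one anchor-to-total relation is assumed to keep the toy simple.)"""
    ability_gap = new_anchor_mean - ref_anchor_mean   # group difference only
    raw_gap = new_total_mean - ref_total_mean         # group + form difference
    return ability_gap - raw_gap                      # what's left is the form

# Hypothetical numbers for a small, strong sitting:
new_total_mean, ref_total_mean = 46.0, 40.0      # mean raw score on each form
new_anchor_mean, ref_anchor_mean = 42.0, 35.0    # mean score on the anchor items

diff = estimate_form_difficulty(new_total_mean, ref_total_mean,
                                new_anchor_mean, ref_anchor_mean)
print(f"New form looks about {diff:+.1f} raw points harder than the reference form")
</pre>
<p>On that toy model, the strength of the particular sitting gets absorbed into the ability adjustment rather than into the curve, so a room full of strong test takers doesn’t automatically make the conversion harsher for each other.</p>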
<p>The point is that CB can do whatever they want with the score, and in a special case like this, with a small sample size, they can easily adjust it however they see fit. And announcing ahead of time that they are going to report this as a normal June score casts an even more sinister cloud over the whole process.</p>
<p>No, and they never have been. Questions are recycled, so CB knows with absolute certainty which questions are easy, medium, and hard. CB also uses the ‘experimental section’ to test out new questions across millions of unwilling guinea pigs.</p>
<p>In essence, all 50 kids could score 2300+, or 1550.</p>
<p>Scaling doesn’t work that way. It is based on the number of easy/medium/hard questions. On a typical test, say, miss one question on math and you earn a 770; miss two and you score a 740. But if a test on a different date has one more difficult problem (and one fewer easy problem), missing two might earn a score of 750 (because that test administration was considered more difficult).</p>
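<p>A toy pair of form-dependent conversion tables shows the idea; every number below is invented, and real tables vary by administration:</p>
<pre>
# Hypothetical raw-to-scaled conversion tables for two math sections.
# Keys are raw scores out of 54; all values are invented for illustration.
easier_form = {54: 800, 53: 770, 52: 740, 51: 710}
harder_form = {54: 800, 53: 780, 52: 750, 51: 720}  # gentler curve for a harder form

raw = 52  # two questions short of the maximum
print("Easier form:", easier_form[raw])   # 740
print("Harder form:", harder_form[raw])   # 750
</pre>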
<p>Based on this link, I am under the impression that the experimental section contains the old recycled questions, not the other way around. The experimental section is the link between current and past students, so that scores can be normalized and mean approximately the same thing over time.</p>
<p><a href="http://professionals.collegeboard.com/profdownload/pdf/rn14_11427.pdf">http://professionals.collegeboard.com/profdownload/pdf/rn14_11427.pdf</a></p>
<p>It’s actually both. They add brand-new questions to the experimental section to test them out, particularly for gender and racial impact. But they obviously know which is which. lol</p>
<p>“On a typical test, say, miss one question on math and you earn a 770; miss two and you score a 740. But if a test on a different date has one more difficult problem (and one fewer easy problem), missing two might earn a score of 750 (because that test administration was considered more difficult).”</p>
<p>That’s what’s bugging me (well, part of what’s bugging me). By saying this test was the June administration, colleges think these August/august kids took the same test and are being measured against a group to which they should not be compared. Unless they’re giving them the real June test – now that would be worth that $4,500!!!</p>
<p>Read the document in the link again; it certainly suggests that many different models come into play, and one of them is what I suggested, while another may be what you suggested. It seems certain models and scalings could be affected by a sample that is skewed in some way. Below is a quote: the first part (relative difficulty) could be your method, and the latter (differences in ability) is what I said and what the paper mostly describes.</p>
<p>“Equating formula scores on the new and previous form involves an evaluation of the relative difficulty of the two forms after adjusting for differences in ability of the samples that took the previous form and the new form.”</p>
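<p>For what it’s worth, a standard textbook expression of that idea (which may or may not be what CB actually uses) is linear equating: a raw score x on the new form X is placed on the old form Y’s scale by</p>
<pre>
l_Y(x) = \mu_s(Y) + \frac{\sigma_s(Y)}{\sigma_s(X)} \bigl( x - \mu_s(X) \bigr)
</pre>
<p>where the means and standard deviations are estimated for a common (“synthetic”) population using the anchor items, so the groups’ differing abilities are factored out before the two forms’ relative difficulty is compared.</p>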
<p>This really is just a drop in the bucket in the already massive list of advantages that the wealthy enjoy when it comes to college admissions. But of course, no one can really stand up to College Board because … well, it’s College Board, and their products are near-indispensable in the admissions process. It’s awfully depressing.</p>
<p>This is just mind-boggling corruption. There are kids in New Jersey who go to SAT school for 40 hours a week for 7 weeks or so in the summer. This is worse.</p>
<p>A lot worse. You pay me a lot of money to help you improve your score, and then I turn around and give you a special test that I mysteriously mark as a regular test that seems to have been taken when most people took it. If it’s on the up and up, why hide it and make it seem like a June test? And what happens if these kids also took the June test? There would be two SAT scores in June. Are they going to let them use Score Choice across two different testing sessions so that the composite scores appear to come from one sitting?</p>
<p>The trend in admissions seems to be away from SAT scores, most notably at the University of California, where the subject tests are largely optional. Perhaps CB sees its future in the profitable test prep business, where it can prey on the insecurities and fat wallets of the wealthy.</p>
<p>The only solution would be for colleges to disallow ANY June 2012 test results . . . which would be horribly unfair for those kids who actually take the test tomorrow, but would teach the College Board a much-needed lesson.</p>
<p>Or the colleges could stop accepting SAT scores altogether . . . .</p>
<p>Uh oh, sorry, slipped into fantasy land there for a moment.</p>
<p>I kind of agree with jvtDad. Some of the kids I know didn’t even prep and scored very high (almost perfect).</p>
<p>Just to clarify, for those who just joined the thread, the issue is not just that these kids get the advantage of a $4,500 prep course, but that they get to take the test during the summer, with no distractions, after having been able to focus solely on test prep for the days and weeks preceding the test. </p>
<p>Even without the prep course, this distraction-free environment is a huge advantage that is never available to the kids who have to take the test during the school year, when it often coincides with finals, and always coincides with homework and other commitments, both in and out of school. What kid wouldn’t benefit from having nothing to do for three weeks other than prepare for the test?</p>
<p>And, since it’s a residential program, there are no parents to pressure you, no siblings to bother you . . . sounds like a pretty ideal environment for a would-be test taker!</p>
<p>And don’t worry about having to get up early the day of the test to drive across the county or take a subway across town - the test center is right outside your door.</p>
<p>It would be nice if standardized tests were not required, but that would require more consistency in curriculum and grading policies across different high schools, like in Canada. Unfortunately, there is no good solution without that, since standardized tests intended to give a common measure are not always that well designed or implemented (as this news shows), nor free of quirks that can be gamed (by test-specific preparation) but are not of much value in predicting college performance.</p>
<p>I’m not worried. Nothing stopping any motivated kid from getting a few test prep books and studying 40 hours a week for 3 weeks, or 7 weeks or more. Silly to waste $4500 on what could cost you $40. The fact that a few rich kids have this advantage isn’t going to hurt my kid a bit.</p>
<p>I think the January test date, after a 2-3 week winter break for most, is a good option.
(First semester ends in Dec. here.)</p>
<p>Well, until then, we have the ACT. If only more people realized the SAT isn’t the only game in town.</p>