<p>Rather than attribute test performance differences to K-12 education, I would conjecture that test exposure has much to do with it. Here are the top 10 cut-off states, with their 2010 sophomore and junior PSAT participation counts:</p>
<table>
<tr><th>Sophomores</th><th>Juniors</th><th>State</th><th>Cut-off</th></tr>
<tr><td>4,748</td><td>4,686</td><td>DC</td><td>223</td></tr>
<tr><td>37,871</td><td>52,043</td><td>MA</td><td>223</td></tr>
<tr><td>55,458</td><td>70,535</td><td>NJ</td><td>223</td></tr>
<tr><td>172,477</td><td>178,084</td><td>CA</td><td>221</td></tr>
<tr><td>60,578</td><td>44,908</td><td>MD</td><td>221</td></tr>
<tr><td>26,839</td><td>33,715</td><td>CT</td><td>220</td></tr>
<tr><td>60,875</td><td>54,159</td><td>VA</td><td>220</td></tr>
<tr><td>19,818</td><td>33,356</td><td>WA</td><td>220</td></tr>
<tr><td>112,973</td><td>154,675</td><td>NY</td><td>219</td></tr>
<tr><td>243,028</td><td>205,659</td><td>TX</td><td>219</td></tr>
<tr><td><b>794,665</b></td><td><b>831,820</b></td><td colspan="2"><b>Totals (96% soph./jr. ratio)</b></td></tr>
</table>
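<p>For readers who want to check the arithmetic, here is a minimal Python sketch. The rows are transcribed from the table above; the variable and function names are my own:</p>
<pre><code># 2010 PSAT participation in the top-10 cut-off states:
# (sophomores, juniors, state, cut-off)
HIGH_CUTOFF = [
    (4748, 4686, "DC", 223), (37871, 52043, "MA", 223),
    (55458, 70535, "NJ", 223), (172477, 178084, "CA", 221),
    (60578, 44908, "MD", 221), (26839, 33715, "CT", 220),
    (60875, 54159, "VA", 220), (19818, 33356, "WA", 220),
    (112973, 154675, "NY", 219), (243028, 205659, "TX", 219),
]

def soph_jr_ratio(rows):
    """Total sophomores, total juniors, and the soph./jr. ratio."""
    soph = sum(r[0] for r in rows)
    jr = sum(r[1] for r in rows)
    return soph, jr, soph / jr

soph, jr, ratio = soph_jr_ratio(HIGH_CUTOFF)
print(f"{soph:,} / {jr:,} = {ratio:.0%}")   # 794,665 / 831,820 = 96%

# The states that test more sophomores than juniors:
print([r[2] for r in HIGH_CUTOFF if r[0] > r[1]])   # ['DC', 'MD', 'VA', 'TX']
</code></pre>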
<p>Although this is only a proxy for a true count of repeated test exposure, it is pretty clear that sophomore testing is quite commonplace in these states. (Indeed, in four of them, DC, MD, VA, and TX, it is more common than testing juniors!) For comparison, here are the 10 midwestern states in which SAT participation is less than 50% (i.e., all of them except MI and IN):</p>
<table>
<tr><th>Sophomores</th><th>Juniors</th><th>State</th><th>Cut-off</th></tr>
<tr><td>15,860</td><td>42,485</td><td>IL</td><td>216</td></tr>
<tr><td>5,095</td><td>22,931</td><td>MN</td><td>215</td></tr>
<tr><td>6,114</td><td>10,502</td><td>KS</td><td>214</td></tr>
<tr><td>29,561</td><td>51,275</td><td>OH</td><td>214</td></tr>
<tr><td>8,282</td><td>14,068</td><td>MO</td><td>213</td></tr>
<tr><td>2,712</td><td>8,466</td><td>IA</td><td>210</td></tr>
<tr><td>2,078</td><td>6,260</td><td>NE</td><td>209</td></tr>
<tr><td>3,632</td><td>20,098</td><td>WI</td><td>209</td></tr>
<tr><td>553</td><td>2,556</td><td>SD</td><td>206</td></tr>
<tr><td>409</td><td>1,851</td><td>ND</td><td>204</td></tr>
<tr><td><b>74,296</b></td><td><b>180,492</b></td><td colspan="2"><b>Totals (41% soph./jr. ratio)</b></td></tr>
</table>
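<p>Running the same check over the midwestern rows (again transcribed from the table above, continuing the sketch):</p>
<pre><code># 2010 PSAT participation in the ten low-SAT-participation midwestern states:
MIDWEST = [
    (15860, 42485, "IL", 216), (5095, 22931, "MN", 215),
    (6114, 10502, "KS", 214), (29561, 51275, "OH", 214),
    (8282, 14068, "MO", 213), (2712, 8466, "IA", 210),
    (2078, 6260, "NE", 209), (3632, 20098, "WI", 209),
    (553, 2556, "SD", 206), (409, 1851, "ND", 204),
]

soph, jr, ratio = soph_jr_ratio(MIDWEST)   # helper defined in the sketch above
print(f"{soph:,} / {jr:,} = {ratio:.0%}")   # 74,296 / 180,492 = 41%
</code></pre>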
<p>You can see that in all of these states sophomore exposure is much less frequent than in the high cut-off states. (In fact, the PLAN test, ACT's analog of the PSAT, is quite popular for testing sophomores in many of these states.) Since prior exposure to the test is a demonstrably important factor in raising test scores, I suggest that some of the difference in performance arises from this difference in exposure alone.</p>