<p>I just watched Adam Robinson being interviewed on CNN re: the SAT.
He said the belief is that the SAT is 54% accurate in predicting how successful students will be in their first year of college (FairTest says that number is really 20-22%) - and only the first year. The SAT is not a predictor of how students will perform in years 2, 3, or 4. </p>
<p>He admitted that HS grades predict at 50%. Thus, he says the SAT is only marginally better than grades. </p>
<p>He also cautions students to skip the March 2005 SAT, as he thinks it has bugs that have yet to be worked out. </p>
<p>His tip for success: prepare for 2-3 months before the test (no surprise he would say this ;) )</p>
<p>Is this the same Adam Robinson who co-authors books for PR? If so, his CNN recommendation to skip the March test is exactly the opposite of what our local PR rep has been saying publicly.</p>
<p>Blue, that's him! Yes, he definitely said to skip March and take it in June. Ask your local rep why his advice differs from Robinson's CNN interview comments.</p>
<p>So the SAT is marginally better than grades, and also marginally better than a coin flip? That would certainly simplify the college admissions process (going to the coin flip)... "Heads or tails? Heads it is... do you elect to attend or decline?"</p>
<p>All the CollegeBoard folks claim for the test is an association with first-year college performance, so let's not be too quick to blame them. They are, after all, just a business. The problem was that the UCs, after 5 years of study, found that it didn't even do that, especially for non-white students.</p>
<p>But, as all colleges and universities know (and as the CollegeBoard itself has closely studied), there is a clear association between SAT scores and 1) family income in the area surrounding the school (or of the student body, at private schools) - and note, NOT necessarily one's own family income - and 2) the highest educational level of one parent. So, if you want to select on that basis, it is MUCH better than a coin flip.</p>
<p>I am very conflicted about the SAT test. On the one hand, I see value in it for both the student and the college. Score ranges published by the colleges give students some idea of how they might compare academically with the overall student body. For colleges, it gives them a uniform, though imperfect, measure of applicant abilities. Considering test scores in combination with GPA, class rank, and school profile gives them a somewhat more complete picture of the student.</p>
<p>My problem is with the extreme measures some students take to get an optimum score. There are measures the colleges could take to rein this in, by taking into account when students have taken the test more than two times. If I were an adcom looking at two students with an SAT score of 1400, I would give the nod to the student who got that score in one sitting versus the one who got it after 3 sittings.</p>
<p>Let's take a student who has taken the SAT 4 times, raising his score from 1320 to 1410, who is taking a schedule of 6 APs senior year, has a smorgasbord resume a mile long, and has applied to 12 colleges. Compare that with a student who has taken the SAT once and gotten a 1410, is taking 3 AP courses (Calc BC, CompSci AB, and Physics C), has a compact resume with a strong focus on swimming and some demonstrated accomplishment (all-county selection or Junior Olympics, for instance), and has applied to 5 colleges. I would far rather admit the swimmer to my university because she seems like a balanced and interesting person, one who is able to identify her strengths and desires and is willing to work at developing them. To me the other student seems somewhat obsessive, willing to do whatever it takes to get what he wants. He seems emotionally unhealthy and immature. And the willingness to do whatever it takes can lead some people into ethically grey territory.</p>
<p>So the SAT is one tool that should be used in the college admissions process. And I submit that more than just the score can tell the adcoms something about the student.</p>
<p>Charles, I think that the schools have sent a pretty powerful message that they WANT to see the highest scores possible and REPORT the highest SAT scores to the press. </p>
<p>If the schools truly wanted to limit the number of sittings, they could use a panoply of devices: averaging scores, discounting the third or fourth sitting by 10%, and numerous others. The reality is that, except for large state systems such as California -by far the most misguided system when it comes to standardized tests- or Texas, the vast majority of schools openly encourage multiple sittings and use the best individual scores. </p>
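<p>Just to illustrate what such a device might look like in practice (this is a made-up policy, not any school's actual formula; the 10% figure echoes the discount mentioned above):</p>
<pre>
# Hypothetical "discounting" policy, sketched in Python. Assumption: scores
# are listed in test-date order, and each sitting after the second loses
# another 10% of its value.

def discounted_best(scores):
    """Return the best score after discounting later sittings."""
    adjusted = []
    for i, score in enumerate(scores):
        penalty = 0.10 * max(0, i - 1)   # sittings 3, 4, ... lose 10%, 20%, ...
        adjusted.append(score * (1 - penalty))
    return max(adjusted)

# Four sittings: the 1410 earned on the 4th try counts for only 1128,
# so the best adjusted score is the 1350 from the 2nd sitting.
print(discounted_best([1320, 1350, 1380, 1410]))  # -> 1350.0
</pre>
<p>Under a rule like that, retaking past the second sitting would rarely pay off, which is exactly the deterrent effect Charles is after.</p>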
<p>There is no doubt in my mind that a student who reaches 1500 on his fourth trial is in a much better position than a student who earned a 1400 on his first. No matter how much "discounting" the school may apply to a multiple score, they will NEVER round up a score because it was the result of a single sitting. The reality is that it is pretty apparent that the adcoms rely on technical aids to transfer most of the numerical data and produce a summary of the GPA, SAT, and other scores. Chances are that adcoms never see the number of trials, only the result of the formula used by the college. Does anyone really believe that the HYPS of this world recalculate the finer details of tens of thousands of transcripts and scores? </p>
<p>While I do think that the number of trials may provide some tools to differentiate two equal candidates, this would mean nothing for candidates who are excluded at the early stages. </p>
<p>Some of us might still believe that the review of files is truly holistic and that colleges carefully assess each file. I do believe that this work takes place, but only in the secondary rounds -after a large number of the candidates have been rejected based on preliminary readings. It is for that reason that I strongly believe that, for unflagged candidates, class rank matters, absolute GPA matters, and the highest SAT scores matter tremendously -and in that order.</p>
<p>The reason one might skip the March test is to avoid being a guinea pig. Two months before its unveiling, there is only one commonality: rampant speculation about the level of difficulty of the test. The publishers have offered books that are mere pokes in the dark. Based on my own review of the preliminary tests released by The College Board, I do not see how TCB will achieve its objective of maintaining the longitudinal integrity of the test. In other words, I do not see how the average score will remain at around 520 in math and verbal. The new tests are supposed to have less than 10% new material, but that is not the case in the current tests released by TCB. </p>
<p>March 2005 will be hit or miss. TCB might decide to make it super easy or super difficult. One thing is, however, certain: short of ETS/TCB, nobody has a clue about the contents of the test. The fear of the unknown is a powerful motivator to skip the "premiere".</p>
<p>I'm not sure where the OP's figures came from, but someone is either misreading or misrepresenting the CB's own research. I have before me two different studies, one published in 2000, and one published in 2001.</p>
<p>The 2000 CB study compared the recentered SAT to the non-recentered SAT, and both to HS GPA, for prediction of 1st-year college grades. Results (M = male, F = female): HSGPA .38 M, .34 F; SAT .33 M, .35 F; both .44 M, .43 F. So the gain from using the SAT over HSGPA is pretty minimal - 0.06 for males, 0.09 for females.</p>
<p>The other study was a meta-study, and found weighted average correlations of .36 for the SAT, .42 for HSGPA, and .52 for both.</p>
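<p>To put sizes on those numbers, here is a quick Python sketch using the coefficients quoted above (the arithmetic is mine, not from either study; r-squared is the usual "share of variance explained" reading):</p>
<pre>
# Correlation coefficients transcribed from the two studies quoted above.
studies = {
    "CB 2000 (males)":   {"hsgpa": 0.38, "sat": 0.33, "both": 0.44},
    "CB 2000 (females)": {"hsgpa": 0.34, "sat": 0.35, "both": 0.43},
    "Meta-study":        {"hsgpa": 0.42, "sat": 0.36, "both": 0.52},
}

for name, r in studies.items():
    gain = r["both"] - r["hsgpa"]   # what adding the SAT buys over HSGPA alone
    r2 = r["both"] ** 2             # r^2 = share of grade variance explained
    print(f"{name}: SAT adds {gain:+.2f} to r; combined r^2 = {r2:.0%}")
</pre>
<p>Even the combined predictors explain less than 30% of the variance in first-year grades, which puts the "marginally better than grades" comment earlier in the thread into perspective.</p>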
<p>The correlation data are not predictive but merely represent the relationship between the variables in question. The difficulty is the outcome measure: 1st-year college grades. No one would assume that a grade earned at a local community college is equivalent to a grade earned at MIT, yet in the large correlational studies of SAT scores, HSGPA, and college grades, that assumption is made, and so the results are quite suspect. SAT scores certainly provide a standardized measure for the colleges to compare students. HSGPA doesn't, class rank doesn't, and even AP scores (due to teaching differences) do not allow reliable comparisons. While one can certainly argue that there is no predictive difference between a 1400 and a 1500 for 1st-year college performance at a given college, colleges use SAT scores because they are a useful sorting device early in the selection process.</p>
<p>mol10e, have you actually read any of the studies on this issue, or are you engaging in flights of speculation? I suspect the latter, as your comments above have no relationship to the studies done.</p>
<p>Now for a bit of a stats lesson. In your first sentence, you should have referred to the "correlation coefficient", not "correlation data" - unless you were referring to the collection of correlation coefficients in one of the studies, in which case the term "data" could be somewhat appropriate. Further, while it is true that correlation between two data sets is not causative, it IS predictive. That's the whole purpose behind some statistical exercises. </p>
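<p>For anyone who wants the mechanics: the standard route from a correlation coefficient to a prediction is the least-squares regression line, y-hat = y-bar + r(sy/sx)(x - x-bar). A minimal Python sketch (the toy data are made up for illustration, not drawn from any study; requires Python 3.10+ for statistics.correlation):</p>
<pre>
import statistics as stats

# Hypothetical sample: SAT scores (x) and 1st-year GPAs (y).
x = [1200, 1300, 1350, 1400, 1500]
y = [2.9, 3.1, 3.0, 3.4, 3.6]

r = stats.correlation(x, y)                    # Pearson correlation coefficient
slope = r * stats.stdev(y) / stats.stdev(x)    # regression slope derived from r
intercept = stats.mean(y) - slope * stats.mean(x)

# The fitted line turns the correlation into a point prediction:
print(f"r = {r:.2f}; predicted GPA at SAT 1450: {intercept + slope * 1450:.2f}")
</pre>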
<p>mol, I'm sorry you don't find the results of the CB's own research consistent with your world view. That does not mean you can dismiss the results so easily.</p>