<p>I'm only answering this so that others later do not misconstrue my original post. I was not the one who presented the 2400/1700 comparison. My statement referenced a smaller "price point" range.</p>
<p>Epiphany,</p>
<p>We already have a test-optional environment, from the colleges' point of view. Most colleges choose to require all applicants to provide test scores, for all the reasons mentioned earlier. Are you suggesting that we should compel all colleges to allow students to apply without submitting test scores? Are you including SAT IIs, ACTs, and AP scores as well? What is wrong with letting the colleges decide for themselves on this matter? You seem to have enormous faith in adcoms on most issues; why not on this one? Some colleges have chosen not to require test scores, but they do so in an environment in which almost everyone else is taking them, which helps it work for the colleges that make it optional. I fail to see what is wrong with the current system other than that you don't like it.</p>
<p>What I find interesting is the passion with which the GPA boosters attack the importance of test scores, while the opposing view offers little but the occasional observation that some schools have grade inflation. The claim that "In no way, shape, or form is it fair to attribute A's in challenging electives or content-rich AP classes to grade inflation" is a complete non sequitur. If all students in a given school's class get A's, but the students consistently do poorly on nationally-standardized tests of the same subject - you have grade inflation, "content rich" or not.</p>
<p>Grades and test scores are both imperfect but still valuable predictors of collegiate "success." The 2001 UC study defined "success" as academic achievement in the students' freshman year of college - a pretty thin basis for its conclusions, in my opinion. The two measures tend to measure different types of "success." Grades tend to respond to diligence coupled with native intellectual ability, while tests skew more towards a broad basic information base and quick decision making skills. A "genius" may score well or poorly on either measure, depending on the nature of his or her "genius." That's why a broad array of data points - grades, test scores, extra-curricular activities, personal assessment of qualified individuals with knowledge of the student's academic achievements - that is, the content of the usual college application - is the most rational approach. All of the data points have significance.</p>
<p>I agree completely with Kluge, on multiple data points.</p>
<p>I just quibble (for fun, I guess) with those who contend that the SAT is a worthless test that doesn't reveal a thing about the test taker.</p>
<p>If the scores are less than one standard deviation apart, there may not be much difference in 'smartness'. However, when you get to 1.5, 2, or 3 standard deviations apart, the difference is absolutely there and visible.</p>
<p>"If the scores are less than one standard deviation apart, there may not be much difference in 'smartness'. "</p>
<p>Yes, but that's not what you typically hear from the SAT fans. One standard deviation is approximately 100 points on each subtest. I don't hear many proponents (who inevitably have scores they are more than happy to tell you about) saying that a 2100/2400 or a 1400/1600 are comparable. </p>
<p>Shoot, I read about people obsessing over 50-100 point differentials on the TOTAL score all the time.</p>
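<p>(For concreteness, the "standard deviations apart" arithmetic in the posts above can be sketched in a few lines. The figure of roughly 100 points per subtest is the one cited in this thread, not an official College Board statistic.)</p>

```python
# Sketch: expressing the gap between two SAT section scores in
# standard-deviation units, using the thread's rough figure of
# SD ~ 100 points per subtest (an assumption, not official data).

def gap_in_sds(score_a: int, score_b: int, sd: int = 100) -> float:
    """Absolute gap between two section scores, in SD units."""
    return abs(score_a - score_b) / sd

# 700 vs. 650 is only half an SD apart -- arguably noise.
print(gap_in_sds(700, 650))  # 0.5
# 750 vs. 550 is two SDs apart -- much harder to dismiss.
print(gap_in_sds(750, 550))  # 2.0
```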
<p>I agree with Kluge's response.</p>
<p>To use an analogy, I think the SATI and SATII are similar to the NFL Combine tests given to potential draftees. Anyone can train for them and get better, but people have a ceiling. Also, if you've been training heavily for 4 years in a good program, that is obviously a big advantage. As for native athletic ability, well, there are some "workout wonders" who aren't that great on the football field. There was a Green Bay lineman who was supposed to become the best of all time based on his combine stats, and he ended up being a bust. Other people tear up the NFL Combine and are able to translate it to the football field (e.g., Brian Urlacher). Jerry Rice didn't have an impressive 40-yd. time, but he became the greatest receiver of all time. He did have slight limitations, but he was perfect at what mattered most (although Bob "Bullet" Hayes might have been able to stretch defenses better with his Olympic gold-medalist speed).</p>
<p>There are plenty of things that aren't measured by the NFL combine. A running back's feel for hitting the holes is just one example. Similarly, a sense of which direction to pursue in scientific investigation (one aspect of creativity) is obviously not measured by SATIIs. However, it's hard to be creative in the future if you don't know the current rules extremely well (750+ on the SATII's.) </p>
<p>It's obvious why kids in bad high schools may not do well on the SAT despite abundant intelligence, so I won't explain it again. In summary, I would say the SAT's do say something, but not everything. Some people hate the SATI because it's so artificial, but I personally looked at it as just another kind of mental challenge.</p>
<p>There is a difference between recognizing the value of a national standardized test as an important element of an application package, and trying to ascribe specific qualities such as student's intelligence, aptitude, or achievement to the test itself.</p>
<p>In fact, it is precisely such faulty attributions that have forced the College Board to change the name of the SAT from Scholastic Aptitude Test to Scholastic Assessment Test (with the beautiful redundancy) to finally ... nothing, as the College Board was forced to announce, "Please note that SAT is not an initialism. It does not stand for anything."</p>
<p>So we now know that the SAT does not measure Scholastic Aptitude or Scholastic Assessment. Do we really need to push the envelope further for the College Board to clarify that the SAT does not measure intelligence? After all, they have already run out of plausible terms for the letter A!</p>
<p>PS Of course, they now have an entire alphabet at their disposal, since the official name of the "beast" is the SAT Reasoning Test. Ironically, that is the best name so far.</p>
<p>collegealum:</p>
<p>Your analogy would have worked better if you had used the NFL's own cognitive test, the Wonderlic. Not sure exactly what it measures, but kids from top academic schools have scored highly on it. Of course, the NFL doesn't care much about the Wonderlic score if a drunk can kick a 50-yard field goal consistently.</p>
<p>[quote]
The claim that "In no way, shape, or form is it fair to attribute A's in challenging electives or content-rich AP classes to grade inflation" is a complete non sequitur. If all students in a given school's class get A's, but the students consistently do poorly on nationally-standardized tests of the same subject - you have grade inflation, "content rich" or not.
[/quote]</p>
<p>Whoa.</p>
<p>No, I'm sorry: YOURS is the non sequitur. My statement (about the quality of high schools) was limited to grades, not to SATs. The earlier argument, the old tired one, is that the SAT equalizes grades -- between, on the one hand, unchallenging courses and/or easy graders (where half the senior class has a 4.0 GPA or there are 24 Vals), and on the other hand, courses so difficult and advanced that many freshmen at elite U's would be challenged by them. The SAT does not "correct" for GPA. The SAT is a DIFFERENT measure, unrelated to the GPA, unrelated to the quality of high school classroom achievement. The SAT is not an achievement test. That's why the strict numbers game doesn't work. It is not meaningful to use a GPA as a stand-alone, nor to use the SAT as a stand-alone, nor to judge one by the other. It may be meaningful to look at lots of standardized test scores and compare them (for one student), as it might be to look at lots of transcripts if available (community college, high-school-level courses taken in a pre-college program, plus all on-campus high school courses) and compare them for one student.</p>
<p>But the claim that 2 equal SAT scores paired with 2 divergent GPAs (for the same 2 students) show that either one school had grade inflation or one of the 2 students is underperforming rests on information that is simply not available from pairing a GPA at a random school with that same student's SAT.</p>
<p>Such a conclusion shows a failure to understand the basic data (i.e., the basis for comparison).</p>
<p>Nor did I ever say or imply anything about an entire class or school having poor SAT scores PLUS sub-4.0's as somehow signifying a rigorous curriculum. I said no such thing. College admissions committees do not select entire groups of students by the schools they attend (unless, yes, there is a longstanding history of a feeder relationship, such as with some East Coast LACs). Each student is looked at individually, and should be. It would be highly unlikely that at a truly demanding private school, everybody would have the same narrow range of unimpressive scores. Not going to happen. But it happens all the time that many seniors from easier publics have 4.0's while also having somewhere between good and great scores. That doesn't say anything about the quality of those courses. A student from a difficult school with the SAME SAT score may in fact be a much superior student, judging from performance in classes far more intellectually demanding. It's not the SAT that's "correcting." It's using the available data about the content, standards, etc. of the courses that is "correcting."</p>
<p>Epiphany,</p>
<p>Given the general correlation of grades and IQ with the SATs, both I and II, it is extremely reasonable for an adcom who is unfamiliar with a particular HS to use the SAT scores (both the average and the range) to get a sense of the quality of the competition. If he had data on the grades of students at the school and their SAT scores, he would have some ability to place the student's 3.0 or 4.0 GPA in context. The fact that you continue to assert that this is untrue shows only how deeply you have your head buried in the sand. This is an inexact process, and denying adcoms access to valuable test data only makes the process of triangulation harder.</p>
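<p>(A hypothetical sketch of that "triangulation": if an adcom had a school's score distribution, placing one student's score in context is a one-line z-score computation. The schools, scores, and helper name below are all invented for illustration, not real admissions practice.)</p>

```python
# Illustrative sketch: placing a student's score in the context of a
# school's SAT distribution. All numbers are made up.
from statistics import mean, stdev

def school_z(score: float, school_scores: list[float]) -> float:
    """How many SDs a score sits above or below the school's average."""
    return (score - mean(school_scores)) / stdev(school_scores)

# Two hypothetical schools whose applicant pools differ sharply:
school_a = [1050, 1100, 1150, 1200, 1250]  # weaker pool
school_b = [1350, 1400, 1450, 1500, 1550]  # stronger pool

print(school_z(1400, school_a))  # far above school A's average
print(school_z(1400, school_b))  # merely average at school B
```

The same 1400 means very different things at the two schools, which is the point being argued above.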
<p>Perhaps I missed this, but what about high schools that have given up ranking? No matter how "good" or "bad" the hs, you could still tell, roughly, who the best students were (various ranking systems aside). If you abandon the SAT and have no class ranking, how are colleges supposed to evaluate applicants? </p>
<p>I'm not a big fan of the SAT and I really think our high school students are over-tested. But there really must be something to use as a quantitative comparison because colleges are not going to examine the personal lives and intellectual curiosity of each applicant.</p>
<p>Ikf,</p>
<p>You're right, but even high schools that don't rank provide enough information to determine where a student stands at the school.</p>
<p>I think that there are dangers in talking about the SAT in the abstract, without taking into account the very different ways it's used in schools with widely different admissions pools. Many, perhaps even most, top schools have studied what not requiring the SAT would do to their admissions process, to the quality and make-up of the applicant pool and admitted class, and how it would affect their prestige (there's definitely a cachet to requiring the SAT, or even better, the SAT plus SATIIs) and rankings.</p>
<p>It's interesting how uniformly things have shaken out: most of the highly selective schools--who have to draw fine distinctions between lots of very qualified applicants--still seem to feel that the benefits of requiring the test outweigh the burdens of requiring it. And I think lots of these schools take seriously the downside of the testing culture. </p>
<p>On the other hand, it's getting hard to find LACs below about #25 in the U. S. News list that still require the SAT, particularly on the East Coast; yet it's worth noting that institutional studies at Lafayette made them abandon an SAT-optional policy, while Mount Holyoke and Bates have run similar studies only to find that they can live without it. </p>
<p>And, of course, large, numbers-driven schools will continue to use the test because of the economies it brings. Anyway, it's not like colleges haven't thought about the problems mentioned in this thread: it's just that the answers vary according to institutional context.</p>
<p>It's the fairest measure available (along with the ACT). The three sections (critical reading, math, and now writing) certainly are "coachable" in the sense that any reasonably intelligent student can do quite well on them merely by being a conscientious and attentive student throughout his or her academic career. Parents of ANY income can help their kids do extremely well on the SAT or ACT simply by TAKING THE KIDS to THE LIBRARY!!! Encourage reading. No one needs to be rich in this society to provide reading material to their children. Oh yes, and parents can also help their kids do well on the SAT and ACT by backing off a bit on the sports and emphasizing scholarship more. It's pathetic how many parents think their kids are going to college on their soccer skills instead of from the neck up. Try turning off the ESPN and handing your kid a book.</p>
<p>
[quote]
No one needs to be rich in this society to provide reading material to their children. Oh yes, and parents can also help their kids do well on the SAT and ACT by backing off a bit on the sports and emphasizing scholarship more.
[/quote]
Very true, mammall.</p>
<p>I am not a big proponent of the SAT1, but it does provide one additional item of data for adcoms to consider. Mr Murray seems to suggest that SAT subject tests would be as valid a way to predict college performance. That is probably true, but most students would rather take one standardized test than three.</p>
<p>I know our son scored in the upper 90s percentile on the (too many) standardized tests he took, so it was no surprise when he did the same on his PSAT/SAT1.</p>
<p>I would not have a problem with more student choice in this matter. However I do not see any advantage with substituting 3 SAT2's for the SAT1 test, unless it is just a scheme to enrich ETS a bit more.</p>
<p>And regarding Mr Murray's problem with test prep companies, does he think that they wouldn't morph into SAT2 test prep companies within weeks of a change?</p>
<p>I agree with all that was said so far by Allmusic.
Take it once. Prep all you want, but take it once.
Someone brought up the example of a driving test. However, you can't fail the SAT the way you can fail a driving test. There is no passing score.
And as for the SAT showing whether someone's intelligence is superior to others'? If you take it unprepped, of course it does. Are there kids who take it unprepped? I believe that yes, there are.</p>
<p>"Unprepped" is sort of a meaningless concept here. Someone who reads a lot is more prepped than someone who does not. Someone who does a lot of math or writes a lot of short essays for school is more prepped. Given diminishing returns, the current attitude of letting folks take it as often as they want and superscoring seems like the fairest approach. If you let three kids take multiple IQ tests (varying the questions), the results are not valid for stating what the students' IQs are, but they are valid for comparing the three students.</p>
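<p>(Since superscoring comes up here: it just means keeping the best score per section across multiple sittings. A minimal sketch, with made-up section names and scores:)</p>

```python
# Sketch of superscoring: keep the best score per section across
# multiple sittings. Section names and scores are hypothetical.

def superscore(sittings: list[dict[str, int]]) -> dict[str, int]:
    """Return the best score achieved in each section across sittings."""
    best: dict[str, int] = {}
    for sitting in sittings:
        for section, score in sitting.items():
            best[section] = max(best.get(section, 0), score)
    return best

sittings = [
    {"reading": 680, "math": 720, "writing": 650},
    {"reading": 710, "math": 700, "writing": 690},
]
print(superscore(sittings))  # {'reading': 710, 'math': 720, 'writing': 690}
print(sum(superscore(sittings).values()))  # 2120
```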
<p>Here's what I've seen in our hs (from both naviance scattergrams and just knowing the students)...</p>
<p>Within a certain SAT range (1300 to 1600, or 2000+), there doesn't seem to be a correlation between admissions and relative scores. I've seen kids with high 1300s get into Ivies that kids with 1500+ didn't. And that's comparing apples to apples - similar GPAs, strength of curriculum, etc. It had more to do with the other stuff in the application package, and I'm not talking about URMs or development cases.</p>