<p>Considering the sheer volume of SAT essays to be evaluated, the need to standardize scoring, and the need to avoid charges of cultural and other discriminatory biases, the human graders are provided with what some might call a restrictive grading rubric. </p>
<p>Computer analysis for SAT essays isn’t quite there yet either. Many articles written for the mass media and higher-ed journals have noted that the human and computer scoring rubrics tend to strongly favor those who can write longer essays, no matter how illogical or incoherent those essays may be. Those who can easily fluff up their essays to increase length quickly would have a substantial advantage over those who can’t.</p>
<p>Amen. Moreover, most normal people with normal social skills are going to want to use a GPA simply as a quick filter and evaluate … here’s a concept … the actual PEOPLE in front of them, whether that is through a phone call, Skype or an in-person interview. There are characteristics of people that you simply don’t get when you reduce them all to grades, the way that Beliavsky prefers to. Beliavsky, it’s almost as though you don’t realize that simply being book-smart isn’t remotely the key to success in the workplace.</p>
<p>But you keep acting as though it matters – as if students with a 3.8 are appreciably better / smarter / more worthwhile than students with a 3.75 or whatever. Do you actually ever interact with humans in your job? Because it seems like you’re trying to find whatever heuristics you can to avoid actually dealing with humans. I don’t mean this disrespectfully, but this sounds like the stereotype of how someone who is far on the spectrum would propose “selecting” people for something.</p>
<p>This is highly dependent on the individual student and the academic rigor/level of the high school attended. For instance, I’ve known many undergrad classmates at my LAC who were straight-A students who maxed out on AP/IBs and then subsequently floundered to the point of an academic suspension or crawled to graduation with 2.x undergrad cumulative GPAs. </p>
<p>On the flipside, I’ve also known plenty of HS classmates who were C/D students at my urban public magnet who found undergrad…even at some elite colleges they transferred into, like Columbia…to be comparatively easier than HS, and it was reflected in their higher GPAs, mostly in STEM fields. </p>
<p>Moreover, while hard work is essential, it alone isn’t enough to determine success in college academics or life. After all, other factors such as whether that hard work is applied appropriately and intelligently will also factor into whether a given enterprise is successful or not. </p>
<p>While “smart but lazy” types aren’t likely to succeed in the real world without much charity and luck, the same could be said for hard workers who are mindless about the application of their work ethic, to the detriment of the goals they and their employers are trying to achieve.</p>
<p>Okidoki, should we run with Beliavsky’s belief and assume he is actually correct about the manner in which the SAT essays are graded? Can anyone here conclusively enlighten this forum as to whether the company contracted by ETS/TCB actually relies on computerized SCORING? What is known is that there is a small army of readers – with perhaps dubious qualifications, as they are mostly culled from the HS ranks. </p>
<p>People interested in this technology might be well-served to google the name Les Perelman (yes, of MIT, Mrs. QM), who has a long history of exposing the flaws in the Pearson/ETS methods. He is particularly adamant about how easy it is to fool the e-rater and … the boxed-in readers. </p>
<p>Not to fuel the debate about automated transcript reading and analysis, but I am afraid that many miss the more important part: there are plenty of r</p>
<p>When I’ve reviewed resumes for a job opening in my office, it’s never been necessary to look at grades in order to winnow down the applications to a reasonable number–I can do that by looking at the resume to see what the person has been up to in a previous job. I suppose it would be a bit harder for people coming right out of college.</p>
<p>The raw correlation of 0.34 between the SAT-W and FYGPA is higher than the correlations of the other sections with FYGPA and is only slightly lower than the correlation of high school grades with FYGPA.</p>
<p>What has the predictability of the SAT to do with my post that addressed the supposed Mechanical Turk that works for ETS? A non sequitur, if I ever saw one. </p>
<p>And, fwiw, how many discussions do we need about this latest twist you brought up? It is a matter of which study you want to follow. ETS/TCB had an arsenal of studies at its disposal when it prepared to battle the UC idiots led by Atkinson and their hired-gun “scientists”. They never had to use those megastudies, as the UC, in its infinite wisdom, offered a compromise that was a Pyrrhic victory for the UC and a giant loss for generations of test takers with the addition of the quasi-useless writing test and essay. </p>
<p>Yes, you can thank the geniuses who think it is wise to attack the SAT, instead of accepting the fact that it is a part of the puzzle, and that it does not replace GPAs and rankings but validates them to create a SUPERIOR admission yardstick. </p>
<p>Bottom line? Pick whatever element such as GPA and add the SAT to it, and you have a better base to judge a candidate. Pick any of them in a vacuum, and you have a poorer base.</p>
<p>Ahem, xiggi, I prefer to be called QM, or Professor QM, or Dr. QM, rather than Mrs. QM. I am “Mrs. QM” (and happily so) to QMP’s elementary school and middle-school teachers. Actually, I would really prefer to be called Nobel Laureate QM but there aren’t any signs of that yet!</p>
<p>Duly noted, Dr. QM. I have to watch my use of CC shortcuts and refrain from being too cute for my own good. I should have learned after irritating Yolochka. </p>
<p>Anyhow, I thought it was interesting to direct the attention to Les Perelman in a discussion about STEM majors. </p>
<p>PS Please note that I was making references to the discussions about USAMO, MIT, and STEM in general, and not singling out your posts in those discussions.</p>
<p>PPS Come to think of it, here is an equation worth solving</p>
<p>I really don’t quite get what you’re driving at here. You seem to be suggesting that, for fear of litigation, I, and others in my position, should not write candid evaluations of our students and former students when the students have specifically requested that we write letters of recommendation for them, or when they ask that we make ourselves available as references if a potential employer calls, based on the student’s listing us as references. So we should just clam up? Or just say, “Oh, of course, they’re all great”? Or maybe you’re just referring to the situation at Brown, where professors write narrative evaluations that go into the student’s transcript if, and only if, the student requests it. So the professors asked to write such evaluations should just write meaningless pablum, for fear of offending someone who didn’t get such a strong endorsement?</p>
<p>Whose interest, exactly, would be served by cutting off those pipelines of information? </p>
<p>Or are you suggesting that even grades should be distributed on an egalitarian rather than a meritocratic basis, lest someone file a disparate impact lawsuit?</p>
<p>Love that Perelman essay! As a writing instructor, I am a huge fan of Perelman.</p>
<p>I think I am going to show this to my next year’s new freshman comp classes–they get so demoralized when they learn that the way they were taught to write in high school for those high-stakes tests will need to be unlearned in college.</p>