The Perils of Standardized Testing: 6 Ways It Harms Learning

<p>The Perils of Standardized Testing: 6 Ways It Harms Learning : InformED</p>

<p>Some interesting stuff there. Begins with Google's hiring team determining that GPA and test scores have had almost no ability to predict success there.</p>

<p>I get the aversion to standardized testing, but the SAT and other reasoning-skills tests (GMAT, GRE) are pretty much perfected at identifying top ability. Whether or not those people are smart enough to apply that ability in the real world is a different metric entirely, and that is the point of the in-person interviews.</p>

<p>On the other hand, No Child Left Behind/EOGs is just a disaster from everything I’ve seen.</p>

<p>The article, I think, misrepresents what Google is saying. If not, their HR guy is an idiot.</p>

<p>For example, from the article:</p>


<p>Uh, what? Google is notorious for this type of question. Knowing the answer isn’t the point; it’s an analytical question set up to show how the applicant thinks. The right answer is meaningless, and that makes me question whether this guy is in HR at all or is just some guy in a ‘staffing’ function that doesn’t have anything to do with interviewing.</p>


<p>Again, are you serious? All of these employees are of different college-degree vintages. Grade inflation has been rampant, especially over the last 15 years, so comparing raw GPAs is meaningless. Older people are going to have lower GPAs and generally be more successful. I can’t see how they’d get any meaningful results from this.</p>

<p>The article quotes the Google guy saying that test scores are worthless, in addition to GPAs.</p>

<p>Also, Google is not alone in questioning the value of SATs, even for their intended purpose of predicting college success: Do SAT Scores Really Predict Success? - ABC News. And I didn’t know this:</p>


<p><a href="http://www.washingtonpost.com/blogs/class-struggle/post/time-to-retire-the-sat/2012/09/27/48d9c64a-08b8-11e2-afff-d6c7f20a83bf_blog.html">Time to retire the SAT - Class Struggle - The Washington Post</a></p>

<p><a href="http://www.fairtest.org/sat-i-faulty-instrument-predicting-college-success">SAT I: A Faulty Instrument For Predicting College Success | FairTest</a></p>


<p>I got asked these types of questions when interviewing for a competitive internship, which I got, so apparently I was successful at answering them. Most of my answers were complete BS though, so I can see how these types of “brainteasers” might not be so useful.</p>

<p>The SATs are for competitive college admissions and really shouldn’t be used beyond that. For one, the ambition, the passion for learning, and the development of study skills may not occur until after one has gotten into college. The SAT may measure a student’s level of education and raw ability as an apathetic high schooler but completely miss that student’s potential once he applies himself.</p>

<p>Secondly, companies’ inability to fire people simply for not measuring up to a desired level is forcing them to look at all these other measures of ability. I see this in lawsuits all the time. People can’t be fired like they used to be, so companies are much more risk-averse: it’s better to pass on 9 good employees than to hire 1 bad one, so companies are in search of perfect employees (people who are perfect before they’re hired). Rarely will a company “take a chance” on someone to see how they’ll do. It can be too costly.</p>

<p>I think the element the HR person is missing is this: presumably all of the people in the sample were actually hired by Google, so that their performance on the job could be determined. If Google gives a boost in hiring probability to people with high GPAs and/or test scores, then the people they actually hired with lower GPAs and/or test scores somehow compensated for that <em>at the stage of consideration for hiring.</em> The compensating elements turned out to be important. That’s not to say that the compensating elements are the only qualifications needed.</p>

<p>If Google randomly hired a set of people without regard to test scores or GPAs at all, and then looked at their performance on the job, I imagine that they would find better correlations.</p>
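
<p>Here is a minimal sketch of that selection effect (a toy simulation, not anything Google has published; all of the numbers and the composite hiring rule are invented purely for illustration):</p>

```python
# Toy simulation: even if test scores predict job performance in the full
# applicant pool, the correlation can nearly vanish among the people who are
# actually hired, because low scorers only get hired when something else
# compensates. Every number here is made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

score = rng.normal(size=n)                 # standardized test score / GPA proxy
other = rng.normal(size=n)                 # everything else interviewers weigh
performance = 0.5 * score + 0.5 * other + rng.normal(scale=0.7, size=n)

# Hiring boosts high scorers but lets strong "other" traits compensate.
hired = (0.5 * score + 0.5 * other) > 1.5  # keep roughly the top couple percent

full_pool_r = np.corrcoef(score, performance)[0, 1]
hired_r = np.corrcoef(score[hired], performance[hired])[0, 1]
print(f"corr(score, performance), all applicants: {full_pool_r:.2f}")
print(f"corr(score, performance), hires only:     {hired_r:.2f}")
```

<p>In this toy setup the correlation in the full applicant pool comes out around 0.5, while among the hires it collapses toward zero, which is exactly the restriction-of-range effect described above.</p>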

<p>Also, the type of questions, “How many golf balls will fit in an airplane?” and “How many gas stations are there in Manhattan?”, when encountered in physics, are often called Fermi questions. Enrico Fermi was known for asking grad students at the University of Chicago questions such as “How many piano tuners are there in Chicago?”</p>

<p>This sort of question does develop a useful skill in physics, which is to make multi-step estimates of real-world phenomena. I’d think this would be of value at Google, though maybe not.</p>
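
<p>For concreteness, here is what such a multi-step estimate might look like for the piano-tuner question (every input below is my own rough guess, not a researched figure):</p>

```python
# Back-of-the-envelope Fermi estimate for "How many piano tuners are in
# Chicago?" -- each input is a rough guess, and the point is the chain of
# reasoning rather than the final number.
people_in_chicago = 3_000_000
people_per_household = 2.5
households_with_piano = 1 / 20        # guess: one household in twenty
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = people_in_chicago / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year
print(round(tunings_needed / tunings_per_tuner))   # -> on the order of 60 tuners
```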

<p>Thank you for posting the link. Hope a few score-happy folks can see its point.</p>

<p>Rexximuns, these tests are not “perfected.” It’s closer to say that they are what we have, all we have. They test what they test and not much more. For decades there has been criticism that, in setting the “standard” in “standardized testing,” there is a slant: preconceived ideas about what should and can be tested.</p>

<p>Test scores don’t identify “top ability” so much as “top willingness to play that game.” It is good to master these tests; it shows drive or ambition. That’s that.</p>

<p>It is absolutely plausible to me that the Googles have strong contributors who didn’t play the perfect-stats, highly-competitive-college game. Some will have exhibited the talents (talents, folks, not standardized scores) early enough to get a toehold. Some will have consciously opted out of the mindless game of “A” chasing. Torn, so to speak, that bull ring out of their noses. Gone on to create and apply solutions. Not gotten hung up on false idols.</p>

<p>Same goes for college apps. There are kids who can get top scores yet can’t parse an app question and write an informative answer, who make assumptions about the process and their mastery because, after all, they have stats excellence. The proof is in the pudding. Someone else can name the various intelligences we now recognize.</p>

<p>To suggest this percentage of their employees can be explained by vintage is preposterous.</p>


<p>I don’t think we’ll see an analysis better than this one.</p>


<p>Not so fast, folks. Even if they have no or low predictive ability, Google still requests transcripts from recent college grads, presumably because they think such things add value to the hiring process. Of course, as in all industries outside of Wall Street, several years down the line transcripts become nearly worthless.</p>


<p>For what it’s worth, the author of the article has zero critical-thinking skills. And, like most ed/lit majors, little math acumen (low SAT-M?). Does she really believe her first sentence above? Seriously?</p>

<p>Cited from: The Perils of Standardized Testing: 6 Ways It Harms Learning – Open Colleges</p>

<p>Although I am no great fan of standardized testing for children, and think that some aspects of the No Child Left Behind tests are actually harmful, I have a different view of standardized testing at the level of juniors and seniors in high school.</p>

<p>lookingforward’s view does not seem to me to take into account the students who just walk into the testing center and walk out with 2400s, with no more preparation than having read the information that CB sends out automatically when one registers. These students exist, and perhaps in larger numbers than lookingforward acknowledges. These students have not “mastered the tests,” in my opinion. To me, “mastering the tests” suggests actually working at the test material–which may not be needed.</p>

<p>There is no way to have a common footing without standardized testing.
However, I agree there is no need for it in K-12. The program is in such bad shape that no testing will help, standardized or not. The level has to be brought up considerably, and then “real” testing will help provide feedback on whether teachers are doing their job or not.<br>
Having said that, how in the world do you select 900 kids out of 30,000 applicants to fill the freshman class at a selective HS if they are coming from very different middle schools? An “A” at one middle school might mean a “B-” at another. So you have to use a standardized test for this. This is just one example, and yes, my granddaughter got in!! I heard that they selected kids strictly based on the test score.</p>


<p>Actually, the “Educators” in California (and most other states) decide what is important and what should be tested. So if those tests “harm learning,” the educators should resign or be sued for malpractice. :)</p>


<p>Well, it might be useful to read the original interview. As for “some guy in a staffing function” who might not know anything … I’d check the credentials of that “guy.”</p>


<p><a href="http://www.hreonline.com/HRE/view/story.jhtml?id=533322196">Human Resource Executive Online | Building a New Breed</a></p>

<p>Fwiw, as a graduate of Pomona (and a recently appointed trustee) as well as of the Yale School of Management, “that guy” would be expected to know a thing or two about students with extremely high test scores.</p>

<p>And, lastly, I think we can trust that “that guy” makes a living analyzing hiring patterns and the data generated by those practices. Of course, some prefer the beauty of idle speculation! :)</p>


<p>Juxtapose the two quotations, and you will have a good starting point to assess the real problem. Testing is truly needed throughout K-12, but not the type of testing that is sold by some outfit from Iowa, or developed by the same educators who have shown an incredible inability to educate the younger generation. Simply stated, they fail at testing just as they fail at developing satisfactory teaching methods.</p>

<p>Blaming standardized testing for the abysmal state of our education is no different from blaming the thermometer for the bad health of the population. It is just a facet of a society of educators that is long on excuses and short on performance.</p>

<p>Anecdotally, we probably all know kids who “test well” and kids who don’t. Sometimes that same kid gets good grades, sometimes not. Testing well doesn’t, in the case of college admissions, speak to an ability to plan out work, do the reading, get papers in on time, seek out professors after class, etc. In a work situation, it doesn’t predict how well that person will get along with others, be sensitive to team needs as a manager, be willing to stay late, etc.</p>

<p>It does suggest a certain level of knowledge or reasoning ability. Or maybe the ability to answer test questions quickly, or work out strategies to guess effectively, that sort of thing?</p>


<p>Huh? (Was there ever an expectation that testing would do any of those things?)</p>

<p>wrt college admissions, testing is just one of 5-6 items in the application “portfolio”. (And yes, SAT-M does a decent job of selecting out those with the reasoning ability that some colleges prefer.)</p>

<p>xiggi, in general I would agree with your “thermometer” analogy. However, there are specific instances in which I disagree, having seen how various NCLB tests operate in practice.</p>

<p>In my state, science has been one of the subjects that is tested beginning in elementary school. As a scientist, I believe that the elementary school tests are actually harmful, for several reasons. The tests often have an “experimental” section. However, because the tests have to be affordable, the experiments have to be very cheap. Really cheap. The consequence is that the “experiments” tend to be poorly designed. Variables that ought to be controlled are in practice not controlled. Yet there is some pressure on the students and their teachers to observe “correct” outcomes. It is frustrating to the honest when this doesn’t happen, and a temptation to the dishonest. Beyond that, I observed from the grading rubrics that a lot of the points are awarded for behavioral compliance–e.g., does the student set up the answer in the approved fashion? The actual content of the answer (aside from having the “right” observations) garners few or no points.</p>

<p>This would probably be forgotten rapidly enough, if it were not for the fact that 10% or fewer of the students score in the top category in science in 5th grade (QMP and friends did fine, but that wasn’t even the norm at their relatively strong suburban school). I believe that statewide, fewer than 50% of the students score at a passing level.</p>

<p>I think this discourages some students who would actually be fine scientists, but who are getting an official message from the state when they are 10 that they are no good at science.</p>

<p>QM, I understand your point about a science subject. I also happen to think that not all subjects should be tested, especially not in elementary school. My personal focus would be on the “simpler” matters of reading, writing, and counting --call it math, if you wish-- and the essential subject of reasoning that encompasses all three main subjects. Unfortunately, that last issue is where, IMHO, our schools are not delivering the goods. Obviously, this is not surprising as it is also a subject that requires a modicum of aptitude and mastery to teach. As it stands, a lot of the testing is based on ascertaining the ability of students to retain information, often without much of a context. </p>

<p>And this is why I believe that testing is needed, but not necessarily the kinds that are overly easy to administer and grade. In a way, I do understand why there is such resistance to testing brewing in the ranks of educators, although it often reflects a dislike of being, indirectly, part of the “tested” pool.</p>