How can there be so many 4.0 students out there?

<p>^or running the risk of not being objective…which is why some people actually like MC tests.</p>

<p>I like multiple choice exams. They are berry berry good to me. I just don’t think they are comparable to anything I have ever done in real life. </p>

<p>When I took my Professional Engineer exam a few years ago I was extremely grateful they had changed the format to MC. Other people didn’t like it at all. But one thing is for sure: at work I’ve never been given a list of possible correct answers to a technical problem and been asked to pick one. Maybe been asked to develop different scenarios, but never anything like an MC exam. Of course, I’ve never had such a strict time limit either.</p>

<p>So maybe a composite exam - some MC and some other problems, like in AP exams, would be best.</p>

<p>By the way, I think the addition of a few fill-in probs to the latest SAT math test is a positive development. I just think there should be more, and some on the verbal side as well.</p>

<p>Just scanned the posts - several comments. Eons ago my school district and many others graded on a curve, and A+ was given for exceptional work. We never had the 100-point scale, so we can’t “go back” to it. The current local district only gives A, B, C… with no chance for a + or -. The district does not weight grades, either (easier for knowing what the state college system is going by) - meaning no GPA advantage for honors/AP in class rank.</p>

<p>I would find the 100-point scale gives too many fine distinctions - parents could really hassle a kid over minute ups and downs. There may be grade inflation, but the whole system has changed since my day. Instead of grading on a curve, and therefore comparing students to each other, grades are based on absolute performance, not one’s classmates. Inflation can come in the expectations for a grade: mastering all of the material presented/expected can mean an A, and deciding how much less demonstrated learning gets a B or C is where inflation can occur.</p>

<p>When I started college UW used only A, B, C… and then they added AB, BC… so the low A’s were no longer worth the same as the better performances. Using a curve or a fixed performance standard for grades is still the option of the professor, as is deciding how many A’s etc. to give.</p>

<p>The meaning of grades can be broadly taken in two ways - comparing one student to the others taking the same class at the same time, or measuring performance against a standard (material learned, work done), regardless of the others. I personally think meeting a performance standard is better than competing with others. Of course that still leaves a lot of room for determining which grades to use for various performance levels. It also doesn’t adjust for differences in course material expectations - one intro calculus class may teach much more or less than another.</p>

<p>Grades are not meant to be a fine analysis of a student’s performance. They are structured to give a gross measure, hence the use of only a few letters. Numbers assigned to the letters are used to quantify things, such as averaging grades for a GPA and computing class ranks. They are meant as a tool, not the be-all/end-all of school performance. That’s one reason colleges dissect them, recalculate them, and mess with them in so many ways.</p>

<p>Let’s keep the ambiguity and stress the learning part of education. Think in terms of learning the material, not the grade. Years later that is what makes the difference, not any measurement.</p>


<p>There are reasonable arguments for and against grading on a curve. I am strongly in favor of it, because it tends to offset differences between teachers. For example:</p>

<p>Teacher 1 gives easy exams, and 9 out of 10 kids score 90 or higher. Teacher 2 gives hard exams, and 1 out of 10 kids scores 90 or higher.</p>

<p>If both teachers use the same curve—with a fixed percentage of students receiving a given grade, based on where they rank compared to their classmates—then the best students in each class will receive A’s, the average students in each class C’s, etc.</p>

<p>But if both teachers use the scale 90=A, 80=B, etc., then the students in Teacher 2’s class will receive lower grades on average, even if they are just as talented as the students in Teacher 1’s class.</p>

<p>In the latter case, on their transcripts some students with B’s in Teacher 2’s class will appear to be worse students than those with A’s in Teacher 1’s class, even though the opposite is actually true.</p>

<p>If both teachers use the same curve—even if their exams are not equally difficult—this is less likely to happen.</p>
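<p>To make the two schemes concrete, here is a minimal Python sketch of the Teacher 1/Teacher 2 example above. The letter cutoffs, curve percentages, and score lists are all invented for illustration - they are not any school’s actual policy:</p>

<pre>
# Hypothetical sketch: a fixed 90/80/70/60 scale vs. a rank-based curve.

def fixed_scale(score):
    """Grade depends only on the raw score."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

def curved(scores):
    """Grade depends on rank within the class: here the top 20% get
    A's, the next 30% B's, the next 30% C's, and the rest D's."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    grades = [""] * len(scores)
    for rank, i in enumerate(order):
        frac = rank / len(scores)
        if frac < 0.2:
            grades[i] = "A"
        elif frac < 0.5:
            grades[i] = "B"
        elif frac < 0.8:
            grades[i] = "C"
        else:
            grades[i] = "D"
    return grades

easy_exam = [95, 93, 92, 91, 91, 90, 90, 90, 90, 72]   # Teacher 1's class
hard_exam = [91, 78, 75, 72, 70, 68, 65, 62, 60, 45]   # Teacher 2's class

for label, scores in [("Teacher 1", easy_exam), ("Teacher 2", hard_exam)]:
    print(label, "fixed: ", [fixed_scale(s) for s in scores])
    print(label, "curved:", curved(scores))

# On the fixed scale, Teacher 2's class gets mostly C's, D's, and an F
# while Teacher 1's gets mostly A's; under the shared curve, both
# classes end up with the same distribution of letter grades.
</pre>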

<p>^Our high school gets around that by having departmental exams. Of course some teachers teach the material better than others. </p>

<p>I hate curves. I think what is important is whether you have mastered the material, not whether you are the best student in the class. If you’ve learned what the class is supposed to cover you should get whatever grade your school gives out for mastery. If there’s a class full of brilliant math students, why should the bottom kid get a C or worse? Is the kid who gets a 98 in math when everyone else got a 99 or 100 deserving of a much lower grade?</p>

<p>SATs tend not to be all that predictive of college success. This is particularly true if a college does not do much multiple-choice testing. How well a student performs in a demanding curriculum is more predictive.</p>

<p>I wasn’t able to read the entire thread. But does anybody know how to interpret, on the College Board site, the % of students with GPA over 3.74, the % of students with GPA over 3.6, etc.? Are these weighted, unweighted, or whatever each college decides to use?</p>

<p>My theory on this is that most colleges know that there are variations between high schools’ grades, but colleges want to diversify the acceptance pool across different high schools; therefore GPA is often overweighted relative to SAT scores for the sake of diversification.</p>

<p>“How well [a student] performs in a demanding curriculum is more predictive.” But how does one know how demanding any class actually is? My kids’ experiences in AP classes taught them that for the same course in the same year there were teachers who were easy graders and tough graders, teachers who demanded quality work and those who were charmed by brown-nosers or favored classroom pets, as well as those who were burned out and didn’t care what anyone learned. The most grade-grubbing students know the teachers’ reputations and are very selective in arranging their schedules to maximize their GPAs. In addition, among the students with the highest grades were the biggest and most regular cheaters (trust me, you have no idea what goes on), and those whose parents wore teachers and administration down with incessant demands that their special snowflakes be properly recognized and relieved of silly burdens like deadlines. I don’t think any adcom has the ability to adequately look behind GPAs. While standardized tests have their flaws, if I were an adcom I would still want to know why a student with perfect grades through high school had test scores so weak that they were excluded from an application. Scores may not be predictive of college performance, but when they are so out of synch with GPA that they are not sent to a college, it certainly raises questions about that GPA.</p>


<p>That’s always been my thought process as well. There should be some correlation there, particularly for students with high GPAs.</p>

<p>I have mixed feelings about the grade curve thing. Grades over a large population should look like a bell curve, but I work with enough data to know that if your population size isn’t sufficiently large, you won’t necessarily get a normal distribution.</p>

<p>On the other hand, if most of the class has a high B/A average, then you have to wonder if there isn’t a certain amount of grade inflation and/or ‘dumbing down’ of the curriculum.</p>
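<p>A quick standard-library Python simulation of that point - even scores drawn from a genuinely normal distribution look lumpy when the “class” is small. The mean of 80 and standard deviation of 10 are arbitrary:</p>

<pre>
# Draw samples of various sizes from a true normal distribution and
# watch how unstable the small-sample statistics are.
import random
import statistics

random.seed(1)
for n in (10, 30, 1000):
    sample = [random.gauss(80, 10) for _ in range(n)]
    print(f"n={n:4d}  mean={statistics.mean(sample):5.1f}  "
          f"stdev={statistics.stdev(sample):4.1f}  "
          f"min={min(sample):5.1f}  max={max(sample):5.1f}")

# A 10-student class can easily come out skewed or bunched even though
# the underlying distribution is a perfect bell curve; only at large n
# does the sample reliably look normal.
</pre>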

<p>This OpEd piece by the President of the American Federation of Teachers in the WashPost last year makes a good argument for national educational standards. I’m sure it’s been discussed on CC before:</p>

<p><a href="http://www.washingtonpost.com/wp-dyn/content/story/2009/02/15/ST2009021502025.html">Randi Weingarten - The Case for National Education Standards - washingtonpost.com</a></p>


<p>This type of approach (and you have to read the short article to understand he’s not advocating a cookie-cutter approach to teaching standards) would help take away some of the variability we see in our grading system. Right now, a student in one county can conceivably sit through a 10th grade English class and not learn (or be taught) any more than an 8th grader did in another county. Which is why I think the national exams (SAT/ACT/AP) are really the only method we have right now of comparing students across the country.</p>

<p>I would be curious to know how many A students aren’t able to score higher than the national average on the SAT. That would be a telling figure. You would expect an A student to be at a much higher percentile than the national average.</p>

<p>Last year, economist Mark Perry posted about GPAs rising while SAT scores were falling:</p>

<p>Between 1998 and 2008, the percentage of college-bound high school seniors with a GPA equal to letter grades of A+, A or A- increased from 38% to 42%, while the average SAT scores for that group decreased by 15 points from 565 to 550 for the Reading section, and by 19 points from 578 to 569 for the Math section</p>

<p><a href="http://mjperry.blogspot.com/2009/06/rising-grades-falling-test-scores-for.html">CARPE DIEM: Rising Grades, Falling Test Scores for HS Seniors</a></p>

<p>I posted about this on CC at the time, so I’ll just paste what I wrote then:</p>

<p>*I see grade inflation at our local public schools, at least in the sense that they reward students for some things that may not have much bearing on whether a high SAT-scoring student will succeed in college. I’m talking about things like poster projects, neat note-taking, group discussions that degenerate into off-topic blather fests, busy-work assignments, etc. If a student is willing to do all these things that are necessary for high grades, he stands a better chance for college admission than a student who does not. However, are these high grades always a measure of academic excellence or do they sometimes simply measure a high tolerance level for bull sh*? Or both?* <a href="http://talk.collegeconfidential.com/parents-forum/735744-gpas-rising-while-sat-scores-falling.html">http://talk.collegeconfidential.com/parents-forum/735744-gpas-rising-while-sat-scores-falling.html</a></p>

<p>I’ll add that I’ve lately observed a lot of extra credit opportunities being given to my kids that would enable them to pump up their grades. And, just to add the gender issue here, my D is much more likely to do the extra credit assignments than my S is.</p>


<p>There is some dispute about this, having to do with the self-selection bias usually inherent in measuring college success. Po Bronson wrote about this recently:</p>

<p>It turns out that an SAT score is a far better predictor than everyone has said. When properly accounting for the self-selection bias, SAT scores correlate with college GPA around 67%. In the social sciences, that’s considered a great predictor.</p>

<p><a href="http://blog.newsweek.com/blogs/nurtureshock/archive/2009/09/18/in-defense-of-the-sat.aspx">In Defense of the SAT - NurtureShock Blog - Newsweek.com</a></p>

<p>Some more details:</p>

<p>*It’s commonly said that the SAT, taken in a senior year of high school, has only about a 40% correlation with a student’s freshman year college GPA. …</p>

<p>I’ve always had a skeptical feeling about the 40% correlation statistic, and so I’ve never relied on it or used it in print. There are two self-selection problems that make it really hard to control the data. First, high schoolers of diverging abilities apply to different schools–the strongest students apply to one tier of colleges, and the average students apply to a less ambitious tier, with some overlap. Second, once students get to a college, they enroll in classes they believe they can do well in. Many of the strongest students try their hand at Organic Chemistry, while more of the less-confident students take Marketing 101. At each of these colleges and courses, students might average a B grade, but the degree of difficulty in achieving that B is not comparable.</p>

<p>Many scholars have attempted to control for these issues, looking at data from a single college or a single required course that all freshmen have to take, and their work has suggested the 40% correlation is a significant underestimate. I’ve long wondered what would happen if an economist really took on this massive mathematical mess, on a large scale, harvesting data from a wide selection of universities.</p>

<p>Finally this has been done, by Christopher Berry of Wayne State University and Paul Sackett of the University of Minnesota. They pulled 5.1 million grades, from 167,000 students, spread out over 41 colleges. They also got the students’ SAT scores from the College Board, as well as the list of schools each student asked the College Board to send their SAT scores to, an indicator of which colleges they applied to. By isolating the overlaps–where students had applied to the same colleges, and taken the same courses at the same time with the same instructor–they extracted a genuine apples-to-apples subset of data. </p>

<p>It turns out that an SAT score is a far better predictor than everyone has said. When properly accounting for the self-selection bias, SAT scores correlate with college GPA around 67%. In the social sciences, that’s considered a great predictor.*</p>
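<p>Here is a toy Python simulation of the restriction-of-range effect the article describes. All the numbers are made up (the noise level is chosen so the full-population correlation lands near the quoted 0.67); the point is only that slicing a correlated population into selective tiers shrinks the correlation observed inside any one tier:</p>

<pre>
# Toy model: SAT and college GPA both reflect one underlying ability,
# plus independent noise. Measuring the correlation only inside a
# selective college understates it.
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

random.seed(42)
ability = [random.gauss(0, 1) for _ in range(20000)]
sat = [a + random.gauss(0, 0.7) for a in ability]  # SAT = ability + noise
gpa = [a + random.gauss(0, 0.7) for a in ability]  # GPA = ability + noise

print("full population r =", round(pearson(sat, gpa), 2))   # ~0.67

# A "selective college" that only enrolls the top SAT quartile:
cutoff = sorted(sat)[int(0.75 * len(sat))]
admitted = [(s, g) for s, g in zip(sat, gpa) if s >= cutoff]
print("within one tier  r =", round(pearson(*zip(*admitted)), 2))  # much lower

# This is the self-selection that the apples-to-apples matching in the
# Berry/Sackett study tries to undo.
</pre>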

<p>A bunch of random comments …</p>

<p>To the OP … there are about 30,000 high schools in the US, so yes, there probably are a ton of 4.0 students in the US (assume many per school).</p>

<p>There are about 3000 colleges, so the idea of high schools recalculating their GPAs for each college’s rules is pretty much a non-starter … it would be an impossible burden to place on schools … especially given how complex this is (how do you treat honors and AP courses, do you count gym/sports/band, do you restrict the GPA to core courses, do you restrict it to a set number of core courses, etc.).</p>
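<p>To see how many knobs there are, here is a hypothetical sketch of a GPA recalculator. Every parameter below - the honors/AP bumps, the definition of a core course - is a policy choice, and each of those ~3000 colleges could set it differently (all names and weights here are invented):</p>

<pre>
# One transcript, three different "official" GPAs, depending on policy.

POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
CORE = {"math", "english", "science", "history", "language"}

def gpa(transcript, honors_bump=0.0, ap_bump=0.0, core_only=False):
    """transcript: list of (subject, level, grade) tuples,
    with level in {"regular", "honors", "ap"}."""
    total = count = 0
    for subject, level, grade in transcript:
        if core_only and subject not in CORE:
            continue  # e.g. drop gym and band from the calculation
        pts = POINTS[grade]
        if level == "honors":
            pts += honors_bump
        elif level == "ap":
            pts += ap_bump
        total += pts
        count += 1
    return total / count if count else 0.0

transcript = [("math", "ap", "B"), ("english", "honors", "A"),
              ("gym", "regular", "A"), ("band", "regular", "A")]

print(gpa(transcript))                                # 3.75  unweighted
print(gpa(transcript, honors_bump=0.5, ap_bump=1.0))  # 4.125 weighted
print(gpa(transcript, ap_bump=1.0, core_only=True))   # 4.0   core-only
</pre>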

<p>Finally, to me national standards would logically make things a lot simpler … but I cannot imagine that happening in my lifetime … local control of education is too ingrained in our culture, and conservatives will revolt before allowing this. I still think it is the way to go … we, as a society, seem to look at things as black and white instead of being more nuanced … I think national standards would be great for GPAs and HS transcripts … that does not mean the feds are taking over your local school board; it would just mean all transcripts would have some standard items and calculations.</p>

<p>When grades are given, it’s always within the context of a school. Even if we all come to an agreement as to what each letter grade equals, an A at school X is going to mean something different than an A at school Y. Again, you are not going to be able to compare students from school X vs. school Y purely by grades. The only way to do it is by looking at each school’s profile; short of that, adcoms would need to look at each high school’s ranking and its history with the college. If adcoms are doing that, then it doesn’t matter that there is grade inflation, because it’s all relative anyway.</p>


<p>That’s interesting. Of course, I don’t think many people (at least not me) are advocating completely disregarding standardized testing in college admissions. The point is that now it seems like some people not only want to treat the test scores as equally important as high school performance, but they also want to use these scores to “norm” the high school grades - in effect giving double credence to these test scores.</p>

<p>I’d be interested in seeing a similar study on how class rank (as opposed to GPA) correlates overall with college performance.</p>

<p>On the whole topic of grade inflation… We have a new high school in our district. The principal (I call him Mr. Pollyannabo) always brags that his high school has the highest average GPA in the county. He thinks that this is because the brightest kids attend his school. The school has only been open for three years, and the first graduating class graduates this year. However, the AP scores for the school are terrible, and our school system has been very reluctant to release any AP score breakdown. One girl in our neighborhood had A’s all year in her AP history class and got a 1 on the test. She was not alone. To me, what this indicates is grade inflation. Students are not learning the material, but are getting (being given) good grades based on the craft projects and other work they do in and out of class. Nobody is talking about SAT scores either.</p>

<p>And at the other end of the spectrum, S2’s pre-IB 9th grade English class was working to a much higher standard than S1’s AP Lang course. My kids took these courses during the same school year and read several of the same novels, so it was fascinating to see that what earned S1 an A in AP would get S2 a B- or worse.</p>

<p>There is a reason we suggested that our kids include AP scores in the additional info section on apps, whether the schools wanted to see them or not: so that they could establish some context for whoever read their apps. We were seriously tempted to send one of S2’s “B” essays to schools just so they could see the level of work demanded by his program, and that the gap between his excellent SAT CR/W scores and his grades did not reflect slacking or laziness. Au contraire.</p>

<p>We also have friends in another part of the country whose kids’ grades were bumped up to an automatic A if the student got a 3 or better on the AP. That got rid of a couple of C’s right there!</p>

<p>The material tested by SAT and ACT is very basic. Anybody with a 3.5 or better GPA should be able to score a 32 on ACT. If they are not, then their GPAs are highly inflated. It only takes a weekend of preparation to go over the basic concepts and prepare for these tests.</p>

<p>SAT subject tests and/or AP subject tests should be mandatory. They test a student’s understanding of the material and will cut through all the grade inflation. I can’t believe that most schools don’t even ask for subject SATs. How else can you compare students from across the country?</p>

<p>“The material tested by SAT and ACT is very basic. Anybody with a 3.5 or better GPA should be able to score a 32 on ACT. If they are not, then their GPAs are highly inflated. It only takes a weekend of preparation to go over the basic concepts and prepare for these tests.”</p>

<p>Hmmm… I’d like to find out how you calculated that.</p>

<p>32 is the 99th percentile. What’s your rationale for saying only 1% of people should get above a 3.5?</p>

<p>But beyond that, I think your GPA should be a reflection of your effort, not a reflection of your intelligence. I don’t see why grades and such should be manipulated so that a high grade correlates to a high test score.</p>