They go out of the way to screw you over


<p>Sure, let’s say there’s a distribution that looks something like this but on a much larger scale.</p>

<p>StudentA – 35%<br />
StudentB – 40%<br />
StudentC – 55%<br />
StudentD – 65%</p>

<p>StudentA had the lowest grade in the class and StudentD had the highest. Say I’m the department head and I set the cutoff for an F at 34%: nobody fails. Now say I set the cutoff at 50%: half of the students fail. I just caused half of the class to fail. The arbitrary cutoff that I chose caused those students to fail, not the students themselves; that’s how a curve works. If instead you predefine the grading system so that any score below 50% earns a failing grade, then students who score under that cause themselves to fail.</p>

<p>What if everybody scores between 80% and 100%? If I decide that 50% of students will automatically fail the exam, then students who got 80% of the material right would, theoretically, still fail…</p>
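<p>The difference between the two policies above can be sketched in a few lines of Python (the student names and scores are the hypothetical ones from the example; the function names are mine, for illustration only). A fixed cutoff fails whoever scores below an absolute threshold; a curve fails the bottom fraction of the class no matter what the absolute scores are.</p>

```python
# Hypothetical scores from the example above.
scores = {"StudentA": 35, "StudentB": 40, "StudentC": 55, "StudentD": 65}

def fail_fixed(scores, cutoff):
    """Fixed-cutoff grading: any score below `cutoff` fails."""
    return [name for name, score in scores.items() if score < cutoff]

def fail_curve(scores, fraction):
    """Curve grading: the bottom `fraction` of the class fails,
    regardless of the absolute scores."""
    ranked = sorted(scores, key=scores.get)   # lowest score first
    n_fail = int(len(ranked) * fraction)
    return ranked[:n_fail]

print(fail_fixed(scores, 34))   # [] -- nobody fails
print(fail_fixed(scores, 50))   # ['StudentA', 'StudentB'] -- half fail
print(fail_curve(scores, 0.5))  # ['StudentA', 'StudentB'] -- bottom half fails
```

<p>The key point: if every score were between 80 and 100, <code>fail_curve</code> with a fraction of 0.5 would still fail half the class, while a fixed 50% cutoff would fail nobody.</p>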

<p>To highlight an even more interesting fact, I’d ask you to go read the study that I posted near the beginning of this thread. Over the roughly 30-year period of the study, they found that on average, judging by SAT math scores, a student who earned an A in the Spring semester had a very high probability of failing the Fall semester offering of the same class.</p>

<p>Also, like the previous poster mentioned, and like the Spring/Fall example highlights – you can’t take one year’s scores and compare them to another’s. One test may ask about a similar concept in a way that is harder to understand. One test may simply be harder than another. Students circulate past exams, which lets some cohorts study more relevant material. One group of students coming through the program might be stronger than another. The list goes on and on…</p>

<p>The fact is that, for whatever reason, the departments do decide to fail a certain number or percentage of students, and this is not just because of the material.</p>