Whatever, @epipany.
Based on the abysmal GRE scores of those attending most ed. schools, criticizing someone for not being an educator seems a bit trivial.
So no one can even make up a reason for inflating the scores that isn't cynical or short-sighted?
@bucketDad yeah, those are the limits of my understanding of how percentiles are determined and applied across many test takers. CB likely effed it up in some way, or they just need a larger sample. I do know that CB’s goal, carried over from the old SAT, is to derive the scores from something broader, not just that one test.
Being a highly critical educator of other educators myself, I don’t care whether I appear “trivial” to you. But thank you. I will criticize anyone who poses or pretends to be, do, or say what he or she has shown to be unqualified to be, do, or say. He’s not an expert in the areas where he claims expertise. He does not appreciate the breadth of critical thinking asked of students in college. He admits that his own biases and preferences center on “data” and on reading and interpreting data.
So, you know, whatever.
@epiphany YOU are not trivial - at least I hope not. Whether David Coleman happens to be an “educator” (whatever that really means) seems to be the least of the issues surrounding the new SAT, which is why that seemed rather a trivial point. Or irrelevant at best. Those trained by the schools of education are not the keepers of specialized knowledge in establishing or conducting standardized testing. And what is the basis for your assertions regarding the guy? He actually denies that his methods introduce “bias” - he says quite the opposite - and certainly as an educator (and an educated person) you are hopefully appreciative of methods that rely on data. Perhaps you just don’t like the test and he rubs you the wrong way? If so, you are not alone. Time will tell whether the class of 2017 fails to get into college as a result of this test - and THAT will be a legitimate complaint (should it happen - which it most likely won’t).
@thshadow what would be the purpose of deliberately inflating scores, especially if you provide a helpful “Score Converter” to promptly deflate them? The subject of why they did what they did is interesting (we all seek rational explanations) and it’s cathartic I guess to be cynical . . . but unless it fits the fact pattern it’s not answering your basic question which is: Why?
I have an even more basic question: what percentile tables were they referencing to “equate” new scores with old? Did they rely just on the actual March and May tests? But then aren’t those tests “normed” using the Study Group? Everything “new” has to reference back to the Study Group, but we know those numbers are inflated and inaccurate, hence the Score Converter . . . . There seems to be a circularity problem. I think @bucketDad and @triedntested had this question as well.
Now that May is more than halfway over, where is this revised PSAT concordance table that CB was supposed to release in May? Sheesh, the test date was back in Oct 2015-- 7 months ago.
@GMTplus7: SNAFU
@GMTplus7 the cynic in me thinks they couldn’t start the PSAT concordance table until they had finished the SAT concordance. They needed to know what answers they wanted to get first.
The concordance table for the new SAT is an absolute joke and lowers my score by about 100 points. I don’t think it was any easier than the old format, and the scaling they have applied is ridiculous.
@thshadow Totally unfounded speculation comes naturally to me. Here’s a made up reason: the old SAT didn’t do a scientifically valid job of distinguishing between, say, the top 0.05% and the top 0.1%. By compressing the scores at the top of the scale, the new SAT no longer gives the false appearance of teasing out such fine differences in student performance at the top end. In order to compress the scores at the top, other scores needed to rise as well.
Or maybe you meant no one at CB can make up a reason for PR purposes?
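The “compress the top, raise the middle” idea can be sketched as a toy piecewise-linear concordance. The breakpoints (500, 700) and slopes here are entirely made up for illustration - this is not anything the College Board has published:

```python
def toy_concordance(old):
    """Hypothetical mapping from a 200-800 old scale to a new scale
    that stretches the middle and compresses the top. Breakpoints and
    slopes are invented for illustration only."""
    if old <= 500:
        return old                                # bottom of the scale unchanged
    if old <= 700:
        return round(500 + 1.3 * (old - 500))     # middle stretched: scores rise
    return round(760 + 0.4 * (old - 700))         # top compressed toward 800

for s in (500, 600, 700, 750, 800):
    print(s, "->", toy_concordance(s))
```

Under this toy mapping a mid-range 600 rises to 630, while the 50-point gap between a 750 and an 800 shrinks to 20 points - exactly the "less differentiation at the top, more in the middle" effect being speculated about.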
Conspiracy theories have been flying since the PSAT scores were first delayed last December and it’s made for some great entertainment. :D
The inflated percentiles the CB published for the PSAT are unconscionable and not a conspiracy theory. Therefore everything the CB has published subsequent to that is questionable and suspect.
I think the compression is intentional and not particularly dishonest. If you look at materials for talent search groups for children (Duke, Johns Hopkins, Northwestern), they state (at least Hopkins does) that grade-level standardized tests do a poor job of differentiating gifted students. They find that when you take the top 5% of kids who test at grade level and move them up three grades, instead of getting the straight-line decrease you might expect, you get another normal curve. The kids at the far end of that curve are the ones they are after. Similarly, I’ve noticed the same thing in math testing: SAT --> AMC --> AIME --> USAMO. At each level you get a new normal distribution (or something close). So, knowing that their attempts to differentiate between the top 1% and the top 0.5% aren’t going to be very useful, the College Board has flattened the curve by compressing the ends and stretching out the middle. If the differentiation in the middle is accurate, there is probably more value in providing more differentiation there than at the ends.
That said, the percentages on the PSAT concordance are an entirely different thing. The PSAT percentages were clearly off, whether intentionally or accidentally. Their “Study Group” did not match reality. However, the College Board did another study for the SAT after they had the results from the PSAT in December of 2015. I think the anecdotal evidence we are seeing suggests that maybe the concordance tables actually swung the other way. I’m surprised at how many people have said that they did worse on the March SAT than they did on the October PSAT. I think, at least on the top end, that the March SAT was harder than they expected and the published concordances may be harsher than they should be. Sadly we will never know the truth because the College Board will never publish the results from March only.
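The “top 5% retested yields another bell curve” observation can be illustrated with a quick simulation. This is a sketch under my own assumptions (a latent-ability model with Gaussian test noise), not College Board data: because selection happens on a noisy first test, the selected group’s second-test scores spread out into a roughly bell-shaped distribution again rather than a sharply truncated tail.

```python
import random
import statistics

random.seed(0)

# Latent-ability model: each test observes true ability plus noise.
N = 100_000
ability = [random.gauss(0, 1) for _ in range(N)]
test1 = [a + random.gauss(0, 0.5) for a in ability]

# Select the top 5% on test 1, then give them an independent test 2.
cutoff = sorted(test1)[int(0.95 * N)]
top = [a for a, t in zip(ability, test1) if t >= cutoff]
test2 = [a + random.gauss(0, 0.5) for a in top]

# The selected group's test-2 scores are again roughly bell-shaped,
# with substantial spread - the second test still differentiates them.
print("top group size:", len(top))
print("test-2 mean: %.2f  stdev: %.2f"
      % (statistics.mean(test2), statistics.stdev(test2)))
```

The point of the sketch: even after keeping only the top 5%, the retest scores retain most of a standard deviation of spread, which is why a harder follow-on test (or a coarser scale at the top) can still be informative.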
Does anyone think they will have made an adjustment for the May SAT? How could they do that without being unfair to the March test takers?
@glido If there’s an adjustment to be made, I should hope it would be with the concordance. One would think that a score/percentile combination for the March test would be the same for the May test. If not…yes, it’s unfair.
Thanks @VryCnfsd and @candjsdad for your replies. The idea that “they can’t really distinguish between the top kids anyway” so they compress their scores together is reasonably logical, and not too cynical…
On both the fall PSAT and the new SAT score report, CB indicated that a score is +/- 40 pts. I can understand why it is frustrating to come out on the low end of the range going from PSAT to SAT, but it seems like the scores others are reporting are at least close to the +/- range. I think describing CB’s actions as unconscionable is too strong, and I have a hard time believing CB tried to dupe kids into taking the SAT by gaming the PSAT scores. I respect others’ right to see things differently.
I think studying helps. DS did not prepare for the PSAT at all and studied for the SAT - his score went up by more than the band range.
Interesting thoughts by @candjsdad . From what I am reading about the admission process, it seems consistent with how schools look at standardized scores anyway. For example, I think I have read on this website or a link that MIT admissions sees little difference between a 750 and an 800 in math.
Good luck to those re-taking!
@candjsdad your post at 153 is very helpful! Do you think they are going to tweak that upper end of the PSAT distribution with questions that might be more differentiated in terms of difficulty, in order to offset the natural compression that results from the scaling methodology? Maybe more 4’s and 5’s (in terms of difficulty) and fewer 1’s - 3’s (using the old test as a benchmark). That would explain the study group in Dec. as well. And obviously they didn’t want to repeat the mistakes of the PSAT distribution.
@bosdad Have you read about the PSAT percentiles? They were HUGELY inflated. Why would the college board publish HUGELY inflated percentiles? Not for the benefit of students, that’s for sure. Compass Prep has a great article about the inflated percentiles.