PSAT / NMSF cutoffs: Concordance tables vs. percentiles

Hmm I see your point. The SI percentiles are not explained. I just assumed they were the user percentiles described earlier in the document but perhaps not. Amazing how poorly these reports are written. If thousands of people call CB and ask, perhaps they will issue a clarification.

@thshadow

No, they don’t. The reported PSAT percentiles are NATIONAL percentiles. The NMSF cutoffs are determined by STATE percentiles.

@GMTplus7 If the 2014 cutoff SI for State X corresponded to a national percentile of 99.4%, it is reasonable to assume that the 2015 cutoff SI will correspond to the same national percentile. It shouldn’t matter that the tests and distribution of scores are different going from 2014 to 2015.

@bucketDad
The sample set is going to be different this year. Because the test was administered on a school day rather than on a weekend, it’s reasonable to conclude that there was a greater participation rate in states that previously had a low PSAT participation rate.

I’ve noticed a trend of increased overall scores in states with higher PSAT participation rates. Therefore, my prediction is that the states that previously had low participation rates and a low national-percentile cutoff will now have a higher PSAT participation rate and a higher national-percentile cutoff.

But take my prediction ability w a big grain of salt. I’m lousy at picking stocks.

@GMTplus7, do you really think that the number of eligible juniors is going to be significantly higher this year? The “Understanding Scores” report still says approximately 1.5 million. Now, of course that’s an approximation and there is a lot of uncertainty surrounding this new test. I believe 1 million more kids sat the exam (3.5 mil last year, 4.5 this year) but we are also hearing about how a bunch of juniors sat OUT at the advice of GC’s etc.

I’m speculating here but seems to me that the pool eligible for National Merit isn’t suddenly going to get bigger. The increase would come from a new interest in the ‘suite of assessments’ and that would suggest additional test-takers at earlier grades such as 10th or even 9th. For better or worse, by the time someone is a junior neither college-readiness tracking nor the common core is going to impact his/her outcomes all that much. Even if CB WAS targeting this year’s junior class for a larger pool, the reality is that juniors are going to take the exam for two reasons: NM and/or practice for the SAT. That was also the case in previous years.

My armchair (or rather kitchen stool) analysis FWIW LOL.

I have read all these threads with great interest. My takeaway is that the new PSAT is a new test with new data starting in 2015 and that any attempt to go back to 2014 data and equate it will be unsatisfactory.

As a math professor with a kid who took the 2015 PSAT, I was more interested in the table on p. 13, which converts raw scores to test scores. A math score of 470 corresponds to the 50th percentile, and that’s what you get for answering 17 of 47 questions correctly. So about half of the test takers answered fewer than 17 questions correctly.

To me, this speaks volumes about the poor math preparation of high school students. It is concordant with the large number of students taking remedial math in college.

I know this thread is dealing with the 1% or so at the top, and a lot of hand wringing about how the NMSF cutoffs will shake out. I think there is more clustering at the top, since students from districts with a rigorous math curriculum will do much better on the new PSAT. My kid’s math PSAT 2015 percentile was quite a bit higher than his PSAT 2014 percentile, even after accounting for the 1-year lapse. The old PSAT reflected math aptitude, while the current one reflects how well math is taught in the schools. For college work, it’s the latter that counts. To me, the PSAT 2015 results confirm that kids with a reasonable math aptitude at a majority of schools in the US are not being taught the math skills they need to succeed in college and beyond.
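To make the percentile claim above concrete, here is a minimal sketch of what “50th percentile at 17 of 47 correct” means. The score list is entirely made up for illustration; it is not real College Board data.

```python
def percentile_below(scores, raw):
    """Percent of test takers who answered fewer questions correctly than `raw`."""
    return 100 * sum(s < raw for s in scores) / len(scores)

# Toy distribution: 10 students' raw math scores (out of 47), made up for illustration
toy_scores = [5, 9, 12, 14, 16, 17, 20, 25, 31, 40]

print(percentile_below(toy_scores, 17))  # 50.0: half answered fewer than 17 correctly
```

With a real score distribution the same calculation would reproduce the table on p. 13: a raw score sits at the 50th percentile exactly when half the test takers scored below it.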

I’m going to post 3 email messages between Jed Applerouth and me regarding his article, “Can You Trust Your PSAT Score?”

First email:

Jed,

I have a question about the paragraph below. Your article says the SI Percentile was calculated using a research sample. I reviewed College Board “Guide to Understanding PSAT Scores”. I don’t see any reference by them to the SI percentiles using a research sample. Can you please point me to where that information comes from? Why wouldn’t I assume the SI Percentile is from actual data?

I agree other data, such as National Sample Percentile and User Percentile do make that claim. But my question is on the SI Percentile.

Thanks,
(my name)

“NMSC Selection Index Percentile: When scores were released last week, educators were provided with a third set of percentiles, the “Selection Index” percentile. This was still calculated using a research sample, but the sample was limited to 11th grade students. This percentile is based on a student’s NMSC Selection Index (48-228) rather than the student’s scaled score (320-1520). For example, if a student earned a selection index score of 205+ out of 228, they scored in the 99th percentile using the selection index percentiles. If you don’t know what your selection index percentile is, you can find it on page 11 of College Board’s Guide to Understanding PSAT scores.”

Second email:

Hey (my name),
Thanks for your email.

When it comes to the projected national and user percentiles, the CB clearly states that it gathered data between Dec 2014 and Feb 2015 from its research sample of 90,000 8-12th graders:
https://collegereadiness.collegeboard.org/pdf/college-board-guide-implementing-redesigned-sat-concordance-installment-3.pdf

Pages 17 and 20 spell out the target sampling procedure.
To your point, in terms of the SI percentiles, technically there isn’t anywhere in the document that explicitly states from which sample these percentiles were derived, but in fact very little documentation about the NM SI percentile exists. There is no mention that the SI percentiles are in fact “real percentiles” derived from the October testing pool. Based on everything we have read, we believe those scores are research based as well.

The National Merit Selection Index percentile has always been described as the percentile based on juniors taking the PSAT. This translates into the same definition of the new “User Percentiles”. If the CB had the data to publish “real” (read not study based) selection index percentiles, there would be no reason for the rest of the user percentiles to be based on the study.

In fact, the National Merit Selection Index percentiles differ from the total-score user percentiles by less than 2 percentage points on average. Even if they were based on the same sample, we would expect to see this difference, because Math is a bigger part of the total score than of the Selection Index (Math carries half the weight of the total score, but only a third of the SI). In any case where the user percentile for the total score and the NMSI percentile differ significantly, the student performed significantly better or worse on Math than on Reading and Writing.

In a nutshell, although it’s not explicitly stated in the documentation, it seems the SI percentile also comes from the research sample.

Hope this helps.

Jed

Third email:

Jed,

Thanks for the response and the analysis.

I’ve contacted College Board to specifically ask whether the SI % table is based on real data from the 2015 PSAT or the “user” numbers from their research sample.

I agree with you, the 2015 CB Guide does not specifically state where the SI % Table numbers come from. Some people, like you, have assumed they came from the “user” numbers, and others, like me, have assumed they came from the real PSAT scores.

In contrast, the 2014 CB Guide specifically states that the SI % Table numbers came from the previous year’s actual results.

If I can get a response from CB, I’ll let you know what they say.

Again, thanks for the response.

(my name).
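Jed’s weighting point in the second email can be made concrete with a quick sketch. The scaling here is the published one (test scores 8-38; SI is double their sum, 48-228; EBRW is 10×(Reading+Writing) and Math section is 20×Math, for a 320-1520 total), but the two students’ scores are hypothetical:

```python
def selection_index(r, w, m):
    """NMSC Selection Index: double the sum of the three 8-38 test scores (48-228)."""
    return 2 * (r + w + m)

def total_score(r, w, m):
    """320-1520 total: EBRW = 10*(R+W), Math section = 20*M, so Math is half the total."""
    return 10 * (r + w) + 20 * m

# Hypothetical students: A is stronger in verbal, B is stronger in math.
print(selection_index(38, 38, 24))  # 200
print(selection_index(31, 31, 38))  # 200 -- identical Selection Index
print(total_score(38, 38, 24))      # 1240
print(total_score(31, 31, 38))      # 1380 -- math-heavy student's total is much higher
```

Because Math is a third of the SI but half of the total score, two students with the same SI (and thus the same SI percentile) can land at quite different total scores, which is exactly the divergence between the user percentile and the NMSI percentile that Jed describes.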

@Speedy2019 Thanks for doing this. Would love to hear what CB has to say. If the SI table is as Jed Applerouth describes, I think that data is probably useless for determining NM cutoffs with any confidence.

This is a “hunch”, but the percentile tables do allow for a nice range of Selection Index #'s from commended on up. Historically, the lowest-scoring states begin right at, or a point or so above, the commended cutoff, then progress for about 23 or so additional index points. My “hunch” is that we’ll see commended at around 200-202, with state cutoffs ranging from just south of 205 up to 225. (Commended is a national percentage and pretty easy to calculate, even if we don’t know exactly how many juniors took the test.) This range allows NMSC enough wiggle room to move a state index a bit up or down, depending on what it takes to get the correct number of NMSFs. They won’t have that flexibility if the minimum score is 210 and the max 220, as Testmasters recently predicted (based on concordance tables).
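The “pretty easy to calculate” part is just a ratio. Both inputs below are assumptions, not official figures: the ~1.5 million juniors comes from the Understanding Scores estimate quoted earlier in this thread, and ~50,000 is NMSC’s usual count of recognized students (commended plus semifinalists).

```python
# Back-of-envelope check of "commended is a national percentage".
# Both figures are assumptions: ~1.5M juniors (the Understanding Scores estimate
# quoted in this thread) and ~50,000 NMSC recognitions (commended + semifinalists).
juniors = 1_500_000
recognized = 50_000

top_fraction = recognized / juniors
print(f"commended cutoff ~ top {top_fraction:.1%} of juniors")  # ~ top 3.3%
```

So whatever SI captures roughly the top 3.3% of juniors nationally should land near the commended cutoff, regardless of how the state cutoffs shake out above it.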

Based on someone else’s post about available reports: is the increase in PSAT numbers just for what we consider the PSAT? The reports also mention the PSAT 8/9 results. Did those students get results last week too? Are they part of the 4 million?

PSAT 8/9 are part of the 4 million. They received results last week also.

So, does that mean there weren’t as many new test takers for the PSAT as everyone once thought? And, the numbers will stay about the same or only increase slightly? If I have missed this discussion, please forgive me.

It appears there are around 500K extra PSAT test takers this year. I bet most of them are 8th graders and HS freshmen taking PSAT 8/9. Probably not that many 8th graders took the test in the past, but are doing so this year. Just guessing, maybe 400K of the increase will be 8th graders, freshmen and sophomores. Maybe 100K additional juniors because the test was given on regular school days.

The 100K figure is just a guess though. But I’m guessing that the 8th graders now joining the PSAT party boosted the numbers.

@Speedy2019 Thanks! Since Applerouth is the only one who seems to have access to data, especially across states, it would be interesting (for us), if his analysis also held true at the 99+%.

Not the concordance, but whether the school experienced an overall increase in 99+ SIs from prior years.

Wow! I appreciate your input! Good to know

@mathprof63 wow! Thanks for the input! Good to know