National Merit Cutoff Predictions Class of 2017

@DoyleB I don’t think the number of questions matters for top students. They are aiming for perfect papers. They are going to get just about all the easy and medium questions right, no matter how many questions there are. The determining differences are what happens with the hardest questions and whether the students can avoid careless mistakes (i.e., how many points they are from perfect = questions that were too hard + careless mistakes). If the test does not have any hard questions, then we are just judging who makes the fewest careless mistakes.
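To put the same point another way, here is a toy sketch of that decomposition (all the counts are invented, purely for illustration):

```python
# Toy model of the point above: distance from a perfect paper =
# (questions too hard for the student) + (careless mistakes).
# The counts below are made up for illustration only.

def points_from_perfect(too_hard_missed, careless_mistakes):
    return too_hard_missed + careless_mistakes

# With genuinely hard questions on the test, ability still separates students:
strong_student = points_from_perfect(too_hard_missed=1, careless_mistakes=1)   # 2
weaker_student = points_from_perfect(too_hard_missed=4, careless_mistakes=1)   # 5

# On an easy test the first term is ~0 for every top student,
# so only the careless-mistake count separates them:
careful_student = points_from_perfect(too_hard_missed=0, careless_mistakes=0)  # 0
sloppy_student  = points_from_perfect(too_hard_missed=0, careless_mistakes=2)  # 2

print(strong_student, weaker_student, careful_student, sloppy_student)
```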

I mostly agree with you, @Plotinus.

http://talk.collegeconfidential.com/discussion/comment/19192312/#Comment_19192312
http://talk.collegeconfidential.com/discussion/comment/19191794/#Comment_19191794

Looking at the table posted earlier,
I think scores 224 and up will make it everywhere,
but for 222 and below, all bets are off.

@payn4ward, but wasn’t that pretty much the same last year (w/in one or two points)? Does this mean that the cut-offs might not change all that much?

@Mamelot Yes, it looks that way, i.e. cutoffs might not change much for the tippy-top states.
I have not done the analysis for the mid-range cutoff states (210-215), since things get a lot more complicated with respect to the possible combinations of wrong answers. So I do not know what will happen for those states.

@payn4ward not sure if this is a relevant analogy but don’t the other states need “legroom”? If the tippy tops aren’t changing much, that would indicate that the rest might not be changing all that much (aside from swings up or down a couple points, which happens every year anyway).

The easier the test gets, the more the rest of the states look like the tippy-top states. If the test is really easy, then getting one question wrong can make an enormous difference. There will still be a gap between the states, but the gap will narrow.

Once the commended cutoff is known, one can figure out the lower end - at least that used to be the case. Any reason to conclude that the cutoffs WON’T begin around or a bit higher than commended this time?

People are still mixing up norms from one year with cuts from the next. Please be aware that when you are getting data from an “Understanding 20XX” report, the errors and totals for each subtest will be for the kids who tested that year, but the SI percentiles will not be. They are from the students testing the previous year. For example, Understanding 2014 has error counts from the 2014 test (class of 2016), but the selection index percentiles are from 2013 testers (class of 2015). If you are comparing percentiles to specific cut scores, this becomes important.

@payn4ward
You said that: “I think scores 224 and up will make it everywhere,
but for 222 and below, all bets are off.”

So you think that many states will have cutoffs of around 223?

In the past there were two anchor points: the commended score (~202) and the 99th-percentile point (213).
Even if we get the commended score in April, we are not sure of the 99th-percentile score point, so we cannot make predictions.

I don’t think the S.I. of 205 from the “Understanding 2015 …” data file is the “real” 99th percentile.
205 is consistent with the national/user percentiles and could be either the national or the user 99th percentile. Neither is based on actual tester data.
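For what it’s worth, here is a rough sketch of how the two-anchor approach could carry a state cutoff over to the new scale if we did have both new anchors. The “new” anchor values below are placeholders I made up, not real data:

```python
# Hedged sketch of the two-anchor idea: carry an old-scale cutoff to the new
# scale with a linear map fixed by the commended score and the 99th-percentile
# point on each scale. The new anchors are placeholder values, NOT real data.

def carry_over_cutoff(old_cut, old_anchors, new_anchors):
    (old_lo, old_hi), (new_lo, new_hi) = old_anchors, new_anchors
    t = (old_cut - old_lo) / (old_hi - old_lo)   # position relative to the anchors
    return new_lo + t * (new_hi - new_lo)

old_anchors = (202, 213)   # old scale: commended ~202, 99th-percentile point 213
new_anchors = (199, 209)   # assumed values, purely for illustration

for old_cut in (213, 219, 222):
    print(old_cut, "->", round(carry_over_cutoff(old_cut, old_anchors, new_anchors)))
```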

@CGlynn I think the highest cutoffs will be below 224. No state will have a 228 cutoff. How much lower than 224, I don’t know; it will likely be higher than 220.
For mid- and low-range states, the cutoffs will likely lie between 205 and 220.

I could be all wrong of course. I’m bad at picking stocks.
Last time I made moves, Black Friday and Black Monday happened. Last year I started investing in Asia. Look what happened in China.
Maybe everyone on CC will make NMSF this fall.

@Plotinus looks at it the same way I do. For students in high-cutoff states this is largely about the ability to work very quickly and avoid careless errors. My math kid and my future writer were within one question of each other on all sections. Ability is not being measured at the high end. And because of this I could not be sure that either kid could repeat a qualifying sophomore score as a junior. One section mistimed could pull them out of contention. A stupid way to design a test.

You can see how much headroom there is from the number you can get wrong and still get a perfect scaled score. Very little.

@payn4ward

I agree that the percentiles in the Understanding Scores 2015 are not based on actual tester data so all the numbers are rough approximations (or maybe seriously off).

Just the fact that CB could write this is really eye-opening:

“Nationally representative percentiles are
derived via a research study sample of U.S.
students in the student’s grade (10th or 11th),
weighted to represent all U.S. students in that
grade, regardless of whether they typically take
the PSAT/NMSQT.”

So now CB is measuring the performance of students who DO take the PSAT against the population that includes students who DO NOT take the PSAT?? How is CB sampling this population?

By contrast, in the past, percentiles indicated the performance relative to the other students taking the test.

Another strange thing is that there is only a very small difference between the percentiles measured relative to the two different groups. For example, the 99th percentile for all juniors (nationally representative) is 1370, whereas the 99th percentile for “users” (juniors taking the test) is 1390. How many juniors are there each year who do not take the PSAT? They would pull down the 99th percentile by only 20 points?
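To get a feel for whether a gap of only 20 points at the 99th percentile is even plausible, here is a rough simulation. Every distributional assumption in it (two normal curves, made-up means and spreads, made-up non-tester shares) is mine, not anything published by CB:

```python
# Rough sanity check, with made-up parameters: model juniors who take the PSAT
# and juniors who don't as two normal score distributions, then see how the
# 99th percentile of the combined ("nationally representative") pool shifts
# as the non-tester share grows.
import numpy as np

rng = np.random.default_rng(0)

TESTER_MEAN, TESTER_SD = 1000, 170        # assumed, not CB data
NONTESTER_MEAN, NONTESTER_SD = 900, 170   # assumed: non-testers score lower on average

testers = rng.normal(TESTER_MEAN, TESTER_SD, 1_000_000)

for nontester_share in (0.10, 0.25, 0.50):
    n_non = int(len(testers) * nontester_share / (1 - nontester_share))
    nontesters = rng.normal(NONTESTER_MEAN, NONTESTER_SD, n_non)
    combined = np.concatenate([testers, nontesters])
    print(f"non-tester share {nontester_share:.0%}: "
          f"user P99 = {np.percentile(testers, 99):.0f}, "
          f"combined P99 = {np.percentile(combined, 99):.0f}")
```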

@Pamom21, can I ask for a quick clarification of what you are saying? Why are the SI percentiles not for the same test as the raw scores? This can’t be true for the 2015 (class of 2017, the latest) report, can it? Do we have to accept that the SI table distribution is either accurate or made up? Or perhaps incomplete? I can understand a preliminary concordance table, but not really a preliminary distribution. You either have the data or you don’t. That was why I tried to derive my linear scaling based on SI distance from the mean.

Not challenging anything, just (more) confused.
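(For anyone curious, one way to read “linear scaling based on SI distance from the mean” is a z-score mapping like the sketch below; the means and standard deviations are placeholder values, not published figures.)

```python
# Minimal sketch of a mean/standard-deviation based rescaling: map an
# old-scale Selection Index to the new scale by preserving its distance
# from the mean in standard-deviation units. All parameters are placeholders.

def rescale_si(old_si, old_mean, old_sd, new_mean, new_sd):
    z = (old_si - old_mean) / old_sd
    return new_mean + z * new_sd

# Old SI scale runs 60-240, new SI scale runs 48-228.
OLD_MEAN, OLD_SD = 147.0, 30.0   # assumed
NEW_MEAN, NEW_SD = 144.0, 27.0   # assumed

for old_cut in (202, 213, 219, 222):
    print(old_cut, "->", round(rescale_si(old_cut, OLD_MEAN, OLD_SD, NEW_MEAN, NEW_SD)))
```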

@Dave_N , All bets are off for this year’s report. I haven’t looked at it closely enough to see exactly what they are doing, nor do I think they ’fess up completely, so to speak. But we were talking about Texas’ history earlier. If we use Texas’ cut of 220 from last year, it doesn’t tie to any posted SI percentiles. However, if we use the percentiles in the 2014 report, those tie to the 2013 kids (class of 2015) and the TX cut of 219.

This only has to do with looking for trends, which isn’t entirely productive given the test changes, but lots of people have been referring to those historical numbers nonetheless. I have no idea if they’ve worked last year’s unpublished norms into this year’s calculations in any way.

It is worth noting, though, that despite some comments to the effect of “they have the data, it’s 10 minutes of work,” CB hasn’t ever published complete and current percentile data with its “Understanding 20XX” reports.

@Dave_N

In the past, SI percentiles were based on the SIs earned by college-bound juniors in the previous year. So, as @PAMom21 correctly notes, the SI percentiles reported in “Understanding Your Scores 2014” are based on the SIs earned by juniors on the 2013 test.

So the 64-million-dollar question is: what is the basis of the SI percentiles reported in “Understanding Scores 2015”?

I think it is highly unlikely that CB used the SI results from 2014. That was just a different test.
Do you think CB used the actual junior data from 2015? Why didn’t CB say this in the booklet? Why hasn’t CB used the same year’s data in the recent past?

It is much more likely that CB used “sample” groups of junior test-takers to estimate the percentiles of scores. The language around the “user data” percentiles suggests this:

" User group percentiles are derived via a research study
sample of U.S. students in the student’s grade,
weighted to represent students in that grade (10th
or 11th) who typically take the PSAT/NMSQT. "

However, this could make for another good query tweet to College Board:

Were real scores or research study scores used to generate SI percentiles in “PSAT: Understanding Scores 2015”?

@Plotinus, in years past (<2011), they DID use “current” data, but only as a sample, so it was still subject to fine tuning in due time. As to why they’ve never used complete data? Who knows? Maybe the CB folks aren’t as into data as we are. They really should consider hiring me. I would have done it for free the year my son was in the running.

I think you are right that they used a sampling this year. The question, of course, is how accurate their sample was. It is sketchy that they aren’t including a blurb to explain things, however briefly, in their current report.

@PAMom21 I edited my post to account for your correction. Thanks.

I also sent the query tweet to CB.

Can we send a tweet asking them to report the real mean and stddev, instead of the cut-and-paste error?

Hi everyone – I sent a question in to the PrepScholar folks, as they had estimated the state cutoffs at -12 from last year (just reflecting the change in scale). I just got this response back to my inquiry:

From - Allen Cheng (PrepScholar)
Jan 17, 17:52
Hi there-
Sorry about the confusion - it is true that the concordance table presented by the College Board presents a different scaling than what we had expected.

For example, let’s try California’s 222 cutoff score from previous years, out of 240. We can estimate 74 on each section.
Now we use concordance tables to see what the section score now is:
74 → 37 writing
74 → 37 Math
74 → 36 Reading
NMSC Index: (37 + 37 + 36) * 2 = 220

The reason this is happening, contrary to our 12-point deduction, is that they’re compressing the higher end of the spectrum so top performers don’t differentiate from each other as strongly. Another way to see that is that the difference between the maximum score and the cutoff score is smaller now than it used to be (228 - 220 = 8, versus 240 - 222 = 18).
We’ll be updating our guides with this analysis to be up to date. Though to be fair, the College Board still hasn’t figured out its exact scaling yet, so the best we can do now is estimate.

Allen
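For anyone who wants to play with that arithmetic, here is the same calculation in a few lines of Python, using PrepScholar’s estimated concorded section scores (their estimates, not official College Board figures):

```python
# Restating PrepScholar's arithmetic above. The concorded section scores are
# their estimates, not official values.

def new_selection_index(reading, writing, math):
    """New PSAT Selection Index: sum of the three section scores (8-38), doubled."""
    return (reading + writing + math) * 2

# Old-scale CA cutoff of 222/240 is roughly 74 per section; concorded:
concorded = {"reading": 36, "writing": 37, "math": 37}
print(new_selection_index(**concorded))   # 220

# Headroom between a perfect SI and the estimated cutoff, old vs. new scale:
print(240 - 222)   # 18 points of headroom on the old scale
print(228 - 220)   # 8 points of headroom on the new scale
```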

Their blog is here, and as of today it still has the same cutoff predictions (I think the same as before - simply 12 points less than the previous cutoffs): http://blog.prepscholar.com/national-merit-scholarship-cutoff-2015-2016

I imagine they’ll update their projections in the next week or so though.

@Dave_N
Did you notice that on page 18 of “PSAT Understanding Scores 2015” the Next Steps to improve Reading scores are actually recommendations for improving writing, and the Next Steps for Writing are recommendations for improving reading? This is the same printing error that was in the booklet back in September. I guess there are not enough proofreaders at CB. Or maybe CB just assumes the target audience does not actually read?