Someone mentioned that the percentiles for the SIs are based on last year's data. I am not seeing that specified in the 2015 report, although in the 2014 report it is clearly stated:
In 2014 it says "Selection Index Percentiles and Mean Score
can be used to compare a student’s performance with that of juniors.
Points to note
Reported on a scale ranging from 60 to 240, the
Selection Index is the sum of the critical reading,
mathematics, and writing skills scores. For example,
a critical reading score of 56, a mathematics score of
62, and a writing skills score of 59 would result in
a Selection Index of 177 (56 + 62 + 59).
Percentiles are based on the Selection Index
earned by college-bound juniors who took the
PSAT/NMSQT in the previous year.
But I do not see anything similar in the 2015 report about the origin of the percentiles indicated in the chart for SIs, so I wonder if it is at least based upon a "sample" of actual test takers, if not a majority of them. If so, the info should be pretty reliable for estimating many if not all of the state cutoff scores, assuming the make-up of the pool of test takers is not hugely skewed this year because it was only given on a school day. (Our school offered it to all, but less than 50% of juniors took it.) I agree that learning the official SI for "Commended" status will provide an official baseline.
For 2003, which I believe was the first year of the old test, the score report says "sample." For the years 2005-2010, the scores are based on a sample of test takers in that same year. In 2012, it says the scores are based on test takers from the prior year.
2003 language - Percentiles are based on the verbal, math, and writing skills scores earned by a sample of college-bound juniors or sophomores who took the PSAT/NMSQT in 2003.
2005-2010 language - Percentiles are based on the critical reading, math, and writing skills scores earned by a sample of college-bound juniors or sophomores who took the PSAT/NMSQT in 2005.
2012-2014 language - Percentiles are based on the critical reading, mathematics, and writing skills scores earned by college-bound juniors or sophomores who took the PSAT/NMSQT in the previous year.
What we need is a true 2014/2015 SI concordance, and that's the one thing the CB has not (really) provided. It's not quite the same as the concordance charts they've given, even though the old SI was equal to the old whole score.
I’m having a hard time understanding why the college board is using the previous year’s scores to give the percentile data. Can someone explain this? Is it because they are trying to keep the percentiles consistent between the two test dates and so they are using the data from students who tested the prior year and did those questions in an experimental section?
@mathyone I don’t understand what you mean. When the old test was new, they used a sample of actual test takers from that same testing year. They did this until 2012. Then, they started using students’ scores from prior years.
@mathyone & micgeaux – Regarding what percentiles are being used - I had noted above: http://talk.collegeconfidential.com/discussion/comment/19181978/#Comment_19181978
I do not see an indication that 2015 percentiles are based on last year's results. But the student reports reference a "sample" for the user percentiles, so I am not sure what pool the CB used for its full report. I just find it hard to believe they would give SI percentiles based on data they are not really verifying for a major portion of test takers, including students in typically high-performing states. I wonder if Test Masters, Prep Scholar, or others are also analyzing the recently released data.
The latest update from the CB indicates that 1.5 million juniors took the test; I guess they rounded it to a nice number. Anyway, there have been many reports that the number of students who took the test this year was much higher than before. Does that mean all the additional students were sophomores or younger? We also hear the test was easy. And since the test was on a school day, not as many took it, meaning the motivated ones likely took it. Based on the SI percentiles (for example, an SI of 214 being the lowest SI for the 99+ percentile), do the SI percentiles support those assumptions?
One possible way to reconcile the p. 11 percentiles with the CB concordance tables is if p. 11 is built on the National percentiles (i.e., the distribution projected for ALL juniors), while the concordance tables were built on the actual test takers. At my son's school, juniors were advised to take the ACT and think twice before taking the PSAT, so many didn't take the PSAT. It's possible that fewer people took the PSAT and those who did were better prepared or had more at stake. So if p. 11 had been built on the actual test takers' results, maybe it would have been in agreement with the concordance tables. And if the concordance tables are a better predictor of the cutoff scores, then the cutoffs will be very high this year.
I know the test is supposed to concord from year to year, but is there a difference between using just 2014 data as opposed to 2003-2014 combined data? Because the table says it concords to 2014 and prior.
My guess is that the percentile tables are based on a sample of actual test takers. That is what they did in 2003 which was the first year of the new test. I realize then that the concordance would be based on the prior test, but the percentiles would be based on actual.
"My only hope is that the concordance tables say preliminary while the percentile tables do not. And, if they revamp the percentile tables, do all the students then get new score reports? (I doubt that.) "
THIS IS AN EXCELLENT POINT.
Since the concordance tables don't jibe with the SI % tables, and the concordance tables are clearly preliminary, when in doubt we should trust the SI tables.
It's just a hunch, but I think that the delay from the December release to the January release was because they were trying to figure out the concordance tables, since those are the HARDEST to figure out. The % scores should be just a matter of comparing today's scores against either the "sample" of test takers or the actual test takers (whatever they decided to use). The concordance tables require them to quantify the difficulty of today's test versus last year's, which is more difficult to do.
I don’t think the CB is going to redo the % tables and republish that info; they would really have egg on their face. Therefore, the concordance tables are the ones most likely to change. I totally agree with @micgeaux.
yes, I think the CB said they would finalize the concordance tables in May - I don’t see anything making this report preliminary: https://collegereadiness.collegeboard.org/pdf/2015-psat-nmsqt-understanding-scores.pdf
so I think the SI percentiles and other info reported in the above link will stay as they are currently. Not sure how more state-by-state info will trickle out, though, beyond various guesstimates around cutoffs, until September.
Do you think they will revise in May based on March new SAT data?
Do you think the concordance is only for colleges to use because, until a few years go by, they are used to the old test and "what it means"? But for the percentile scores, they need actual test takers.
And for our purposes here, we may not care if they revise the concordance tables every day. The concordance might say our child would have scored only a 214 on the old test, while the percentiles say they would have scored in the 225 range (99+).
@CA1543 The concordance tables on pages 22 and forward clearly have the preliminary watermark and the word preliminary is used to describe them on page 20.
Page 11 (SI %) does not say preliminary, and that term is not used to describe it anywhere. Therefore, I think those percentiles will stick, and any statistical analysis done by all the REALLY SMART people on this site should take that into account. Where the concordance tables and the % rankings disagree, I have more faith in the % rankings in this doc (p. 11):
Also, if you read page 20, the CB seems to want people to avoid concording in both directions (they must know their tables make no sense and don't jibe both ways), and they are saying you should preferably concord your 2014 score to 2015. Which, as we know, gives a huge range because of the lower granularity in 2015.
"To translate scores on the current and redesigned assessments when some students have taken
one and some have taken the other. (Consistently concord scores in one direction, preferably
PSAT/NMSQT from 2014 and earlier to redesigned PSAT/NMSQT [2015 and future].) "
Based on those p. 11 values we have some bread crumbs to guide us. A 199/200 from this year = 202 from last year, a 205 from this year = 213 from last year, and a 214 from this year = 224 from last year, based on the SI percentiles. Notice that the previous year's values are very evenly spaced (213 - 202 = 11 units and 224 - 213 = 11 units), while this year's data is skewed when carving up the data at the same percentiles (205 - 199/200 = 5-6 units and 214 - 205 = 9 units). I am thinking that there is a disproportionate number of scores between 199/200 and 205 this year. Even with the scaling, the scores this year are skewed, so the normal z-score methods don't work; they don't take into account the relatively shortened distance needed to reach that 99% cutoff. Mathematically, your standard deviation estimates are off when you assume normality to derive them. This is why I think the attempts at concordance tables result in some of the higher estimates being flawed. I believe the sliding scale that was provided in some posts earlier is going to give the best estimates; that is what I am seeing in the data.
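To make the "sliding scale" idea concrete, here is a minimal sketch of what that percentile-matching looks like as piecewise-linear interpolation between the three anchor pairs quoted above. The anchor values are the ones from this thread (not official CB data), and the function name is just illustrative; the point is that because 5-6 and 9 new-scale units each cover 11 old-scale units, the old-to-new offset is not constant, which is exactly why a fixed z-score/normality assumption overshoots.

```python
# Hypothetical sliding-scale sketch: map a 2015 SI to a 2014-equivalent SI
# by matching the percentile anchors quoted in this thread (assumed, not
# official College Board data): 200 -> 202, 205 -> 213, 214 -> 224.
ANCHORS_2015 = [200, 205, 214]
ANCHORS_2014 = [202, 213, 224]

def si_2015_to_2014(si):
    """Piecewise-linear interpolation between the percentile anchor pairs."""
    if si <= ANCHORS_2015[0]:
        return float(ANCHORS_2014[0])
    if si >= ANCHORS_2015[-1]:
        return float(ANCHORS_2014[-1])
    for x0, x1, y0, y1 in zip(ANCHORS_2015, ANCHORS_2015[1:],
                              ANCHORS_2014, ANCHORS_2014[1:]):
        if x0 <= si <= x1:
            # Slope differs per segment: 11/5 on the low segment vs 11/9
            # on the high one, so the mapping is NOT a constant offset.
            return y0 + (si - x0) * (y1 - y0) / (x1 - x0)

print(si_2015_to_2014(205))  # 213.0 (anchor point)
print(si_2015_to_2014(210))  # roughly 219.1, not 205 + constant offset
```

A constant-offset (or single z-score) model would add the same number of points everywhere; this sketch instead stretches each segment independently, matching the uneven spacing described above.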
yes - I was agreeing - concordance tables are preliminary - might change in May.
Whereas the “Understanding Scores 2015” report just issued online this week does NOT say the SI percentiles are preliminary, so they seem very unlikely to change (but there is also no info verifying they are based on a large group of the actual test takers; it seems odd not to state where the data and percentiles really come from): https://collegereadiness.collegeboard.org/pdf/2015-psat-nmsqt-understanding-scores.pdf
At some point, hopefully, we’ll find out the number of students (in each of the junior/sophomore classes) who took it in each state and how that compares to last year. I thought the numbers were higher than the 1.5M referred to by the CB.