A Harvard Kennedy School grad student just posted a research paper about Naviance suggesting that scattergram information drives kids' college application decisions in predictable, but maybe not optimal, ways.
Two main findings:
If not enough kids from your school have applied to a college to generate a scatterplot, you are less likely to apply. As a result, kids from the same school apply to the same colleges over and over again. Sounds like a "duh" finding, but she uses a nice regression discontinuity design to account for endogeneity (rough sketch of the idea below).
Kids over-anchor on the average accepted GPA "cross-hairs" and are much more likely to apply to a school if their GPA is just over, rather than just under, the average accepted GPA. This over-anchoring narrows kids' mental models of what their feasible set of colleges might be.
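For anyone curious how the regression discontinuity works: Naviance apparently only draws a scattergram once there are enough past applicants, so schools just above that cutoff see a plot while otherwise-similar schools just below it don't. Here's a minimal sketch of the estimate in Python; the cutoff of 10 and the column names are my assumptions for illustration, not details from the paper.

```python
# Hypothetical RD sketch: the jump in application rates at the
# applicant-count threshold where Naviance starts showing a scattergram.
import pandas as pd
import statsmodels.formula.api as smf

CUTOFF = 10  # assumed minimum past applicants before a plot appears

def rd_estimate(df: pd.DataFrame, bandwidth: int = 5) -> float:
    """Local linear RD estimate of scattergram visibility on applying.

    Assumed columns: 'n_past_applicants' (running variable) and
    'applied' (0/1 outcome for a student-college pair).
    """
    d = df.copy()
    d["centered"] = d["n_past_applicants"] - CUTOFF
    d["visible"] = (d["centered"] >= 0).astype(int)
    local = d[d["centered"].abs() <= bandwidth]  # stay near the cutoff
    # Separate slopes on each side of the cutoff; 'visible' is the jump.
    fit = smf.ols("applied ~ visible * centered", data=local).fit()
    return fit.params["visible"]
```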
The corollary of this paper’s thesis would also be interesting to see:
What are the admission odds to specific universities given how many students from their high school applied in prior years?
I expect the universities would favor those high schools they are familiar with. This bias may prove to be as important as the student-specific criteria.
Naviance has a flaw: it doesn't account for the increasing number of test-optional schools. The scattergrams show the student's SAT/ACT score and GPA and assume the score was submitted, so the results can be misleading. Plus, so much of admission is holistic; take scattergrams with a grain of salt.
^ So if the Naviance results are misleading, then student reliance on them to make application choices as shown by the researcher is an even bigger deal.
It might be interesting to see the differences among schools of different sizes. My daughter attends a very small new school (graduating classes under 100 students) with only 4 years of application history. Many schools don’t have enough data to provide school-specific scattergrams, and when they do, the results are likely to be very different from those at a much larger school. As a new school trying to develop a reputation, their guidance department is encouraging the kids to look far and wide, and it definitely isn't discouraging high-reach applications if they think the kids have a shot. Last year saw the school's first MIT acceptance, and two got in this year EA (plus one more who is an even stronger candidate, but he hasn’t shared his results).
Agree with #1. I watched guidance counselors convince high-stat students with defined academic and career goals that they needed to ignore Naviance and build their own ‘fit’ school list. Top-of-class students’ accomplishments and interests vary by year, so there is often no recent data available.
I can’t imagine the Naviance data for my school is accurate. There are schools my kid didn’t apply to that we can’t remove, and our poor public school counselors barely get letters in for the Common App, let alone deal with kids telling them to remove schools from Naviance. Our particular Naviance portal sucks, and it seems like we’d need our overworked counselors to really be on top of it. Like everything else, I’m sure private school Naviance data is awesome and accurate.
To counter the over-anchoring problem, it might be helpful to show color-coded curved bands on the scattergrams in which, say, >70%, 30-70%, and <30% of kids were accepted. It would not be difficult for the software creating the scattergrams to calculate and map those bands.
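For instance, here's a minimal sketch of the banding step in Python, assuming access to the raw points behind a scattergram; the logistic fit and the 30%/70% cutoffs are just illustrative choices, not anything Naviance actually does:

```python
# Estimate acceptance probability over the GPA/score plane, then shade
# three bands: <30%, 30-70%, and >70% predicted chance of acceptance.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

def plot_bands(gpa, sat, accepted):
    """gpa, sat: arrays for past applicants; accepted: 0/1 outcomes."""
    X = np.column_stack([gpa, sat])
    model = LogisticRegression().fit(X, accepted)

    # Predicted acceptance probability on a grid covering the data.
    gg, ss = np.meshgrid(np.linspace(gpa.min(), gpa.max(), 200),
                         np.linspace(sat.min(), sat.max(), 200))
    p = model.predict_proba(np.column_stack([gg.ravel(), ss.ravel()]))
    p = p[:, 1].reshape(gg.shape)

    # Color-coded bands behind the usual accepted/denied markers.
    plt.contourf(gg, ss, p, levels=[0.0, 0.3, 0.7, 1.0],
                 colors=["#f4cccc", "#fff2cc", "#d9ead3"])
    plt.scatter(gpa, sat, c=accepted, cmap="RdYlGn", edgecolor="k")
    plt.xlabel("GPA")
    plt.ylabel("SAT")
    plt.show()
```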
Another helpful metric would be a concentration index for the school. For example: over the past three years, 26% of the graduates have attended just 2 colleges, and 76% of the graduates have attended 20 colleges. How “bunched” is attendance at a small number of colleges?
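One standard way to measure that bunching is a Herfindahl-Hirschman index: the sum of squared attendance shares across colleges, ranging from near 0 (spread across many colleges) to 1 (everyone at one college). A quick sketch with made-up numbers:

```python
from collections import Counter

def hhi(college_per_grad: list[str]) -> float:
    """Herfindahl-Hirschman index of where a class of grads enrolled."""
    counts = Counter(college_per_grad)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

# Made-up example: 100 grads bunched at three colleges.
grads = ["State U"] * 40 + ["Flagship"] * 30 + ["Directional U"] * 30
print(f"HHI = {hhi(grads):.2f}")  # 0.40^2 + 0.30^2 + 0.30^2 = 0.34
```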
I would say the scattergram is probably a lot more useful than the other features, like the “gas gauge,” which shows a 34 as a very good score for Harvard even though no kid with that score from our high school has ever gotten in.
Author’s conclusion: “Overall, Naviance leads students’ application portfolios to reflect the set of colleges with visible and relevant information. This increases four-year college attendance for low-income and minority students but deters some students from attending highly selective colleges.”
It looks like I am not allowed to link to the paper or the Twitter thread about it, but the author’s name is Christine Mulhern and you should be able to find it by searching.
DS19’s public school is extremely diverse, in a highly educated area and quite good at sports. So in addition to the problems listed above, we also can’t distinguish which applicants were recruited athletes, legacies, URM, etc. In the end, Naviance is mostly useless for us.
The scattergrams are actually quite useful; for the elite schools there’s a sea of red "X"s and maybe one green check. They should absolutely get rid of the “fuel gauge,” which says very good chances for HYPMS with near-straight As and a 34 ACT, something the scattergram doesn't reflect.
And yes, many of the green checks are URMs from our public school, but word of mouth (if parents talk) helps fill in that information.