@Data10 and @SatchelSF to clarify, I realize that the NYC schools are using the scores only, but I meant the “other” options to be the other schools that are actually looking for talented URMs (I guess privates?). My next door neighbor’s son goes to an elite private school, and there are students in his school who are there on scholarship from underrepresented communities. So to @SatchelSF’s point of finding these kids earlier…I agree, and maybe we could help them get recognized by some of these private schools that offer scholarships. But I’m sure there are other smart, hardworking kids who might not even be considered “gifted,” but who have strong enough grades and work ethics to get some money from the colleges that offer need based aid, and there are a lot of them. The problem, of course, is that it’s usually the most elite schools that offer need-blind admissions and meet full demonstrated need…and they only have so many spots.
I still think that there could be more done (and yes, maybe we start earlier) to match some poorer students from public high schools (URMs or not) with colleges that would be willing to help them out financially. But they have to find each other. And I don’t just mean the gifted kids that the Ivies recruit through QuestBridge. And I know a lot of colleges have outreach programs, and that’s great. I just feel like maybe we are missing opportunities to educate the kids and their families on some proactive things they could be doing to find more opportunities in higher education.
I was helping a student a few years ago with her college applications. She had an above average GPA and a 1900 SAT. Nothing amazing but very decent. She was at a very competitive high school (her legal immigrant parents rented an apartment in one of the nicest towns in the state so that she, their only daughter, could have a good education).
When I met with her at the beginning of her senior year, her guidance counselor had very mediocre schools on her list…none of them met full demonstrated need or were need blind. Anyway, this girl loved NYU and we went for it. She wrote a great essay. Well, lo and behold, she got in and got nearly a full ride based on her family’s financial need. She is there now and is thriving.
Had the mom not met me, I’m not sure where they would have ended up. She was my sister’s nanny; that’s how we met. But I bet there are more kids out there who, if they had more “awareness” of some of the opportunities, might be able to really position themselves for success.
I think as a society we are making some progress on this.
@Data10 - these schools and the debate over the test are a fascinating subject, really a microcosm of what is happening in education today in many ways. You really should read those City Journal articles I linked to somewhere upthread. As you can tell, I know these schools first hand over decades (although obviously I haven’t been as close for some time). The original sponsor of the 1971 legislation, John Calandra, was a family friend, back in an era when at least a few NYC and state politicians lived in the regular run down, working class areas that their constituents lived in. There is nothing new under the sun. I highly doubt de Blasio will succeed in his attempt to dismantle these schools.
Skimming through the paper, it looks like SHSAT score explained ~3% of the variance in FGPA at Science. Math SHSAT score explained only 0.2% of variance. Achievement tests were a much better predictor of FGPA at Science than SHSAT, explaining 3x the variance as SHSAT. There are obviously range restriction issues, but this level of predictive ability and achievement tests being much more predictive than SHSAT suggests there are better options for predicting academic success at the high schools than just SHSAT score alone. This weak predictive ability may be a significant contributing factor to the relatively small differences between races.
Having holistic admission criteria does not always consign minority groups to the bottom of the class more than admission by SHSAT. For example, with 133 Black students, Brooklyn Tech has by far the largest sample size among the SHSAT schools in the paper, so I’ll use Brooklyn Tech in this example. A comparison of GPA by race at Tech and at Harvard undergrad is below, in terms of SDs from the mean. Harvard GPAs are based on the 2015 Senior Survey (the most recent one I’ve seen that separated by race). Brooklyn Tech and Harvard show nearly identical SD differences by race, in spite of having completely different admission policies.
One group that does show more significant GPA differences between the high schools and Harvard and is tremendously over-represented among the bottom of the class at all 3 of the specialized high schools is males. The bottom 10% of the class at Stuy and Science is nearly 90% male. And the bottom quartile of the class is ~80% male at all 3 high schools. As mentioned above, Black students as a whole at Tech had GPAs -0.4 SDs below the mean, and Black women at Tech had lower average GPAs than women of all other races. Nevertheless, Black women at Tech had higher average GPAs than Asian men, White men, and males of all other races. In contrast, males and females have little difference in average GPA at Harvard. I suspect this difference primarily relates to the high schools not considering middle school grades in admission. Women tend to have higher GPAs at all levels, so it is likely that the admits with lower middle school grades are mostly male, and those admits with lower middle school grades are likely over-represented among those with lower HS grades. Again this is suggestive that the predictive ability can be improved by considering additional criteria beyond SHSAT alone, particularly middle school grades.
@Data10
Thanks for diving into the SHSAT study. I definitely appreciate your insights. Just a few thoughts in response to your post #1829 above.
The sex differential in GPA at all levels of education is interesting, but maybe not so relevant for this thread. There are a number of plausible reasons ranging from innate or learned differences (and greater male variance) in behavior right through to modern course design favoring female strengths and perhaps even some implicit discrimination or increased motivation by females to “prove themselves in a male-dominated world,” as I read somewhere or other. This phenomenon of greater female GPA, as you noted, is extremely widespread. I’m sure you are aware of the other major sex difference observed forever on standardized tests of cognitive ability: males will be disproportionately overrepresented at the top and the bottom, and the further away from the mean, the greater the overrepresentation. It even shows up in the SAT (of course, why wouldn’t it, as it shows up seemingly everywhere?). For instance, take a look at the upper and lower reaches of the score ranges for the 2013 SAT here: http://media.collegeboard.com/digitalServices/pdf/research/SAT-Percentile-Ranks-Composite-CR-M-W-2013.pdf. Again, greater male variance, fatter “tails,” here in cognitive ability rather than behavior. Very ho-hum to people who look at this stuff.
For the NYC specialized high schools, enrolled females did worse on the SHSAT (by about 10 points), and the study implies that despite their having achieved better grades in largely the same classes as males they go on to do worse on the SAT as well (the author has the SAT data, doesn’t publish them, but does note on p. 55 the very high 0.83 correlation between the two tests in his database). This is consistent with the author’s noting that the females achieve higher grades in most AP classes (Table 14, p. 101), but do worse on almost all AP exams; being better students does not obviously translate into increased mastery. The exceptions – where the females did better – were as expected (Psychology, English Lit., Environmental Science, Human Geography, in that order; Table 13, p. 100), conforming to stereotype even though these females were selected for some of the best and most rigorous STEM schools in the country.
Some specific quibbles and observations.
Thorndike corrections change those explained variation figures quite a bit. The author points out that after correction for restriction of range, the SHSAT alone explains 27%, 30% and 55% (!) of the variance at Science, Stuyvesant and Brooklyn Tech, respectively (p. 76). The author also notes that these are likely underestimates because of non-linearity low in the possible score ranges (p. 77). (Again, the author had the whole spread of realized scores across all 27,000+ students, so this comment is not in there by chance.) After correction, those figures are about as high as you are ever going to see for a social science (human attribute) variable.
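For anyone who wants to see what a correction for restriction of range actually does to a correlation, here is a minimal sketch of the standard Thorndike Case II formula. The input numbers are purely illustrative, not the paper’s actual figures: an observed r of 0.17 corresponds to ~3% variance explained in the admitted group, and the assumed 3:1 SD ratio between the applicant pool and the admitted group is made up for the example.

```python
import math

def thorndike_case2(r_restricted: float, sd_ratio: float) -> float:
    """Correct a correlation for direct range restriction on the predictor.

    r_restricted -- correlation observed in the selected (admitted) sample
    sd_ratio     -- predictor SD in the full applicant pool divided by the
                    predictor SD in the selected sample (>= 1)
    """
    r, u = r_restricted, sd_ratio
    return r * u / math.sqrt(1 + r * r * (u * u - 1))

# Illustrative only: r = 0.17 in the admitted group (~3% variance explained),
# with the applicant pool's SHSAT SD assumed to be 3x the admitted group's
r_full = thorndike_case2(0.17, 3.0)
print(round(r_full, 2), round(r_full ** 2, 2))  # corrected r and r^2
```

The point being that the same small observed correlation in a tightly selected group implies a much larger one in the full applicant pool, which is how ~3% of variance explained among admits can become 27–55% after correction.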
I don’t have a real problem with the idea of adding achievement test scores, since although they are poor substitutes for measures of innate reasoning ability, they are at least objective. However, the author notes that the tests have changed since the Common Core introduction (his data set is from 2008), so there’s really no validation for today’s landscape. In any event, that is not going to mollify the critics of the SHSAT admissions policy. The author notes that in a constructed admissions metric (50% SHSAT, 50% achievement tests), “shifts in ethnicity [at Stuyvesant would occur], with the percentage of whites increasing from 22.5% to 25.1%; African-Americans increased from 1.2% to 1.6%; the representation of Hispanics increased from 2.6% to 5.2%; the number of Asian students decreased from 73.5% to 67.8%.” (p. 66)
Adding middle school grades is a nonstarter, as they are not reliable in the NYC school system (50+ years of “social promotion” of minority kids have left a devastating legacy). Ditto for interviews and recommendations, which will of course be massaged to meet political goals. The author himself notes that “Students with scores one hundred or more points below the SHSAT cutoff may be unable to do the work at these schools.” It’s probably no coincidence that the cutoff for the affirmative action program at Brooklyn Tech (463) is exactly 100 points below the regular Stuyvesant cutoff that year (p. 45). There just aren’t enough qualified black and Latino kids in NYC to raise the percentages appreciably, especially given how heavily recruited the best minority kids are going to be by the private and parochial systems.
I don’t understand your point. The study showed that at at least one of the specialized schools (Science), blacks achieved average FGPA higher than all other groups, and you responded with survey data from holistic Harvard showing that blacks are in fact at the bottom of the class at Harvard, GPA-wise?
Anyway, those Harvard data suffer selection and self-reporting biases (and no adjustment for rigor or course selection in a very diverse university, as opposed to a more restricted curriculum high school). Nevertheless, it’s disappointing that despite Harvard’s having access to the whole range of “holistic” criteria its black undergraduate students seem to have the worst grades. And by precisely the same amount (-0.4SD) as the single test SHSAT admission policy at Brooklyn Tech produced! (Harvard did much worse than Science, evidently.) It’s probably a good thing, of course, that realized GPA is not the sole goal of an admissions system.
Statistics is not my strong point, so I apologize if I am misreading the excellent discussion. I believe the posts above are saying that females have consistently higher GPAs across both high school and college. However, they don’t perform as well on standardized tests. Rather than indicating that grades are not a good indicator of mastery, doesn’t that indicate that standardized testing is not a great indicator of mastery? From my own personal experience, it seems as if test taking is a specialized skill of its own. I am an excellent test taker. I often amaze my kids by being able to score well on their practice tests in subjects I haven’t taken. It isn’t that I know more than they do. I would not be able to perform well on an essay or oral exam, but there is a particular rhythm to these standardized tests.
I also think the standardized tests reward speed, yet speed is not necessarily the most important skill outside of the test arena.
@gallentjill that’s an excellent point and I was thinking the same thing after reading some other posts above.
There are some assumptions being made that the tests are the true indicator of intelligence or more importantly, potential success. I’m not sure this is the case. And how we define potential success is very ambiguous, as it should be.
The higher GPAs might imply a stronger work ethic and the ability to manage projects more successfully. In reality, students are going to school to end up in a career that affords them a nice lifestyle. When you show up at work, you don’t sit there and take tests all day long. You manage projects, people, and relationships. We shouldn’t lose sight of that in this debate.
And how we measure success is also up for debate here. Yes, we can try to track things like Rhodes Scholars, CEOs, etc., but at the end of the day, there are many other talented and smart people that are making good things happen in this world. If someone with lower than average test scores ends up in a lower paying job or a less powerful job after college, who cares??? Are they able to support a family and give back to their community? Shouldn’t that be enough? We shouldn’t be measuring how far everyone goes…that’s not always the point.
…and again, being black explains being at the bottom of the class in and of itself (less intelligent race!) but when it’s pointed out that men make up the bottom too it must then be the way courses are designed or some hidden bias on the part of their teachers. Neither of which ever apply to black student GPAs.
My apologies for the clumsy use of the word “obviously” in my post. What I was getting at is the idea that true mastery should be evident on any reasonable measure. In other words, it’s not obvious that higher grades mean that a student is more proficient in a subject, if objective high stakes tests don’t confirm it. (This is the converse of your argument that it’s not obvious that standardized tests are a measure of mastery, if grades don’t confirm.) Lost in this reverie is the fact that both tests AND grades are constructed tasks. We can easily tailor each to favor one or the other sex.
Why should we prefer grades in a classroom setting as opposed to a high stakes test? If girls are systematically better in the classroom, perhaps because they show more conscientiousness or agreeability or some other behavioral trait, isn’t it discriminatory against boys to favor grades? (This is the reverse of the argument made about weighting quantitatively challenging standardized testing in admissions decisions.) We see this in the NYC specialized high schools versus the screened schools. The test-in schools are about 55% male, while the screened schools (which take account of grades, among other metrics) are roughly 60% girls. My personal belief is that neither tests nor grades really matter too much - it is subsequent actual performance in a field of endeavor or a subject that validates proficiency and ability; tests (and grades, which are just long-form tests in a way) are only valid to the extent they predict subsequent performance, but of course this raises the issue of what metric is appropriate to measure subsequent performance…
You imply that at the margins boys are faster than girls on standardized tests, which might be true. Who really knows? But is speed really an issue on AP tests, and, if so, why do the girls at Stuyvesant do better on the “soft” subject AP tests and the boys - despite their lower grades - better on the more quantitatively focused ones? Shouldn’t the speedier boys do better on all types of high stakes standardized tests like APs? Also, what is fascinating is that boys seem to be both a little better at the highest levels of test performance (as measured by scores) AND a little worse at the lowest levels (that’s why I linked to that SAT score chart by sex, but trust me you will see this phenomenon on practically any cognitively challenging task). I bet this differential is the case in classroom settings as well, although the restriction of grades sometimes obscures this (this was my experience in math classes in college - often it seemed that while in percentage terms girls were a little overrepresented at the “A” level, the truly outstanding student - the one who got a 95 on a test for which the mean was in the 50s - was invariably a boy, who after curving would get the same A as the 3 girls who scored in the 70s; probably on the downside as well, the very lowest scores could have been disproportionately boys!). This is very evident in math competitions, where the data are open for all to see.
@OHMomof2 - I was the one who pointed out that black students at Science have the highest GPAs - definitely not the bottom of the class. I also should have made the implicit point that having an objective admissions standard - like a single score on an objective test - helps to dispel any implication that standards have been relaxed for the students who have to actually be there. I certainly don’t think it would be easy to be one of only, say, 10 black kids entering Stuyvesant in a given year, but every one of those kids can proudly say they earned it like every single other kid in the class. I would think that counts for something. Would it really be easier to be one of 50 in a system in which the standards were lowered for your race? I have linked in the past to the fairly comprehensive law school research that shows blacks at the bottom of first year law school grades, but that’s mismatch, not any implicit bias on the part of the teachers. The grades are based wholly on blind-graded tests. Typically, students’ names do not even appear on them, so it’s hard to see how bias would enter in. Anyway, what makes you think that these schools - which are overwhelmingly progressive and do everything they can to increase minority enrollment and outreach - would then turn around and systematically and unfairly discriminate against them in grading? I’d actually think that if there is bias, it is in the opposite direction (in other words, why would affirmative action stop at the door of the admissions office?).
Well, @OHMomof2, how about Harvard as a progressive school? @Data10 posted the data showing blacks having the lowest average GPA there. Is Harvard really biased against the black students it admitted?
We seem to run into the same problems when we try to define things too much with hard numbers. The numbers don’t tell the whole story when you are talking about human beings. Never. I think that’s the point of holistic admissions, which I know some people like to make fun of, but it’s hard to argue that there shouldn’t be more to assessing a student’s worthiness for “whatever” opportunity than numbers.
Personally, for reasons I stated in a prior post, I don’t really have a problem with the one test score for these three NYC schools. But generally speaking, we cannot just rely on test scores and hard numbers.
The Thorndike corrections attempt to estimate the predictive ability for the full population. Yes, if you admit a portion of students by lottery to Stuy who had random SHSAT scores across the full range instead of 99+ percentile scores, then it’s likely that they wouldn’t do well. I don’t question this. Nevertheless, that does not mean there is no way to predict who will do well among the 99+ percentile scoring students beyond the small 3% variance explained by how far above the admission threshold they scored on SHSAT. The author noted that he could triple the variance explained at both Stuy and Science by simply also considering achievement scores. If they looked beyond scores and considered things like middle school transcript (grades + rigor), I expect the predictive ability could be considerably higher.
Grades aren’t perfect, which is why it helps to use multiple sources of data to draw conclusions, rather than basing admission on a single number. For example, suppose you have one high-SHSAT student with a 98 grade average who took rigorous advanced courses in middle school, and another student with a similar SHSAT who had a 70 grade average, did not take accelerated/rigorous classes, had attendance and homework issues, etc. The 70 grade average kid is no doubt more likely to end up at the bottom of the class at Stuy.
There were only 23 Black students in the sample group at Science. This is a small enough sample size to make drawing conclusions from slight differences in average GPA misleading. Suppose 1 Black student with bad grades left Science and returned to his old HS too early to be included in the GPA calculation. If he was instead included in the sample, that 1 student would have pulled the Black average to the bottom of the races instead of the top. When a single student can completely change the conclusion, it’s not a good sample size.
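To make the small-sample point concrete, here’s a toy calculation (the GPAs are hypothetical, not the paper’s data) showing how much a single excluded student could move a 23-student average:

```python
# Hypothetical numbers, not the paper's: 23 students with an average GPA of 3.5
gpas = [3.5] * 23
mean_without = sum(gpas) / len(gpas)

# Now include one hypothetical struggling student (1.0 GPA) who left the
# school early and was excluded from the published GPA calculation
gpas_with = gpas + [1.0]
mean_with = sum(gpas_with) / len(gpas_with)

print(round(mean_without, 2), round(mean_with, 2))
```

That one student drops the group average by about 0.1 GPA points, roughly the size of the between-group differences being compared, which is why a sample of 23 can’t support strong conclusions.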
In contrast, the Tech sample had 133 Black students, a tremendously larger sample group, one large enough to draw more valid conclusions. And Tech had a nearly identical SD GPA distribution to Harvard’s senior survey – in both cases with URM average GPAs 0.3 to 0.4 SDs below the mean. 0.3 to 0.4 SDs is not necessarily “at the bottom of the class,” at least not for a large portion of the group. Instead it indicates that ~35% of the group is expected to have above average GPAs for the class… a little under the 50% that would occur in a random distribution, but still doing okay in both the SHSAT admission system and the Harvard admission system. Consistent with this, the Black student graduation rate at Harvard is 97%, which is within 1% of other races… It certainly does not appear that having holistic admission always consigns minority groups to the bottom of the class more than admission by SHSAT.
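The ~35% figure follows directly from the normal distribution: if a group’s mean sits 0.4 SDs below the class mean and the group’s spread is similar to the class’s (an assumption, since the survey doesn’t report per-group SDs), the expected share of the group above the class mean is Φ(−0.4):

```python
from statistics import NormalDist

# Share of a group expected to fall above the overall class mean, assuming
# the group is normally distributed with the same SD as the class but a
# mean shifted 0.4 SDs lower
share_above = NormalDist().cdf(-0.4)
print(round(share_above, 3))  # ~0.345, i.e. roughly 35%
```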
I think it depends on what these metrics are being asked to predict. In theory, admission to either a selective college or high school is based on a prediction of how successful a given student is likely to be. It isn’t an achievement award for past accomplishments. If girls score lower on standardized tests but do better in GPA once they are actually at college, it indicates that the test may not be a very accurate measure of what the college is actually trying to predict. Whether that subsequent achievement is due to work ethic or agreeability or innate intelligence or some combination doesn’t matter.
Now, it may be that the test, while imperfect, is a better predictor than GPA alone would be, given the huge variability in the rigor of individual high schools. My only point is that, for admission to high school or college, the admissions criteria should be designed as well as possible to predict success in the subsequent institution.
Of course, that does raise the question of whether the institutions themselves need to be altered to better foster success in different types of students.
Good discussion on these points, @Data10. Again, the Harvard GPA data are survey data, self-reported and subject to all the selection biases inherent in the collection method, while the NYC study data are actual. A bit less than half the graduating Harvard class responded, and not every response included answers to the GPA question. As the Crimson itself noted, “The survey was anonymous and The Crimson did not adjust data for possible self-selection bias.”
Help me out here. Where are you seeing the raw data for the Harvard survey; is there an accessible data appendix? Is there any confirmation that at least the racial groups had similar response rates? I’m not seeing how you extracted the SD info (0.25 of a grade point, assumed) from looking at the visualization under “Making the Grade” here: https://features.thecrimson.com/2015/senior-survey/
I calculated the GPA SD from the current 2018 senior survey, which does show the percentage at each GPA in units of 0.1, and applied that SD to the earlier survey that shows GPA by race. Yes, it’s far from an ideal data source and there are lots of things that would be useful to control for, but it’s the only one I’ve seen for Harvard GPA by race.
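For readers wondering how an SD can be backed out of survey bins at all: with GPA reported in 0.1-unit increments, you can take each bin’s value and response share and compute the mean and SD directly. The bin shares below are made up for illustration, not the Crimson’s actual numbers:

```python
import math

# Made-up binned GPA distribution (bin value -> fraction of respondents);
# the real survey reports GPA in 0.1 increments like this
bins = {3.2: 0.05, 3.3: 0.10, 3.4: 0.15, 3.5: 0.25,
        3.6: 0.20, 3.7: 0.15, 3.8: 0.10}

mean = sum(gpa * frac for gpa, frac in bins.items())
variance = sum(frac * (gpa - mean) ** 2 for gpa, frac in bins.items())
sd = math.sqrt(variance)
print(round(mean, 2), round(sd, 2))
```

The obvious caveat, as noted, is that this treats survey respondents as representative of the whole class, which self-selection can easily break.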
Do you think the test optional policy now creates a two tier system where non-minority candidates will, in reality, still have to submit scores while minority candidates don’t? Or will talented un-hooked candidates also be able to gain admission without submitting standardized tests?
I’m not asking this to condemn the policy. Honestly, I have quite mixed feelings about it. Clearly there are real issues with standardized tests. I’m just curious about what others think.
The fact that UC wants to attract more applicants is no surprise, is it? I think D15 still gets mail from UC.
But this looks like angling to get low income kids, not necessarily URM kids, @gallentjill - especially when looking at all the specific financial aid promises (tuition free under 125k, 20k summer internships for first gens, stuff for kids of firefighters and cops - mostly white I imagine - and kids of military).
It seems most Ivies and other elites have dropped writing, and subject tests. And allow self reporting of GPA and scores (until decision). All are aimed at a class level, not a race, though of course there is significant overlap.
I hate to sound cynical, but U of C seems to have made a lot of changes recently (last few years) to increase their applicant pool and to look more selective. I know it’s a fabulous school, but it seems like they still have an inferiority complex to the Ivies and are doing all that they can to make sure they don’t lose that #3 spot in US News, even though in the end I think it is easier to get in there than most Ivies, at least from the east coast.
The same could be said of Harvard, which dropped app essay requirements and the SAT essay. Yale recently dropped the SAT essay too.
U of Chicago gets the brunt of these accusations but let’s get real… all schools are looking to boost apps.
Stanford, the most selective university, still requires app essays and the SAT essay… but even Stanford will have to play the game to remain competitive, I predict.