Little Evidence Found That Academic Mismatch Alters Graduation Rates

<p>While the study of genetics is interesting, I find that those who try to ascribe various traits to genetics are too often closed-minded and dogmatic. It’s as if it’s a religion that they <em>must</em> believe in. Yet we now know that environment affects gene expression in almost everything and every manner. </p>

<p>College admissions does not look at SAT score alone, in a vacuum. Colleges at least consider GPA, and often consider LORs, essays, personal qualities, etc. For college admissions, the relevant criterion is how well the combination of admission criteria predicts academic success, not SAT alone. Test-optional colleges have a different model, in which they consider how much the prediction of academic success changes without the SAT. Note that this is exactly the case covered in the studies that control for other sections of the application: they look at how much predictive power is lost if you consider all other aspects of the application except the SAT. Bates has a good summary of how academic success, and the prediction of academic success, changed upon going test optional at <a href="http://www.bates.edu/news/2005/10/01/sat-study/">http://www.bates.edu/news/2005/10/01/sat-study/</a>. Note that they state:</p>

<p>“Testing is not necessary for predicting good performance; the academic ratings assigned by Bates admissions staff are highly accurate for both submitters and non-submitters in predicting GPA.”</p>

<p>Suppose one were to look at the correlation between height and switching out of an engineering major. The studies mentioned in this thread suggest one would find a correlation: as height decreases, the chance of switching out of an engineering major increases. However, when you control for gender (and possibly athletic status), I’d expect this correlation to disappear. Among students of the same gender and athletic status, height stops having significant predictive power for switching out of an engineering major. The controls make it clear that height isn’t the driving force behind the correlation; the driving force is gender (and possibly athletic status). Adding controls leads to a completely different, and more accurate, conclusion.</p>
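<p>This thought experiment is easy to check with a minimal simulation. In the sketch below, all numbers are invented for illustration: gender drives both height and switching, height has no effect on switching at all, and yet the raw height correlation shows up clearly before vanishing within each gender.</p>

```python
# Hypothetical simulation of the height/engineering example: gender drives
# both height (men taller on average) and switching (higher rate for women),
# so height correlates with switching only through gender.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

heights, switched, genders = [], [], []
for _ in range(20000):
    male = random.random() < 0.5
    h = random.gauss(70 if male else 64, 3)            # inches; men taller
    # switching out depends on gender only, never on height
    s = 1 if random.random() < (0.2 if male else 0.4) else 0
    genders.append(male); heights.append(h); switched.append(s)

raw_r = corr(heights, switched)                        # clearly negative
within_rs = []
for g in (False, True):
    hs = [h for h, gg in zip(heights, genders) if gg == g]
    ss = [s for s, gg in zip(switched, genders) if gg == g]
    within_rs.append(corr(hs, ss))                     # both near zero

print(round(raw_r, 3), [round(r, 3) for r in within_rs])
```

The raw correlation comes out around -0.15 even though height never enters the switching rule; within either gender group it is statistically indistinguishable from zero.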

<p>It’s a similar idea with test scores. Many studies mentioned in this thread show how much predictive power is lost when you consider everything except test scores, and it is not very much. This suggests that, as in the height example above, the driving force behind the predictive power was not test scores; instead, test scores correlate with other factors that had a greater influence. Controls aren’t evil. They increase knowledge and understanding of how and why variables relate to one another.</p>

<p>Unweighted HS GPA alone usually explains more variance in measures of college success than SAT score alone. For example, the Geiser UC study found GPA alone explained 20.4%, while SAT I alone explained 13.4%.</p>

<p>I expect your rule (that students with a high SAT are more likely to have a high GPA than students with a high GPA are to have a high SAT) only exists because you are defining “high” inconsistently between GPA and SAT. For example, suppose you define “high” as the top 20% of the sample group, so persons with a “high” GPA have a GPA in the top 20% of the group, and persons with a “high” SAT have an SAT in the top 20% of the group. I’ll use Parchment members who apply to UCLA as an example:</p>

<p>Among students with a high (top 20%) GPA, 41% had a top 20% SAT, 60% had a top 40% SAT, and 77% had a top 60% SAT. </p>

<p>Among students with a high (top 20%) SAT, 41% had a top 20% GPA, 57% had a top 40% GPA, and 80% had a top 60% GPA.</p>

<p>High-GPA applicants had 41%/60%/77% in the different SAT groups, and high-SAT applicants had 41%/57%/80% in the different GPA groups. The percentages are nearly identical in this Parchment UCLA applicant sample, suggesting that high-SAT applicants are no more likely to have a high/med/low GPA than high-GPA applicants are to have a high/med/low SAT.</p>
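<p>The symmetry is not a coincidence: if “high” means the same top-20% cutoff on each scale, the two conditional percentages share the same numerator (students high on both) and equal-sized denominators, so they must come out equal. A quick sketch with simulated data (the 0.6 GPA/SAT correlation is an assumed value, not from the Parchment sample) shows this:</p>

```python
# Sketch of why percentile-based "high" cutoffs behave symmetrically:
# P(high SAT | high GPA) and P(high GPA | high SAT) share a numerator
# and, with identical top-20% cutoffs, equal-sized denominators.
import random

random.seed(1)

n = 10000
r = 0.6  # assumed GPA/SAT correlation, for illustration only
gpa, sat = [], []
for _ in range(n):
    z = random.gauss(0, 1)
    gpa.append(z)
    sat.append(r * z + (1 - r * r) ** 0.5 * random.gauss(0, 1))

def top_frac(xs, frac=0.2):
    """Indices of the top `frac` share of the list."""
    cutoff = sorted(xs, reverse=True)[int(len(xs) * frac) - 1]
    return {i for i, x in enumerate(xs) if x >= cutoff}

hi_gpa, hi_sat = top_frac(gpa), top_frac(sat)
both = hi_gpa & hi_sat
p_sat_given_gpa = len(both) / len(hi_gpa)
p_gpa_given_sat = len(both) / len(hi_sat)
print(round(p_sat_given_gpa, 3), round(p_gpa_given_sat, 3))
```

With the same percentile cutoff on both variables, the two conditional shares are identical by construction, matching the near-identical 41%/41% figures above.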


<p>If you believe Tetlock, the results are likely to be disappointing:</p>

<p>“Everybody’s an Expert” | The New Yorker</p>

<p>Quote:
In one study, college counsellors were given information about a group of high-school students and asked to predict their freshman grades in college. The counsellors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate.</p>


<p>Strawman. They never claimed that. Here is a piece from the good old days. I find their position nuanced and cautious. It is their critics that are dogmatic and often hysterical.</p>

<p><a href="http://www.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf">http://www.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf</a></p>


<p>The author was writing for lay people. It appeared in Scientific American. Here is the academic paper equivalent from the same author that answered a lot of your questions:</p>

<p><a href="http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf">http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf</a></p>


<p>Is variance not just the average of the squared deviations? It may be expedient to use because you can add the variances of uncorrelated variables, but it also creates perceptual distortion (in terms of size, that is). Don’t you think it would be better to talk in terms of SD? </p>


<p>I would prefer to look at what the most sought-after jobs are looking for. Jim Manzi talked about how Bain and BCG recruit, and they most certainly use SAT scores and GPA. He also said “A GPA-plus-major screen is not about IQ, as much as it is a quick screen to see who is capable of figuring out how to succeed in a new environment, and of doing at least some sustained work.” </p>

<p><a href="http://www.nationalreview.com/corner/285160/how-elite-business-recruiting-really-works-jim-manzi">http://www.nationalreview.com/corner/285160/how-elite-business-recruiting-really-works-jim-manzi</a></p>

<p>Google’s Laszlo Bock was saying that the number one thing they are looking for is “general cognitive ability, and it’s not I.Q. It’s learning ability. It’s the ability to process on the fly. It’s the ability to pull together disparate bits of information.” </p>

<p><a href="http://www.nytimes.com/2014/02/23/opinion/sunday/friedman-how-to-get-a-job-at-google.html">http://www.nytimes.com/2014/02/23/opinion/sunday/friedman-how-to-get-a-job-at-google.html</a></p>

<p>Talk about doublespeak! Reading between the lines, they are looking for intelligence, and standardized testing is the best way to find it, pure and simple.
All told, I think we just have to agree to disagree. It has been a pleasure, though.</p>

<p>@Canuckguy:</p>

<p>Depends on who you read. I’ve read from a bunch of insecure authors, I suppose.</p>

<p>Also, what would be more interesting is looking at what method predicts success in life after college (instead of just college GPA) better.</p>

<p>HS GPA is a poor predictor of income after college, if I recall correctly (socioeconomic status is a strong one; trumps test scores, if I recall correctly–especially during major recessions/depressions).</p>

<p>The paper mentions that the actual numbers listed in the chart were copied from the book The Bell Curve. The book got the numbers by looking at life outcomes of persons in the National Longitudinal Survey of Youth who took the Army vocational test a few decades ago, and estimating IQ based on scores on this army test. Other studies have found that Army vocational test results in this NLSY sample group have tremendously stronger predictive power than IQ. For example, the study at <a href="http://home.uchicago.edu/~tkautz/Heckman_Kautz_2012_Hard%20Evidence.pdf">http://home.uchicago.edu/~tkautz/Heckman_Kautz_2012_Hard%20Evidence.pdf</a> compares the predictive power of 9th grade GPA, IQ, and Army vocational test scores against various measures of life outcomes. While none of the measures were strong predictors of life outcomes, the army test was by far the most predictive of the three; 9th grade GPA was next in predictive power, and IQ was least predictive. I’m not familiar with the use of the Army vocational test during this period. If it was primarily given to persons interested in joining the military, rather than to the overall population, then sample-group biases may have more to do with the greater predictive power than the test itself. Some specific numbers are below:</p>

<p>Predictive Validity in Female Earnings at Age 35.
IQ – 1%
9th grade GPA – 5%
Army test – 9%</p>

<p>Predictive Validity in Male Earnings at Age 35
IQ – 7%
9th grade GPA – 9%
Army test – 17%</p>

<p>R^2 is the most common measure of explained variation in statistics and the format used in the referenced study. If you want to instead use the square root, then the study found that adding SAT I increased R by 4.5% in explaining college GPA over a simple HS GPA + SES model, instead of SAT I explaining an additional 4.3% of variance (R^2) in college GPA over a simple HS GPA + SES model. </p>
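<p>The conversion between the two scales is just a square root. A minimal sketch of the arithmetic (the base R^2 of 0.25 below is an assumed value for illustration, not a figure from the study):</p>

```python
# Hypothetical illustration of moving between R^2 and R when a predictor
# is added to a model. The base R^2 is invented; only the 0.043 increment
# echoes the figure discussed above.
base_r2 = 0.25          # assumed R^2 of the simpler model (illustrative)
delta_r2 = 0.043        # additional variance explained by the new predictor

r_before = base_r2 ** 0.5
r_after = (base_r2 + delta_r2) ** 0.5
delta_r = r_after - r_before    # the same gain, expressed on the R scale
print(round(r_before, 3), round(r_after, 3), round(delta_r, 3))
```

With these assumed numbers, R moves from 0.500 to about 0.541, so a 4.3-point gain in R^2 shows up as a similar-sized step on the R scale; the two metrics only look very different near the extremes.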

<p>Management consulting has recruiting methods unlike those of the vast majority of other fields, and it is far from the most sought-after job for the vast majority of grads. For example, Harvard’s senior survey at <a href="http://features.thecrimson.com/2014/senior-survey/">http://features.thecrimson.com/2014/senior-survey/</a> found that fewer than 1% of graduates planned to be doing management consulting in 10 years. Almost all grads look for a career in other fields, although 14% did currently work in management consulting, perhaps starting there until something better comes along. </p>

<p>Counselors looking over a personality test and independently guessing at how it influences college GPA is not equivalent to how a selective college admits applicants. Admissions decisions may seem random at times, but holistic colleges have policies in place, and I’d expect those policies to be reasonably repeatable across different groups of admissions officers at the same college. The colleges we tend to focus on have internal studies documenting how good a job they are doing at predicting academic success and what factors contribute to this success. Several colleges have referenced internal formulas for predicting measures of academic success, such as the Bates study I quoted in my earlier post: “the academic ratings assigned by Bates admissions staff are highly accurate for both submitters and non-submitters in predicting GPA”. Colleges also look at many measures of success besides college GPA and graduation rate.</p>

<p>@PurpleTitan‌ Remember Stephen Jay Gould? He was a darling of the media for his book “The Mismeasure of Man”. Unfortunately his peers thought of him differently:</p>

<p>“Study Debunks Stephen Jay Gould’s Claim of Racism on Morton Skulls” - The New York Times</p>

<p>I read, some years ago, Arthur Jensen’s response to the book, and even I could tell that Gould did not have the mathematical tools for the job. It was embarrassing, but the general public was, and still is, unaware of it.
Your ambivalence is understandable. I think this reporter felt exactly that when she interviewed behavioural geneticist Robert Plomin:</p>

<p><a href="http://www.spectator.co.uk/features/8970941/sorry-but-intelligence-really-is-in-the-genes/">http://www.spectator.co.uk/features/8970941/sorry-but-intelligence-really-is-in-the-genes/</a></p>

<p>@‌Data10 I mentioned variance for the same reason I mentioned that major switching affects all weaker students. You know it, of course, but a lot of people thought it only affects URMs. A lot of people are also unaware that variance is the average of the squared deviations, and are fooled by “size perception”.
Since you are so certain the orthodoxy is wrong, and so sure of your analysis, why not write an academic paper and submit it to a major journal? Who knows, you could be another Zhang Yitang:</p>

<p><a href="http://nautil.us/issue/5/fame/the-twin-prime-hero">http://nautil.us/issue/5/fame/the-twin-prime-hero</a></p>

<p>What a beautiful story that was.</p>

<p>The study mentioned in the original post of this thread found that SAT was somewhat predictive of graduation rate without any controls, but when controls were added, including a measure of GPA, curriculum, and SES, the predictive ability of SAT became quite small… small enough that the author called it not statistically significant. I’ve mentioned many other studies that came to a similar conclusion as the original post’s study. I’m not aware of any study with controls for a measure of GPA/curriculum and SES that has come to a different conclusion. This should not be a controversial result, nor should it differ from the orthodoxy. </p>

<p>The bivariate correlation between SAT midpoint and graduation rate is +.82 based on 1169 colleges and universities. The bivariate correlation between net price and graduation rate is +.46 based on the same schools.
Graduation rates are clearly more closely related to student ability as measured by SAT than they are to actual price.</p>

<p>Net price and tuition are only indirectly related to graduation rate, not causally. Raising tuition would, in fact, probably cause graduation rates to go down because of debt burden.</p>

<p>On the other hand, it seems reasonable to expect graduation rates to go up if the school becomes more selective. </p>

<p>The correlation between SAT and tuition is +.46. Between SAT and net price is +.35.</p>

<p>Sounds like there was some pretty bad thought behind the article referenced in the original post.</p>

<p>Controlling for other variables can lead to spurious results. Controlling for other variables (age, for example), I could probably show that there is no relationship between height and weight in children.</p>
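<p>That claim is easy to demonstrate in simulation. In the sketch below, all numbers are invented: age drives both height and weight, and weight has no direct dependence on height, so the raw height/weight correlation is very strong while the within-age correlations are near zero.</p>

```python
# Simulated children's height/weight data: age drives both variables,
# so "controlling" for age (looking within a single age) nearly erases
# the raw height/weight correlation.
import random

random.seed(2)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

ages, heights, weights = [], [], []
for _ in range(30000):
    a = random.randint(2, 12)                    # age in years
    h = 80 + 6 * a + random.gauss(0, 3)          # cm, grows with age
    w = 10 + 3 * a + random.gauss(0, 2)          # kg, depends on age only
    ages.append(a); heights.append(h); weights.append(w)

raw_r = corr(heights, weights)                   # very strong
within = []
for a in range(2, 13):
    hs = [h for h, aa in zip(heights, ages) if aa == a]
    ws = [w for w, aa in zip(weights, ages) if aa == a]
    within.append(corr(hs, ws))                  # all near zero

print(round(raw_r, 2), [round(r, 2) for r in within])
```

The raw correlation comes out above 0.9, while every within-age correlation hovers around zero, which is exactly the point: whether a control clarifies or obscures depends on whether the controlled variable is a confounder or the actual mechanism.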

<p>(I came to this thread rather late…have not read every post.) </p>

<p>Indeed, a student’s inability to socialize due to lack of money can become an additional barrier to successful graduation. Likewise, a person trying to combine work and study may need additional support to keep going. The studies under discussion therefore have certain limitations, and outcomes can differ from case to case, depending not only on income and academic skills but also on communication skills and environment.</p>