Little Evidence Found That Academic Mismatch Alters Graduation Rates

<p>@Data10 Why would the use of the SAT be relevant only "if colleges only considered the SAT score of applicants, and did not consider any other aspect of the application, such as GPA"? If it is relevant, then why isn’t it relevant when colleges also use high school GPA to supplement the SAT?</p>

<p>Every study has shown that HSGPA + SAT = Best Predictor of Student Success. Can you name one study that concludes otherwise?</p>

<p>HS GPA alone is a better predictor than SAT alone for most high school seniors applying to college. So it makes sense for colleges to use HS GPA as the primary predictor of college performance, with SAT as the supplement, rather than the other way around.</p>

<p>The study discussed in this thread found that SAT added a little to the accuracy of predicting graduation rate beyond the control variables, which included HS GPA, HS curriculum, and SES. The author of the study concludes the following:</p>

<p>“The ATT indicates that there could be a small SAT effect on graduation (2.2 percentage points for a standard deviation increase in college SAT), but this does not reach statistical significance. The ATU is much smaller in magnitude and is not significantly different from zero.”</p>

<p>The Duke study at <a href="http://public.econ.duke.edu/~psarcidi/grades_4.0.pdf">http://public.econ.duke.edu/~psarcidi/grades_4.0.pdf</a> found the same effect for switching out of a challenging eng/phy science/economics major. After adding controls, the contribution of SAT to the prediction of switching out of the challenging major becomes increasingly small. With the full set of controls, the contribution of SAT was smaller than the contribution of both LORs and essay, and smaller than all Duke evaluation criteria categories except for personal qualities.</p>

<p>The Bates test optional study at <a href="http://www.bates.edu/news/2005/10/01/sat-study/">http://www.bates.edu/news/2005/10/01/sat-study/</a> and the NACAC study at <a href="http://www.nacacnet.org/research/research-data/nacac-research/Documents/DefiningPromise.pdf">http://www.nacacnet.org/research/research-data/nacac-research/Documents/DefiningPromise.pdf</a> found no notable difference in GPA or grad rate between submitters and non-submitters at test optional colleges, even though the non-submitters had significantly lower test scores than the submitters.</p>

<p>A better question might be whether any study has found that SAT I adds a notable improvement in the prediction of academic success in college beyond what is available through the remainder of the application – most notably a combination of HS GPA, HS curriculum, and SES. Even studies controlling for just HS GPA and SES found SAT I added relatively little additional information. For example, the Geiser UC studies found that a prediction model using HS GPA, SES, and SAT I could explain only ~4% more variation in cumulative college GPA than a prediction model using just HS GPA and SES. Had they included a control for curriculum, SAT would have almost certainly added far less than the measured 4%.</p>
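<p>For anyone who wants to see what “explains only ~4% more variation” means mechanically, here is a rough sketch in Python. It is not the Geiser study’s code and the data below are entirely synthetic; it just shows how incremental R^2 beyond a baseline model is typically computed.</p>

<pre>
# Minimal sketch of "incremental variance explained": fit a baseline model,
# add SAT, and compare R^2. All data below are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic, correlated predictors (standardized units): SES, HS GPA, SAT.
ses = rng.normal(size=n)
hs_gpa = 0.4 * ses + rng.normal(size=n)
sat = 0.5 * hs_gpa + 0.3 * ses + rng.normal(size=n)

# Synthetic outcome: college GPA driven mostly by HS GPA and SES.
college_gpa = 0.6 * hs_gpa + 0.3 * ses + 0.1 * sat + rng.normal(size=n)

X_base = np.column_stack([hs_gpa, ses])       # baseline: HS GPA + SES
X_full = np.column_stack([hs_gpa, ses, sat])  # baseline + SAT

r2_base = r2_score(college_gpa, LinearRegression().fit(X_base, college_gpa).predict(X_base))
r2_full = r2_score(college_gpa, LinearRegression().fit(X_full, college_gpa).predict(X_full))

print(f"R^2 without SAT: {r2_base:.3f}")
print(f"R^2 with SAT:    {r2_full:.3f}")
print(f"Incremental R^2 from adding SAT: {r2_full - r2_base:.3f}")
</pre>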


<p>But isn’t that the point to begin with? The fact is that they are not all equal in background, particularly income. </p>

<p>But more importantly, while graduation is a fine outcome, I really want to know how well those mismatched kids did. What are their grades? How many started out premed/prelaw and then pulled C’s, essentially eliminating their career ideas? How many started out as a Finance/Econ major, but ended up in ‘Studies’ because that is where they found less harsh competition for grades, particularly quant grades?</p>

<p>Re: post #13. SoMuch, on the underperforming list is Georgetown College in Kentucky, which you may be confusing with Georgetown University in DC. Thanks for sharing this article - I had never seen it. I wish I had time to research what the common attributes of overperforming schools might be. Interesting that so many top women’s colleges (Smith, Wellesley, Holyoke, and Bryn Mawr) are overperformers.</p>

<p>Yes, if you control for all other factors the SAT doesn’t matter much, but I think you’d find the same to be true for nearly everything aside from HS GPA, maybe. In the end, it’s the big picture.</p>

<p>@laralei, this is a bit OT but I wanted to reply to your post. I’m a low income mom with a rising senior (last of four kids). My first went to Cornell (quite a few years ago, and FA has changed, but I think my observations still apply), and we did struggle to pay for his four years there. There was no way he could meet the university’s EFC with term time and summer earnings, so our savings were depleted. However, the next two kids chose LACs (admittedly very selective) and that has been a totally different scenario. The FA is calculated based on the COA - full cost of attendance - so they figure in travel, books, health insurance, personal expenses etc. and it is more than manageable. In fact, I got a little weepy looking at the kids’ bursar bills online the other day and realizing the extraordinary generosity and educational opportunities they have. D2 at Williams has a truly full ride with no work study required, and her grant was expanded to cover the cost of health insurance, which we don’t have. At many top LACs every activity on campus is free, another effort to level the playing field and eliminate socioeconomic stratification. I won’t go into details here, but none of my kids has had any issues with social ‘fit’, especially at LACs. In fact both have made more and closer friends quite quickly at college than in HS. The fraternity culture at Cornell was the most uncomfortable situation any of my kids experienced. The recruitment of low income, first generation, etc. students has dramatically changed the campus cultures at many selective schools. The student performances at Williams last year were a genuinely multicultural affair, really exciting.</p>

<p>Last note: Do have your child take the ACT. All mine just took the ACT and none scored lower than a 33 (current HS senior), and the others better. I don’t know how truly low income your family is, but you might look into Questbridge, which has been a supportive resource for many students, and two of my kids got into tippy top schools through Questbridge - I don’t know if their results would have been the same without the very thorough Questbridge application. There is absolutely the recognition that low income families do not have the resources to support ECs - may not have a car or money for equipment, uniforms, etc.; students may have to care for siblings or work a job to contribute to the household. Do not despair. A high achieving low income student is in a strong position re: college admissions right now. Your child may have faced difficulties up to now because of family circumstances, but he is well positioned to attend a need-blind, full-need college without loans and succeed. His HS record shows he is absolutely capable. If you don’t do Questbridge, just make sure his applications paint a vivid picture, that his essays reflect his whole self (numbers will tell the academic story) and that somewhere in the application he describes the obstacles/adversity of being a low income student. His achievements are much more impressive given that he didn’t have the resources other applicants take for granted.</p>

<p>Sorry for the length of this post! I’ve become a crusader on this subject based on my kids’ amazing experience with college admissions.
Very best of luck to you and your son! </p>

<p>The study discussed in the original post did not find that nearly everything doesn’t matter when you add controls. Instead it mentions that tuition appeared to have a notable contribution to the prediction with controls. With full controls, a $20,000 difference in tuition had more than a 10x greater effect on grad rate prediction than a 1 standard deviation difference in SAT score. While the study does not list the contribution of many other controls, I’d expect SAT scores had a smaller relative contribution to grad rate than several other components of the application, based on the results of other studies, such as the Duke one I linked to earlier.</p>

<p>The Duke study found that the relative contribution of different components of the application to switching out of a techy major diminished by tremendously different degrees when adding controls. For example, with all controls the criterion that had the greatest logit marginal effect was being female, at 0.188. With no controls except race and gender, the coefficient for gender changed from 0.188 to 0.189 – essentially no change. Gender’s contribution to switching out of a techy major at Duke appeared to be largely independent of test scores, GPA, curriculum, or other components of the application, so adding controls had little impact on gender’s contribution. However, the contribution of SAT scores was tremendously diminished any time he added controls for other portions of the application. It didn’t matter what academic success measure he looked at; in all cases the contribution of SAT greatly diminished when adding controls, to the point where, with full controls, test scores had a smaller relative contribution than curriculum, GPA, LORs, etc. The study discussed in the original post found this same effect, with the contribution of test scores tremendously diminishing when adding controls for curriculum, GPA, and SES (among other things).</p>
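<p>To make the “coefficient shrinks once you add controls” pattern concrete, here is a hedged sketch in Python. It is not the Duke study’s code; the data are synthetic and the variable names are illustrative. It shows why a predictor that is correlated with the controls (SAT with the HS record) looks strong alone but contributes little once the controls are in the model.</p>

<pre>
# Synthetic illustration: a predictor's marginal effect shrinks when
# correlated controls are added to a logit model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

hs_record = rng.normal(size=n)              # stands in for GPA/curriculum ratings
sat = 0.7 * hs_record + rng.normal(size=n)  # SAT correlated with the HS record

# Switching out of a "difficult major" here depends mostly on the HS record.
logits = -1.0 - 0.8 * hs_record - 0.1 * sat
switch = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Model 1: SAT alone -- it proxies for the HS record, so its effect looks large.
m1 = sm.Logit(switch, sm.add_constant(sat)).fit(disp=0)
# Model 2: SAT plus the control -- SAT's own contribution is now much smaller.
m2 = sm.Logit(switch, sm.add_constant(np.column_stack([sat, hs_record]))).fit(disp=0)

print("SAT marginal effect, no controls:  ", m1.get_margeff().margeff[0])
print("SAT marginal effect, with controls:", m2.get_margeff().margeff[0])
</pre>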

<p>One of the things I learned in empirical philosophy is that the simplest and most elegant theory is the best. IOW, when you have two competing theories with similar predictive power, choose the simpler one. The question for me here is simple: why in Hades do we want to discard a simple solution (standardized testing) for something that is convoluted (adding controls to multiple variables) AND has less predictive power? What is the ulterior motive?</p>

<p>To add insult to injury, Duke admission obviously did not “add the controls to the variables” when they made admission decisions, because hooked admittees do switch out of difficult majors more than the non-hooks, significantly more so in some cases:</p>

<p><a href="http://www.dukechronicle.com/articles/2012/01/17/unpublished-study-draws-ire-minorities">http://www.dukechronicle.com/articles/2012/01/17/unpublished-study-draws-ire-minorities</a></p>


<p>My suspicion is that GPA measures conscientiousness more while standardized testing measures cognitive ability more, otherwise combining the two would not improve predictability at all. According to Hsu and Schombert, your position would work for most majors but not math and physics, where a score of 600 on the SAT-M is required to do well.</p>


<p>Measuring basic high school math skill by standardized testing is more straightforward than measuring reading and writing skill by standardized testing, and it is not surprising that those who have enough difficulty with high school algebra and geometry that they score less than 600 on the SAT math tend to have difficulty in more advanced math for college math and physics majors.</p>

<p>I previously linked to the Duke study you mentioned. It found the following logit marginal effects for switching out of a “difficult major”, listed from most predictive to least predictive:</p>

<p>Being Female: 0.18
HS Curriculum: -0.17
HS Grades: -0.094
Application Essay: -0.064
Application LORs: -0.063
Being a URM: 0.059
SAT Score: -0.057
Personal Qualities: 0.006</p>

<p>Note that SAT score was found to be less predictive than curriculum, grades, essay, LORs, gender, and race… less predictive than every Duke evaluation criteria category except for personal qualities. The study you referenced suggests that if you have two Duke students with all application criteria equal except for SAT score, they will have little difference in predicted chance of switching out of a “difficult major”. However, the study also suggests that if you were to look at SAT alone without controls for anything else, like Hsu and Schombert did, you would find a more notable correlation with SAT alone, since students with lower SATs tend to have weaker curriculum, lower grades, weaker LORs, lower SES, etc. This “lower everything” group describes some URM groups at Duke (as a whole), so URMs did show a notable difference in chance of switching out of a “difficult major,” even though the racial contribution alone was quite small in the table above with controls.</p>

<p>The Geiser UC studies looked at college GPA of math and physical science majors with controls for HS GPA and SES, along with SAT score. They found that SAT I M+V were nearly worthless if you have SAT II tests. Among all majors, SAT I M+V only increased the amount of variation in GPA that could be explained by 0.2% beyond HS GPA + SES + SAT II. Among math and science majors, SAT I verbal actually had a small negative contribution, so math/phys science students with lower verbal scores were predicted to have slightly higher college GPAs (with controls). SAT writing and SAT II math were about equally predictive for math and physical science majors, but both had a small fraction of the predictive power of HS GPA and were only able to explain a few percent of variation in college grades beyond a simple HS GPA + SES model.</p>

<p>@Canuckguy:</p>

<p>Maybe, along with your empirical philosophy class (which I don’t knock, BTW, as I imagine it’s quite useful), you should have taken a regressions class as well and learned about multicollinearity.</p>
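<p>For readers who haven’t run into the term, here is a quick synthetic illustration (my own sketch, not from any of the studies discussed) of what multicollinearity looks like: when SAT and HS GPA are strongly correlated, each predictor’s variance inflation factor rises well above 1, which is part of why the “unique” contribution of either one shrinks once the other is in the model.</p>

<pre>
# Synthetic example: correlated predictors produce variance inflation factors
# (VIFs) well above 1, i.e., much of each predictor is redundant given the other.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 2000
hs_gpa = rng.normal(size=n)
sat = 0.8 * hs_gpa + 0.6 * rng.normal(size=n)   # strongly correlated with HS GPA

X = np.column_stack([np.ones(n), hs_gpa, sat])  # constant + two predictors
for name, idx in [("HS GPA", 1), ("SAT", 2)]:
    print(name, "VIF:", round(variance_inflation_factor(X, idx), 2))  # roughly 2.8
</pre>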

<p>Good job, @Data10‌ </p>

<p>The quality of the predictor variables is simply not the same in this case. With standardized testing, I can be confident there are standardized data to support it. In fact, we have a century’s worth of work in the field. It is the crowning jewel of social science; there is nothing out there that can match it. Here is an excellent summary of what we know:</p>

<p><a href="http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfredbox2.html">http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfredbox2.html</a></p>

<p>What do we know about Duke’s methodology for evaluating some of the other predictors such as curriculum rigor? Do we know anything about their validity and reliability? Why have these predictors not made their way into peer-reviewed literature? I know for a fact that industrial and occupational psychology would love to know how they can better predict job performance without standardized testing.</p>

<p>The Duke study, btw, showed that hooked admittees in general show this tendency to switch out of hard majors to protect their GPA. I know it showed that legacies exhibit the same tendency. I am certain that athletes do likewise.</p>

<p>Has anyone updated that SA graph in the last 16 years? Interested if that has been updated.</p>

<p>Okay, a chart from 1997 without any discussion of the methodology by which the numbers were obtained suggests IQ is correlated with unemployment, divorce rate, fathering illegitimate children, poverty, incarceration, and dropping out of HS. I’d expect SES to be correlated with all of those things as well. So if you control for SES and separate out the SES component, the degree to which IQ is correlated with these measures decreases. The same principle occurred with the study discussed in the original post of this thread, the Duke study, the UC study, and every other study that looks at measures of academic success combined with SAT score with controls. If you control for HS curriculum, HS GPA, and SES, the strength of the correlation between test scores and academic success dramatically decreases. Of course, none of the studies of college students mentioned in this thread measure the linked chart characteristics, like divorce and incarceration, since they aren’t the focus of college admission goals or particularly relevant to this thread discussion. Furthermore, do you see any differences between SAT and IQ? Such as differences in the number of students who spend weeks or months prepping for the test, differences in the number of students who take the test many times and superscore best sub-section results, or differences in test makers attempting to decrease focus on IQ-like characteristics… especially in recent years?</p>

<p>The Duke study included Duke admissions evaluation categories. The researcher chose these criteria because Duke gave him their ratings of applicants in these evaluation categories. It’s what he had access to. The study referenced in the original post of this thread also controlled for curriculum and GPA. It chose curriculum and GPA because they were among the measures in the survey data they had access to. College Board studies also frequently reference curriculum and GPA because they are measures in the survey data College Board has access to. These are all peer reviewed literature. I expect LORs and essays are rarely included because few researchers have an objective rating of LORs and essays available to them.</p>

<p>Nobody in this thread was discussing job performance, but if we are going to talk about job performance, note that the vast majority of private companies in the US do not use standardized testing in their hiring process. The criteria most employers consistently rate as most important to hiring decisions involve experience and interview success.</p>

<p>This isn’t news. In the post you replied to, I stated the same thing – “This lower everything group describes some URM groups at Duke (as a whole), so URMs did show a notable difference in chance of switching out of “difficult major,” even though the racial contribution alone was quite small in the table above with controls.” In the final sentence of the study’s abstract, the author concludes:</p>

<p>“Indeed, we show that accounting for academic background can fully account for average differences in switching behavior between blacks and whites.”</p>

<p>If you just look at race without any controls, it appears that URMs at Duke have a notable correlation with switching out of a tough major. But when you include controls, the contribution of race to switching out of a tough major becomes quite small… small enough that the author says academic background “can fully account for average differences in switching behavior between blacks and whites.” Test scores showed the same pattern as URM status, with the contribution dramatically decreasing when controlling for academic background as measured by Duke admissions ratings of applicants. In fact, with full controls, the author finds that the contribution of SAT scores to switching out of a tough major was smaller than the contribution of being a URM. The author said the contribution of being a URM to switching was fully accounted for, yet the contribution of test scores was still even smaller than what the author calls “fully accounted for”.</p>

<p>At many selective colleges using holistic admissions, there may not be any objective rating of these elements. It would be interesting to take one or two selective colleges and compare the long-term success of students they admit to students they reject.</p>

<p>Have you seen this TED talk from Nathan Kuncel?
<a href="https://www.youtube.com/watch?v=Gv_Cr1a6rj4">https://www.youtube.com/watch?v=Gv_Cr1a6rj4</a>
Things have not really changed, right? That is the beauty of it all.</p>

<p>It comes from this well-written article in Scientific American (p. 5):
<a href="http://www.udel.edu/educ/gottfredson/reprints/1998generalintelligencefactor.pdf">http://www.udel.edu/educ/gottfredson/reprints/1998generalintelligencefactor.pdf</a>
The information is adapted from Intelligence (the journal).
Kuncel has addressed SES in his TED talk and in his work. It matters only a little. (I think the information comes near the end of the TED talk.)</p>

<p><a href="http://www.psychologicalscience.org/media/releases/2004/pr040329.cfm">http://www.psychologicalscience.org/media/releases/2004/pr040329.cfm</a>
Re-centering has the effect of squeezing both tails. It makes weak students look better and the best students less outstanding. I am sure it is done for political reasons.
As far as test prep goes, it doesn’t seem to make much of a difference. Another beauty of standardized testing:
<a href="http://nepc.colorado.edu/files/Briggs_Theeffectofadmissionstestpreparation.pdf">http://nepc.colorado.edu/files/Briggs_Theeffectofadmissionstestpreparation.pdf</a></p>

<p>I agree. Remember I said in a previous thread that I do not read too much into the data because of the limitations of the study? The data are not even representative of American college students, let alone the US population as a whole. While you may prefer to read more into the statistical analysis of said skewed data, I think it is more appropriate to interpret the study with respect to the meta-analyses available. We don’t want to fall into the trap of not seeing the forest for the trees.</p>

<p>Very true. Another limitation of the study. This is what I meant by the quality of the predictor variables in one of my previous posts. How can those be compared to standardized testing that has been confirmed to be accurate by meta-analyses?</p>

<p>Do you not see that going to school can be defined as a ‘job’?</p>

<p>All in all, I don’t have a problem with much of what you said, but your environmentalist position is untenable. Mainstream science disagrees with you, and it has decades of research to back it up. For some reason, popular media consistently misrepresent psychometrics, so lay people generally get it all wrong as well.</p>

<p>In a quality peer reviewed study, the author doesn’t just arbitrarily list numbers. Instead he specifies how he obtained those numbers, whether through personal research or by quoting others. This is important because many methods introduce biases, which means the numbers cannot be taken entirely at face value. A good study describes these limitations when providing the numbers.</p>

<p>The author of the article also doesn’t list any kind of methodology about how the chart numbers were obtained, like a study would. This is particularly important when you consider how peer reviewed studies come to very different conclusions than some of the chart information. For example, the career potential section of the chart lists occupations by IQ. It lists chemists in the 125+ IQ section, suggesting that chemists require an IQ of 125+. However, studies that have looked at the actual IQ of chemists found the vast majority of them had IQs far below this threshold. For example, the Center for Demography and Ecology (UW–Madison) study found ~1/4 of persons working in related natural science fields had an IQ of under 100; half had an IQ of under 110, and few had IQs above the chart’s 125 minimum threshold. The chart implies managers need an IQ of over 110, yet nearly 3/4 of managers in the study I referenced had an IQ of under 110. The same pattern occurred for all the other career fields. Persons working in the field have very different IQs than the chart specifies. Maybe the author of the article simply listed careers that reflected her personal opinion, without any kind of evidence. Without a methodology, one can only guess. It also calls into question how she obtained the other information in the chart.</p>
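<p>As a back-of-the-envelope check on that kind of chart claim, one can ask what share of a roughly normal IQ distribution would clear a 125+ cutoff. The mean and SD below are hypothetical numbers chosen purely for illustration, not figures from the linked study:</p>

<pre>
# Hypothetical illustration: if working chemists' IQs were roughly normal with
# mean 110 and SD 13 (made-up values), what share would clear a 125+ threshold?
from scipy.stats import norm

mean_iq, sd_iq = 110, 13   # hypothetical values, for illustration only
threshold = 125

share_above = norm.sf(threshold, loc=mean_iq, scale=sd_iq)
print(f"Share above {threshold}: {share_above:.1%}")   # roughly 12%
</pre>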

<p>A lot of people on the test prep forum of this site would disagree with you. If you look through posts on this site, it’s common for people to talk about their scores increasing by hundreds of points after prep. For example, the first featured thread in the test prep forum, “How I raised my SAT score by 790 points - My story,” describes how a student increased his SAT score by 790 with prep. Yes, most 1 or 2 day coaching programs offer little benefit, but that does not mean that one cannot increase their score by spending weeks or months studying for the test.</p>

<p>I’ve mentioned 4 different studies that came to a similar conclusion, including the study linked in the original post in this thread. It’s not just the Duke study – every study I am aware of with controls came to the same conclusion. Even studies without controls usually show SAT alone explains only a very small portion of measures of college academic success. For example, the UC and College Board studies referenced earlier found SAT alone could explain only ~13% of variation in college GPA. With controls for HS GPA, SAT adds far less than 13%. The UC study found SAT I only added an extra 4% of variation in GPA explained. With controls for curriculum as well, it drops even further below 4%.</p>

<p>Can you find any study that says the correlation between SAT and any measure of college academic success does not decrease substantially when controlling for HS GPA and/or HS curriculum? Can you find any study that finds SAT scores explain the bulk of the variation in any measure of college academic success among individual students, with controls for a measure of GPA/curriculum and SES?</p>

<p>The criteria for success in college and in jobs are quite different. For example, I work in engineering. In my engineering classes, success was primarily based on scores on exams testing how well the previous weeks’ or months’ lectures were understood, and to a lesser extent on problem sets. My engineering job is more like a group of persons spending years working towards a common goal. Everyone has different skillsets and needs to work effectively together to accomplish that goal, much more so than on college exams. The closest thing to a graded event would be annual performance reviews, which grade on quite different criteria than university exams and often relate to the performance of the full group as much as they do to individuals.</p>

<p>The large differences between what is important for success in college and in jobs relate to why hiring managers tend to focus on work experience and performance, rather than school performance or standardized testing. For example, in the employer survey at <a href="https://chronicle.com/items/biz/pdf/Employers%20Survey.pdf">https://chronicle.com/items/biz/pdf/Employers%20Survey.pdf</a>, employers in all industries said the most important factors in hiring new grads were internships and work experience, and college GPA was among the least important factors. Standardized testing of course wasn’t even included in the survey, since the vast majority of employers do not consider standardized testing.</p>

<p>Here is the fundamental problem with statisticians. They try to separate data that is not separate. SAT scores are fairly predictive because they are not taken in a vacuum. Those with good scores tend to have good other aspects as well. So taken alone, the SAT is predictive. Only when you arbitrarily and artificially try to look at it as a single data point do you see a problem.</p>

<p>Not so with grades. The reason is simple: grading differs drastically from teacher to teacher, school to school, district to district, and state to state. Students with high SAT scores tend to have high grades more often than students with high grades have high SAT scores.</p>