<p>
The UC study mentioned earlier found that a combination of SAT I scores and SES variables explained 13.4% of the variation in 1st year GPA. Mathteacher mentioned the same ~13% figure in her earlier post, referencing a study College Board published about the correlation between SAT scores and 1st year GPA. I certainly wouldn’t call explaining 13% of the variation, even with the help of SES ratings, “consistently predicting first year GPA.” </p>
<p>However, more important is what happens when you consider the rest of the application. The Duke studies found that when the remaining sections of the application are included, the regression coefficients for test scores drop below those of all, or nearly all, of their other evaluation ratings. This implies that test scores add relatively little to the prediction of 1st year GPA, or of the chance of switching out of a tough major, beyond the information already available in the other sections of the application, the ones a test optional college would use to evaluate applicants. The UC study found the following percentages of variation in college GPA explained by different models:</p>
<p>GPA + SES – 20.4% of variation explained
GPA + SES + SAT I – 24.7% of variation explained</p>
<p>Note that the SAT only explained ~4% of the variation in college GPA beyond a prediction based on UW GPA and SES alone. The Duke study shows the regression coefficient for test scores gets far smaller when course rigor, LORs, and other ratings are included in the prediction, suggesting the SAT would improve the accuracy of the college GPA prediction by far less than the 4% gain in explained variation found in the UC study, had the UC study also considered the remainder of the application. </p>
<p>So the studies suggest that removing SAT scores from a 1st year GPA prediction model reduces the amount of variation you can explain by a small amount, far less than 4%. How can explaining such a minuscule portion of the variation in 1st year GPA be called a consistent prediction? Also note that I reference 3 different studies that have been mentioned in this thread. It’s not just the Duke study that shows relatively weak predictive ability.</p>
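<p>The “incremental variation explained” comparison above is just the difference in R² between two nested regression models. A minimal sketch of that arithmetic, using entirely made-up synthetic data (not the UC data; the variable names and coefficients are my own illustrative assumptions):</p>

```python
# Sketch of incremental R^2: how much more variation a nested OLS model
# explains after adding one predictor. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
gpa_hs = rng.normal(3.4, 0.4, n)                       # high school GPA
ses = rng.normal(0.0, 1.0, n)                          # SES composite
sat = 0.6 * gpa_hs + 0.3 * ses + rng.normal(0, 1, n)   # correlated with both
college_gpa = (0.5 * gpa_hs + 0.2 * ses + 0.1 * sat
               + rng.normal(0, 0.6, n))                # outcome, mostly noise

def r_squared(predictors, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([gpa_hs, ses], college_gpa)        # GPA + SES
r2_full = r_squared([gpa_hs, ses, sat], college_gpa)   # GPA + SES + SAT
print(f"GPA + SES:       R^2 = {r2_base:.3f}")
print(f"GPA + SES + SAT: R^2 = {r2_full:.3f}")
print(f"incremental R^2 from adding SAT = {r2_full - r2_base:.3f}")
```

<p>Because the SAT variable here is built to be correlated with GPA and SES, most of its predictive information is already captured by the base model, so the incremental R² is small, which is the same pattern the UC numbers (20.4% vs 24.7%) show.</p>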
<p>
Most researchers do not have access to the internal ratings used by college adcoms when evaluating candidates. Instead, they have access to numerical stats such as GPA, SAT score, family income, etc., so studies usually focus on the available numerical criteria. However, as mentioned above, even just GPA and SES in the UC study was enough to show that the SAT I added little to the predictive ability for college GPA beyond those 2 factors. </p>
<p>It might help to think about why we cannot explain the vast majority of variation in college GPA by looking at such stats. For example, the Duke study was only able to explain about 1/3 of the variation in college GPA. That’s notably better than the UC study, which probably relates to considering additional sections of the application instead of just GPA, test scores, and SES. However, the vast majority of the variation in GPA remained unexplained in both studies. For example, how do you predict if an accepted student has internalized reasons for achieving (and not drinking/partying/…), so he will continue to maintain a similar level of achievement after he leaves home and his parents are no longer forcing him to study and do assignments? None of the application criteria will predict this well, but I’d expect personal qualities to do a better job of it than test scores. </p>