Elite college admissions is holistic. They often want to build a class of future leaders. Admissions wants the students to feed off of each other. There is no one set formula. That is why groups of admissions officers often get to decide the fate of the applicant. There are some students that elites clearly want, and some that they clearly don’t want, but most are in the middle.
Yes, I would like to see those links.
http://files.eric.ed.gov/fulltext/ED502858.pdf – Among 80k UC students, the following regression coefficients best predicted graduation rate. I listed the average across the 4 cohort groups for the 4 sample years. With similar UC GPA and similar SES, the model predicts that the student with the lower SAT M+V score has a slightly better chance of graduating. For 4th-year GPA, a model without SAT M+V explains 26.3% of variance in college GPA; a model with SAT M+V explains 26.5% of variance… essentially no change. (A toy sketch of this kind of nested-model comparison follows the coefficients below.)
SAT I Verbal: -0.03
SAT I Math: -0.01
SAT II Math: -0.03
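For anyone curious what this kind of with/without comparison looks like mechanically, below is a minimal Python sketch. To be clear, this is not the UC study's code or data: the student records and every effect size are simulated assumptions, and the only point is that adding a predictor largely redundant with the existing controls barely moves R².

```python
# Minimal sketch of a nested-model R^2 comparison (simulated data; all
# effect sizes are invented assumptions, not figures from the UC study).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 80_000

# Simulated students: SAT correlates strongly with HS GPA and SES.
hs_gpa = rng.normal(0, 1, n)
ses = rng.normal(0, 1, n)
sat = 0.6 * hs_gpa + 0.3 * ses + rng.normal(0, 0.7, n)

# College GPA driven mostly by HS GPA and SES, with SAT adding little.
col_gpa = 0.45 * hs_gpa + 0.15 * ses + 0.02 * sat + rng.normal(0, 0.9, n)

X_base = np.column_stack([hs_gpa, ses])        # model without SAT
X_full = np.column_stack([hs_gpa, ses, sat])   # model with SAT

r2_base = LinearRegression().fit(X_base, col_gpa).score(X_base, col_gpa)
r2_full = LinearRegression().fit(X_full, col_gpa).score(X_full, col_gpa)

print(f"R^2 without SAT: {r2_base:.3f}")
print(f"R^2 with SAT:    {r2_full:.3f}")  # nearly identical
```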
http://www.heri.ucla.edu/DARCU/CompletingCollege2011.pdf – With a full model that includes academic characteristics, SES/financial characteristics, major/college characteristics, and time spent on various activities during HS, they could explain 26.9% of variance in 6-year grad rate, with 71.4% successful predictions. When they excluded test scores from the model, the explained variance dropped by only 0.1 percentage points, from 26.9% to 26.8%, with both models successfully predicting graduation for 71.4% of students… essentially no difference.
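The same comparison works for a binary outcome like 6-year graduation, where the study reports the percent of students predicted correctly rather than R². Here is a sketch along those lines, again with entirely simulated data and made-up effect sizes (nothing below comes from the actual HERI study):

```python
# Toy logistic-regression version of the with/without-scores comparison
# (simulated data; all weights are assumptions, not study estimates).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000

hs_gpa = rng.normal(0, 1, n)
ses = rng.normal(0, 1, n)
sat = 0.6 * hs_gpa + 0.3 * ses + rng.normal(0, 0.7, n)

# Graduation probability depends mainly on HS GPA and SES (assumed weights).
logit = 0.8 * hs_gpa + 0.4 * ses + 0.05 * sat
grad = rng.random(n) < 1 / (1 + np.exp(-logit))

for label, X in [("without SAT", np.column_stack([hs_gpa, ses])),
                 ("with SAT   ", np.column_stack([hs_gpa, ses, sat]))]:
    acc = LogisticRegression().fit(X, grad).score(X, grad)
    print(f"{label}: {acc:.1%} of students predicted correctly")
```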
http://www.ithaca.edu/ir/docs/testoptionalpaper.pdf – When using a model that considered HS GPA, HS rank, strength of HS schedule, AP credit hours, gender, race, and first-gen status, they could explain 43% of variance in GPA at Ithaca. When they also considered SAT scores in the model, the explained variance increased by only 1 percentage point, from 43% to 44%, with only the SAT writing section being statistically significant in improving prediction beyond the rest of the model.
http://www.aera.net/Newsroom/RecentAERAResearch/CollegeSelectivityandDegreeCompletion/tabid/15605/Default.aspx
“The ATT indicates that there could be a small SAT effect on graduation (2.2 percentage points for a standard deviation increase in college SAT), but this does not reach statistical significance. The ATU is much smaller in magnitude and is not significantly different from zero.”
I could list more, but the point is every study I have ever seen that controls for both a measure of HS GPA and a measure of HS course rigor came to a similar conclusion about the additional benefit of SAT I beyond the controls.
I have seen quite a few articles or studies that have said that the SAT writing section (not the essay portion) and AP scores are both very good predictors of college success. Yet by and large, colleges seem uninterested in the SAT writing score, and many claim that AP scores aren’t that important. I wonder why?
Lack of access for lower SES students could be one key reason.
At my D’s high school, honors classes and AP classes carried the same weight. My D finished 7th in her class. She took 10 APs, where the valedictorian took far fewer (he didn’t want to hurt his GPA). They have since changed this so APs are now weighted higher than honors. The val ended up attending Duke, so it didn’t end up hurting him. This change hurts my younger D, now in HS, who has learning disabilities and takes no AP and very few honors classes.
That’s some really interesting analysis, Data10, and certainly very clear with respect to the (lack of) incremental value of SAT scores beyond grades, at least in predicting graduation rate and college GPA.
Of course, for graduation rate at the top private colleges specifically, there is not much variance to explain, as almost everyone who enters graduates, e.g. a 90% 4-year graduation rate at Princeton.
Right. And of course that includes kids accepted without tippy-top scores. It’s part of the benefit of looking beyond top stats alone: looking for awareness, energy, resilience, etc.
@data10
I think the reason these studies show little correlation between SAT and grades, or SAT and graduation rates, is that the SAT also predicts major. Higher-SAT students tend to choose tougher majors. Compounding that issue, the more difficult majors also tend to have lower GPAs.
If the students all took the same courses, you would see a lot more correlation.
SAT is a weak predictor of major – sociology majors at Yale have higher SAT scores than sociology majors at Southern CT State College. English majors at Harvard have higher SAT scores than English majors at Stonehill or Framingham State.
@lindagaf “I have seen quite a few articles or studies that have said that the SAT writing section (not the essay portion) and AP scores are both very good predictors of college success. Yet by and large, colleges seem uninterested in the SAT writing score, and many claim that AP scores aren’t that important. I wonder why?”
Writing works the best because everyone thinks it does not matter, and so students do less prep for it. If colleges made it important, it would not work any more. It’s the Heisenberg principle, sort of.
Obviously that’s the case, blossom. I think the point was more: is there significant variation in SAT scores within Yale by major, within Harvard by major, etc.?
The way I figured it when recommending which major my son should choose in college: he was almost certain to choose a social science rather than the humanities, natural sciences, or an applied program such as engineering (no such major at his college), so I urged him to choose one that would take best advantage of his superior quantitative skills. Hence, economics. That opened up a larger variety of potential career lines, as well as potential future academic degree programs, than majoring in political science or sociology would have. I think a major in geography also has good potential, in part because of the tools in geospatial analysis. In the end, he did not choose an academic career, but no question he’s benefited from the courses in advanced math and econometrics.
@TessaR (#137)
http://admissions.berkeley.edu/studentprofile
For UCB, OOS admits had HIGHER not LOWER GPAs and SAT scores.
I want to thank you for those links. I have only read the first paper in detail, but to me it has the same issues that I have seen in previous papers.
But first, some background. Most of my 30 years since college have involved analyzing data, using techniques such as regressions and simulations, among others. Before tools like MATLAB, SAS, and R became highly popular, I wrote statistical analysis code in the 90s that was widely used at the time, and I was a consultant to many companies, including a few Fortune 500 companies, on how to perform statistical analysis properly. I still do statistical analysis today, for a financial company.
When I have issues with academic papers, it often comes down to two things. First, many academics just love regressions. It is their hammer, and many see everything as a nail. While regressions are useful, the major problem is that they do not reveal the non-linear behavior that occurs in different portions of the data. When you partition the data and run the analysis on a subset of it, the pattern revealed by the regression on that subset is often the opposite of the actual pattern in the larger data set (Simpson’s paradox). Here is one simple example. If we were to regress post-graduate income on college GPA for all college students, we would find a statistically significant positive loading on GPA. However, if we only consider the students at Harvard, we would likely see the opposite effect (the lower-GPA students are often more financially successful).
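To make that concrete, here is a toy simulation (invented numbers, not real data) in which the pooled slope of income on GPA is positive, while the slope within the selective tier alone is negative:

```python
# Toy illustration of the partitioning problem: pooled vs. within-subgroup
# regression slopes point in opposite directions. All numbers are invented.
import numpy as np

rng = np.random.default_rng(2)

# Two tiers of colleges: the selective tier has higher mean GPA and higher
# mean income (which drives the pooled slope), but a negative within slope.
tiers = [(2.8, 50, +5.0), (3.6, 120, -15.0)]  # (mean GPA, mean income, slope)
gpa_by_tier, inc_by_tier = [], []
for gpa_mu, inc_mu, within_slope in tiers:
    gpa = rng.normal(gpa_mu, 0.3, 5_000)
    inc = inc_mu + within_slope * (gpa - gpa_mu) + rng.normal(0, 20, 5_000)
    gpa_by_tier.append(gpa)
    inc_by_tier.append(inc)

pooled_slope = np.polyfit(np.concatenate(gpa_by_tier),
                          np.concatenate(inc_by_tier), 1)[0]
elite_slope = np.polyfit(gpa_by_tier[1], inc_by_tier[1], 1)[0]

print(f"slope, all students pooled:  {pooled_slope:+.1f}")  # positive
print(f"slope, selective tier only:  {elite_slope:+.1f}")   # negative
```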
This leads into my second criticism of the article, which is that they built the model using the wrong blueprints. Specifically, they performed their analysis on students who were already partially selected on the basis of their SAT scores. The ideal way to run this test would be to require every student to take the SAT, ignore it for the purpose of admission, and then analyze GPA as a function of SAT afterwards. Now, that might not be possible, at which point they should have said, "this is the right way to do it, but we couldn't, so here is the best we could do." Instead, they ignored the selection problem I mentioned above and kept hammering until they got results, which they presume are meaningful because they pass a t-test.
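And for what it's worth, that selection effect is easy to demonstrate. A small simulation, assuming a population SAT-GPA correlation of roughly 0.5, shows how admitting only high-SAT students shrinks the correlation observed among admits (psychometricians call this range restriction):

```python
# Toy range-restriction demo (assumed numbers, not from any study): the
# SAT-GPA correlation among admits understates the population correlation
# when admission itself depends on SAT.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

sat = rng.normal(0, 1, n)
col_gpa = 0.5 * sat + rng.normal(0, 0.87, n)  # population r is about 0.5

print(f"all applicants: r = {np.corrcoef(sat, col_gpa)[0, 1]:.2f}")

# Admit only the top ~20% by SAT, then recompute among admits.
admitted = sat > np.quantile(sat, 0.80)
r_adm = np.corrcoef(sat[admitted], col_gpa[admitted])[0, 1]
print(f"admitted only:  r = {r_adm:.2f}")  # attenuated by range restriction
```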
The 2nd linked study controlled for intended major and still found that prediction of graduation rate was nearly identical with and without SAT score. That said, there is a correlation between SAT score and major. The degree of this correlation varies tremendously among individual colleges, depending on things like whether they admit by major/school, and more generally on whether they have different admissions expectations and emphases for different majors.
The Duke study at http://public.econ.duke.edu/~psarcidi/grades_4.0.pdf looked into which sections of the application influenced switching out of an engineering, natural sciences, or economics major, attempting to control for individual courses as you described by looking at the grade distribution of specific courses. With all controls, it found that all sections of the application they considered had some degree of correlation with switching out except for "personal qualities", which includes things like the ratings admissions officers gave to LORs and essays. With all controls, they found that by far the most influential components of the application in this switching behavior were gender and the admissions rating of HS curriculum, rather than scores. It is also important to note that even with all controls, all of these studies could only explain a minority of the variation in measures of academic success in college. For example, I mentioned the UC study model could explain ~26% of variation in GPA with all controls. The bulk of the variation in academic success during college seems to depend on things besides the metrics used in these studies.
None of the studies is ideal, some more so than others. However, even when the methodology varies, they come to a similar conclusion. For example, the 2nd linked study used a different approach than the first one you referenced. It used a data set of 210k students at 356 colleges with a wide variety of selectivity. At some of these colleges, most students failed to graduate; at others, the vast majority graduated. Some colleges were not selective, to the point where SAT was nearly irrelevant for admissions decisions, as you described. They still found SAT score did not significantly improve prediction of the graduation rate for the college, or of whether an individual student would graduate, beyond the rest of the controls.
Very interesting observation. That reminds me of something I observe: among couples, men are taller than women in general, and yet in some subgroups, women are taller. What’s the question?
So questions have to be specific. In terms of college admissions, how to get into HYPSM (let alone elites) may not be a useful question for certain people. It has to be school-specific, and thus hard work for the applicant and their family.