@ucbalumnus,
I’m sure you are a robotic search engine and I thank you! Thank you UCB!
About forty years ago, the SAT people would send “validity studies” to participating colleges as standard operating procedure. The reports I saw regressed the college students’ freshman, sophomore, and junior-year GPAs against a battery of possible explanatory variables to measure the strength of the relationships. The explanatory variables available at the time were secondary school GPA, verbal SAT, math SAT, and math achievement (MACH). As this was an old-style STEM school, the verbal SAT showed no significant relationship with college GPA. The MACH score was collinear with the math SAT score, but MACH had the stronger influence. The final results indicated that GPA variance at this STEM college (largely first-generation college students) was best explained by HS GPA and MACH variances, but only for the first two years.
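For anyone curious what such a validity study boils down to statistically, here is a minimal sketch of that kind of regression. Everything in it is an assumption for illustration: the data are synthetic, the effect sizes are made up, and the scores are treated as standardized z-scores. It is not the study’s actual model or numbers; it just shows how you fit the predictors and read off explained vs. unexplained variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic standardized predictors (purely illustrative, not real data)
hs_gpa = rng.normal(size=n)
sat_verbal = rng.normal(size=n)
sat_math = rng.normal(size=n)
# MACH is constructed to be collinear with SAT math, as in the study
mach = 0.8 * sat_math + 0.6 * rng.normal(size=n)

# Hypothetical "true" relationship: college GPA driven mostly by HS GPA
# and MACH, plus a large noise term (the unexplained variance)
college_gpa = 0.4 * hs_gpa + 0.35 * mach + rng.normal(size=n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), hs_gpa, sat_verbal, sat_math, mach])
beta, *_ = np.linalg.lstsq(X, college_gpa, rcond=None)

# R^2 = share of GPA variance the predictors explain;
# 1 - R^2 is the "error term" discussed below
resid = college_gpa - X @ beta
r2 = 1 - resid.var() / college_gpa.var()
print(f"R^2 = {r2:.2f}; unexplained variance = {1 - r2:.2f}")
```

One thing the sketch makes visible: when two predictors are collinear (here MACH and math SAT), the regression splits their shared signal between them, which is why the study could find MACH carrying the stronger influence even though both track the same underlying ability.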
The “error term” (i.e., the unexplained variance) was over 50% in the first year and about 75% in the second. The tests have changed, but it does not appear that the basic direction of the evidence has changed. We need to better understand evidence that is not so readily reduced to a metric. Admissions is still an art. There is something about human nature that lets us hide behind data in order to defend college admissions decisions. The Ivies problem exemplifies this, as students and parents chase their quantifiable tails.
This problem is not new and not likely to go away. It’s a little like too much sugar in your food. It might not be good for you, but it tastes good, especially when you are really hungry. :bz