<p>In an earlier post, you said the lack of notable correlation between SAT scores and switching out of a tough major in the Duke study was related to the Duke admits being range-bounded on the upper end, even though different groups in the Duke study had more than a 150-point difference in average test scores. So >150 points is not enough to be notable in the Duke study, but 100-150 points is a lot and enough to have notable effects in other situations?</p>
<p>Answering your questions, students interested in majors for which applicants tend to have higher test scores are less likely to be non-submitters with lower test scores, so non-submitters are less likely to declare such majors. This should not be a surprise. However, in general a very small portion of non-international students pursue non-life science STEM majors, both among submitters and non-submitters. For example, the NACAC paper suggests fewer than 10% pursue non-life science STEM majors among submitters and only ~1% pursue engineering. </p>
<p>The Duke study suggests that among students who do declare such majors, all other admission rating categories besides personal qualities have a larger correlation with switching out of the tough major than test scores do, including things like LORs and essays. So I’d expect little difference in rates of switching out of tough majors between submitters and non-submitters at colleges that do not have a very high admit rate and that weigh a broad range of additional criteria in admissions, such as course rigor, LORs, and essays. One might even find a smaller rate of switching out, if the non-submitters generally need to have a stronger remainder of the application than submitters to make up for their missing test scores.</p>
<p>@Data10 If you believe that HSGPA is correlated with SES, then I am not sure what the problem is. We just have different beliefs about the degree of this relationship. My gripe is with the UC study that effectively states that HSGPA and SES have a near ZERO correlation, which defies conventional wisdom and common sense. </p>
<p>You believe that admitting students based upon high HSGPA will positively affect low-SES students in the admission process at test-optional colleges, but I just want others to know that a test-optional policy may achieve this result at the cost of other low-SES students with low HSGPA who manage to get high SAT scores. Someone has to get denied admission since there are only so many slots in a freshman class.</p>
<p>My point is that whether a college has a test-optional or test-required admission policy, it will result in some group that we want to give an opportunity to attend a prestigious college being adversely affected. I am more critical of those colleges that switch to test optional because they do so under the guise of social responsibility when it is a shameless attempt at increasing applications, prestige, and rankings.</p>
<p>As I stated in prior posts, since colleges like Temple, Bowdoin, et al. each have control over who gets admitted to their institutions, they can admit whomever they wish. They could all require the SAT/ACT, weight the scores however they wish, and enroll the very students that they claim to want.</p>
<p>If these schools genuinely wanted to enroll high-HSGPA students with low SAT/ACT scores from low-SES households, then they could do so whether the school is test optional or test required. The decision is, and always has been, in their own hands, but there is a price to pay for doing so.</p>
<p>For instance, if Bowdoin wanted to attract high-HSGPA students with low ACT scores from low-SES households under a test-required policy, it would only need to lower its middle 50% test score range from its current 31-33 to 23-33. The applications would be rolling in, but its prestige and rankings would plummet. </p>
<p>@voiceofreason66 interesting article but I think you got the description backwards accidentally (it actually supports your point) :)</p>
<p>The article clearly shows two of the points you have stressed:
- the SAT, although imperfect, has some value as a predictor of college success (and even beyond)
- the SAT has some correlation with parental income (0.25), but much weaker than most people assume</p>
<p>I don’t think that there is anything counter-intuitive about either of these statements so I am not sure why they are so controversial.</p>
<p>The scandal is that more than 50,000 (!) low-income (bottom 25% family income) kids who have aced the SAT and/or ACT graduate from high school each year (17% of the top achievers - much higher than many thought), and they are less than half as likely to succeed because the top schools, where they could go for free, can’t find them.</p>
<p>I think I have been quite clear that my problem was with your earlier claim (and others’) that HS GPA is MORE correlated with SES/income than test scores are, when all the research I have seen suggests the opposite. The exact wording of the UC study is below:</p>
<p>“Among our study sample of almost 80,000 University of California (UC) freshmen, SAT I verbal and math scores exhibit a strong, positive relationship with measures of socioeconomic status (SES) such as family income, parents’ education and the academic ranking of a student’s high school, whereas HSGPA is only weakly associated with such measures”</p>
<p>Note that they do not say that HS GPA has no correlation with SES. Instead they say that their research found that HS GPA was correlated with SES, but that the correlation was far weaker than the correlation between SES and test scores. As we’ve seen in other linked studies, it is far from the only research that came to this conclusion. I expect that the strength of this relatively weak HS GPA-SES correlation primarily relates to the measurement conditions of the research (for example, the correlation differs if you look at a particular HS versus enrolled students at a particular college, but it is relatively weak under both conditions).</p>
<p>“but that it may be at the cost to other low SES with low HSGPA but who manage to get High SAT scores”</p>
<p>Of course. As Data pointed out, there are multiple factors, not just stats. And why does an argument based on low GPA make any kind of sense? That’s the week-to-week performance.</p>
<p>“For Instance, if Bowdoin wanted to attract High HSGPA students with low ACT scores”</p>
<p>No, they don’t “want to attract students with low ACT scores.” They want to attract the kids who will fit and thrive, go on to some achievements - and they find standardized scores are not indicative enough to bow down to.</p>
<p>When all one can do is look to the hierarchy of stats (this kid will get a better college GPA, those kids may earn more in the first years out of college, this study of other aspects, that study of majors, etc.), you miss the value of the broader picture of education. </p>
<p>@adhdboarding I have seen and heard more Asian stereotyping at the “pressure cooker public” high school and middle school here (stereotyping works in both directions, unfortunately, to the detriment of non-Asians who might enjoy the math team). There seems to be minimal racial stereotyping at the local Catholic high schools here, though (and top students are more mixed across ethnicities), and less social segregation. One key is encouraging a breadth of activities (the public school band and IB program are heavily Asian - but the sports teams are not - maybe it’s time to try other activities if Asians are pigeonholed at your school).</p>
<p>In any case - grades are useful, but good solid activities (whether sports, or research internships, or Quiz Bowl, or Robotics, or writing club or Jazz Band …) are almost as important since the kids will have to develop passions and drive on their own at some point. I would rather see a student with a B in AP Biology but passionate about Science competitions or a research project than one who got an A in AP Biology. The one who is passionate might even do better on the actual AP exam.</p>
<p>@lookingforward
I absolutely agree - there is an important broader picture (e.g. diversity can be a consideration too, and clearly colleges already look at exceptional success in activities/music/sports as a predictor of long-term achievement), but you can’t completely ignore the study of majors (STEM majors vs. Art/Theatre/Music majors are different) when talking about how to measure likely success, admit the right students, and help students pick the best majors and career paths. What I like about the SAT is that it nets out three basic skills that are “broader picture” and tests very little else:
- critical reading skills
- basic grammar/editing/writing
- basic math literacy (Algebra 1/Geometry)</p>
<p>Probably more important for STEM than for Art majors, but has some value to all. Obviously there are much better predictors than the SAT if all you cared about was STEM (perhaps certain AP tests and SAT2 tests or …), but the SAT is cheap to administer, cheap to use in admissions, and cost is a HUGE factor in admissions. St. John’s in NY got more than 50,000 (!) applications - imagine how much it would cost to process 50,000 applications thoughtfully without some metrics like standardized tests to help …</p>
<p>@Data10 mentioned “the NACAC paper suggests fewer than 10% pursue non-life science STEM majors among submitters and only ~1% pursue engineering.”</p>
<p>Ouch. That is catastrophic. Very sad. That some test-optional schools may not be as worried about preparing more students for engineering, STEM, or more broadly computer literacy, science literacy, and math literacy could be a problem. But … if some of the test-optional schools believe that, since they have a lower percentage of engineering and science majors, tests are far less useful than portfolios and essays and interviews, that is their choice - but I do worry that we need to raise the bar a little higher for science/math/engineering and stretch students to excel (high school math expectations in Europe and Australia are high, not just in Asia).</p>
<p>Have there been any worthwhile studies that tested how predictive an SES-adjusted SAT score is compared to other SES-adjusted factors? Perhaps a rich 2400 doesn’t mean more than a poor 1700, but adjusting scores to percentile rankings within socioeconomic strata might be a good way to test predictivity - is someone who scores 10 percentile rankings above the average member of their socioeconomic stratum more likely to do well in college? That would be worth testing to sidestep the typical “SAT score = money” response and determine the predictive value of test scores once you adjust for the most glaring issue.</p>
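<p>As a rough sketch of what that within-stratum adjustment could look like - the dataset, column names, and income brackets below are purely hypothetical, not taken from any study discussed here:</p>

```python
# Minimal sketch: convert raw SAT scores into percentile ranks within each
# SES stratum, then check whether that adjusted score still tracks first-year
# GPA. All data and column names here are made up for illustration only.
import pandas as pd

def add_within_stratum_percentile(df: pd.DataFrame) -> pd.DataFrame:
    """Add each student's SAT percentile rank within their own income bracket."""
    out = df.copy()
    out["sat_pct_in_stratum"] = out.groupby("income_bracket")["sat"].rank(pct=True)
    return out

if __name__ == "__main__":
    # Toy data, not real students.
    df = pd.DataFrame({
        "income_bracket": ["low", "low", "low", "high", "high", "high"],
        "sat": [1700, 1500, 1900, 2100, 2300, 2250],
        "first_year_gpa": [3.1, 2.7, 3.4, 3.2, 3.6, 3.5],
    })
    ranked = add_within_stratum_percentile(df)
    # The question above amounts to: does the within-stratum percentile still
    # correlate with first-year GPA once the SES component is stripped out?
    print(ranked[["sat_pct_in_stratum", "first_year_gpa"]].corr())
```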
<p>No, that was not what I said. Low correlation is the result when the sample is not representative of the general population as a whole. The restriction can be at the upper end, the lower end, a sample of a particular occupational group, people living in a certain city, etc. If Duke were to accept students randomly from all high school grads in America (a lottery), the correlation would be a lot higher. That was all I was saying; nothing more and nothing less.
I never said a 150-point difference is not “notable”. Look at this article I posted on numerous occasions (the graph in the middle of the page):</p>
<p>Observe the “accuracy” of the test scores, essay, and achievement in consistently predicting first year performance, whereas family income, curriculum, and personal qualities did not do a very consistent job of it. In short, there is no “false positive/negative” with test scores. Why would I say something that denies the facts?</p>
<p>While I do not agree with much of your position, this part I agree with. Have you ever wondered why? The best answer I know comes from Charles Murray (Coming Apart). In one of the three Op-Eds he wrote some years ago for the WSJ, he said the following: </p>
<p>In engineering and most of the natural sciences, the demarcation between high-school material and college-level material is brutally obvious. If you cannot handle the math, you cannot pass the courses. In the humanities and social sciences, the demarcation is fuzzier. It is possible for someone with an IQ of 100 to sit in the lectures of Economics 1, read the textbook, and write answers in an examination book. But students who cannot follow complex arguments accurately are not really learning economics. They are taking away a mishmash of half-understood information and outright misunderstandings that probably leave them under the illusion that they know something they do not. (A depressing research literature documents one’s inability to recognize one’s own incompetence.) Traditionally and properly understood, a four-year college education teaches advanced analytic skills and information at a level that exceeds the intellectual capacity of most people.</p>
<p>The actual regression coefficient values from the study you referenced are below, ranked from most predictive of first-year GPA to least predictive. </p>
<p>Note that curriculum did NOT add little to the prediction of freshman GPA. Instead it had a far greater regression coefficient than test scores. Personal qualities also had a greater regression coefficient than test scores. Your referenced study doesn’t seem to show that curriculum and personal qualities do a worse job of predicting 1st-year GPA than test scores, as you claim.</p>
<p>It is also important to note that even with all of these application criteria + race + gender + the student’s own prediction of GPA, the author was only able to explain less than 1/3 of the variation in 1st-year GPA. And of that less than 1/3 of the variation in 1st-year GPA that he could predict via the criteria above, test scores only added a small amount beyond what could be predicted by the combined remaining areas of the application. </p>
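<p>For readers who want to see mechanically what “added a small amount beyond the rest of the application” means, here is a toy incremental-R² sketch. The synthetic data, variable names, and effect sizes are invented for illustration and are not the Duke study’s actual values:</p>

```python
# Toy illustration of incremental R^2: how much does adding test scores
# improve a GPA model that already uses the other application ratings?
# All data below is synthetic; nothing here reproduces the Duke study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "curriculum": rng.normal(size=n),
    "personal_qualities": rng.normal(size=n),
    "essay": rng.normal(size=n),
    "achievement": rng.normal(size=n),
    "test_score": rng.normal(size=n),
})
# Invented relationship: most of the signal comes from non-test criteria.
df["first_year_gpa"] = (
    0.4 * df["curriculum"] + 0.2 * df["achievement"]
    + 0.1 * df["test_score"] + rng.normal(size=n)
)

def r_squared(predictors: list[str]) -> float:
    """R^2 of an OLS fit of first-year GPA on the given predictors."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["first_year_gpa"], X).fit().rsquared

base = ["curriculum", "personal_qualities", "essay", "achievement"]
r2_base = r_squared(base)
r2_full = r_squared(base + ["test_score"])
# The difference is the variance in GPA explained by test scores over and
# above the rest of the application.
print(f"R^2 without scores: {r2_base:.3f}, with scores: {r2_full:.3f}")
```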
<p>
To start with, we need to consider which colleges choose to go test optional. LACs are overrepresented, and they tend to have fewer students who choose to major in non-life science STEM fields, especially engineering. Colleges that have not chosen to go test optional also, as a whole, show relatively few non-international students in non-life science STEM fields, but the numbers aren’t quite as low, particularly for engineering. Many have different opinions about why relatively few US students choose to go into non-life science STEM fields, or even whether this is a problem. I think this is a complex question that has many components that go far beyond “intellectual capacity.” I’d expect the largest contribution is environmental, including both primary schools and role models.</p>
<p>Would you like to tell the people who manage the lists at <a href="http://doorways.ucop.edu">http://doorways.ucop.edu</a> how to certify honors courses? Note that not all high school designated honors courses are honors for UC/CSU application purposes.</p>
<p>The district was the Houston Independent School District, which is in Texas, not California.</p>
<p>I agree with you that this is far from a simple model. What this also means is that the interpretation of the regression coefficients can be tricky and difficult. My solution is to simply interpret the results in light of the meta-analyses done by distinguished scholars such as Kuncel. I don’t bother sweating the details. So why should institutions spend so much time and money going through the rest of the application when a simple test and GPA would do? There has to be a political agenda in there somewhere.</p>
<p>Using this approach, a quick perusal of the data tells me that Latinos took a curriculum as rigorous as whites in HS (I don’t believe it) that somehow did not translate into first-year grades in college (I doubt they took harder courses). In the same vein, personal qualities seemed to get it all wrong in rank-ordering the four groups, as far as first-year grades are concerned. These two categories lack consistency; whatever the regression coefficient, they fail to do the job.</p>
<p>After reading the Epstein article, it is pretty obvious the test-optional colleges are gaming the system. The non-submitters are doing likewise; they are trying to get into a better school than they otherwise could. Nothing wrong with that, of course.</p>
<p>I have to disagree with your second point. To truly understand subjects like math, physics, econometrics, etc., I think you need an IQ of 120 or more. That would be the top 10% of the population, which neatly dovetails with the NACAC paper you mentioned earlier. We don’t live in Lake Wobegon, and role models and the like can only make a difference at the periphery.</p>
<p>On a more personal note: what I would pay to understand theoretical physics… I wasn’t even close. I walked away - no shame, no blame.</p>
<p>The summary chart you are using to estimate “consistency” lists admissions criteria and 1st-year GPA by race at Duke. It gives no significant information about how 1st-year GPA correlates with the different components of the admissions criteria. Instead it only gives information on how admissions criteria and 1st-year GPA correlate with race. That’s nice if we want to predict students’ races, but it’s not as helpful if we want to predict students’ 1st-year GPA. For example, suppose personal qualities are useful in predicting 1st-year GPA, but not in predicting race. Then we’d expect to see little difference in rankings of personal qualities between races in the chart, since personal qualities are not good at predicting race, but we’d still expect to see a difference in 1st-year GPA between races because some of the other admission criteria are correlated with both race and 1st-year GPA. In short, personal qualities do indeed appear to be less predictive of Duke students’ races than test scores are, but that does not mean one is more predictive of 1st-year GPA than the other. </p>
<p>A better prediction would involve looking at which components of the application are most influential in predicting 1st-year GPA among students of the same race, which relates to the regression coefficients I listed. They show how much weight each component has in the best prediction of 1st-year GPA, using all of the listed application components plus race and gender, rather than looking at individual components alone without considering the remainder of the application. This is quite relevant to considering how the 1st-year GPA prediction would be influenced if a college goes test optional and starts admitting a group that is weaker in test scores, but likely stronger in other areas of the application (stronger to make up for the lack of test scores).</p>
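<p>To make the distinction concrete, here is a small toy simulation (invented data and effect sizes, not the Duke numbers) in which a “personal qualities” rating predicts GPA but not group membership, while another rating tracks both - exactly the situation where a group-level summary chart understates the rating’s predictive value:</p>

```python
# Toy simulation of the point above: a rating can be strongly predictive of
# 1st-year GPA while being useless for predicting group membership, so a
# group-by-group summary chart says little about its predictive value.
# All variable names and effect sizes are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)               # two hypothetical groups
other_rating = rng.normal(size=n) + 0.8 * group  # correlated with group
personal_qualities = rng.normal(size=n)          # independent of group
gpa = 0.5 * other_rating + 0.5 * personal_qualities + rng.normal(size=n)

df = pd.DataFrame({
    "group": group,
    "other_rating": other_rating,
    "personal_qualities": personal_qualities,
    "gpa": gpa,
})

# Group means: personal qualities look nearly identical across groups even
# though GPA differs, because only other_rating tracks group membership.
print(df.groupby("group")[["personal_qualities", "gpa"]].mean())

# Regression: personal qualities still carry a large coefficient for GPA.
X = sm.add_constant(df[["other_rating", "personal_qualities", "group"]])
print(sm.OLS(df["gpa"], X).fit().params)
```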
<p>
Several graphs of IQ by occupation appear in the study from the Center for Demography and Ecology – UW–Madison. Note that approximately 3/4 of people working in natural science + math had an IQ under 120. The same pattern occurred among electrical engineers: ~3/4 of electrical engineers had an IQ under 120. Even among college professors, the vast majority had an IQ under 120. Most persons working in STEM fields appear to have an IQ below your threshold. Are they all unable to truly understand math and physics? Also note the difference between correlation and causation. Persons who are good at math may be more likely to choose to work in mathematics and may be likely to have a higher IQ, but that does not mean a high IQ is required to understand math.</p>
<p>@Data10 Any IQ threshold is bunk. It’s tough to demarcate people using a number that only accounts for 50% of the variation in intelligence, let alone important factors like hard work.</p>
<p>@Canuckguy
I think that is fair, but perhaps a bit high, especially for some of the more applied (rather than theoretical) STEM fields and for the life sciences.</p>
<p>In addition, training for various medical fields (including some types of nursing and various specialized medical technicians), although not strictly “STEM” (by the NSF definition), has a strong science component and could be aggregated.</p>
<p>I found a strange-sounding survey from the Educational Testing Service of “estimated IQ by graduate school major” (based on GRE exam scores over the past three years) which would seem to confirm your assumption (and mine as well) that Physics majors have a higher IQ than most other majors.</p>
<p>Note that they say “IQ estimates”, not measured values – estimates. The page does not list any kind of methodology for these “estimates”. Perhaps the author simply chose the numbers that fit his perception of the typical IQ for that major, based on the listed average test scores, sort of like the websites that estimate IQs of historical figures, sometimes estimating that a particular poet they like had an IQ of over 200, or similar. It certainly does not provide any support for a minimum IQ of 120 being required to understand math, physics, or economics.</p>
<p>Here is an article written by a former Yale Professor that is interesting and thought provoking. I think it sums up much of what we have been discussing.</p>