"No, the SAT is not Required." More Colleges Join Test-Optional Train


<p>You can choose to interpret it that way, but you would also have to assume that decades of psychometric research do not exist. Even back in 1994 there was little disagreement among the experts:</p>

<p><a href="http://www.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf">http://www.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf</a></p>

<p>As Kuncel puts it in the TED talk I posted earlier, the subject is getting boring; it is an open-and-shut case now.</p>

<p>@2018RiceParent‌ I first came across this data almost a decade ago. It was first posted by the theoretical physicist Steve Hsu, who was not yet VP of MSU at the time… a brilliant way of shutting someone up is to kick him upstairs. LOL</p>

<p>I have always been told that you need a score of 115 to truly grasp college material, so I don’t think 120 is unreasonable for the most g-loaded of all disciplines. Perhaps you folks set a lower threshold in the US?</p>

<p>@Canuckguy‌ It’s not like the US doesn’t still dominate a lot of those tough fields.</p>

<p>The quote you replied to related to the table about race, which you incorrectly used to claim that test scores consistently predict 1st year GPA. The only valid interpretation of the race table relates to correlations with race, such as test scores being correlated with the race of Duke students. In contrast, the regression coefficients I listed do show the relative contributions to a prediction of 1st year GPA, rather than race. However, the link you listed seems to talk about IQ. It sounds like at least this part is in reference to the IQ comment, which you did not quote. The link does not list anything to suggest a 120 IQ is required to truly understand math or economics, as you claimed earlier.</p>

<p>Most articles I’ve seen about the lack of non-life science STEM college grads in the United States focus on the lack of female grads, rather than the overall total. This is understandable when you consider that the ASEE reports that only 18.4% of engineering BS grads are female. At the graduate level the numbers drop even lower because international students often dominate. For example, the report at <a href="http://www.nfap.com/pdf/New%20NFAP%20Policy%20Brief%20The%20Importance%20of%20International%20Students%20to%20America,%20July%202013.pdf">http://www.nfap.com/pdf/New%20NFAP%20Policy%20Brief%20The%20Importance%20of%20International%20Students%20to%20America,%20July%202013.pdf</a> mentions that more than 70% of graduate students in EE are international. Among my grad EE class at Stanford, I’d guess 80-90% were international (excluding SCPD students). I can only recall one woman in my grad class who did her undergrad in the US. If the primary issue behind the lack of non-life science STEM grads were not enough students having an IQ of 120+, then why are there such drastic differences in the numbers of male and female students? Females actually average a slightly higher IQ than males. Similarly, why are international students so overrepresented at the grad level, when there are no drastic IQ differences between the countries?</p>

<p>I spent most of the 90’s working for a test company, i.e. inside the “sausage factory”. I also have more than 20 years’ experience in test preparation, including SAT, ACT, GRE, GMAT, and LSAT. The SAT is very good at measuring what it measures, which is a narrow mix of cleverness and learning. The real question is whether SAT scores predict college success. Empirical studies at Bates College and other campuses have shown no difference in outcomes between test submitters and non-submitters. Other SLAC admissions officers have told me that beyond a certain point, probably 1700 - 1800, there are no differences in outcomes for admitted students. </p>

<p>Many years ago the College Board published a study showing a correlation coefficient of .35 between SAT scores and freshman grades. I never understood why they used the study to support the use of SATs. The coefficient of determination, r squared, was .12, meaning only 12 percent of the variation in freshman grades could be explained by SAT scores.</p>
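<p>For anyone who wants to check that arithmetic, here is a minimal sketch; only the 0.35 correlation comes from the College Board study quoted above, and the rest follows from it:</p>

```python
# The coefficient of determination is the square of the correlation
# coefficient: the fraction of variance in freshman grades that SAT
# scores can explain under a simple linear model.
r = 0.35                                      # College Board's reported correlation
r_squared = r ** 2                            # coefficient of determination
print(f"r^2 = {r_squared:.4f}")               # 0.1225, i.e. ~12%
print(f"unexplained = {1 - r_squared:.0%}")   # ~88% of variation unexplained
```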

<p>It’s well-known that the metric most highly correlated with SAT scores is family income. In the early days of testing this class bias was quite deliberate. The Ivies needed a tool to cope with a growing volume of applications from Jewish students. The SAT was adopted as an “objective” tool for restricting admission to mostly high-income white Christian men.</p>

<p>Over the last decade I’ve helped numerous low-income Hispanic students get to some of the top colleges in the country, including the Ivies and the most selective SLAC’s. With a single exception, all have graduated in four years or are on track to do so. Most are making A’s and are looking forward to graduate, law, or medical school. And nearly all had SAT’s in the bottom 25% of admitted students at their colleges.</p>

<p>Will I continue to help students raise their SAT scores? Of course, as long as colleges continue to use them as a criterion for admission. Do I mourn the decision of Bates, Bowdoin, Smith, and now Wesleyan to make score submission optional? Absolutely not. </p>


<p>With the general increase in female graduation rates and the decline in male graduation rates, it is a little mysterious why the percentage of female Computer Science grads in particular has declined in the US. Perhaps it is due to strong opportunities opening for women in other areas of study, and perhaps due to cultural factors, but women have certainly become increasingly dominant in the faster-growing fields for recent grads over the past ten years (even some of the higher-paying fields). See the TED talks on the “Demise of Guys” and Dr. Rosin’s on “The Rise of Women.” Even in some basic science areas: a large majority of biology-related graduates are women, as are about half of chemistry and related majors. Men drop out of these difficult majors at a pretty high rate. The reasons high-achieving, high-math-aptitude high school females (and even non-Asians in some schools) choose non-engineering and non-computer-related fields seem to be cultural, and there are lots of threads on that topic. I have seen more balanced representation (relatively more females, more non-Asians) from private schools in math contests around here, so school-specific culture may be involved. In any case, female vs. male graduation-rate differences by major are not likely to be strongly related to IQ or standardized testing, but more likely to cultural factors.</p>

<p>Raw data on college graduation by major by sex is at <a href="http://nces.ed.gov/programs/digest/d13/tables/dt13_318.30.asp">Bachelor’s, master’s, and doctor’s degrees conferred by postsecondary institutions, by sex of student and discipline division: 2011-12</a></p>

<p>Some of the majors which women now dominate (graduate at a much higher rate) are surprising when you look at the detailed data.</p>

<p>@2018RiceParent‌ The decline of girls in CS can be explained pretty easily by looking at the CS programs that are successful at attracting girls. All they have to do is make sure girls know about CS and evaluate applicants on CS potential rather than CS experience, and voila: suddenly it turns out there are just as many good CS students of each gender.</p>


<p>Data, how do these numbers compare with those of submitters in the study? It could be that the percentage of non-submitters among STEM majors is much smaller than that of submitters, but without percentages for submitters it’s hard to compare the two groups. (Sorry to ask, but I couldn’t locate the 10 percent/1 percent info in the study; as you know, it’s rather long!)</p>

<p>Less than half a percent of all students who received a bachelor’s degree in 2010-2011 received it in engineering.
<a href="Fast Facts: Most popular majors (37)">Fast Facts: Most popular majors (37)</a></p>

<p>and only 7.4 percent of all students end up majoring in science.
<a href="http://www.bloombergview.com/articles/2013-07-17/why-american-students-don-t-major-in-science">http://www.bloombergview.com/articles/2013-07-17/why-american-students-don-t-major-in-science</a></p>

<p>So my question is how different submitters and non-submitters are in the percentage who end up with degrees in these fields.</p>

<p>I have heard that while women have a slightly higher IQ on average, the standard deviation is much higher for men, meaning there are many more highly intelligent men than women (and many more really dumb men than women). On the surface this fits the data somewhat: there are far more men in (what are thought to be) more challenging STEM fields, while on the other hand women have far higher high school and college graduation rates, are way overrepresented at colleges, and perform better on average. I think cultural factors play a bigger role in both of these phenomena, though.</p>
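<p>To see why a larger spread can matter more at the extremes than a small mean difference, here is a minimal sketch; the means and SDs below are purely illustrative assumptions, not measured values:</p>

```python
# Sketch of the greater-variability argument: with a slightly lower
# mean but a larger standard deviation, the male distribution still
# produces more people in BOTH tails. Parameters are assumed for
# illustration only.
from scipy.stats import norm

groups = {
    "women": (100.5, 14.0),  # hypothetical mean, SD
    "men":   (100.0, 15.5),  # hypothetical mean, SD
}
for label, (mean, sd) in groups.items():
    p_high = norm.sf(130, loc=mean, scale=sd)  # P(IQ > 130), right tail
    p_low = norm.cdf(70, loc=mean, scale=sd)   # P(IQ < 70), left tail
    print(f"{label}: P(IQ>130) = {p_high:.2%}, P(IQ<70) = {p_low:.2%}")
```

<p>With those assumed numbers the male group comes out overrepresented at both tails despite the lower mean, which is the shape of the argument; whether the real parameters look anything like this is exactly what is disputed.</p>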

<p>As I mentioned earlier, students who are interested in non-life science STEM majors are more likely to have high SAT scores than the overall population, and students who have high SAT scores are less likely to be non-submitters. So I do not find it surprising that fewer non-submitters chose such majors. The report does not list specific numbers, but the graphs suggest somewhere between 3-4% of submitters chose physical science, and 3-4% chose computers & mathematics. Non-submitters appear to be slightly above 2% instead of 3-4%. In contrast, ~37% of submitters and ~40% of non-submitters chose to major in humanities or social science. Both groups were tremendously more likely to major in humanities or social sciences than non-life science STEM. Note that I have said “non-life science STEM” throughout this thread because life science majors display different trends than the remainder of the STEM grouping. Women tend to be overrepresented in life science majors, and there is no shortage of graduates in such majors.</p>

<p>The page you linked lists the number of doctoral students in engineering and the number of bachelor’s degrees overall. The number of doctoral students in engineering is indeed less than half a percent of the number of bachelor’s degrees, but far more than 1% of bachelor’s degrees are in engineering fields. The earlier linked page at <a href="http://nces.ed.gov/programs/digest/d13/tables/dt13_318.30.asp">http://nces.ed.gov/programs/digest/d13/tables/dt13_318.30.asp</a> mentions that 5.5% of bachelor’s degrees were in its engineering major grouping. As discussed earlier, the discrepancy between the percent of engineering majors among submitters at test-optional colleges and in the NCES data likely relates to a bias in which colleges choose to go test-optional. LACs, which are less likely to offer engineering majors, seem to be overrepresented among test-optional colleges, and the LACs that do offer engineering tend to have few students who choose that major.</p>

<p>@dividerofzero Ouch! I must admit I was somewhat surprised that our students outperform yours on PISA even at the 90th percentile. I would expect it at the 50th or the 25th percentile, but the 90th?</p>

<p>If the quality of students around the 90th percentile is different, I wonder how it affects the rigor of the university curriculum. For example, one of my kids was rejected by this program, the only rejection any of them had. I’d love to know how it compares with yours (be sure to check the program requirements):</p>

<p><a href="Welcome to Mathematics Business and Accounting Programs | Mathematics Business and Accounting Programs">Welcome to Mathematics Business and Accounting Programs</a></p>

<p>@Data10 No. I knew that test scores consistently predict first-year GPA long before the Duke study was even done. They are also correlated with race, as you suggested. These should be common knowledge.
Perhaps the elephant in the room is here. It should be common knowledge as well, but apparently it is not:</p>

<p><a href="http://www.psychologicalscience.org/media/releases/2004/pr040329.cfm">http://www.psychologicalscience.org/media/releases/2004/pr040329.cfm</a></p>

<p>“Standardized testing is IQ testing” is what Kuncel was alluding to in the TED talk; they are interchangeable. That is why I posted the link. I can trust the data generated because the data are normed and have passed muster as far as reliability and validity are concerned; there is little chance for bias in measurement or judgement. Can I say the same for admission office evaluations at Duke? Not if I know anything about Philip Tetlock and his work on expert predictions. The part about “college counsellors” is absolutely gorgeous:</p>

<p><a href="Everybody’s an Expert | The New Yorker">Everybody’s an Expert | The New Yorker</a></p>

<p>So, not only are the Duke data not representative of college students in the US, we can also expect personal evaluations to change with the evaluators. Why then do we even concern ourselves with regression coefficients from said data when the data are so “contaminated”? No amount of statistical manipulation is going to clean that stuff up. If those other categories are so good, why am I not seeing them pop up in other studies with regularity? Besides cognitive ability, the only other variable that comes up regularly, a distant second, is conscientiousness. It correlates with job performance at around 0.2 to 0.4, in contrast with 0.5 or more for cognitive ability.</p>

<p>Just curious, are you the hedgehog and I am the fox? Or is it the other way around?</p>


<p>The question then becomes: is the current correlation due to test bias (e.g. the claim that SAT vocabulary words tend to include many that are more commonly used in life activities associated with high income), or due to widely varying environmental circumstances between high- and low-income families (e.g. quality of K-12 schools)?</p>


<p>Some correlation is expected: there are heritable conditions that affect both the intelligence of the child and the parents’ income-earning potential. And given the current percentage of poor students already excelling at the SAT and ACT (17% of the highest 10% of SAT scorers come from the lowest quartile of income in the US), it would be interesting to see what the “expected” percentage of high scorers (top 10%) from the lowest income quartile would be for a “perfect” test (20%? 24%?).</p>
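<p>One way to put a number on that “expected” percentage is a quick Monte Carlo sketch: assume the test measures ability perfectly and ability correlates with family income at some level rho (the 0.30 below is an illustrative assumption, not an estimate), then count what share of top-decile scorers fall in the bottom income quartile. With zero correlation the answer is 25% by construction; any positive rho pulls it lower.</p>

```python
# Monte Carlo sketch: share of top-10% scorers coming from the
# bottom income quartile, given an ASSUMED income-ability
# correlation rho.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 1_000_000, 0.30                  # rho is illustrative only
cov = [[1.0, rho], [rho, 1.0]]
income, score = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

top_scorer = score >= np.quantile(score, 0.90)    # top 10% of scores
low_income = income <= np.quantile(income, 0.25)  # bottom income quartile
share = (top_scorer & low_income).sum() / top_scorer.sum()
print(f"share of top scorers from the bottom quartile: {share:.1%}")
```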

<p>Although there is significant value in the odd mosaic of evaluation criteria colleges use today, both in research and in exploring new and better metrics, there is a significant risk that adding more subjective metrics (other than grades) could make things worse rather than better, as in the admissions officer study referenced in the New Yorker article posted by @Canuckguy‌. The more data, especially subjective data, given to the admissions officer, the worse the decision gets?! If they can’t pick the students who are likely to be successful better with more data, can we expect them to get their “diversity” choices right, which are even harder to evaluate and even more subjective?</p>

<p>We also have to be aware of which metrics are “fixable” without making the situation worse. That test scores correlate with parental income, and also with grades, completion rates, and even “success” (defined in some arbitrary way), is clear from the many studies quoted in this thread. But the correlation varies depending on whether we are talking about the general population, the “college bound” population (the top 50 or 60%), or candidates for elite colleges (presumably the top 10% or even higher).</p>

<p>Will the forthcoming “redesign” of the SAT help with the income/score gap (<a href="https://www.collegeboard.org/delivering-opportunity/sat/redesign">https://www.collegeboard.org/delivering-opportunity/sat/redesign</a>)?</p>

<p>I am suspicious; the reading passages are already pretty well chosen. For example, the sample test that the College Board has posted on its web site makes the test taker read sections from:

  • a 20th-century novel
  • a high-level layman’s overview paper on physics covering relativity and quantum mechanics
  • a textbook passage about MLK’s “I Have a Dream” speech</p>

<p>(There is a similar diversity in the extended reading passages used in the other recent SAT tests that I have seen.) There is a wide variety of college-level passages that must be read and understood to excel on the test. Given that the goal of the SAT (and ACT), at least in part, is to test “college readiness,” and these are college-level reading passages, and quite diverse ones, the test seems pretty objective to me; they are clearly trying. Fixing it might even make it worse.</p>

<p>The UC study mentioned earlier found that a combination of SAT I scores and SES variables explained 13.4% of the variation in 1st year GPA. Mathteacher mentioned the same 13% of variation in her earlier post, referencing a study the College Board published about the correlation between SAT scores and 1st year GPA. I certainly wouldn’t call explaining 13% of variation, with the help of SES ratings, “consistently predicting first year GPA.”</p>

<p>However, more important is what happens when you consider the remaining parts of the application. The Duke studies found that when you consider the remaining sections of the application, the regression coefficients for test scores drop to a lower value than all or nearly all of their other evaluation ratings, implying that test scores add relatively little to the prediction of 1st year GPA, or of the chance of switching out of a tough major, beyond the information available in the other sections of the application, the ones a test-optional college would use to evaluate applicants. The UC study found the following percentages of explained variation in college GPA under different models:</p>

<p>GPA + SES – 20.4% of variation explained
GPA + SES + SAT I – 24.7% of variation explained</p>

<p>Note that the SAT only explained ~4% of the variation in college GPA beyond a prediction based on only UW GPA and SES. The Duke study shows the regression coefficient for test scores gets far smaller when course rigor, LORs, and other factors are included in the prediction, suggesting the SAT would improve the accuracy of the college GPA prediction by far less than the 4% gain in explained variation found in the UC study, had they also considered the remainder of the application.</p>

<p>So the studies suggest that removing SAT scores from your 1st year GPA prediction model means the amount of variation in 1st year GPA you can explain drops by a small amount, far less than 4%. How can explaining such a minuscule portion of the variation in 1st year GPA be called a consistent prediction? Also note that I referenced 3 different studies that have been mentioned in this thread. It’s not just the Duke study that shows relatively weak predictive ability.</p>
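<p>For readers who want to see what this kind of “incremental R squared” comparison looks like mechanically, here is a minimal sketch on synthetic data; every number below is invented, and only the method of comparing nested regression models mirrors what the UC and Duke studies do:</p>

```python
# Fit nested least-squares models and compare how much explained
# variation (R^2) an SAT-like predictor adds beyond GPA and SES.
# All data and coefficients here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
hs_gpa = rng.normal(size=n)
ses = rng.normal(size=n)
sat = 0.5 * hs_gpa + 0.3 * ses + rng.normal(size=n)  # SAT overlaps GPA/SES
college_gpa = 0.6 * hs_gpa + 0.2 * ses + 0.1 * sat + rng.normal(size=n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + predictors)  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

base = r_squared([hs_gpa, ses], college_gpa)
full = r_squared([hs_gpa, ses, sat], college_gpa)
print(f"GPA+SES R^2 = {base:.3f}")
print(f"GPA+SES+SAT R^2 = {full:.3f}")
print(f"incremental R^2 from SAT = {full - base:.3f}")
```

<p>Because the synthetic SAT variable mostly restates information already in GPA and SES, the incremental R squared comes out small, which is the pattern the studies report.</p>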

<p>Most researchers do not have access to the internal ratings college adcoms use when evaluating candidates. Instead they have access to numerical stats such as GPA, SAT score, family income, etc., so studies usually focus on the available numerical criteria. However, as mentioned above, even just GPA and SES in the UC study were enough to show that the SAT I added little to the predictive ability for college GPA beyond these 2 factors.</p>

<p>It might help to think about why we cannot explain the vast majority of the variation in college GPA by looking at such stats. For example, the Duke study was only able to explain about 1/3 of the variation in college GPA. That’s notably better than the UC study, which probably relates to considering additional sections of the application instead of just GPA, test scores, and SES. Still, the vast majority of the variation in GPA remained unexplained in both studies. For example, how do you predict whether an accepted student has internalized reasons for achieving (and not drinking/partying/…), so that he will continue to maintain a similar level of achievement after he leaves home and his parents are no longer forcing him to study and do assignments? None of the application criteria will predict this well, but I’d expect personal qualities to do a better job of it than test scores.</p>

<p>@Canuckguy‌ I don’t much care for that sort of contest right now (and yes, I know we’re behind on international tests). But in the end, American scientists get the Nobels; it’s just like how the nation can be so lazy and yet dominate the Olympics.</p>

<p>As for the UWaterloo program, since we’re talking about the best of the country: Penn has some really good joint majors (like the M&T Jerome Fisher program) in the same area. You could check those out.</p>

<p>If you want to make it personal for some unfathomable reason (after all, you made it a national dick-waving contest for no reason already), I’m doing CMU’s ECE program and will go for a double in Robotics (and maybe do CS+Robotics instead). Feel free to look up the requirements for those double major scenarios.</p>

<p>^Perhaps it is a difference in communication style; I thought I was asking a genuine question in response to your self-deprecating comment.</p>

<p>I posted the Waterloo program not as a flag-waving exercise, but to better understand why Americans, at least here on CC, have such disdain for undergrad business programs. We don’t share this feeling up here, and undergrad business is among the most competitive programs on offer. I was genuinely looking for some kind of “compare and contrast.”</p>

<p>I am bowing out. My apologies to anyone that I may have offended.</p>

<p>@Canuckguy‌ :frowning: My bad for misunderstanding you. I was just trying to contextualize it in terms of “the US has low standards in math and science” (which I would agree with as a whole, though we still manage to dominate worldwide in many STEM fields). I’m also surprised by the 90th-percentile issue, but then again high school education is weak here too: a good portion of entering college students end up taking remedial courses (50% at 2-year colleges, around a quarter at 4-year colleges). Then again, there are people coming in and basically skipping a year or two of college.</p>

<p>Undergrad business is the most common major for Americans, actually. Some colleges (Harvard, Yale, the University of Pennsylvania) are actually not the best for STEM but have built their reputations on non-STEM fields, particularly business (since law and medicine aren’t that big at the undergrad level). Penn’s prestige comes mostly from Wharton, as most posters here would acknowledge. I’m surprised you would detect some disdain, but there might be an anti-business group on here.</p>

<p>As for compare and contrast: the program requirements look fairly tough, but there are definitely rivals in the United States. I can’t really compare it to the whole of American higher education, but at CMU I’d say the main difference is the interdisciplinary stuff, like computational finance. We don’t really have a directly comparable program, though, to my knowledge. That’s more up Penn’s alley, but maybe I’m just wrong here because I’m majoring in ECE and don’t know much about Tepper, our B-school.</p>

<p>"Some colleges (Harvard, Yale, the University of Pennsylvania) are actually not the best for STEM but have build their reputations based on non-STEM fields, particularly business "</p>

<p>You think Yale’s reputation has something to do with the business field? I don’t agree. Humanities, yes, business, no.</p>

<p>Nor do I agree that Harvard is not the best for STEM in general – it’s not at the tippy top for engineering, but for math, physics, and chemistry, it’s in the running with tech peers.</p>

<p>With video games and other addictions (at least for males) becoming growing problems, and among the leading causes of college failure, maybe elite colleges will be forced to turn to other tests to better predict college success and raise their stats:</p>

<p>1) tests which determine whether you are addicted to computer games (see <a href="http://www.insidehighered.com/views/2012/12/13/students-and-colleges-vulnerable-computer-gaming-addiction-essay">http://www.insidehighered.com/views/2012/12/13/students-and-colleges-vulnerable-computer-gaming-addiction-essay</a> and the multiple interesting references in the article, which go through some of the unfortunate data on poor outcomes due to video game addiction)</p>

<p>2) tests for other addictions: how various drug use patterns affect college outcomes is noted here: <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3517265/">Drug Use Patterns and Continuous Enrollment in College: Results From a Longitudinal Study</a></p>

<p>I doubt that they correlate as well with SATs as with grades, but perhaps they correlate with both.</p>

<p>Perhaps some strange colleges will start to add urine or blood tests to the required admissions testing, since ruling out (or treating early) drug problems would improve their graduation numbers. Who would have thought 20 years ago that these addictions would be such a big factor?</p>