Has Yale released SCEA Class of 2015 applicant numbers yet?

<p>@mifune: No, I was curious what an adcom might think looking at my scores. My scores were in the 700s for the SAT IIs that I took, but I got 5s on the AP. Do the two somehow negate each other? Or is one generally considered harder? I dunno.</p>

<p>Actually, a 2400 is problematic because it contains the stupid writing sample. Everyone with any sophistication who has looked into the scoring of those has been pretty horrified, and I don’t think the academic world puts much trust in that test. My kids’ college actually strips it out of the data that the admissions officers consider – they don’t even see it.</p>

<p>On an individual test, an 800 is certainly significant compared to a 600. Whether it’s significant compared to a 750 is a lot more open to question. Of course, the elite college data to which mifune refers suggests that it is. Even if you don’t (as I don’t) think that the colleges are basing their decisions on SAT scores much, one way or another they wind up choosing a higher percentage of people with 800s than of people with 750s, which at the very least suggests that the SAT is measuring something the colleges care about, with some degree of accuracy. </p>

<p>And with 800s, there is a long-tail effect. An 800 isn’t “perfect”, and the test isn’t that refined. On a more sensitive test, a bunch of the people who score 800s would get 810, or 850, or even 900. There IS clearly some meaningful difference between the 900 guys and the 750 guys, and that’s going to skew the data on 800-scorers. The number of people who get 800s on a test far exceeds the number who get 790 or 780, suggesting that half or more of 800-scorers are really super-800-scorers.</p>
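<p>The pile-up at the cap described above can be seen in a quick toy simulation. All numbers here are assumptions for illustration (a bell-curve latent distribution with mean 500 and SD 110; real SAT scaling works differently), not actual scoring data:</p>

```python
import random

# Toy ceiling-effect simulation (assumed parameters, not real SAT data):
# latent "true" scores follow a bell curve, but the reported score is
# binned in 10-point steps and capped at 800.
random.seed(0)
N = 1_000_000
latent = [random.gauss(500, 110) for _ in range(N)]
reported = [min(800, max(200, round(x / 10) * 10)) for x in latent]

n_800 = sum(r == 800 for r in reported)
n_790 = sum(r == 790 for r in reported)

# Everyone whose latent score exceeds the cap piles up at 800, so 800s
# outnumber 790s even though any single latent value above 800 is rarer.
```

<p>Under these assumptions the 800 bin comes out several times larger than the 790 bin, which is the shape of the real score distribution the post describes.</p>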

<p>Is it valid to use data from 2005? I understand the number of applicants with a 2400 SAT has increased dramatically over the last few years, from fewer than 100 to about 400. A 2400 in 2005 may have had greater value in admissions than it does now.</p>

<p>I agree that an SAT 800 is different from a 750, maybe comparable to the difference between an A and an A-. It gets complicated if students who got a 750 can bring it up to 800 by studying for the test. If so, how do universities distinguish a natural 800 from a manufactured one? Or is a manufactured 800 as good as a natural one?</p>

<p>To add to Iglooo’s point, what about kids who take expensive SAT/ACT prep courses? </p>

<p>And is the ACT generally viewed the same as the SAT?</p>

<p>^^Colleges have no way to know who has prepped for the test and who hasn’t. They do know, from things like your zip code, the activities you’ve participated in, and the quality of your high school, the kinds of opportunities you’ve had. Context is relevant in evaluating applicants.</p>

<p>Yes, the ACT is on equal footing with the SAT.</p>

<p>Igloo: The actual data from the 2005 study came from the late 90s, but I don’t think anyone would see a lot of difference now if you were looking at the kinds of things mifune was talking about. Admissions staffs weren’t relying more on SATs back then than they are now; if anything, I suspect it’s the opposite (and remember, I am the one arguing that they aren’t used as much as mifune thinks). </p>

<p>There’s a baseline on which mifune and I could certainly agree (or at least I think that’s the case): Standardized test scores (SAT and ACT) have predictive value for college admissions, and no other easily observable data have more predictive value. </p>

<p>As for the questions around test prep: The simplest answer is that the admissions staffs don’t seem to care much. There are several possible reasons for that, some more cynical than others, but they include:</p>

<ul>
<li><p>Even if the admissions staff doesn’t make decisions based on SATs, once the decisions are made the university wants “credit” for the highest SATs possible. Hence “superscoring” and the like: The university wants to say, “This is the data we use, so this is the data we present. If it happens to skew high systematically, so be it.”</p></li>
<li><p>Rich people buy more SAT prep than poor people. Universities may claim to be need-blind, but none of them objects to admitting rich people, and they do it consistently. An admissions policy that favors SAT prep is one of the ways to keep your student body affluent without being so crass as to say you want to do that.</p></li>
<li><p>As mifune says, the data suggest that big improvements in SAT scores through coaching are rare. The universities claim that they don’t put a lot of weight on the kinds of differences in SAT scores that people do achieve with coaching, so why worry about it? No big deal.</p></li>
<li><p>The “A” in SAT used to stand for Aptitude, but it doesn’t anymore. It’s Assessment. The difference is relevant here. If the SAT were measuring aptitude, then improvements from coaching would make it unreliable. But if the SAT is measuring skills . . . then coaching is fine. We don’t penalize people who take year-long Physics classes before taking the Physics SAT II or AP. So why would we penalize someone who takes a class in logical thinking, study skills, and self-assessment and then takes a test that measures those things? If a kid learns how to do that stuff better, he will be a better student in college.</p></li>
<li><p>One of the issues with the SAT (and ACT, too) is that it doesn’t actually do a great job of predicting success in college, although combined with high school GPA it does an OK job (and better than any other easily available data). But none of the research supporting that fundamental value of the SAT distinguishes between coached and uncoached scores. So what basis does a college have for distinguishing between them?</p></li>
</ul>


<p>Of course you can’t define every person by a test score. But generally, 2400 scorers and sub-2000 scorers are going to be very different students.</p>

<p>I feel that I can add an interesting perspective to the “expensive prep class” discussion. In the Czech Republic (Europe), the admission test used by universities, “Scio”, is basically a rip-off of the pre-2005 SAT. The company that administers it offers official test prep; it also likes to run a lot of questionnaires. One of the conclusions of one of its studies was basically this: while students who took its prep classes generally improved in performance, only an extreme minority of students scoring above the 90th percentile had taken the class. While that is not necessarily directly relevant, it corroborates the view that, with admission tests, “innate ability” is not “coachable” beyond a certain limit, or that “expensive prep classes” have only a limited impact.</p>

<p>Not that it would be relevant to this thread, since it is already tangential to the discussion which still isn’t relevant to the thread.</p>

<p>Does a large proportion of students do the test prep, though? I mean, if 0.01% of the student population does it, you’d definitely see a minuscule portion of 90+ percentile kids having taken it.
A better measure would probably be to compare the proportion of prep-course takers scoring at the 90th percentile or above versus those scoring lower, to see if the course gets most kids who use it over that bar, if that data is available.</p>
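<p>The base-rate point above can be made concrete with a quick sketch. Every number here is entirely hypothetical, chosen only to show the arithmetic:</p>

```python
# Hypothetical numbers illustrating the base-rate point: if only a tiny
# fraction of all takers preps, prepped students will be rare among top
# scorers even if prep works well.
total_takers = 1_000_000
prep_takers = 1_000                 # assume 0.1% of takers prep
p_top_if_prepped = 0.30             # assumed: prep triples the odds
p_top_if_unprepped = 0.10           # ~10% clear the 90th percentile

top_prepped = prep_takers * p_top_if_prepped
top_unprepped = (total_takers - prep_takers) * p_top_if_unprepped
prepped_share_of_top = top_prepped / (top_prepped + top_unprepped)

# Prepped students are well under 1% of the top decile even though the
# prep course triples their odds here, so an "extreme minority" finding
# alone doesn't show that prep is ineffective.
```

<p>In other words, the comparison the post suggests (success rates among prep takers versus non-takers) is the one that would actually answer the question.</p>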

<p>Essentially the entire CC community and many high-scoring friends/acquaintances I have in real life agree that these “expensive SAT/ACT prep courses” are essentially useless. The College Board has done extensive research on this topic as well, and its findings show that extreme improvements are so rare as to be basically negligible when considering the effectiveness of test prep classes as a whole.</p>

<p>How do you then explain the increasing number of kids who get 2400?</p>

<p>^ Are you really attributing that to test prep classes…?</p>

<p>Either that or studying harder for the test. What else? Kids got smarter all of a sudden?</p>

<p>Better education system?</p>

<p>Why should the SAT be dismissed because you can prep for it? I’ve always viewed diligence as a cousin of intelligence. It takes skill to prepare for a test, and the SAT is like any other test in that respect. This may not be the “raw intelligence” factor that some allude to, but in college I do not doubt that the ability to practice and study for a test is just as valuable, if not more so, than smarts (however vague that term is). The reason a student with a high SAT score tends to have a good GPA, academic honors, ECs, etc. is not that those factors are inherently related (in other words, that preparation for one influences preparation for another) but that students who have the ambition to excel in one of those areas usually want to excel in all of them.</p>

<p>And like many people have said, the effectiveness of prep courses is questionable. But if a student can improve his/her score by identifying his/her weaknesses (through either practice tests or the actual SAT) and tackling them, then I say that is an admirable quality, and a type of “aptitude” in itself.</p>


<p>A score of 2400 will be viewed in a more favorable light than a 2100 for purposes of evaluating that portion of the application. When two SAT scores diverge by three hundred points, that is frequently an indication that the students representing them are of two entirely separate calibers. There’s no reason to assume otherwise. </p>

<p>Nonetheless, compensations will be made to admit students with notably lower scores if there is sufficient reason to do so. (“Reason” should be viewed in a loose context, as it frequently has little to do with meritocratic admissions procedures. But I won’t delve into this.) Universities like Harvard, Yale, and Princeton certainly desire to be represented with high 25th and 75th standardized testing percentiles, but not at the exclusion of ethnic diversity, athletic competitiveness, alumni/benefactor appeasement, and so on.</p>


<p>AP exams naturally utilize a less precise scoring scale. On many assessments, a grade of “5” corresponds to a raw-score window spanning roughly forty percent of the available points.</p>


<p>Yes, perfect scores are more prevalent than slightly lower scores. But a main contributor is that there is simply a greater number of raw combinations to obtain perfect scores, which notably offsets the relative difficulty of achieving them. For instance, if the essay score is factored in, there are often twenty-three unique raw score combinations that may correspond to a 2400 on the SAT*, while there are comparatively fewer that equate to a 2370, 2380, or 2390 (<a href="http://professionals.collegeboard.com/profdownload/sat-percentile-ranks-composite-cr-m-w-2010.pdf">source</a>). That supplies the predominant reason why such scores are less prevalent than those of 2400. An actual perfect score (perfect raw score on all three sections and a perfect essay) is a feat likely achieved by fewer than forty students each year, even including super-scored “true-perfects.”</p>

<ul>
<li>65, 66, or 67 raw score on the Critical Reading in combination with a 12/80, 12/78, 12/75, 11/80, 11/78, 10/80, 10/78 (occasionally), or 9/80 essay/raw score composite on the Writing, and a 54 (or ever so occasionally, a 53) on the Math</li>
</ul>
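<p>To illustrate the combinatorial point, here is a sketch that simply enumerates the raw-score values listed in the footnote above. Curves vary by administration and some entries apply only occasionally, so the product is an upper bound on the combinations available in any one sitting, not an exact count:</p>

```python
from itertools import product

# Raw-score values from the footnote above. This Cartesian product is
# an upper bound on the tuples that can map to 2400 in one sitting,
# since some entries (e.g. Math 53) apply only occasionally.
critical_reading = [65, 66, 67]
writing = [(12, 80), (12, 78), (12, 75), (11, 80),
           (11, 78), (10, 80), (10, 78), (9, 80)]  # (essay, raw)
math = [54, 53]                                    # 53 only rarely

combos = list(product(critical_reading, writing, math))
n_candidates = len(combos)  # 3 * 8 * 2 = 48 candidate raw-score tuples

# The point stands regardless of the exact figure: many raw-score
# tuples collapse onto a single scaled 2400, while a score such as
# 2390 can be reached through far fewer of them.
```

<p>Whatever the precise per-administration count, the asymmetry is what makes 2400 more common than 2390.</p>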


<p>There isn’t any conceivable reason why that would be the case. </p>


<p>They don’t, and they have no basis for differentiating. I think JHS’ analogy is worth repeating:</p>

<p>”We don’t penalize people who take year-long Physics classes before taking the Physics SAT II or AP. So why would we penalize someone who takes a class in logical thinking, study skills, and self-assessment and then takes a test that measures those things? If a kid learns how to do that stuff better, he will be a better student in college.”</p>

<p>There’s the common cynical notion that any individual may obtain a perfect score through completion of an ample supply of practice tests, or through instruction from expensive preparatory courses or a private tutor for a sufficient length of time. On that view, the test is so vulnerable to exploitation by professionally conditioned minds as to spoil its theoretical justification. But that isn’t true. Benefiting from extortionately priced preparation certainly doesn’t seem to be a prevalent circumstance among those who are actually achieving at a phenomenal level.</p>

<p>Nor are the gains that appreciable for those who do travel that route. A common subterfuge of test preparation companies is to employ practice exams that exceed the standardized difficulty of the actual test in order to assess the “baseline” score. When students achieve more favorably on the actual exam, the immediate presumption is to attribute such improvement to the company’s training procedures. In actuality, it’s often nothing more than the subtle business tactic of administering score-deflated practice material to convey the illusion of stimulated improvement. Thus, they can bluster about how much they ostensibly raise students’ scores. That, in turn, impels the consumption of tutoring and siphons thousands of dollars from parents who are sitting ducks for this deceit. In reality, the elevation is ordinarily less than thirty points on the SAT and less than one point on the ACT. Tutoring and preparatory classes are far less of a confounding factor influencing scores than commonly supposed.</p>


<p>The SAT saw an increase of 85 perfect scores among the class of 2010 over the class of 2009. The increase in the number of test takers accounts for approximately one-sixth of those. Naturally there is some inherent level of statistical noise in the data that may or may not account for the others. Moreover, a fluctuation of 72 (the figure that remains after netting out the increase in takers) at a particular distinction, when the sample size is 1.6 million, doesn’t readily lend itself to productive dissection. In terms of raw decimals, it’s .000194 versus .000239. Quite frankly, that’s not a substantial difference, and perhaps not greatly symptomatic of an underlying social trend – or at least not to the purported extent. In total, scores at 2300 or above increased by just under 8%, which is perhaps a more relevant finding by virtue of including a greater sub-section of the data. Whether this covaries with a greater emphasis on challenging school curricula, or whether targeted or de facto preparation drove the score “upsurge” at the uppermost percentiles (99.57 to 99.98, to be exact), is open to debate.</p>
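<p>The per-taker arithmetic behind those decimals can be checked in a few lines. The perfect-score counts (292 and 384) are the ones cited in this thread; the taker totals are approximations inferred from the quoted rates, not official figures:</p>

```python
# Back-of-the-envelope check of the per-taker rates quoted above.
# Perfect-score counts are from this thread; taker totals are
# approximations inferred from the quoted rates (assumed, not official).
perfect = {2009: 292, 2010: 384}
takers = {2009: 1_505_000, 2010: 1_607_000}

rates = {y: perfect[y] / takers[y] for y in perfect}
# roughly 0.000194 (2009) vs 0.000239 (2010) per taker

# Had the 2009 rate simply held at the larger 2010 volume, volume
# alone would explain part of the 85-score increase:
expected_at_2009_rate = rates[2009] * takers[2010]
volume_effect = expected_at_2009_rate - perfect[2009]
rate_effect = perfect[2010] - expected_at_2009_rate
```

<p>Decomposing the jump this way separates how much came from more people taking the test from how much came from a genuinely higher per-taker rate.</p>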

<p>Nevertheless, with all the focused attention on a particular tally, note that aggregate scores didn’t increase. They remained precisely equal to those of last year. (Females gained one point in Mathematics and lost a point in Writing, while males remained uniform across the board.) Scores have actually decreased nine points from 2005, when the Writing section was first instituted. </p>

<p><a href="http://professionals.collegeboard.com/profdownload/2010-total-group-profile-report-cbs.pdf">Higher Education Professionals | College Board</a></p>

<p><a href="http://professionals.collegeboard.com/profdownload/sat-percentile-ranks-composite-cr-m-w-2010.pdf">Higher Education Professionals | College Board</a></p>

<p><a href="http://professionals.collegeboard.com/profdownload/sat_percentile_ranks_composite_cr_m_w.pdf">Higher Education Professionals | College Board</a></p>


<p>If so, why didn’t the average increase?</p>


<p>True or not, the underlying assumption about preparing for the SAT is that you learn test-taking skills more than the content. Taking physics for a year before taking the physics test is legitimate, but taking a prep course exclusively to learn which problems are most likely to appear, and how best to answer when you don’t know the content, is not.</p>

<p>The number who score 2400 has been increasing steadily over the years. That cannot be attributed to statistical fluctuation. I remember reading in the newspaper 10-15 years ago about someone who got a perfect score. I don’t think the 384 who scored 2400 in 2010, or the 292 who did in 2009, appeared in any newspaper.</p>

<p>@mifune - you clearly know a lot about all this and have a veritable abundance of statistics to support your claims. However, you rely far too heavily on the statistics you extol, and ignore everything else. Which, in an area as dubious and non-concrete as admissions, I would suggest isn’t a great idea.</p>

<p>To quote someone (and I think it’s possibly a long-dead PM), “there are three kinds of lies: lies, damned lies, and statistics.” The point being, if you’ve got enough data and a statistics textbook, you can pretty much make the data say anything you want. If I gave all the data you’ve quoted to someone with the time and inclination (I’m not that person), there’s no doubt they could create a 100% irrefutable study to show any or all of the following: lower test scores are better, higher test scores are better, any test score except 2250 is best, etc. Statistics can generally be made to support the view of the person employing them.</p>

<p>Be wary.</p>

<p>Just to clarify, I’d like to make a distinction between the types of learning JHS mentions. Take a physics course for a year and then take the test: that is education. You learn, you own the knowledge, and hopefully you build on it; it becomes a foundation for college, and a legitimate tool for college admissions. Prep courses, as I understand them, are not necessarily meant to strengthen knowledge. Rather, they take what you know and train and focus it to extract the most points from a test. If successful, you may score better on the SAT, but that hardly translates into success in college, since you will face different tests there.</p>

<p>I thought that’s why colleges are moving away from SAT-heavy admissions. I am not talking about no-score admissions. At Yale, for example, they emphasize curriculum and GPA.</p>