How Harvard and Yale cook the books -- Read at your own peril!

Regarding analogies, the exam that 8th graders take when applying to prep schools, the SSAT (Secondary School Admissions Test), is almost exactly like the old two-part SAT. The first verbal section consists of thirty analogies and thirty synonyms to be completed in thirty minutes, or thirty seconds per question.

As epiphany noted, it tests vocabulary in a much more fluid way than sentence completions do. Secondary and tertiary definitions often come up, especially in the synonyms section. The correct synonym for “ebb,” for instance, may be “flag.”

The analogies aren’t coming back. They performed the intended function too effectively.

Indeed, dadx.

I don’t quite get the complaints about the supposed dumbing down of the SAT. A 700 on CR is the 95th percentile, and the 99th percentile starts at 760. If the elimination of analogies makes the CR section easier for everybody who takes the test, does it really matter as long as the results are still significantly curved?

…because if less comprehensive measurements are involved, then the test is less valuable as a sifting mechanism, regardless of the curve. That’s why.

Hmmm. I guess I buy the argument, if it is that the old test was a better predictor of college success. But it seems to me that the new test can also be used to sift students, since they are still distributed over the curve.

Well, I believe that a combo of the previous test (with analogies) and the upcoming test (which includes an evidence-based essay) would be a better measurement tool, but no one’s asking me. :smiley:

The one thing better about the new test is the essay portion.

…Now, if only the scoring of that essay were less forgiving than it currently is… (A separate issue, obviously.)

It matters in comparison to the past. An 800 CR today could be the equivalent of a 730 before recentering.

A 700 CR today is the equivalent of a 640 before recentering. http://research.collegeboard.org/programs/sat/data/equivalence/sat-individual

If you follow the link, you’ll see that recentering did not change all scores to the same degree.
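
A minimal sketch of that point, using only the two conversions cited in this thread (the full table is at the College Board link above):

```python
# Recentered CR score -> approximate pre-1995 equivalent.
# Only the two data points quoted in this thread; see the
# College Board equivalence table for the full conversion.
original_equivalent = {800: 730, 700: 640}

for recentered, original in original_equivalent.items():
    shift = recentered - original
    print(f"Recentered {recentered} ~ original {original} (shifted {shift} points)")

# The 800 moved 70 points while the 700 moved only 60: the top of
# the scale was compressed more, which is the "squishing" described below.
```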

For most colleges and universities, it would not matter. For the few colleges like Harvard, it does matter. The recentering “squished” the ends. https://research.collegeboard.org/sites/default/files/publications/2012/7/researchinreview-1999-5-effects-sat-scale-recentering-percentiles.pdf Instead of a few people scoring above 700, we now have many more people scoring above 700, and thus potentially admissible.

The Ivies seem to like holistic admissions; they may not want clearer distinctions drawn between the kids scoring in the top 5% of the applicant pool.

" If the elimination of analogies makes the CR section easier for everybody who takes the test, does it really matter as long as the results are still significantly curved? " Not for the average student who falls well within the curve, but I think it does on the high end of the test. There isn’t much room for error. Students who miss one or two questions are already losing quite a few points. We all know this could easily be due to test taking issues (fatigue, misreading, test anxiety, suboptimal time allocation) rather than not actually knowing or understanding the material. So I would say anything which makes the test easier is not helping selective colleges evaluate their candidates.

I don’t think the squishing of top scores matters one bit for Harvard. It only matters for people who would like Harvard to care more about scores than it does.

We’ve had long discussions before about whether the current standardized tests are good enough to enable MIT to separate the real math geniuses from the merely good or well-prepped math students. I’ve always been very skeptical about the idea that MIT is having any trouble at all telling which students are the really smart ones. I think the same is true about Harvard.

The argument might have a bit more bite further down the selectivity curve, especially if there are some colleges that really rely a lot on relative scores.

The upper tail was longer on the old CR section curve.

“I don’t think the squishing of top scores matters one bit for Harvard. It only matters for people who would like Harvard to care more about scores than it does.”

Bingo. If Harvard really cared to parse scores at this level, they’d administer their own test.

I am from a place where every college administered its own on-site tests, four of them, actually. Different majors had different tests, and top programs had more difficult tests than the rest. Then they would add GPA as 20% of the total score.
And you know what: they still had affirmative action, and they accepted kids with special accomplishments without any tests.
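
Just to make the weighting concrete, here is a hypothetical sketch of the composite described above; the 0–100 exam scale and the GPA normalization are my assumptions, since the post doesn’t specify them:

```python
# Hypothetical sketch of the composite described above: on-site exam
# results count for 80% and GPA for 20%. The 0-100 exam scale and the
# 5.0 GPA maximum are assumptions for illustration only.

def composite(exam_scores, gpa, max_gpa=5.0):
    """Blend the average on-site exam score with a rescaled GPA."""
    exam_avg = sum(exam_scores) / len(exam_scores)  # each exam on 0-100
    gpa_pct = gpa / max_gpa * 100                   # rescale GPA to 0-100
    return 0.8 * exam_avg + 0.2 * gpa_pct

# Example: four subject exams plus a 4.5/5.0 GPA.
print(composite([88, 92, 75, 81], 4.5))  # 0.8 * 84 + 0.2 * 90 = 85.2
```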

MIT asks for AIME scores, don’t they?
“I don’t think the squishing of top scores matters one bit for Harvard.”

Perhaps. I seem to recall data on the Princeton admissions site suggesting that students with the top scores do much better in admissions than those with scores commonly cited on CC as absolutely equivalent in the eyes of admissions. Perhaps the ability to bubble for 6 hours without making a mistake or two correlates well with excellence in many other areas? Or perhaps they care more than they would like to admit. The students I know who got perfect scores are certainly very impressive, but I would not be able to distinguish them from other outstanding students who didn’t get perfect scores.

Regarding the comments about how MIT interprets high-end scores, MIT’s website has a lot of information from admissions officers on this topic. Some quotes are below:

I actually knew an IMO medalist who would probably never get an 800 on the SAT math test. Solving simple problems at a rapid pace was not his forte. He had issues with regular HS tests. However, if you locked him in a room for a day with a very difficult problem, he would solve it. He was wired differently. He was accepted to a top math program.

@canuckguy:

No reason to be depressed; there is reason for hope.

Standardized testing has been a subject of debate within academia for decades.

You just happened to pick a TED talk from one side of the debate.

I suggest that people go back and watch it, considering that the correlation coefficient (r) on the x-axis of the graphs can range from -1 to 1. A rule of thumb is that .7 represents a strong positive correlation, .5 a moderate positive correlation, and .3 a weak positive correlation.

Look at how the presenter scaled the x-axis, and listen very carefully to how he describes the results. Then draw your own conclusions. My conclusion is that the tests are not very good predictors of anything.
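
For anyone who wants to sanity-check the rule of thumb, here is a minimal sketch (my own illustration, not from the talk) of how little of the variance even a “moderate” correlation accounts for:

```python
# Minimal illustration (mine, not from the TED talk) of what the
# rule-of-thumb r values mean in terms of variance explained.
import numpy as np

rng = np.random.default_rng(0)
n, target_r = 10_000, 0.5

# Construct two standard-normal variables with correlation ~ target_r.
x = rng.standard_normal(n)
y = target_r * x + np.sqrt(1 - target_r**2) * rng.standard_normal(n)

r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.2f}, variance explained (r^2) = {r**2:.0%}")

# A "moderate" r of .5 explains only ~25% of the variance, and a
# "weak" r of .3 only ~9%. Rescaling the x-axis of a scatterplot
# can make either look far stronger than it is.
```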

This should not be surprising, as predicting the lifetime potential of an 18-year-old in any endeavor (whether academics, sports, or anything else) is really, really hard. (The TED presenter jokingly called it job security.)

Here is a TED talk from the other side.

https://www.youtube.com/watch?v=otlmKZeNi-U

What I find interesting is that when the results from the Tufts admissions experiment were presented to the academic community, Stanford was interested, but Harvard was not. I suspect that every highly selective school has their own recipe for predicting success, they are all a little different, and they tend to reflect/propagate the culture of the school. Some schools put a higher value on the SAT while some schools put a higher value on other attributes that happen to correlate (to varying degrees) with the SAT.

Note that the “upper 1% testing experiment” in the first TED video came out of Vanderbilt, which offers merit scholarships and has a reputation for liking applicants with very high SATs…

@canuckguy:

The Scientific American article you referenced is circa 1998. There have been major advances in genetics and neuro/cognitive/brain science since then. Scientists from these academic camps are challenging some of the traditional beliefs promoted by some of the test development psychologists. Here is an article circa 2009/2011.

http://www.huffingtonpost.com/dan-agin/black-and-white-in-americ_b_160704.html

I think @xiggi is going to find this recent Berkeley study interesting. It suggests a new twist in the debate around the value of standardized test prep, at least for the LSAT.

http://newscenter.berkeley.edu/2012/08/22/intense-prep-for-law-school-admissions-test-alters-brain-structure/

Sternberg earned his PhD at Stanford, then moved to Yale, where he ran the Rainbow admissions project; then on to Tufts, where he ran the Kaleidoscope admissions project; then on to Oklahoma State and the Panorama admissions project. Last time I checked, the Stanford essays seemed similar to Tufts’, but I have never looked at Yale’s. Note that the teaching methods you use need to be in sync with the type of student you admit.

http://www.psychologicalscience.org/index.php/publications/observer/2012/february-12/why-i-became-an-administrator-and-why-you-might-become-one-too.html

If you are interested in gory details…

http://www.nacacnet.org/events/2012/session-archives/Documents/2009%20Documents/G714-combined.pdf

I also know some exceptionally mathematically talented people, some of them STEM faculty members at leading universities, whose forte is not solving simple problems quickly and with 100% accuracy for hours on end.

But of course kids whose interest and academic strength is solving math problems, who have spent far, far more hours on it over the years than kids who are simply prepping for the SAT, and who are successful enough at the lower-level competitions (which do require speed and accuracy) to reach the IMO level, are on average going to outperform the typical excellent student who spends their EC hours on other things. That is a completely cherry-picked example.

What about all the million other ECs kids could have? Robotics champ? Science fair biology winner? Groundbreaking documentary? Published author? Famous performer? Are they necessarily going to have 2400s because they have exceptional ECs?