<p>But I'm always amazed at how little yield changes over the years.</p>
<p>Also, I would have thought yield was hard to manipulate, since the decision is in the hands of the acceptees, not the colleges. I suppose a college can reject students it feels will reject them (that's the Tufts disease, no?), but that is a very problematic approach that cuts against institutional goals of getting the best possible students.</p>
<p>Who are the rankings intended for? If the rankings focus on SAT ranges, graduation rates and so on, then obviously students and parents should be interested. But I've never understood the value of yield for them. However, yield is important to colleges.
As Coureur observed, yield as defined by USN&WR was easily manipulated--by flooding students with application materials, harvesting huge numbers of applications, and then turning down an increasingly high proportion of students; and also by applying the Tufts syndrome mentality: turning down students with high scores who, it was suspected, would not matriculate anyway. This had the effect of making the college seem even more desirable by appearing more difficult to get into.</p>
<p>The new system is offered not as a total replacement for the USN&WR rankings but for the component dealing with yield (which USN&WR is ditching). It's true that it could lead to a vicious circle in terms of student behavior--students choosing a college because it is popular, thereby making it even more popular.</p>
<p>I think the new ranking should be of interest to some colleges as they contemplate how to build their classes, attract students, and put together financial aid packages, but of little interest, if any, to students and parents. The other information provided in the USN&WR rankings is far more useful.</p>
<p>Marite, I don't see how increasing the number of applications through marketing affects yield--that's selectivity, which is still part of the USNWR equation. Also, I don't fully buy into the various comments that yield=popularity, unless popularity means with a select, target group.</p>
<p>Idler:
You're right. I used yield when I meant selectivity. Yield is important to colleges, but selectivity is simply a marketing device. </p>
<p>In the case of the study being discussed, since the sample of colleges is select and targeted, it does seem that yield= popularity. The authors claim to have accounted for financial considerations.</p>
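<p>To make the yield/selectivity distinction concrete, here's a toy calculation (every number below is invented for illustration) showing why a marketing blitz moves the admit rate but leaves yield alone:</p>
<pre>
# Toy numbers, invented for illustration only.
applications = 20000   # applications received
admits       = 4000    # offers of admission sent
matriculants = 1800    # admitted students who actually enroll

# Selectivity (admit rate): share of applicants offered admission.
admit_rate = admits / applications       # 0.20 -> "20% admit rate"

# Yield: share of admitted students who enroll.
yield_rate = matriculants / admits       # 0.45 -> "45% yield"

# A marketing blitz that doubles applications halves the admit rate
# (looks more "selective") but leaves yield completely untouched.
admit_rate_after_blitz = admits / (2 * applications)   # 0.10

print(admit_rate, yield_rate, admit_rate_after_blitz)
</pre>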
<p>My understanding is that the most important way colleges manipulate yield is with Early Decision programs. As long as USNWR included yield in its ranking, colleges had a great incentive to lock in students early. Penn, Columbia, and Princeton, in particular, take a large chunk of the first year class ED and thereby increase yield. Of course, that takes students out of the equation used by this new ranking system. If those students didn't think they needed to apply early to increase their chances of acceptance, what other colleges might they also have applied to RD? And would their ultimate choices in the Spring be the same as those they made in the Fall?</p>
<p>Even without yield in the USNWR rankings, colleges still have the incentive to stick to ED because it makes their planning easier.</p>
<p>There's an interesting explanation of this new ranking on the Brown thread. It compares it more to a chess tournament than a restaurant survey and points out that you can accumulate more points by repeatedly beating a lesser opponent, even if you always lose the fewer matches you play against a better opponent.</p>
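<p>For the curious, here is a minimal Elo-style sketch of that chess-rating idea in Python--an illustration of the mechanism only, not the authors' actual model, and the matchup data is invented:</p>
<pre>
# Minimal Elo-style sketch of the "chess tournament" idea: each student
# holding offers from two schools who picks one counts as a "game"
# between those schools. Illustration only, not the study's model.
K = 16  # rating update step size, an arbitrary choice here

def expected_score(r_a, r_b):
    """Probability that school A 'wins' a cross-admit, given ratings."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings, winner, loser):
    """One cross-admit decided: the winner enrolled the student."""
    e = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser]  -= K * (1 - e)

# Invented matchups: (school that got the student, school that lost them).
matchups = [("School A", "School B")] * 30 + [("School B", "School A")] * 2

ratings = {"School A": 1500.0, "School B": 1500.0}
for winner, loser in matchups:
    update(ratings, winner, loser)
print(ratings)  # School A ends well above School B
</pre>
<p>Each win over a weaker opponent earns only a small nudge, but as the post above notes, many such nudges accumulate, while the occasional loss to a stronger opponent costs relatively little.</p>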
<p>Sac:</p>
<p>The chess tournament model is actually the one used by the authors of the study. I used the restaurant model because college decisions involve several factors, but the authors claim to be accounting for some of these in their study.
There is a very interesting chart in the study, showing the percentile SAT of admitted students at Harvard, MIT and Princeton. Whereas Harvard's curve remains pretty steady, MIT's curve rises sharply upward toward 1600. Princeton's curve, however, dips sharply between the 93rd and 98th percentiles and rises again as it reaches the 99th (1600).</p>
<p>From the methodology used I didn't get the feeling that the authors were trying to say: this is how prospective applicants (with parental influence) SHOULD rank colleges. The methodology provides a descriptor: this is how a large group of accepted applicants DO rank colleges. It does away with all the blather about yield, etc. and it does not provide any indicator of quality or value other than the net result of the perception of those variables as expressed by the decisions of the applicants. We can debate endlessly (and some of the debates on CC do seem endless) among ourselves about quality and relative value but all of that is distilled into the decisions made by a large number of individuals come May 1. I wouldn't read more into it than that.</p>
<br>
<blockquote> <p>Also, I would have thought yield was hard to manipulate</p> </blockquote>
<br>
<p>It's pretty easy to manipulate:</p>
<p>a) increase the percentage of the class accepted through binding early decision.</p>
<p>b) accept fewer students and increase the number placed on the wait list. Negotiate with the wait-list kids informally, sending an official acceptance letter only after the student has indicated a desire to enroll.</p>
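<p>The arithmetic behind (a) is straightforward--a quick sketch with invented numbers:</p>
<pre>
# How binding early decision mechanically inflates overall yield.
# All numbers invented for illustration.
ed_admits, rd_admits = 600, 1400     # offers in each round
ed_yield,  rd_yield  = 1.00, 0.40    # ED is binding, so ~100% enroll

enrolled = ed_admits * ed_yield + rd_admits * rd_yield
print(f"overall yield: {enrolled / (ed_admits + rd_admits):.0%}")  # 58%

# Shift 400 of those same offers from RD to ED and yield jumps,
# with no change in how desirable the college actually is.
ed_admits, rd_admits = 1000, 1000
enrolled = ed_admits * ed_yield + rd_admits * rd_yield
print(f"bigger ED round: {enrolled / (ed_admits + rd_admits):.0%}")  # 70%
</pre>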
<br>
<blockquote> <p>I kind of like a rating system whereby future Ph.D. productivity is measured against SAT scores upon entrance, and selectivity, in order to come up with a "value-added" measure. </p> </blockquote>
<br>
<p>Mini:</p>
<p>I like that one a lot. It's a really good approach to finding good "admissions values" among academically rigorous schools.</p>
<p>A related measure that I used in looking at my daughter's list was a graph plotting USNEWS Peer Assessment versus USNEWS selectivity ranking. Schools with relatively low selectivity and relatively high peer assessment represent unusually good "admissions values". As with your formula, this tends to highlight schools outside of the northeast.</p>
<p>Of course, when all else fails, simply choose the school with the largest per student endowment you can get into. Per student endowment is probably the best single indicator of "quality".</p>
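<p>For anyone who wants to run that kind of screen themselves, here is a rough sketch; the "value gap" metric and every number in it are my own invention, not anything USNEWS publishes:</p>
<pre>
# Hypothetical screen for "admissions values": schools whose peer
# assessment outruns their selectivity rank. All numbers invented.
schools = [
    # (name, peer assessment 1-5, selectivity rank, endowment $M, students)
    ("College A", 4.6, 25, 1200, 1500),
    ("College B", 4.1,  8,  900, 1900),
    ("College C", 4.5, 40, 1400, 1300),
]

for name, peer, sel_rank, endow_m, students in schools:
    # A high peer score combined with a worse (numerically higher)
    # selectivity rank suggests an easier admit relative to reputation.
    value_gap = peer * sel_rank              # crude, illustrative only
    endow_per_student = endow_m * 1e6 / students
    print(f"{name}: value gap {value_gap:6.1f}, "
          f"endowment/student ${endow_per_student:,.0f}")
</pre>
<p>On invented numbers like these, College C--strong reputation, modest selectivity, deep pockets per student--would be the "admissions value" of the three.</p>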
<p>2dsdad - perfect... your post was just what I was thinking.</p>
<p>Part of the thrust of the research article mentioned in this thread is that current rating methods provide incentives to colleges to game the rating systems. I wonder how colleges would treat applicants differently if the joint preference system proposed in the article were more widely publicized. Would it result in more offers of admission to more applicants? Would it result in colleges being more forthright about what REALLY makes their school different from other schools? (My son's reaction thus far to college meetings he has attended is that too many college representatives all read the same spiel when talking to large groups of prospective applicants.) </p>
<p>What do all of you think? What would best nudge colleges into providing more useful information to applicants, especially admitted applicants with more than one offer of admission in hand?</p>
<p>Princeton has long prided itself on having students who really, really want to go there. These are the ones who want Princeton because it isn't Harvard. They apply and get snapped up in the early round. Those left in the regular round are clearly less obsessed by the place, and when it's time to make a choice, I think nationally Harvard's name recognition is higher and Cambridge is appealing. Also, the reputation of Princeton's eating clubs drives some students away. I won't get into how distorted that is. ;)</p>
<p>Interesteddad: those are good points about how you can manipulate yield, and there are a few good examples of colleges that upped their yield by taking half their class early, notably Penn. Still, I think Harvard's yield actually went up when they went from binding to SCEA, and probably Yale's, too, and I don't think it changed anything significantly at Stanford. Yield is important to institutions, not so much because it helps them rise in the rankings but because it measures how well they do at landing the students they want. That's the goal.</p>
<p>Despite the effects of the various early admissions programs, my sense is that yield has stayed pretty constant for years, though I'd be interested in hearing of cases where there has been a big change. I'd also be interested in knowing which schools try to manipulate yield by accepting fewer students and making more use of wait lists--that seems like a risky strategy for a marginal gain.</p>
<p>I'm aware that the yield statistic ticks a lot of people off, and I've always wondered why.</p>
<br>
<blockquote> <p>Still, I think Harvard's yield actually went up when they went from binding to SCEA, and probably Yale's, too, and I don't think it changed anything significantly at Stanford. </p> </blockquote>
<br>
<p>Harvard's, Yale's, and Stanford's statistics don't tell us anything about college admissions. They are aberrations. Like using the birthrate of two-headed chickens to predict the poultry population of the United States. </p>
<p>The reason these schools are aberrations is their unusually high yield rates, which in turn cut the number of acceptance letters they must send roughly in half. Anyone trying to construct a statistical "model" of elite college admissions is going to fail if they base it on Harvard, Yale, Stanford, Princeton, or MIT admissions.</p>
<p>A perfect example of the resulting distortion is the widespread notion that elite college admissions are "unpredictable". Actually, once you eliminate HYPSM from the equation, I don't think that elite college admissions are terribly unpredictable. Sure, you might guess wrong on a particular school, but it's not that difficult to identify an appropriate range of schools. Yet, because all of the media focuses on admissions at HYPSM, we have parents and high schoolers running around in a state of near-panic because they've been told the system is so "unpredictable".</p>
<hr>
<p>BTW, college admissions people seem to think that USNEWS changes its "formula" (such as dropping yield this year) specifically to shake up the top of the rankings. Colleges moving up or down the list make for good headlines, and good headlines sell magazines. They change some arcane piece of the formula every year, inevitably shuffling at least one top school up or down a place in the rankings. For example, Swarthmore, Amherst, and Williams have traded the top 3 spots in the rankings, like musical chairs, since they were first published, with the number one spot determined by whatever arcane thing was added to or removed from the formula that year. Is Williams really any different today at #1 than it was at #3? Has Amherst really gone downhill? Did Swarthmore suddenly improve to tie Amherst for the #2 spot this year? Or did its move from #1 to #3 reflect any meaningful change--or just something like "the rate of alumni giving", or some Dean in South Dakota reading about Williams basketball in Sports Illustrated the week he was filling out his Peer Survey?</p>
<p>"Of course, when all else fails, simply choose the school with the largest per student endowment you can get into. Per student endowment is probably the best single indicator of "quality"."</p>
<p>It's not a bad idea, but doesn't work in practice. Harvard, for example, has a huge endowment, but only a pittance of it is used for undergraduates. Perhaps spending per (undergraduate) student should be the measure. (Then, putting aside Caltech, MIT, etc., our alma mater would likely stand head and shoulders over everyone else.)</p>
<p>It would be interesting to compare "selectivity", yield, and SAT scores for those who do not require anything in the way of financial aid. If you require substantial financial aid, comparison among offers is likely to heavily influence where you choose to go, independent of relative academic quality (among the more selective schools) or "prestige". For students in need of assistance, the "game" may often simply reflect the relative size of financial aid offers.</p>
<p>Nonetheless, I think my list of the top schools in the country (Kalamazoo, Earlham, Hope, St. Olaf, Grinnell, and, maybe, Reed) would still hold--take the same student out of Harvard and put her at Kalamazoo, and her chances of eventually producing a Ph.D. likely soar.</p>
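<p>For what it's worth, the "value-added" computation mini describes can be sketched as a simple regression residual--Ph.D.s produced versus Ph.D.s predicted from entering SATs. Every data point below is invented for illustration:</p>
<pre>
# Sketch of the "value-added" idea: regress per-capita Ph.D. production
# on entering SAT medians, then rank schools by residual. Data invented.
import statistics

# (school, median SAT of entrants, later Ph.D.s per 100 graduates)
data = [("Kalamazoo-ish", 1250, 9.0), ("Earlham-ish", 1230, 8.5),
        ("Elite U",       1480, 7.0), ("Big State",   1150, 3.0)]

sats = [d[1] for d in data]
phds = [d[2] for d in data]

# Ordinary least-squares slope and intercept, computed by hand.
mx, my = statistics.mean(sats), statistics.mean(phds)
slope = sum((x - mx) * (y - my) for x, y in zip(sats, phds)) / \
        sum((x - mx) ** 2 for x in sats)
intercept = my - slope * mx

# Positive residual = more Ph.D.s than entering SATs would predict.
for name, sat, phd in data:
    residual = phd - (intercept + slope * sat)
    print(f"{name}: value-added {residual:+.2f}")
</pre>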
<p>ID:</p>
<p>Actually, the authors looked at HYPSM and came up with rather interesting data, especially when comparing the SAT percentiles of students admitted to H, M, and P. The graph appears on p. 11 of the study.</p>
<p>Too much can be made of this particular study. It does not seek to rank anything but student preferences. It would be foolish to apply to colleges based on the study's findings.</p>
<br>
<blockquote> <p>Then, putting aside Caltech, MIT, etc., our alma mater would likely stand head and shoulders over everyone else.</p> </blockquote>
<br>
<p>Unfortunately, it's impossible to separate out the undergrads when looking at endowment.</p>
<p>The most recent "per student endowment" list I have is from 2000. The absolute rankings have shuffled a bit since then, but at the time, the top 25 in per-student endowment were:</p>
<p>1 Princeton
2 Harvard
3 Yale
4 Rice
5 Caltech
6 Pomona
7 Swarthmore
8 Williams
9 MIT
10 Grinnell
11 Stanford
12 Wellesley
13 Emory
14 Amherst
15 Dartmouth
16 Claremont McKenna
17 Washington (MO)
18 Chicago
19 Carleton
20 Macalester
21 Duke
22 Notre Dame
23 Smith
24 Middlebury
25 Bryn Mawr</p>
<p>There's a style there for everybody, but I would have to say that those are 25 pretty-darn good undergrad schools.</p>
<p>Very interesting study. In one of the later tables, they indicate what % of choices the school wins against those ranked below it. One thing I am apparently not smart enough to understand is that Harvard wins 100% of the choices between it and all but two of the next twenty schools--and yet we know that Harvard's yield is only about 80% or so. Those two "facts" don't seem to square with each other. Are we to assume that the ones it loses don't attend one of the next twenty (maybe they go to Michigan or Berkeley for tuition reasons)?</p>
<p>Further, if I read Table 4 correctly, some schools actually lose head-to-head with schools just beneath them in the overall ranking. Rice, Williams, Duke, and Pomona are examples of this in the table, as I read it. Some of this relates to regional patterns of preference, I believe. Some of it may also filter in cost issues, but until you get to Berkeley, UVa, UNC, UCLA, and Michigan, nearly all the schools are private. Curious.</p>
<p>All in all, a very interesting compilation, although the data it's based on is becoming out of date as the echo-boom bulge moves through admissions.</p>
<p>BTW, the issue of dropping yield is essentially moot as long as selectivity stays in: a school with lower yield must accept more students to fill its class, which directly drives down its selectivity ranking.</p>
<p>I think the numbers in table 4 refer to the probability of getting more cross admits than another school in a given year, so that, if I understand the table correctly, there is a 100% probability that Harvard will get more than half the cross admits with Yale in a given year, not that they will get 100% of the cross admits.</p>
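<p>That reading also resolves the apparent contradiction raised above: a school that wins each individual cross-admit with, say, 75% probability will win the yearly majority against a given rival almost every time once enough students overlap. A quick binomial sketch (the 75% and the pool sizes are invented):</p>
<pre>
# If a school wins each individual cross-admit with probability p,
# the chance it wins a MAJORITY of n cross-admits in a year is a
# binomial tail -- and it approaches 100% fast. p and n are invented.
from math import comb

def p_majority(p, n):
    """P(strictly more than n/2 successes out of n independent trials)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (10, 50, 200):
    print(n, round(p_majority(0.75, n), 4))
# 10 -> ~0.92; by n = 50 the probability is already ~1.0
</pre>
<p>So a 100% entry in the table is perfectly consistent with an 80% yield: it says the head-to-head majority is essentially certain, not that every single cross-admit chooses Harvard.</p>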