<p>His complaint points more to a problem of interpretation than of measurement, to my reckoning. Of course students leave for a variety of reasons, some of them quite good ones that are not a college's "fault" at all. The scholar who first promoted the idea of expected graduation rates has also stated that 50% of the variance in graduation rates can be attributed to non-institutional factors.</p>
<p>It's an imperfect indicator, but it adds another dimension to graduation rates, just as do considerations of SES and other factors discussed here.</p>
<p>I know this is pretty off topic, but I feel like this is also an important ranking system, despite the fact that it's from about two years ago. </p>
<p><a href="http://www.wsjclassroomedition.com/pdfs/wsj_college_092503.pdf">http://www.wsjclassroomedition.com/pdfs/wsj_college_092503.pdf</a></p>
<p>The fact that the WSJ feeder school ranking is from two years ago is not the problem. The fact that they only looked at a total of 15 professional schools in the entire world and did not look at any grad schools in the arts, humanities, or sciences is a problem. The fact that only one school west of Chicago or south of Baltimore (a California med school) was on the list is a problem.</p>
<p>An even bigger problem is that they only bother to look at a single year of data. There is enough year-to-year fluctuation that you really have to look at a five- or ten-year period to have any meaningful data. The raw numbers admitted from any one undergrad college in a single year to such a small sample of grad schools are very small. For example, if you changed Pomona's numbers by just two or three fall enrollees (perhaps because a few of the Stanford- or Berkeley-bound grads opted for UChicago instead), its ranking would shift dramatically.</p>
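<p>To make the small-sample point concrete, here is a rough sketch in Python with purely hypothetical colleges, class sizes, and enrollee counts (none of these figures come from the WSJ study). It just shows how shifting two or three students can reorder a per-capita feeder ranking when the raw counts are this small.</p>
<pre><code>
# Hypothetical illustration only -- these colleges and numbers are made up.
# Rank colleges by (enrollees at the tracked grad schools) / (class size),
# then see what happens when one college loses a few enrollees.

def per_capita_rank(counts, class_sizes):
    """Return {college: rank}, ranked by per-capita feeder rate, descending."""
    rates = {c: counts[c] / class_sizes[c] for c in counts}
    ordered = sorted(rates, key=rates.get, reverse=True)
    return {college: i + 1 for i, college in enumerate(ordered)}

# Hypothetical graduating class sizes and single-year enrollee counts.
class_sizes = {"College A": 380, "College B": 400, "College C": 390,
               "College D": 410, "College E": 395}
counts      = {"College A": 30,  "College B": 28,  "College C": 27,
               "College D": 26,  "College E": 25}

print(per_capita_rank(counts, class_sizes))  # College C ranks 3rd

# Now suppose three of College C's admits enroll somewhere outside the
# tracked schools instead -- the kind of year-to-year noise described above.
counts["College C"] -= 3
print(per_capita_rank(counts, class_sizes))  # College C falls to 5th
</code></pre>
<p>In this toy example, losing just three enrollees drops College C from third to last, which is exactly the kind of swing a single unrepresentative year can produce.</p>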
<p>By looking at a single year and by limiting the selection of grad schools to a narrow slice of the country (essentially a corridor from Chicago to Boston), you reduce the value of the data.</p>