<p>Of course it looks right -- it’s solely a research-productivity-based ranking. DMouth not being included should tell you a little something.</p>
<p>Nothing needs to be “bumped” anywhere. It’s a composite NRC ranking, not a matter of opinion. :p</p>
<p>I do look forward to the new NRC ranking, though. If it comes out in my lifetime.</p>
<p>Penn Penn Penn</p>
<p>Tk, welcome to the real-world challenge of modeling college rankings. Data availability is always the key issue in model development. As of April 9, the NRC indicated its intention to release the long-awaited Doctoral Programs Assessment Data soon, but no actual date was given. You’ll be able to update your model once the data become available.</p>
<p>Folks, a reliable model needs appropriate data support. Unlike the NRC, which collected its survey data from colleges, we collect ours from this thread. As I mentioned previously, I believe you become experts before you commit/invest four years of your lives and money to your targeted and/or dream universities. For this very reason, I value your opinions. There isn’t any pre-existing list.</p>
<p>Please continue posting your new top 25 in a tiered format.</p>
<p>Maybe Stanford, but Princeton is not excellent in all programs. And as for Brown, I’d pull it down so it would be out of the top 20. Sorry; I don’t think Brown would be that excellent for this specific measure. It doesn’t even deserve to be in the top 30, to be honest. But it’s a very good school for undergrad nonetheless. I’m just not convinced it’s as great as some make it out to be.</p>
<p>After HYPSM, then:</p>
<p>Tier 1 - Caltech, Berkeley, Columbia, UPenn, Chicago, Northwestern, Duke, Michigan, Cornell, Hopkins, Rice</p>
<p>Tier 2 - Dartmouth, Brown, Emory, Vanderbilt, Georgetown, Notre Dame, UCLA, WUSTL, UVa, CMU, Georgia Tech</p>
<p>Tier 3 - USC, NYU, Brandeis, Tulane, W&M, UNC, Texas, UIUC, etc.</p>
<p>Are we still talking about research productivity or a general ranking? The distinctions between HYPSM honestly come down to nothing more than fit and particular programs; there aren’t many schools that can say that. I don’t see how any one of the five is less excellent across all programs than any other; SM are obviously top in engineering, and HYP split particular math/science disciplines. There’s no justification for putting any one of the five a tier below in a general ranking of quality. A totally undecided major would not be at a disadvantage going to any of them.</p>
<p>^ a combination of both, actually.</p>
<p>Well, there you go. A choice between HYPSM is basically one of particular department and social fit. An undecided major can’t go wrong with any of them (except if interested in engineering, then probably SM) unless he is considering an extremely obscure/very specific major. HYPS especially will all provide a very well-rounded education, and no school is excellent in everything -- check the departmental rankings.</p>
<p>Harvard, Yale, Princeton, Stanford, MIT</p>
<p>Brown, Cornell, Columbia, Dartmouth</p>
<p>Johns Hopkins, Northwestern, UPenn, Duke, UChicago, UC Berkeley, Michigan</p>
<p>WUSTL, Georgetown, UVa, Rice, Emory, Vanderbilt…</p>
<p>I like tk’s ranking. It seems to me the most comprehensive. We can go simply by acceptance rates or SATs, but what does that give us beyond HYPS? Just a mashup of schools that are highly competitive to get into but don’t have the academics to match.</p>
<p>Phead’s list looks good to me, except I might bump UChicago up one -- for undergraduates, of course, and excluding LACs.</p>
<p>Rice has the third-largest endowment per student among all these schools. Money talks if it’s managed well. So, if the NRC re-did its assessments, Rice is a school I’d expect to move up.</p>
<p>"Tier 1 - Caltech, Berkeley, Columbia, UPenn, Chicago, Northwestern, Duke, Michigan, Cornell, Jopkins, Rice</p>
<p>Tier 2 - Dartmouth, Brown, Emory, Vanderbilt, Georgetown, Notre Dame, UCLA, WUSL, UVa, CMU, Georgia Tech</p>
<p>Tier 3 - USC, NYU, Brandies, Tulane, W&M, UNC, Texas, UIUC, etc"</p>
<p>^ Heck yes RML, this is the best ranking I’ve ever seen! Couldn’t agree more</p>
<p>tk:</p>
<p>Data usually come from and/or derive from survey results. In your model, subjective matters such as how much weight should be assigned to and/or distributed among library size, faculty resources/productivity, SAT, % acceptance, class rank, and reputation may vary across age groups. In our case, we target the seniors of the class of 2009. Why don’t we let the class of 2009 seniors decide those subjective matters (by getting their feedback through a survey)?</p>
<p>As you go through the model development process, you will realize it takes several surveys to get your model calibrated, and then the predicted results (rankings) from the calibrated model need to be validated by peer review. The type of survey we obtain/derive from this thread (in its current setting) is a reputation ranking (Survey 1).</p>
<p>After receiving the senior-decided weights for the aforementioned subjective matters (Survey 2) (we may start to post survey questions soon), we calculate the ranking and compare the calculated rankings with other sources, e.g., USNews, NRC, ARWU, THES-QS, and HEEACT, to see if there are any big surprises. If yes, based on your best professional judgment, you may move those surprises around (model calibration, or fine-tuning). In this step, if you are using a tiered ranking, all you need to know is whether targeted universities belong to a given tier and which tier those surprises belong to, instead of finding/guessing their exact rankings. After the model is fine-tuned, the model-predicted ranking needs to be validated by peer review (Survey 3).</p>
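<p>To make the Survey 2 step concrete, here is a minimal Python sketch of the weighted-composite calculation. Every score and weight below is a hypothetical placeholder; the real values would come from the Survey 2 responses and public data, none of which exist yet in this thread:</p>
<pre>
# A minimal sketch of the weighted-composite step, assuming hypothetical
# factor scores and survey-derived weights (none come from this thread).

# Hypothetical per-school factor scores, each normalized to a 0-100 scale.
scores = {
    "Harvard":  {"faculty": 98, "sat": 99, "acceptance": 97, "reputation": 100},
    "Rice":     {"faculty": 88, "sat": 93, "acceptance": 85, "reputation": 86},
    "Michigan": {"faculty": 94, "sat": 87, "acceptance": 70, "reputation": 90},
}

# Hypothetical criterion weights from Survey 2 (they must sum to 1.0).
weights = {"faculty": 0.35, "sat": 0.25, "acceptance": 0.15, "reputation": 0.25}

def composite(school_scores, weights):
    """Weighted sum of normalized factor scores."""
    return sum(weights[f] * school_scores[f] for f in weights)

# Rank schools by composite score, best first, before comparing the resulting
# tiers against USNews, NRC, ARWU, THES-QS, and HEEACT.
ranking = sorted(scores, key=lambda s: composite(scores[s], weights), reverse=True)
print(ranking)
</pre>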
<p>RML’s rankings are sound, but UNC and Ga Tech should swap places.</p>
<p>That sounds like one interesting, systematic way to accomplish the goal. As long as you are building a predictive model that is testable, I don’t think it matters whose opinion you use to set the weights. At that point, you are making a hypothesis.</p>
<p>Meanwhile, here’s a hypothesis: when the NRC releases its rankings, schools with large endowments per student will tend to move up relative to the old rankings, and schools with small endowments per student will tend to move down. Rice, for example, will move up; UPenn will move down; Georgetown will stay about where it is. One could construct a model to predict the degree of change, combining an endowment ranking with the previous NRC ranking, then wait and see how accurate the prediction is. Or, use something like your iterative process of comparison and calibration, with a final peer review to arbitrate among any remaining discrepancies.</p>
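<p>A toy sketch of that prediction, just to show it can be written down and tested later. All the ranks and the blend parameter here are made up for illustration, not real NRC or endowment data:</p>
<pre>
# Toy version of the endowment hypothesis: predict each school's new rank as
# a blend of its old NRC rank and its endowment-per-student rank.
# All numbers below are illustrative assumptions. Lower rank = better.

old_nrc_rank = {"Rice": 30, "UPenn": 12, "Georgetown": 45}    # hypothetical
endowment_rank = {"Rice": 8, "UPenn": 25, "Georgetown": 44}   # hypothetical

alpha = 0.3  # assumed pull of endowment toward the new rank; would be fitted

def predicted_rank(school):
    """Blend the old NRC rank with the endowment-per-student rank."""
    return (1 - alpha) * old_nrc_rank[school] + alpha * endowment_rank[school]

for s in old_nrc_rank:
    move = "up" if predicted_rank(s) < old_nrc_rank[s] else "down or flat"
    print(f"{s}: old {old_nrc_rank[s]}, predicted {predicted_rank(s):.1f} ({move})")
</pre>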
<p>Anyway, I like the idea of moving schools up and down among tiers, and experimenting with different objective factors and weights to drive the process. That’s more interesting to me than whether you select the factors and set the weights by survey, or by intuition.</p>
<p>In the modeling business, your approach is called model sensitivity analysis (SA). It usually includes a systematic set of tests on each criterion in the model in order to understand the relative impact of that individual criterion on overall model projections. In general, SA is performed after model calibration. A reliable model relies heavily on defensible data, since no one can argue with data. Just my experience.</p>
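<p>For what it’s worth, a bare-bones version of that sensitivity test: bump one criterion weight at a time, renormalize, and see which schools change position. The scores and weights are again hypothetical placeholders:</p>
<pre>
# Bare-bones sensitivity analysis: perturb one criterion weight at a time,
# renormalize, and watch how the ranking moves. Scores and weights are
# hypothetical placeholders, not data from this thread.

scores = {
    "Harvard":  {"faculty": 98, "sat": 99, "reputation": 100},
    "Rice":     {"faculty": 88, "sat": 93, "reputation": 86},
    "Michigan": {"faculty": 94, "sat": 87, "reputation": 90},
}
weights = {"faculty": 0.40, "sat": 0.30, "reputation": 0.30}

def rank_order(w):
    """Schools ordered by weighted composite score, best first."""
    return sorted(scores, key=lambda s: sum(w[f] * scores[s][f] for f in w),
                  reverse=True)

baseline = rank_order(weights)

for factor in weights:
    # Bump this one weight by 10%, then renormalize so the weights sum to 1.
    bumped = {f: w * (1.10 if f == factor else 1.0) for f, w in weights.items()}
    total = sum(bumped.values())
    bumped = {f: w / total for f, w in bumped.items()}
    moved = [s for s in baseline if baseline.index(s) != rank_order(bumped).index(s)]
    print(f"+10% on {factor}: positions changed -> {moved or 'none'}")
</pre>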
<p>Harvard, Stanford, MIT
Princeton, Yale
Berkeley
Caltech, Chicago
Cornell, Columbia, Penn, Johns Hopkins
Duke, Michigan
Brown, Dartmouth, Northwestern, Virginia
UCLA
Carnegie Mellon, WUSTL, UNC, Wisconsin
Vanderbilt, Rice, Georgetown, Georgia Tech, Illinois, Texas
Emory, Notre Dame, USC, Washington</p>
<p>^ Can’t beat that list…</p>