<p>I hope you do not believe that I am attacking you. I apologize if that's how it sounded. I honestly am just curious to see how the rankings look using your methodology, which I believe, although it isn't perfect, is a large improvement over US News.</p>
<p>No, not at all. </p>
<p>FYI-Based on your recently posted tiers of colleges, I think we would agree on most of the schools. I'm not sure we'd get there the same way methodologically speaking, but I liked your list.</p>
<p>I actually posted the "How much they learn" comment because it is the factor that is (obviously) unaccounted for, yet it's the reason students are there in the first place. So I said it somewhat cynically, I guess.</p>
<p>But, when one looks at all these dumb rankings, one needs to remember -- OH! The MAIN thing is missing entirely!!!</p>
<p>The National Survey of Student Engagement (NSSE) is one way to assess the learning opportunities on campus. USNWR is planning on including more information from this, when schools provide their results. (There's a thread about this somewhere.) I'm with USNWR on this one - I wish more schools made their results public.</p>
<p>I think rankings are somewhat subjective, because it all depends on the weighting. However, I do believe publishing the data in a searchable / sortable format provides a service to both students and parents.</p>
<p>I think "outcomes" are relevant measures. For undergraduate programs, both graduate school admissions and employment placement are outcomes worth tracking and comparing.</p>
<p>so what are the rankings under this system?</p>
<p>I find that student-to-faculty ratio can be misleading, since at some institutions many professors do not teach full-time (e.g., they give only the lectures and not the smaller sub-classes), which skews that number lower than it effectively is. But perhaps this is just hearsay. It's hard to know exactly what you're talking about these days.</p>
<p>And I still think the cost of education is in no way related to the quality of education, though it is certainly related to cost efficiency.</p>
<p>I'd love to see a rough list based on your criteria, hawkette.</p>
<p>For Xeneise and others,
I do not have data for all of the categories that I created in the Opening Post, so I will not be able to give you a complete new ranking based on the proposed methodology. However, I am working on some of the pieces where the data is available. One such category is Graduation &amp; Retention Ranks. As the OP shows, I weight this category at 10% of the overall score and the sub-category weightings are as follows: </p>
<p>1% for freshman retention
3% for 4-year graduation rate
3% for 6-year graduation rate
3% for graduation rate differential</p>
<p>I created groups rather than absolute rankings, as the % differences were often very small and I didn't want to create huge differences in rank (and score) based on very small differences in school performance. Specifically, here is how I assigned points (a rough scoring sketch follows the tables below):
Freshman Retention  Points
95%+ 4
90-94% 3
85-89% 2
80-84% 1
< 80% 0</p>
<p>4-Year Grad Rate  Points
90%+ 4
85-89% 3
80-84% 2
75-79% 1
< 75% 0</p>
<p>6-Year Grad Rate  Points
95%+ 4
90-94% 3
80-89% 2
70-79% 1
< 70% 0</p>
<p>Grad Rate Differential  Points
5+ 4
2-4 3
-1 to 1 2
-2 to -4 1
< -4 0</p>
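<p>For anyone who wants to reproduce or adjust this, here is a rough sketch of how the bucket tables and sub-category weights above combine into a single graduation/retention score. The helper names and the sample figures at the end are my own, for illustration only:</p>
<pre>
# Bucketed scoring sketch. Thresholds follow the tables in this post;
# the helper names and the sample school at the bottom are hypothetical.

def points_from_buckets(value, buckets):
    """Return the points for the first bucket whose lower threshold the value meets."""
    for threshold, points in buckets:
        if value >= threshold:
            return points
    return 0  # falls below every bucket

# (lower threshold, points) pairs, highest bucket first
FRESH_RET = [(95, 4), (90, 3), (85, 2), (80, 1)]
GRAD_4YR  = [(90, 4), (85, 3), (80, 2), (75, 1)]
GRAD_6YR  = [(95, 4), (90, 3), (80, 2), (70, 1)]
DIFF      = [(5, 4), (2, 3), (-1, 2), (-4, 1)]

# Sub-category weights from the Opening Post (percentage points of the overall score)
WEIGHTS = {"fresh_ret": 1, "grad_4yr": 3, "grad_6yr": 3, "diff": 3}

def grad_retention_score(fresh_ret, grad_4yr, grad_6yr, diff):
    """Weighted sum of bucket points; schools are then ranked by this score, ties sharing a rank."""
    pts = {
        "fresh_ret": points_from_buckets(fresh_ret, FRESH_RET),
        "grad_4yr":  points_from_buckets(grad_4yr, GRAD_4YR),
        "grad_6yr":  points_from_buckets(grad_6yr, GRAD_6YR),
        "diff":      points_from_buckets(diff, DIFF),
    }
    return sum(WEIGHTS[k] * p for k, p in pts.items())

# Hypothetical school: 97% retention, 88% 4-yr rate, 94% 6-yr rate, +3 differential
print(grad_retention_score(97, 88, 94, 3))  # 1*4 + 3*3 + 3*3 + 3*3 = 31
</pre>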
<p>Applying this weighting and the assignment of points led to this ranking of the USNWR Top 40 based on Graduation/Retention data. Some of the results are surprising and impressive (Boston College, U Virginia, W&M on the upside and Stanford, Caltech, Rice, Emory and Carnegie Mellon on the downside). Here are the full results:</p>
<p>1 Princeton
1 Notre Dame
3 Harvard
3 Yale
3 Boston College
6 Columbia
6 Northwestern
6 Brown
6 Georgetown
6 U Virginia
6 W & M
12 U Penn
12 Duke
12 U Chicago
12 Cornell
12 Tufts
17 MIT
17 Dartmouth
17 Wash U StL
17 J Hopkins
17 Vanderbilt
17 Brandeis
23 Stanford
23 U Michigan
23 U North Carolina
26 Wake Forest
26 Lehigh
28 Cal Tech
28 Rice
30 Emory
30 U Wisconsin
32 UCLA
33 UC Berkeley
33 USC
35 Carnegie Mellon
35 NYU
35 U Rochester
38 UC SD
39 Georgia Tech
40 Case Western</p>
<p>I have been struggling with a similar problem; I tried to start a thread on institutional/academic effectiveness to solicit some ideas. Places like US News are good at providing objective data for what I would call "inputs". "Outputs", or results, are tougher to quantify. I was thinking about an (imperfect) measure along the lines of this ratio:</p>
<p>(%ile rank on GREs, LSATs, MCATs)/(%ile Rank of Incoming Student SATs). This would clearly be directed at how much relative progress the typical student taking graduate exams made as a result of going to a particular school. Clearly there are problems with this measure ALONE that would punish the better schools since it does not look at either absolute exam scores OR the percentage of the student population going to grad/professional schools at all. Perhaps it could be one measure, along with absolute %ile ranking, % of total population going to grad/professional schools, and % getting jobs in their major field. The statistic itself might be more meaningful if you compared mean %ile on the grad exams vs. the SAT %ile of the top 25% of entering freshmen.</p>
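<p>To make the mechanics concrete, here is a tiny sketch of that ratio as a function, with made-up percentile numbers; real inputs would have to come from published GRE/LSAT/MCAT summaries and freshman SAT profiles:</p>
<pre>
# Sketch of the proposed "relative progress" ratio with made-up percentiles.
# Real inputs would come from published grad-exam summaries and freshman SAT profiles.

def relative_progress_ratio(grad_exam_percentile, incoming_sat_percentile):
    """Graduate-exam percentile divided by incoming SAT percentile.

    Above 1.0: students test better (vs. the national pool) leaving than arriving.
    Below 1.0: the reverse. Note the ceiling problem discussed above for schools
    whose entering classes already sit near the top percentile.
    """
    return grad_exam_percentile / incoming_sat_percentile

# Hypothetical school: entered around the 85th SAT percentile,
# averaged the 90th percentile on grad/professional exams.
print(round(relative_progress_ratio(90, 85), 2))  # 1.06
</pre>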
<p>weldon,
Are you making any judgments about the quality of the graduate schools, or is simply going to grad school sufficient? Also, what about the different cultures at various schools, where some are very likely to pursue graduate studies while others are not as wired in that direction? The lack of desire to pursue graduate studies does not mean that these are inferior students or that their outcomes are inferior. How would you adjust for this?</p>
<p>You have way too much free time on your hands, OP.</p>
<p>And you people should realize this debate is inherently flawed. Every person looks for different things in different colleges. If I were to rank colleges, I would put career job placement, strength in my major, and prestige recognized by employers at the very top, with average SAT scores, alumni giving, etc., toward the bottom. This debate is moot; just stick with US News and World Report. As flawed as you perceive it to be, it is the most widely accepted and could be considered the norm.</p>
<p>Undergraduate rankings are inherently flawed in that they assume the quality of every school can be measured in the same manner, which it can't.</p>
<p>hawkette: yes, you have identified two other issues with my approach. 1) I deliberately left out the QUALITY of the grad programs; even though this becomes an unaccounted-for difference, I did this simply out of practical limitations, i.e., how far can you take something like that and still have a practical methodology? 2) As to the issue of employment vs. grad school, I had meant to say that it is a very tough situation to deal with, since both are "good" outcomes and yet better statistics in one can adversely affect the statistics for the other.</p>
<p>For the individual poster who suggested this is basically going too far, I can see your point of view, but to me the basic questions here are really good ones that go beyond mere questions of college rankings, namely: how appropriate is it to judge an institution (or a solution to any problem, for that matter) by the resources you commit to it, as opposed to the results you get? For example, can you solve a problem with the public schools simply by increasing the budget (changing the input, a bureaucratic response), or do you need to look for meaningful indicators of improved results? And is there really no way to begin to separate results (output) due to the effectiveness of THE INSTITUTION ITSELF from mere differences in the starting populations you are dealing with?</p>
<p>s snack,
I agree that all rankings will have flaws as they are applied to different groups of students. So, all rankings should be taken with a grain (or more) of salt. But college rankings are a fact of life and they do have some influence over many students in their college search process.</p>
<p>My personal belief is that there are some absolutes. Specifically, I have four guiding themes for evaluating a college environment and how likely a college is to deliver an outstanding undergraduate experience:</p>
<p>1) Student quality
* It is preferable to have higher SAT scores than lower SAT scores.
* It is preferable to have students who ranked in the top 10% of their high school class than not.<br>
* Some might also include measures of gender and ethnic diversity, as well as international students.</p>
<p>2) Size & Nature of the classroom
* It is preferable to have smaller classes than larger classes
* It is preferable to have faculty who are full-time
* It is preferable to have professors rather than TAs</p>
<p>3) Quality of the faculty
* It is preferable to have an accessible faculty that is well regarded by all of the stakeholder groups (other academics, students, alumni, employers)
* It is preferable to have professors who themselves are well trained in their field </p>
<p>4) Institutional ability and willingness to spend to assist undergraduate students and faculty
* It is preferable to have adequate capital for current and future needs and an operating policy that seeks to enhance the learning experience of current students
* It is preferable to have professors who are adequately compensated</p>
<p>If we can agree that these are valid themes and goals, then we can begin to break each down and, where possible, try to create measurable benchmarks. It will never be exact, but it will create some basis for making comparisons among schools. </p>
<p>There are a few things missing from this analysis that I believe also belong in a college evaluation and a college search process. Those would be the student outputs (job placement, grad school placement), the cost, and the effectiveness of the school to retain and graduate its students. </p>
<p>Do you like these ideas? If not, give me some feedback on why they may not be valid. Also, what ideas would you suggest (social component? weather? athletics? accessibility? ???).</p>
<p>Thanks to hawkette and others for providing such stimulating conversation, which I am a bit late in joining. Good thoughts from afan - post #19.</p>
<p>What is the goal of coming up with an undergrad ranking methodology? Is it just to produce a more accurate list than USNWR's rankings for everyone to look at, or is it to allow individuals to create their own lists based upon the criteria that are important to them? One size fits all vs. a custom approach.</p>
<p>My interest is in providing a tool for individuals to rank schools. College Board and Princeton Review provide a selection tool, but do not really provide anything in the way of a ranking tool. I don't see what would be so difficult about providing a tool that would let individuals assign their own weights (including 0%) to the various criteria.</p>
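<p>Something along those lines would not be hard to build. As a bare-bones sketch, assuming each criterion is already a number per school (the school names, criteria, and weights below are invented just to show the mechanics):</p>
<pre>
# Minimal sketch of a "roll-your-own" ranking tool: the user supplies weights
# (including 0 to drop a criterion) and schools are sorted by the weighted sum
# of normalized criteria. School names and figures are made up.

def rank_schools(schools, weights):
    """schools: {name: {criterion: value}}, weights: {criterion: weight}.

    Each criterion is rescaled to 0-1 across the schools so that the weights,
    not the raw units, determine its influence on the final score.
    """
    criteria = [c for c, w in weights.items() if w > 0]
    lo = {c: min(s[c] for s in schools.values()) for c in criteria}
    hi = {c: max(s[c] for s in schools.values()) for c in criteria}

    def score(s):
        return sum(
            weights[c] * ((s[c] - lo[c]) / (hi[c] - lo[c]) if hi[c] > lo[c] else 0.0)
            for c in criteria
        )

    return sorted(schools, key=lambda name: score(schools[name]), reverse=True)

# Hypothetical data and personal weights (grad rate matters most, cost ignored).
schools = {
    "School A": {"grad_rate": 92, "small_classes": 70, "cost_efficiency": 60},
    "School B": {"grad_rate": 85, "small_classes": 88, "cost_efficiency": 75},
}
print(rank_schools(schools, {"grad_rate": 0.6, "small_classes": 0.4, "cost_efficiency": 0.0}))
</pre>
<p>Normalizing each criterion to a 0-1 scale before applying the weights keeps a factor measured in one unit (say, dollars) from swamping one measured in another (say, percentages); the user's weights then decide how much each factor matters.</p>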
<hr>
<p>With respect to 4-yr grad rates, the underlying assumption is that all students are in 4-yr programs, which is not always true. A better measure would be % of students in 4-yr programs that graduate in 4 years. However, I don't know if that information is even available. One indirect measure of 5-yr programs is to look at the difference between 4-yr and 6-yr grad rates. Schools with a significant number of students in 5-yr programs (10+%?) show larger than normal increases from the 4- to 6-yr grad rate. For instance, Va Tech has a lot of 5-yr programs. Their 4-yr grad rate is 47%. However, the 5-yr grad rate jumps to 72% and then levels off with only 4% more graduating in 6 years. Of course, with a roll-your-own ranking tool 4-yr grad rates can be omitted in favor of the 6-yr rate.</p>
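<p>As a quick illustration of that indirect check, a simple gap test on the Va Tech numbers above (47% in 4 years, roughly 76% by 6 years) would flag it; the 10-point cutoff is just my own illustrative choice, not an established benchmark:</p>
<pre>
# Indirect check for schools with many 5-yr programs: a large jump from the
# 4-yr to the 6-yr grad rate. The 10-point gap cutoff is illustrative only.

def likely_has_5yr_programs(grad_4yr_rate, grad_6yr_rate, gap_cutoff=10):
    """Flag a school whose 6-yr rate exceeds its 4-yr rate by at least the cutoff."""
    return (grad_6yr_rate - grad_4yr_rate) >= gap_cutoff

# Va Tech figures cited above: 47% in 4 years, roughly 76% by 6 years (72% + 4%).
print(likely_has_5yr_programs(47, 76))  # True
</pre>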
<p>st. andrews,
My understanding is that the 4-year grad rates as reported in the CDS apply only to those students enrolled in 4-year programs; likewise for 5-year students. The USNWR benchmark is 1.5x the target time, so that would mean 6-year and 7.5-year graduation targets, and these are the benchmarks that they use. I suggest retaining these and adding the 4-year (and 5-year, where applicable) benchmarks to reward schools for getting students through on time.</p>