There are multiple well-known college rankings out there, and they all have different methodologies. Everyone has different views on which factors matter and on how much weight each factor should get. This thread is a place for people to either 1) list factors they think should be included in college rankings or 2) if desired, create a whole methodology, with factors and a weight assigned to each factor.
I’ll go ahead and throw out a first draft of my ranking methodology, with at least some category percentages, even if not a percentage for each factor within a category.
Outcomes (25%)
• Percentage of students accepted to graduate school
• Percentage of students accepted to their top choice graduate school
• Median grad test scores (LSAT, MCAT, GRE, etc.)
• Median grad test scores as compared to expected (based on college entrance tests from incoming students)
• Percentage of grads passing licensing certifications (whether nursing, engineering, nutrition, etc.)
• Percentage of grads employed in a field related to their major that requires a college degree
• Percentage of grads employed in a job that requires a college degree
Academics (25%)
• Percentage of classes with fewer than 20 students
• Percentage of classes with fewer than 50 students
• Percentage of full-time faculty (or tenure-track faculty)
• Student-faculty ratio
Retention & Graduation (20%)
• Graduation rate
• Graduation rate compared to expected graduation rate (based on profile of incoming students)
• Freshman retention
• Freshman retention as compared to expected retention (based on profile of incoming students)
• Percentage of transfers
Reputation & Selectivity (15%)
• SAT/ACT scores of incoming students
• Survey from HR departments at Fortune 1000 companies
• Survey from colleges
Financials (15%)
• Financial health grade of institution
• Endowment/student
• Percentage of loan principal remaining after 5/10 years
• Percentage of students who default on student loans
• NPV at 20 and/or 40 years (see A First Try at ROI: Ranking 4,500 Colleges - CEW Georgetown)
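Since the category weights above sum to 100%, the composite is just a weighted average. Here is a minimal sketch in Python; the two schools and all of the category scores are invented purely for illustration, and each category score is assumed to already be normalized to a 0-100 scale:

```python
# Sketch of a weighted composite score using the category weights above.
# Each category score is assumed to be pre-normalized to 0-100; the two
# schools and their scores are made up for illustration.

WEIGHTS = {
    "outcomes": 0.25,
    "academics": 0.25,
    "retention_graduation": 0.20,
    "reputation_selectivity": 0.15,
    "financials": 0.15,
}

def composite_score(category_scores: dict) -> float:
    """Weighted average of normalized (0-100) category scores."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())

# Two hypothetical schools:
school_a = {"outcomes": 82, "academics": 75, "retention_graduation": 90,
            "reputation_selectivity": 60, "financials": 70}
school_b = {"outcomes": 70, "academics": 88, "retention_graduation": 85,
            "reputation_selectivity": 80, "financials": 65}

print(round(composite_score(school_a), 2))  # 76.75
print(round(composite_score(school_b), 2))  # 78.25
```

With normalized category scores, moving factors between categories or adjusting weights only changes `WEIGHTS`, which makes it easy to experiment with different value judgments.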
Rather than rankings, I’d separate them out into tiers. And instead of separation by universities vs. liberal arts and national/regional, I would sort them by the Carnegie classification for size and setting (see Carnegie Classification of Institutions of Higher Education®).
• Very small and small (up to 2,999 students): Highly Residential & Primarily Residential
• Medium (3,000-9,999 students): Highly Residential & Primarily Residential
• Large (10,000+ students): Highly Residential & Primarily Residential
• Primarily Nonresidential (all sizes)
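The tier buckets above can be sketched as a simple lookup; this collapses the two residential settings (Highly Residential and Primarily Residential) into one bucket per size band for brevity, and the bucket names are just shorthand for the groupings listed above:

```python
# Sketch of the tier buckets above: size bands follow the Carnegie size
# classification; the two residential settings are collapsed into one
# bucket per size band for brevity.

def tier(enrollment: int, setting: str) -> str:
    """Bucket a school by enrollment and Carnegie residential setting.

    setting is one of: "highly residential", "primarily residential",
    "primarily nonresidential".
    """
    if setting == "primarily nonresidential":
        return "Primarily Nonresidential (all sizes)"
    if enrollment <= 2999:
        return "Very small / small, residential"
    if enrollment <= 9999:
        return "Medium, residential"
    return "Large, residential"

print(tier(3500, "highly residential"))         # Medium, residential
print(tier(25000, "primarily nonresidential"))  # Primarily Nonresidential (all sizes)
```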
General notes on rationale:
• Outcomes: This is meant to see whether the “market” (grad schools, employers, licensure boards) views the education as successful
• Academics: This is meant to gauge the quantity and quality of attention that students are likely to receive while at the university
• Retention & Graduation: Are they helping students succeed? And are their success rates because of who came to them, or because of the actions of the school in making them more successful?
• Reputation & Selectivity: This is one I have difficulty with. It tries to capture what a student’s peer group is like, and to pick up differences between Harvard and Directional State U that might not be fully captured elsewhere. For instance, maybe a Harvard grad is applying to the top 5-10 grad schools or other top employers, whereas the Directional State U grad is looking at a completely different set of grad schools, and the selectivity of the two has little overlap. But I don’t know if this just continues to feed into the prestige/cachet factor, and if all the other categories’ outcomes are positive, should reputation come in here to skew things? Thoughts?
• Financials: Is the school financially stable? NPV (though a newish factor) helps with comparisons: if one liberal arts school’s 40-year NPV is $700,000 and another’s is $950,000, that’s a meaningful difference. (NPV is most helpful when comparing the same type of university, since some fields like STEM regularly have higher earnings than many liberal arts fields.)
• Tiers by residential nature of campus and campus size: I don’t know many who care if their 3500 student university is classified as a liberal arts college, regional college, or regional university (or national university). But people do care about how big their university is and whether it’s a commuter school.
What would you have on your own methodology? Why? What would you take away from this one? Why?
The problem with this thread is that your post was so well thought out that I have nothing to add. I wish your dream were the reality!
That is so very kind of you to say! But your post does give me the chance to say…there’s more I would add to that methodology!
I think when I was coming up with that original methodology, I was primarily trying to use data that is either already available or could easily be made available. I think the retention and graduation rates as compared to expected are included in the Washington Monthly’s rankings. Forbes offers financial health grades. I think Forbes also includes a reputation factor based on employer surveys, though I don’t know which employers are surveyed. College Board probably has data on the grad test scores, though there might need to be some coordination to see how they would compare to expected scores based on the student profiles. And schools would need to be more open about disclosing some of the data they already have (like the percentage passing licensing certifications).
So, what would I add?
The Collegiate Learning Assessment (CLA) is a test some colleges give to their incoming freshmen and graduating seniors. It “measures critical thinking, reasoning, writing, and problem solving”* rather than particular subjects like history, biology, etc. (164). Unfortunately, although many colleges have students take the CLA, not many share the results. In a study from 2005 to 2009, “36% of the students managed to graduate from college without learning much of anything” (164). Having all colleges collect and share this data would be wonderful in my dream methodology. Perhaps the feds could make it a requirement for any school whose students receive Pell grants or federal loans?
Also, there’s the National Survey of Student Engagement (NSSE) which would be another addition to the academics portion. It’s a survey that students respond to regarding “academic rigor, interactions with professors, and active and collaborative learning.” Getting high marks from students for these areas would also be a boost to the academics portion and give some kind of a measure on instructional quality.
*Quotes are from the second edition of Lynn O’Shaughnessy’s The College Solution
I agree that CLAs would be great to have (despite being expensive and cumbersome to get). Anyway, for those who are interested, here is a link to a paper that explains CLAs, including some example tasks. The writers of CLA tests must have a sense of humor; one of the tasks was to destroy an argument about why a fictional college should be considered “best.”
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134.383&rep=rep1&type=pdf
The parts that are italicized above are probably more important than the raw numbers, in that they are more likely to reflect treatment effects of the college, rather than selection effects (meaning based on getting better students to begin with). Indeed, median LSAT, MCAT, GRE, etc. scores are likely to be highly correlated to median SAT, ACT scores.
So all of the other outcome measures that you mention (graduate school, licensing if applicable, employment) should also be compared to the expectation based on the profile of incoming students. In addition, outcomes are most relevant by major, so outcome comparisons that do not take into account varying mixes of majors are less useful.
This leaves out many important aspects of academics, such as breadth, depth, and rigor of course and major offerings. Perhaps small class sizes may be most important to some posters here (particularly those who strongly favor LACs), but they are not the full story. While relatively easy to measure, they can also be gamed by colleges (like those colleges that seem to have a lot of 19 student classes).
SAT/ACT scores are nowhere close to the full story on selectivity, especially with the greater tendency toward test-optional and test-blind policies these days.
Any survey criteria also need to be done by major to have any value (although employment and graduate school outcomes relative to incoming student profile may be more revealing).
Any financial aid and ROI type comparisons must be in the context of students’ chosen majors and incoming student profile (including incoming student financial status) in order to be relevant.
“Without learning much of anything” that is tested by the Collegiate Learning Assessment (as opposed to other things that they may have learned). Example questions: Sample CLA Tasks
Note that whether a student’s lack of improvement on the CLA is a bad thing depends on what level the student was at when beginning college. A student who does well on the CLA at the beginning of college may not have that much room to improve by the end of college.
I wish there were a ranking that could capture campus academic culture — not so much from best to worst, but more where a school falls on a spectrum from “highly intense” (although there is probably a better word) to “hard-working and collegial, but easy access” to “relaxed, independent pace.”
Different students will thrive in different environments.
Some are energized by being surrounded by some of the best thinkers in the field in a competitive atmosphere where each scholar keeps outdoing the last achievement. These are students not intimidated or bothered by weed-out classes, low percentages admitted into many majors, competitive interviews/auditions/case studies to participate in clubs and organizations, etc.
On the other end of the spectrum, some students do best when they can study what they prefer when they prefer, without hoops to jump through. They don’t want a high hurdle to passing core classes in which they may have little interest. Some may excel more in schools that do not have a traditional letter grading system. Another subset may have higher success in a school with stronger academic supports.
In the middle of the spectrum are schools with some faster-paced programs and robust core classes, but open access to most majors, clubs, and organizations.
A student studying math at Reed versus MIT versus Pomona versus UC Berkeley versus SUNY Polytechnic versus Ursinus versus Antioch is going to have very different experiences. It would be nice to find a way to capture that.
I feel like you are heading toward a sort of multi-dimensional college “personality” classification à la the Myers-Briggs Type Indicator (MBTI).
As explained there, because they have four binary categories, you end up with 16 classifications. In more sophisticated versions, you can have sliding scales on multiple dimensions. That implies a multi-dimensional space, and while we are pretty used to thinking in three dimensions, and indeed drawing three dimensions in two dimensions, once you get to four or more, visualization becomes a challenge for many people (hence why, say, post-Einstein physics can be a real intuitive challenge).
Anyway, each college could get a score on multiple “personality” dimensions, and then with creativity you could have some sort of tool to explore colleges in a similar space.
Which sounds great, but if you read through that MBTI article, it turns out the whole thing is pretty problematic. I fear the same is true for colleges. We’d love to be able to classify colleges like this, but in practice that may not turn out to be a great way of predicting which colleges will provide the best experience for which kids.
My own two cents: this is why the dating/marriage metaphor for college choice is an apt one.
Understanding something about human psychology as applied to your own psychology may help you understand yourself a bit better, help you avoid some pitfalls, improve your behavioral patterns, and so on. But in the end, when it comes to falling in love and committing to a long-term relationship, there is a big part of it that can’t be done on paper; you have to be with the person, and it has to just click for you both.
And I think it is really the same with colleges. You can, and should, get leads based on paper profiles and such. But then you ideally visit, and fall in love.
Of course if you don’t fall in love, there is always a chance you would if given time. But there are so many fish in the sea, and life is too short, and whatever other cliches you want to apply. So typically you can fill out a whole college list with places which look good on paper AND where you clicked intuitively.
So why not do that?
I think rankings try to incorporate too many individual factors, so I tried to simplify mine down to tools that students can use as a broad reference for strength in a field, nothing more, nothing less. An applicant likely doesn’t care about the outcomes of a separate program. The same goes for aid: a student with an EFC of $10,000 will not get the same aid as a student with an EFC of $60,000, and vice versa.
I think those should be separate rankings or they should be excluded from rankings as things for students to research themselves based on individual factors.
30%: Are students surviving?
Percentage of students retained in major
6-year graduation rate within the major
Percentage of students entering jobs or graduate school within 6 months compared to entering class selectivity
70%: In the context of the class’s selectivity, how well are the students performing?
Median Grad School Test Scores (LSAT, MCAT, GRE, GMAT) compared to entering class selectivity
Median Salary compared to entering class selectivity
Percentage of labs with x or more undergraduate assistants/researchers (not decided on a specific value)
Percent of students attaining national awards compared to class stats
Selectivity would be a combination of class rank, SAT, and acceptance rate.
Selectivity, salary, and “are students surviving” are based solely on data from the major, unless the university admits to the university as a whole and not to a major. (Not every individual major, but subjects like physical sciences, life sciences, mathematics, philosophy, art, business, etc.)
Percentage of labs with undergrads and percentage of national awards are based on the broad field (STEM, business, humanities).
This may vary significantly within a college, since different majors / departments or student goals may induce different kinds of student behavior.