Are you saying that either Cal Tech or Harvard doesn’t have high peer quality? Or that one of them isn’t doing so great at getting their students to grow academically?
I don’t think it’s either. If I’m thinking of outcomes, I’d include things like:
• Percentage of students accepted to graduate school
• Percentage of students accepted to their top choice graduate school
• Median graduate-exam scores compared to expected (based on incoming students’ college entrance test scores)
• Percentage of grads passing licensing certifications (whether nursing, engineering, nutrition, etc.)
• Percentage of grads employed in a field related to their major that requires a college degree
• Percentage of grads employed in a job that requires a college degree
• Graduation rate
• Graduation rate compared to expected graduation rate (based on profile of incoming students)
• Survey from HR departments at Fortune 1000 companies
• Percentage of loan principal remaining after 5/10 years
• Percentage of graduates who default on student loans
• NPV at 20 and/or 40 years (see A First Try at ROI: Ranking 4,500 Colleges - CEW Georgetown), particularly if available by major area (humanities, engineering, social sciences, business, etc.), as each area is not expected to have the same results
• The Collegiate Learning Assessment (CLA) is a test some colleges give their incoming freshmen and graduating seniors. It “measures critical thinking, reasoning, writing, and problem solving” rather than particular subjects like history, biology, etc. (164). Unfortunately, although many colleges have students take the CLA, not many share the results. In a study from 2005 to 2009, “36% of the students managed to graduate from college without learning much of anything” (164).
(the above are taken from my posts 2 & 4 over on this thread: Create Your Dream College Ranking Methodology)
Should I infer that you would rank U. of Washington and Rutgers lower than BC and Tufts because they had lower graduation rates?
According to the data provided at Washington Monthly, these are the 8-year graduation rates as compared to the predicted graduation rates based on the percentage of Pell recipients, incoming SATs, etc. I’ve sorted them by which school had the best improvement over its predicted outcome.
• U. of Washington-Seattle: 84% actual, 80% predicted, +4% difference
• Boston College: 92% actual, 90% predicted, +2% difference
• Rutgers: 82% actual, 81% predicted, +1% difference
• Tufts: 94% actual, 94% predicted, 0% difference
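For anyone who wants to reproduce the sort, here’s a minimal sketch in Python (using the school names and the Washington Monthly figures quoted above):

```python
# Compute each school's actual-minus-predicted 8-year graduation rate
# and sort by the gap, largest overperformance first.
schools = {
    "U. of Washington-Seattle": (84, 80),
    "Boston College": (92, 90),
    "Rutgers": (82, 81),
    "Tufts": (94, 94),
}

ranked = sorted(
    ((name, actual, predicted, actual - predicted)
     for name, (actual, predicted) in schools.items()),
    key=lambda row: row[3],   # sort on the difference column
    reverse=True,
)

for name, actual, predicted, diff in ranked:
    print(f"{name}: {actual}% actual, {predicted}% predicted, {diff:+d}% difference")
```

The same one-liner sort works for any number of schools once you have their actual and predicted rates.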
Frankly, I’d be more impressed with the schools that are outperforming expectations. And this is not necessarily a public/private college thing: there are public colleges mentioned in this thread with negative differences (like -8%), and privates, too (-7%, or even -22%!). To me, over- or underperformance tells a story about what is happening at an institution, and I don’t think that USNWR captures that at all. In fact, that’s what I’m getting at in my question below:
To clarify… would people rather have their students attend schools that are overperforming expectations and really pushing their students, but that lack the “peer quality” some are concerned with, or would they rather go to a school that has the desired “peer quality” but is really underperforming and not providing as much academic growth as desired (see the CLA example above)?
ETA: Changed default rates language from “students” to “graduates” as I have recently learned that cohort default rate data is currently shared on all students who attended, including those who dropped out, and not just on those who received a diploma. For this measure, I only want those who’ve earned a diploma to count.