“I’d give more weight to admission rate and yield. Ultimately, selectivity is the metric that matters the most.”
Admission rate is simply a function of how many 17-year-olds find a particular college appealing, which has nothing to do with quality. Many 17-year-olds would rate Justin Bieber as more appealing than the Beatles, but that doesn’t prove JB is higher quality. These are all measures that depend heavily on what other applicants think, which strikes me as a really poor way to judge what I want or value for myself - rather like taking a survey of whether other people prefer chocolate chip over pistachio ice cream before consulting my own preference.
To the people taking the earlier poster to task for the “meh” comment about a 700 on the SAT-CR - yeah, that’s kind of silly, but the underlying message of the post is, I think, still valid: we think of Harvard students as the best of the brightest, but there are students who get into Harvard who, from a stats-driven perspective, are still quite bright but look like the not-quite-best of the brightest. Basically, inputs are heterogeneous, so talking about inputs as if every student were the average incoming student is wrong.
For my kids, retention and grad rates were really important. For S’12 it was disheartening to see many of his friends transfer or fail out before the first year ended. D’15 chose a school with around a 98% retention rate and a 95% grad rate partly for that reason…it affects the campus vibe (if you will) when nearly everyone loves the place enough to stay all 4 years and graduate together. To me the 4-year grad rate is key too. No one wants to stay for 5 years unless doing a co-op, or unless they absolutely have to because all the courses can’t be taken in 4 years (not because classes are full, but because there are simply too many required).
At my school, many people stayed for 4 and a half years to get in one last football season. They told their parents it was because they couldn’t get everything done in four years, but it was football. I thought it was isolated to my school but I have since met several adults who admit to having done this. These were not “party” schools by the way but they were not Ivy League either obviously since they had a football team worth watching. Football team worth delaying graduation for? Which survey measures that?
The more complex the metrics, the more controversial the resulting ranking. To me, the best colleges are the ones with the most productive synchronization of the best professors, best students, and best facilities. All else, including the “outcomes,” should follow. So yes, faculty quality, the academic qualifications of incoming students, and the college’s financial resources/endowment (especially the part devoted to teaching and research facilities for undergraduate students) should be evaluated. The “synchronization” part is more difficult to evaluate. Are faculty members encouraged to provide quality undergraduate classroom teaching? Are there enough research opportunities for undergraduate students? Are there enough seminars or discussion-based classes where students have opportunities to interact with faculty members? The current methodology has too much noise.
This is why the claims that schools sending brochures or otherwise marketing themselves and encouraging students to apply are doing it primarily to lower their acceptance rate and thus boost their USNews ranking score are bogus. At 1.3% of the total score, even a microscopically low acceptance rate will barely budge the overall score. Boosting the number of apps is probably one of the least effective ways to game the system.
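To put rough numbers on that 1.3% weight, here is a back-of-the-envelope sketch. It assumes a simple weighted-sum composite with each metric normalized to a 0-100 subscore; the 5%/40% acceptance-rate endpoints are my own illustrative choices, not USNews’s actual method:

```python
# Back-of-the-envelope: how much can acceptance rate move a weighted-sum
# composite score? Assumes each metric is normalized to a 0-100 subscore
# and the composite is a weighted sum -- an illustrative model only.

ACCEPTANCE_WEIGHT = 0.013  # the ~1.3% weight discussed above

def acceptance_points(rate, best=0.05, worst=0.40):
    """Contribution of acceptance rate to the composite (0-100 scale).

    The lowest (best) rate maps to a subscore of 100, the highest to 0;
    the 5%/40% endpoints are arbitrary illustrative choices.
    """
    rate = min(max(rate, best), worst)
    subscore = 100 * (worst - rate) / (worst - best)
    return ACCEPTANCE_WEIGHT * subscore

# The most extreme move possible under this model: 40% -> 5% acceptance.
gain = acceptance_points(0.05) - acceptance_points(0.40)
print(f"Composite gain: {gain:.2f} points out of 100")  # -> 1.30
```

So even the most extreme possible swing is worth about 1.3 points out of 100 under these assumptions - and marketing for more apps buys only a tiny fraction of that.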
As for selective schools with high graduation rates, that may not reflect a lack of rigor in the classroom. The simpler explanation is that some schools weed out students on the way in (low acceptance rate, high grad rate) and other schools do it on the way out (higher acceptance rate, lower graduation rate).
Another factor is support for the students. Very wealthy selective schools, if they so choose, can afford lots of special monitors, advisors, and programs to ensure students stay on track. So there will be fewer unpleasant surprises come commencement.
“This is why the claims that schools sending brochures or otherwise marketing themselves and encouraging students to apply are doing it primarily to lower their acceptance rate and thus boost their USNews ranking score are bogus. At 1.3% of the total score, even a microscopically low acceptance rate will barely budge the overall score. Boosting the number of apps is probably one of the least effective ways to game the system.”
LOL - I agree with you completely but was just told on another thread either yesterday or today that “even microscopic differences will change the overall rate.”
College prestige is not judged purely on USN ranking, but on other measurements as well. One such measure, which I see widely discussed, is a low admissions rate.
In turn, prestige as perceived by various interest groups - which I think IS influenced by a low admissions rate (see #2 above) - DOES feed more directly into the USN rankings, and at a higher weight than 1.2% (i.e., the 22.5% combined weighting for peer assessment and HS counselor rating). While those groups are, in turn, influenced by various factors, I suspect that they do have some awareness, as groups, of schools that have 5% acceptance rates versus those that have 40% acceptance rates.
Actual graduation rate vs. predicted graduation rate is actually a common metric in rankings. It is used in the USNWR and Forbes rankings, among many others. The top performers by this metric will never be HYPSM colleges, since if you predict a 90+% grad rate, there isn’t much room left to exceed predictions. In the Forbes rankings (using Forbes since USNWR’s isn’t free), the top actual-vs-predicted performers were The Citadel, Salem College, University of Vermont, and UC Irvine.
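For what it’s worth, the computation behind that metric is straightforward to sketch: regress grad rate on incoming-class stats, then rank schools by how far they land above the prediction. The toy data and the two predictors below are my own assumptions - the real models use more inputs:

```python
import numpy as np

# Toy data: per-school median SAT and fraction Pell-eligible, plus the
# actual 6-year grad rate. Every number here is invented for illustration.
schools = ["A", "B", "C", "D", "E", "F"]
X = np.array([
    [1450, 0.15],
    [1200, 0.40],
    [1050, 0.55],
    [1300, 0.30],
    [1100, 0.50],
    [1350, 0.20],
])
actual = np.array([0.94, 0.72, 0.68, 0.80, 0.75, 0.82])

# Fit predicted grad rate = b0 + b1*SAT + b2*Pell by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
predicted = A @ coef

# "Performance" = actual minus predicted; rank schools by it, best first.
for i in np.argsort(predicted - actual):
    print(f"{schools[i]}: actual {actual[i]:.0%}, predicted {predicted[i]:.0%}, "
          f"over/under {actual[i] - predicted[i]:+.1%}")
```

A school already predicted at 95% has almost no headroom to land above the line, which is exactly why HYPSM never top this particular list.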
“I’d give more weight to admission rate and yield. Ultimately, selectivity is the metric that matters the most.”
Why would selectivity matter more than smartness of the other students?
Using that logic, a school which promises chocolate fountains and sunshine and castle-like dorms with maid service and a winning football team - and so gets lots of applicants but can accept only 5% of them - would be better than a school which has none of the above, but has a small core of committed and very smart applicants with high SATs, 20% of whom are accepted.
Caring about selectivity is a code word for insecurity and needing to impress others. Most elite schools happen to be highly selective and have low admissions rates, but it’s not the low admission rate which is valuable, it’s the presence of the very smart students they attract.
@Data10
Performance on graduation rate, versus expectation, is not quite what I’m getting at. A school with a rather marginal incoming freshman class can boost graduation rates simply by handing out As and Bs willy-nilly and coddling even serious underperformers.
Rather, what I’m getting at is a school that actually DELIVERS a tremendous education to middling, or worse, students - i.e., a college that could turn a group of underachievers into a group of overachievers.
This would be hard to measure in the real world - I guess there’s probably SOME correlation with measures of graduation rates (versus expectations), but a lot of what I imagine this theoretical school imparting would be hard to capture with readily available statistics.
===
I suppose that if you had a standardized national college entrance exam, and a similar exit exam (or set of them), you could sort of get at this, but even that would be tricky (different college students study different things). And yes, I realize that the ACT and SAT are crude national entrance exams, but I’m thinking of something that is a little less IQ focused and a little more subject content focused - a battery of APs, or SAT subject tests… Maybe something like the German Abitur or the UK A Levels (I think those are HS exit exams, also used for college entrance, along the lines I’m imagining…)
700 CR is 95th percentile. Do you think Harvard admits with below a 700 CR have a good chance of failing out? The correlation between SAT CR scores and grad rate is quite small when you consider the full application - including a measure of both GPA and course rigor, as Harvard does - instead of looking at scores alone. I expect Harvard and similar colleges do not admit anyone who they don’t think can handle the coursework. However, less selective colleges often don’t have the option of admitting only students who can all handle the coursework and look like they are going to follow through. You also have a much larger portion of students who need to leave college or delay graduation for financial reasons at colleges with less generous FA.
@Pizzagirl - I agree that selectivity can confuse broad appeal with the high academic merit of the incoming class. I’m too lazy to look up the stats right now, but I’d guess that Caltech (with its somewhat narrow appeal) scores lower on a selectivity measure than schools with broader appeal (Stanford and Harvard, say), even if the average Caltech kid is perhaps stronger on test scores and other pure-ish measures of intellectual horsepower than the average Stanford or Harvard kid.
“I suppose that if you had a standardized national college entrance exam, and a similar exit exam (or set of them), you could sort of get at this, but even that would be tricky (different college students study different things). And yes, I realize that the ACT and SAT are crude national entrance exams, but I’m thinking of something that is a little less IQ focused and a little more subject content focused - a battery of APs, or SAT subject tests… Maybe something like the German Abitur or the UK A Levels (I think those are HS exit exams, also used for college entrance, along the lines I’m imagining…)”
I figure you could get at least a representative sample by measuring grad school admissions exams (except the MCAT, as mentioned previously) against those students’ ACT/SAT performance – judged by percentile placement.
The presumption would be that someone who scored in the 25th percentile on the ACT, but in the 60th percentile on the GRE, had been “improved” by school more than average. Conversely, someone who scored in the 80th percentile on the ACT but only in the 50th percentile on the GRE would be presumed not to have absorbed the same rigor. (A rough sketch of this bookkeeping follows the pro/con notes below.)
Pro: Grad school exams are voluntary and already exist, so it would be less expensive to gather the data.
Con: Some people take grad exams a year (or less) after graduating college, while others wait ten years or more; obviously one will retain more knowledge than the other, so “time away from school” would have to be factored in. Also, not all students would be measured, so the results, while likely fairly representative, would not be as comprehensive as the type of testing you’re mentioning.
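Here’s that sketch - the students, schools, and scores are all invented for illustration:

```python
# Rough bookkeeping for the percentile-delta idea above. The students,
# schools, and scores are all invented for illustration.
from collections import defaultdict
from statistics import mean

# (college, ACT percentile at entry, GRE percentile after graduation)
students = [
    ("State U",   25, 60),
    ("State U",   50, 55),
    ("Tech Inst", 80, 50),
    ("Tech Inst", 70, 75),
]

deltas = defaultdict(list)
for college, act_pct, gre_pct in students:
    # Positive delta = the student ranks higher among GRE takers than they
    # did among ACT takers -- read here, crudely, as value added by college.
    deltas[college].append(gre_pct - act_pct)

for college, ds in deltas.items():
    print(f"{college}: mean percentile delta {mean(ds):+.1f}")
```

One more con to add to the list: GRE takers are a far more self-selected pool than ACT takers, so raw deltas would skew negative and the percentiles would need renorming to a common reference population before the comparison means anything.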
The primary reason why MIT and Caltech have a lower grad rate than other top schools isn’t that they are doing a better job of failing out the weak students. The issue primarily relates to a larger portion of students pursuing tech degrees, and as such being more likely to put school on hold to pursue their startup, a 6-figure job following from the internship, etc. Or just losing a year for a co-op and/or co-terminal masters and having trouble finishing on time with the large number of sequential classes required for an engineering or similar major. If you instead look at freshman retention rate, MIT and Caltech are slightly higher than Harvard.
At a lot of schools aimed at the broad middle (ish) of the college population, only a relatively small % go on to grad school within a couple years of undergrad (if at all).
One way to get at this, perhaps at a state level, would be teacher certification exams. I would guess even mid-level colleges graduate a lot of aspiring teachers (as do relatively selective higher end universities). Find a state that’s big enough to have a lot of colleges within its borders (California, Texas, New York), and that has a relatively tough teacher cert exam (not sure which states might qualify), and see how the different colleges have done at getting their prospective teachers certified over, say, a 3 year period…
I would think it would mostly correlate to student quality (i.e. colleges that had higher average GPA/ACT/SAT scores for incoming freshmen would probably have higher pass rates for teachers), but there might be outliers, and you could perhaps control for GPA/ACT/SAT and other factors…
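A rough sketch of what “controlling for GPA/ACT/SAT” could look like: fit a pass/fail model on individual students’ scores, then compare each college’s actual pass rate to what the model expects given its student mix. All the data below is simulated, and the single-predictor model is deliberately minimal:

```python
# Sketch of "controlling for student quality": fit pass/fail on individual
# ACT scores across all colleges, then compare each college's actual pass
# rate to the rate the model expects for its particular student mix.
# All data below is simulated; the model is deliberately minimal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

colleges = ["A", "B", "C"]
# 200 students per college, mean ACT 22 / 26 / 24 respectively.
act = rng.normal(loc=[22.0, 26.0, 24.0], scale=2.0, size=(200, 3))
# Simulated truth: passing depends on ACT, and college C adds real value.
value_added = np.array([0.0, 0.0, 0.8])
p_pass = 1.0 / (1.0 + np.exp(-(0.3 * (act - 24.0) + value_added)))
passed = rng.random((200, 3)) < p_pass

# Pooled model: pass probability as a function of ACT alone.
model = LogisticRegression()
model.fit(act.reshape(-1, 1), passed.reshape(-1))

for i, name in enumerate(colleges):
    expected = model.predict_proba(act[:, i].reshape(-1, 1))[:, 1].mean()
    print(f"College {name}: actual pass {passed[:, i].mean():.0%}, "
          f"expected from ACT mix {expected:.0%}")
```

College C’s actual rate should come out several points above its expectation, while A and B land at or a bit below theirs - that gap, not the raw pass rate, is the outlier signal worth looking for.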
The Washington Post has an article today about the metrics USN uses for these rankings. The editor concluded that these metrics are biased towards private schools by emphasizing things such as faculty salary, class size, alumni giving, reputation, etc., while unfairly leaving public schools out in the cold. The top public school, UC Berkeley, is ranked #20, behind schools like Rice. Rice?!
It’s ridiculous to compare public schools and private schools. These rankings will not mean a thing until USN begins to rank them separately, as it does for liberal arts colleges. As it is, all the public schools are just sacrificial lambs, included in the “National Universities” rankings to bolster the position and legitimacy of the private schools.
In the end you have to wonder how much it really matters anyway. Does any prestige ho really think Princeton is more prestigious than Harvard just because it’s ranked #1 on USN? I think USN itself is losing credibility because the rankings hardly ever change, and every prestige ho worth his/her salt pretty much already knows what the top 12-15 schools are anyway, regardless of what USN says. And after the top 12-15 (which never change, since it’s old money), the rankings really do not mean a thing.
All of this debate shows that we care; and if we care, chances are the schools care too. Heavy competition makes the products in an industry better. That’s great news for current and future students.
If the only thing these rankings really accomplish is to provide motivation for schools to improve, well, that’s a good thing.