The Everlovin' Undergraduate-Level University Rankings

Here are a few 4y graduation rates from Kiplinger’s:
88% JHU
86% Cornell
86% UChicago
85% Caltech
82% MIT
… DROP…
49% Oklahoma City University
36% Liberty University
28% Florida International U
26% University of Louisville
12% CSU - Long Beach

Do schools below the drop line have much higher dropout rates because their classes are objectively more difficult? Or is it that students in colleges above the line arrive better prepared for college and then get better financial aid and support services?

Here are a few average GPAs from gradeinflation.com for 2013-2015 (the latest years reported) in column 1, with 4y graduation rates from Kiplinger’s in column 2:
3.39 90% Princeton
3.38 88% JHU
3.40 87% Vanderbilt
3.37 87% Villanova
3.36 82% WF
3.55 81% Rice
3.36 63% CWRU

The average GPAs don’t seem to vary as much as the graduation rate differences might suggest. Nor does the GPA variation consistently run in the same direction as the graduation rate variation.
It may be that, among schools with similar admission stats, dropout rates are in fact influenced by course rigor. However, any such correlation is probably confounded by several other factors.
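For what it’s worth, here’s a minimal sketch (plain Python, using just the seven schools listed above) that checks this; the near-zero correlation it prints backs up the point:

```python
# Does average GPA track the 4-year graduation rate across these
# seven schools? Data copied from the two columns above.
gpa  = [3.39, 3.38, 3.40, 3.37, 3.36, 3.55, 3.36]   # gradeinflation.com
grad = [90,   88,   87,   87,   82,   81,   63]     # Kiplinger's, percent

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"r = {pearson_r(gpa, grad):+.2f}")   # ~ +0.07: essentially no relationship
```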

In theory, it may be a good idea to use both instruction quality and course rigor in college rankings.
In practice, I don’t think the Niche surveys or the dropout/graduation rates necessarily are measuring those features very well. I suspect that if you just rank by average entering test scores, you’re getting (“for free”) a better indication of both instruction quality and course rigor (across a broad range of colleges) than you get from the Niche professor rating surveys or dropout rates, respectively. In general, top students and top faculty tend to follow the money to rich, selective colleges. Although, once you limit your focus to the top 20/30/40, different correlations collide and the detailed rank order becomes sensitive to the specific metrics you choose.

@tk21769 I wonder which Niche numbers indicate that Yale students aren’t happy.
It appears you used the ranking IzzoOne derived in post #123.

Yes, I just used the rankings they provided. There could be other missing colleges.

@tk21769 “do schools below the drop line have much higher dropout rates because their classes are objectively more difficult? Or is it that students in colleges above the line arrive better prepared for college and then get better financial aid and support services?”

I concede that a high dropout rate doesn’t mean that a college is harder when comparing a university with a 1500 average SAT score to one with a 2000 average. It’s also likely that some of those students transfer to a better college rather than drop out. But students at top 30 schools are unlikely to transfer to better schools in large numbers…

However, the schools that we’ve been comparing are top universities that have SAT scores of 2000+.

So, if 2 schools have the same SAT scores and different grad rates, you can compare them.

Likewise, if 2 schools have the same SAT scores and different GPAs, you can compare them for difficulty. You can also eliminate the difference in curriculum (math vs. liberal arts) using the STEM percentages.

So, I can infer from grad rates, GPAs, STEM percentages, and SAT scores that RPI is a more difficult school than most Ivy schools. Wake? No, I don’t think Wake is harder.
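Just to make that reasoning concrete, here’s a toy sketch of it in Python. Every field name and threshold is an illustrative assumption of mine, not an established method, and the example numbers are invented:

```python
# Toy version of the inference above: holding SAT roughly constant,
# a lower grad rate plus a lower GPA suggests the harder school, and a
# big STEM-share gap warns that the curricula aren't comparable.
# The 30-point SAT tolerance and 20-point STEM tolerance are arbitrary.

def compare_difficulty(a: dict, b: dict, sat_tol: int = 30, stem_tol: float = 20.0) -> str:
    if abs(a["sat"] - b["sat"]) > sat_tol:
        return "entering stats differ too much for this comparison"
    if abs(a["stem_pct"] - b["stem_pct"]) > stem_tol:
        return "curriculum mix differs; adjust for STEM share first"
    # +1 for each signal pointing at school a, -1 for each pointing at b
    score = (a["grad_rate"] < b["grad_rate"]) - (a["grad_rate"] > b["grad_rate"]) \
          + (a["gpa"] < b["gpa"]) - (a["gpa"] > b["gpa"])
    if score > 0:
        return f'{a["name"]} looks harder'
    if score < 0:
        return f'{b["name"]} looks harder'
    return "signals are mixed"

# Invented numbers, just to show the shape of the comparison:
a = {"name": "School A", "sat": 2100, "grad_rate": 78, "gpa": 3.30, "stem_pct": 70}
b = {"name": "School B", "sat": 2090, "grad_rate": 88, "gpa": 3.45, "stem_pct": 65}
print(compare_difficulty(a, b))   # School A looks harder
```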

@tk21769 , I’m not at all convinced that just ranking by average test scores will really give you a good picture of undergraduate instruction quality. Universities can put their resources and focus into a combination of 1) research, 2) graduate study, and 3) undergraduate study. Based on what I’ve seen, that focus and undergraduate quality can vary significantly. It isn’t perfect, but Niche might give some insight on this that USNews doesn’t.

I think I cited this earlier. There was a professor time study done at the University of California. It came out roughly 50% research, 25% graduate study, and 25% undergraduate study. Given the high numbers of undergrads, some of them might feel they don’t get as much focus as they expect (or value for money). The Niche data indicates that might be the case.

If we just conclude USNews is the best we can do, we can end this thread now.

I’m not convinced it gives you a very good picture of that, either
(only that it might be better than … or at least as good as … averaging a small number of responses about vaguely-defined instructor qualities). I’m just questioning whether the Niche surveys support your objective very well.

I do think that colleges with very high average entering test scores will tend to employ a higher concentration of distinguished scholars than colleges with much lower average test scores. They may need to be reasonably “passionate” and “engaging” just to get through the hiring process at top schools. Whether they’re also as “caring” and “available” as professors at an average college, I don’t know. But then you have to weigh how much those attributes matter to instruction quality compared to other attributes (like knowledge) or structural features (like class size).

Based on your approach, LACs with similar stats would do the same as research universities, since the best proxy is stats. But if you look at the Niche poll scores, the LACs consistently do better. You may think this comes down to the vagaries of the poll questions, but why would it so consistently favor the LACs? All respondents are answering the same questions.

I don’t think all institutions are created equal, even when they have equal stats. It could just be that some research universities put more emphasis on research and graduate study to the detriment of undergraduate study, and that is what the data shows.

^ I’m not saying that the “best” proxy is stats or that using scores alone would be my ideal approach.
The detailed NSSE surveys (perhaps combined with stats) would be closer to my ideal, but unfortunately they aren’t easily available for our purposes. I’m suggesting that even using a metric as simple and widely available as average SAT scores (perhaps combined w/class size … which you get from USNWR) may do at least as good a job of differentiating universities, w.r.t. instruction quality, as asking as few as ~60 students per school whether their professors are passionate, engaging, caring, and available. I’m thinking more in terms of the broad spectrum of research universities, not focused just on the USNWR top N (as you and @Greymeer seem to be).

I’m not surprised that LACs do consistently better on those questions. I’m also not convinced those questions are especially good at determining instruction quality. Still, if you’re finding that a set of selective LACs consistently does better on those questions than a set of similarly selective research universities - and if you do believe those 4 qualities are good indicators - then you may be on to something. At the very least, by aggregating the responses, you’re more likely to arrive at a representative sample size.

I’m not saying Niche and its questions are perfect. I’d have used something with more rigor if I had it. I just think it illustrates some measures of undergraduate specific quality completely missing in USNews, which to a large extent uses wind vanes to measure rainfall.

If USNews would simply replace the school counselor academic rep survey with a student survey of undergraduates – gathering info on perceived rigor, overall academic quality, professor interaction and availability, classroom discussion dynamics, research/internship opportunities, course availability, and overall academic satisfaction – that would help.

Also, reiterate to the deans/provosts/college presidents what their job is on the academic rep ratings: rate schools you are familiar with on their undergraduate academic quality; don’t rate schools you are unfamiliar with, and don’t rate based on grad-level prestige. (We don’t know that they do the latter, but they might if they get lazy…)

We can aggregate the Niche instructor-quality responses for the US News national universities ranked in the top 20 and for the national LACs ranked in the top 20. For each school, average the agreement rates for the 4 attributes. Then calculate each school’s aggregate “agreements” by multiplying the 4-attribute average by the number of responses. Example:
School: responses (passionate% caring% engaging% approachable%) avg agreement% -> agreements
Princeton: 28 (96% 82% 89% 89%) 89.00% -> 24.92 agreements

After calculating “agreements” in this way, I summed all agreements for universities and divided by the number of university responses. I did the same for LACs.
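For anyone who wants to reproduce this, here’s a minimal sketch of the arithmetic. Princeton’s row is the real example above; the second school is a made-up placeholder:

```python
# Aggregation described above: per school, average the 4 agreement
# rates, then weight by the number of responses to get "agreements".
schools = {
    # name: (responses, [passionate%, caring%, engaging%, approachable%])
    "Princeton": (28, [96, 82, 89, 89]),
    "ExampleU":  (40, [90, 80, 85, 88]),   # hypothetical placeholder
}

total_agreements = 0.0
total_responses = 0
for name, (n, pcts) in schools.items():
    avg = sum(pcts) / len(pcts)        # 4-attribute average agreement, %
    agreements = n * avg / 100         # expected number of agreeing students
    print(f"{name}: {avg:.2f}% -> {agreements:.2f} agreements")
    total_agreements += agreements
    total_responses += n

# Pooled rate across the whole group (universities or LACs):
print(f"pooled agreement: {100 * total_agreements / total_responses:.1f}%")
```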

Result:
On average, 1441 of 1705 university students (85%) agree that their professors have Niche-identified positive qualities.
On average, 563 of 587 LAC students (96%) agree that their professors have Niche-identified positive qualities.
(@IzzoOne, do these averages seem to be consistent with what you found?)

It appears to me that the aggregate samples for universities and LACs are large enough to be confident that T20 LAC students are more likely than T20 university students to believe their professors have these vaguely-defined attributes. For a population of 250K students, to achieve a 90% confidence level with a 4% margin of error, you’d need 420 or more responses.

However, among the T20 universities, Niche on average received only 77.5 responses for each.
The responses for the T20 universities range from 93.5% (Rice) to 72% (JHU) average agreement.
It appears to me that in many cases there are not enough responses to support a margin of error small enough to differentiate most universities in the USNWR T20 across a range of agreement rates that narrow. Ditto for the T20 LACs, which range from 97.5% avg agreement (Williams) to 88% avg agreement (USMA) with an average of only 31 responses.
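To put rough numbers on that, here’s a sketch of the implied per-school margin of error (90% confidence, normal approximation; the response counts and agreement levels are the rounded averages quoted above):

```python
# Half-width of a 90% confidence interval for a single school's
# agreement rate, given p (agreement proportion) and n (responses).
Z90 = 1.645   # z-score for 90% confidence

def moe(p: float, n: int) -> float:
    return Z90 * (p * (1 - p) / n) ** 0.5

# ~78 responses at ~85% agreement (typical T20 university):
print(f"university: +/- {100 * moe(0.85, 78):.1f} pts")   # ~ +/- 6.7
# ~31 responses at ~96% agreement (typical T20 LAC):
print(f"LAC:        +/- {100 * moe(0.96, 31):.1f} pts")   # ~ +/- 5.8
```

With per-school error bars of 6-7 points against agreement ranges only about 10-20 points wide, most schools’ rates do overlap, which is the point.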

Also, note that there is more variation in school size (and perhaps in research investment levels) among T20 universities than among T20 LACs. Berkeley has the 2nd-lowest agreement level among T20 RUs and contributed by far the largest number of responses. The top 2 RUs (Rice and Dartmouth) have agreement rates as high as (or higher than) those of 5 of the T20 LACs.

What would we need for a decent sample size – 10% of each school?

We could come up with some survey questions to recommend to USNews. The worst they can do is say No.

US News is far from perfect, but Niche is just terrible.

It’s like taking a cold, stale hamburger and trying to improve it by adding motor oil as a condiment.

Sample size gets you the confidence level, but how representative the sample is remains the problem you have with surveys (and I mean not just for education but marketing in general). You have to get an unbiased sample, which is tough; typically it’s the disgruntled students who will fill one out or go to Rate My Professor. You could strip out the extreme responses as a first step, but that won’t help much if your responses are extreme (50% love the prof, 50% hate)…

Google for sample size calculators.

To achieve a 95% confidence level with a 5% or smaller margin of error, for a population of 2000 students, you’d need at least 323 samples. If you keep that confidence and MOE constant, the required sample size doesn’t increase too much over college-sized populations. But if your surveys of T20 colleges are getting results that are overlapping within the margin of error (and your goal is to differentiate them), you’d want to decrease the MOE, hence increase the number of samples, right? To get a 3% MOE for a population of 10K undergrads you’d need 965 survey samples.
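If you want to sanity-check those numbers, the textbook proportion formula with a finite population correction reproduces them; a quick sketch (p = 0.5 is the worst case):

```python
from math import ceil

def sample_size(N: int, z: float, moe: float, p: float = 0.5) -> int:
    """Required survey sample size for a proportion, with finite
    population correction. p = 0.5 maximizes the required n."""
    n0 = z ** 2 * p * (1 - p) / moe ** 2        # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / N))        # correct for population N

print(sample_size(2_000,   1.96,  0.05))   # 323  (95% CL, 5% MOE)
print(sample_size(10_000,  1.96,  0.03))   # 965  (95% CL, 3% MOE)
print(sample_size(250_000, 1.645, 0.04))   # 423  (90% CL, 4% MOE)
```

The last line squares with the “420 or more” figure from a few posts up.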

You’d have to figure out a way to elicit thoughtful, honest responses, and ask the questions in such a way as to avoid emotion-based responses – make them think a bit before responding.

Yes, you’ll get a few squeaky wheels and cheerleaders, so I like the idea of discarding some of the best and worst ratings – maybe 5% of each?
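Discarding a fixed share of the best and worst ratings is just a trimmed mean. A minimal sketch (the 5% figure comes from the post above; the sample ratings are invented):

```python
def trimmed_mean(ratings, trim_frac=0.05):
    """Drop the lowest and highest trim_frac of ratings before
    averaging, to blunt the squeaky wheels and the cheerleaders."""
    xs = sorted(ratings)
    k = int(len(xs) * trim_frac)          # count to drop from each end
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

# 18 middling ratings plus one zealot (1) and one cheerleader (5):
ratings = [3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 3, 4, 4, 1, 5]
print(trimmed_mean(ratings))   # with 20 ratings, drops 1 from each end
```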

Sample size calculators account for variance, not bias. If a survey is not conducted properly, then increasing the sample size will still give you the wrong answer, just with more apparent precision.

A survey of students is going to be problematic because student populations are different at each college, and students don’t have any basis for comparing their college to another college. Not to mention that students are often making their judgments on trivial issues, rather than those that really matter. And those are just the problems if we took a random sampling. Web sites like Niche have the additional problem of soliciting responses from students who often have some agenda to fulfill by making a comment.

Look at all of the incorrect poll results before the November 2016 election – the sample sizes were sufficiently large, and a lot of careful thought went into planning these professional surveys, but there was still an inherent bias in how they were conducted.

You’re not going to take emotion out of these answers; in fact, that’s kind of what you want. You cannot, imo, analytically fill out a survey on anything user-experience related, and for sure college classes are user-dominant.

The best you can do is provide guidelines: an excellent professor is one who explains concepts clearly, is engaging, is available for office hours twice a week, doesn’t have TAs do some of the lectures, helps you get research or internships, etc.

I’m thinking that, to get a decent idea of students’ thoughts on overall undergraduate academic quality, we (they…) could survey them about things like rigor, class availability, quality of class discussion, access to and interaction with profs, access to research opportunities, quality of academic support, and overall satisfaction with the quality of the faculty/teaching/academics.

So I did “contact” USNews to recommend replacing the counselor academic rep survey with this student survey, and to make sure the university/college officials understand their job with regard to properly rating other schools’ undergraduate academic quality: don’t base it on the grad school, don’t rate schools you have no idea about, etc.

I’m not entertaining fantasies of their ranking formula manager(s) ever seeing the ideas, but at least I tried. :)

@prezbucky, are you suggesting US News may not be entirely caring, engaged, and approachable?

@tk21769 No, that’s an alternative fact here :P