I don’t know why the authors found it surprising that liberal arts colleges show up high on their lists. The liberal arts have always included mathematics and the sciences, and liberal arts colleges have always been slanted towards academia.
@jademaster If someone had asked me to guess which schools would be on this list, I would of course have picked some of the major private research universities, but I would have completely overlooked the impact that LACs have had on science. I would have picked some of the larger state schools over them. So I am glad the authors actually showed us, with data, the impact of the LACs. That list certainly taught me a thing or two, and for that I am thankful to the authors.
The methodology is based on the number of awards divided by the size of the student body. This introduces a significant bias that favors schools with smaller student bodies and disadvantages large research universities.
An alternative methodology would be to look at the total amount of R&D spending on science.
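To make the per-capita point concrete, here is a minimal sketch of that normalization. The school names and figures are purely hypothetical, invented only to show how dividing award counts by enrollment reorders a list; this is not data from the study.

```python
# Hypothetical illustration of per-capita normalization: awards divided by
# undergraduate enrollment. All figures are invented for illustration.
schools = {
    "Small College A":    {"awards": 10, "undergrads": 2_000},
    "Large University B": {"awards": 40, "undergrads": 40_000},
}

for name, s in schools.items():
    per_capita = s["awards"] / s["undergrads"]
    print(f"{name}: {s['awards']} awards, {per_capita:.4f} per undergraduate")

# Large University B produces more awards in absolute terms (40 vs. 10),
# but Small College A ranks higher once the counts are normalized
# (0.0050 vs. 0.0010 awards per undergraduate).
```

The same arithmetic is why a per-capita ranking can look very different from a ranking by total output or total R&D spending.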
@Zinhead I don’t think they’re interested in which universities do ground-breaking research, but in where exceptional scientists studied as undergraduates. Otherwise I would expect to see the UCs and Michigan ranked highly. Instead, they’re working off the assumption that undergraduate education influences the formation of scientists, and who knows, it might. Or maybe exceptionally bright would-be scientists gravitate towards a certain set of schools. I personally think it’s both.
It’s funny how research expenditure doesn’t predict how well Amherst, Swarthmore, and Haverford alumni do in science. Nobel prizes may be a crapshoot and a meaningless measurement in the end, but membership in the NAS is quite commendable.
@jademaster, in absolute numbers, the exceptional scientists who studied at good publics as undergraduates still overwhelm those from LACs.
One other problem with a per-capita comparison (besides the fact that at most top publics there’s going to be a greater diversity of abilities than at top LACs) is that publics offer a ton of preprofessional majors while LACs offer only majors in the arts and sciences. I don’t think anyone expects a journalism, nutrition, or education major to aim for, much less win, a science Nobel, yet in studies like this they still contribute to the denominator. And those numbers don’t tell you how someone with a given level of ability will actually fare after school.
The analysis addresses the extremely relevant aspect of over- versus underperformance. From the authors: “For comparison, very good research universities . . . are outperformed [by Caltech] by 600 to 900 times.”
@merc81, you may draw a conclusion on over/underperformance only if the overall quality of the entering students at ASU and Caltech is the same, but we know that it isn’t.
So what conclusions could you draw on over/underperformance if only a handful of students at ASU could even get into Caltech? If you are in that handful, this study doesn’t tell you at which place you would be more likely to succeed.
Re #8: That might be true, in that the analysis does not provide an answer to that question. However, institutions themselves can over- or underperform, and that is reflected in the study.
It’s easier to address the LAC vs. research university question of over- versus underperformance with a study that investigated the topic directly: “How the instructional and learning environments of liberal arts colleges [vs. research and regional universities] enhance cognitive development” (Pascarella, Wong, Trolian, and Blaich). Quite literally, liberal arts colleges make their students smarter.
Yet another attempt at making the subjective objective. And not surprisingly, folks who like the results like the attempt and those who don’t attack it.
“folks who like the results like the attempt and those who don’t attack it” (#13)
Many on the forum are choosing colleges for themselves or helping family members choose. Whenever quality studies are available, they should be consulted for what they have to offer.
The study referenced in post 11 seems particularly relevant, in that it relates to the very purpose of higher education.
One, my post was not directed specifically to you but rather to the thread in general.

Two, you are still trying to take something subjective (which is the “best” college) and make it appear objective by creating a numbered list based on some type of largely arbitrary calculation. Create different methods of calculation and you likely get a differently ordered list. So which one is better? In my experience, the answer to that question will depend on whether you agree with the second list over the first. If you agree with the second list, you will tend to find the second calculation method to be better. In the end, all we really know is that the lists are different.

Three, you have added another subjective determination: that the very purpose of higher education is to make students smarter (or do you think that determination is objective as well?).
The authors’ rationale is the same rationale we use when we compare GDP per capita instead of just overall GDP, which would unfairly advantage large countries like India and China over smaller but more prosperous countries like the UK and Japan, which have more productive economies per capita.
Also the authors are very clear on what they are trying to measure. They are not saying this is a list of best universities in science or some other thing. They are measuring impact on science. So yes, Caltech has had a bigger impact per capita on science than Eastern Alabama State University according to the authors. If that metric matters to parents and applicants, then this list is informative.
“So what conclusions could you draw on over/underperformance if only a handful of students at ASU could even get into Caltech? If you are in that handful, this study doesn’t tell you at which place you would be more likely to succeed.”
How quickly people jump to misinterpret: mistaking cause and effect, not understanding psychometrics/stats. What can you conclude? Given the current admissions and educational policies, and correcting for school size, more students who graduated from the top-rated schools won these prizes in science, or belonged to certain science-related societies whose memberships supposedly reflect scientific achievement, than students from lower-rated schools. Now, if people start changing their behavior based on this list, and those who would ordinarily not attend one of the schools on the list decide that they should because of it, they are not necessarily going to have a greater shot at being successful in science. If a substantial group of students changes their behavior, the list will also change.
Early on, college admissions tests were used as a proxy for IQ tests. Correlations between the admissions tests and IQ scores suggested that colleges could use the tests to select the brightest students. That is because the admissions tests had concurrent validity with IQ tests and (to some extent) predictive validity for grades. But then people started to game the admissions tests by studying, prepping, and getting tutors. Those actions reduced the validity of the tests. They are no longer valid indicators of IQ or much of anything else.
That is subjective too. How do you quantify impact on science? Different authors likely would have different lists. Is one necessarily and objectively better than the others or are they just different? And who gets to decide?