What is the best way to judge the quality of a college?


<p>You are assuming in this version of your argument that every Ivy kid has better stats than every kid who goes to a state school, and that’s just not true. I guarantee that every state flagship has kids who could have gone Ivy but chose not to, for whatever reason. The talent pool is a lot deeper than you seem to be implying, and some of those kids are doing exactly what you describe in another part of your argument: being successful regardless of where they go. Except I would argue there’s a limit to how far down you can go before encountering barriers to your success.</p>

<p>Let’s put it this way - going to HYP doesn’t guarantee success, but the margin for error narrows considerably the further down the list you go.</p>

<p>Torveaux, I agree it would be better to measure how far a school “moves the needle”. How exactly do we do that?</p>

<p>Actually, outside of MBB consulting and maybe Wall Street (though Wall Street is democratizing), you have to go pretty far down the list before your margin of error narrows. CA, TX, MI, VA, NC, IL, IN, GA, and WI all contain publics, or divisions within those publics, that most people consider stellar, and those states hold over half the population of this country. Add in FL, MD, MA, MN, WA, IA, PA, and NM, which contain publics that are very good in at least one field or aspect, and you’re talking about an even larger share of the country.</p>

<p>Actually, @MrMom62, what type of school did you attend? Because as an alum of an elite private who has known people who’ve gone to HYPSM as well as good publics, it’s been my experience that the individual matters far more than undergrad schooling. Especially since, if you go to a top-15 b-school or a T14 law school, your credentials “reset” to that level. Plus, there are ways to get that Ivy credential later. Getting into certain grad schools at Columbia, Dartmouth, or even Harvard (as well as many other private elites) just isn’t that hard, and you can participate in MBB or Wall Street recruiting then. In fact, that’s what a sales guy who used to cover us did. He went to some forgettable school for undergrad, then went to Teachers College at Columbia and snagged a spot at Goldman.</p>

<p>Sorry, I should say the individual and circumstance/luck matter more. But it’s not as if there is sure to be more luck at an elite private. I mentioned it before, but a bunch of guys from my HS went to our public state flagship to study CS, made connections, then went out to Silicon Valley and got rich. Folks from their year who chose to go to elite privates to study CS were not part of that group and thus did not become fabulously wealthy.</p>


<p>That’s like assessing the quality of a toaster by the quality of bread going into it. </p>

<p>The quality of a school and the quality of the students going into the school should be assessed independently. As ucbalumnus pointed out, some schools may be less selective but still offer a high-quality, rigorous curriculum.</p>


<p>Actually, just by examining what is in customers’ shopping carts, analysts may be able to make fairly good predictions about what else they own and what else they are likely to buy. Good, expensive bread may very well be likelier to go into good, expensive toasters. That would be a correlation, not a causal relationship. </p>

<p>I agree, though, that it would be desirable to assess the quality of the school independently and not rely on either a single crude indicator like SAT scores or even composite rankings. How exactly should we do that? Consider a high-achieving, first-gen kid who knows next to nothing about colleges. How does he get started on the task of assessing thousands of colleges for academic quality, without using either college rankings or individual metrics like SAT scores? If SAT scores aren’t a good metric, what’s a better one? </p>

<p>The best alternative I know is to use an online college matcher to filter based on location, cost, and other personal “fit” factors. Then, for the resulting schools (which still may not be a very short list), start the laborious process of running the NPCs, examining the CDS information, visiting, talking to students and professors, asking questions on College Confidential, etc. Still, you probably want to include your GPA and test scores in that initial filter, unless: (a) you believe there are “hidden gems” out there with much lower average stats than yours, (b) you don’t want to exclude them from either your initial filter or your follow-on research, and (c) you have a reliable quality-assessment approach for identifying those excellent (but less selective) schools.</p>
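<p>To make that initial filter concrete, here is a minimal sketch in Python. The school list, the field names, and the cutoffs are all hypothetical placeholders rather than data from any real college matcher; the point is only that a first pass on location, cost, and stats can shrink the list before the labor-intensive research begins.</p>

```python
# Minimal sketch of a first-pass college filter.
# All school data below is invented for illustration only.

schools = [
    {"name": "Example State U",   "state": "CA", "net_cost": 24000, "avg_sat": 1250},
    {"name": "Hypothetical Tech", "state": "TX", "net_cost": 32000, "avg_sat": 1400},
    {"name": "Sample College",    "state": "OH", "net_cost": 45000, "avg_sat": 1150},
]

def first_pass(schools, states, max_cost, my_sat, sat_window=150):
    """Keep schools in acceptable states, under the cost cap, and with an
    average SAT within +/- sat_window of the applicant's score."""
    return [
        s for s in schools
        if s["state"] in states
        and s["net_cost"] <= max_cost
        and abs(s["avg_sat"] - my_sat) <= sat_window
    ]

shortlist = first_pass(schools, states={"CA", "TX"}, max_cost=35000, my_sat=1350)
for s in shortlist:
    print(s["name"])
```

<p>Widening sat_window (or dropping the stats filter entirely) is how you would keep possible “hidden gems” in play for the follow-on research.</p>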

<p>You could look at outputs instead of inputs. Yes, there is high correlation, as the inputs have a big effect on outputs, but it is still more informative.</p>

<p>It’s a lot easier if you know what industries you might be targeting (or can put probabilities on whether you will pursue them). For pre-med, I’d avoid the top publics and privates that curve their science classes, as well as directional schools, but it mostly comes down to your MCAT, GPA, and total package. Also, Harvard and Rice have med school acceptance rates greater than 90%, but keep in mind that many schools “game” the numbers to keep their official med school acceptance rates around 80%. For pre-law, LSAT and GPA are mostly what matter, so go where you can get a good education at a good price (like a LAC). For Wall Street, look at Street targets and semi-targets. The same goes for MBB consulting and its targets. For tech startups, you can look at which schools produce the most entrepreneurs: Stanford does best, then Cal, Harvard, MIT, Cornell, Illinois, and Michigan (maybe a few others). Rankings in a field may be useful as well. For instance, the top five CS schools (Stanford, Cal, MIT, CMU, and Illinois) all have pipelines to the top software companies, as do a bunch of others. For engineering, the best engineering schools would certainly get respect, and so on.</p>

<p>Outside those considerations, which college to attend should come down to fit, geography, and price, and in this country success in life is mostly due to you, not your school.</p>

<p>Great bread in a crappy toaster still makes for a decent piece of toast. The best toaster in the world can’t help a bad slice of bread.</p>

<p>@Chardo:</p>

<p>Except that you are the bread, so you don’t get to choose that. Only the toaster.</p>

<p>Also, something like CC makes rankings much less necessary than 20 years ago. You can get a wealth of information about schools that fit your goals.</p>

<p>This is a really interesting question. As a family of academics with one kid in college and another starting to look, we judge the quality of an institution in terms of its academic offerings and academic opportunities. So – first and foremost, the quality of the faculty – where are their PhDs from? (As academics, and with friends who are academics in different disciplines, we are fortunate to have a rough sense of which programs are strong in the core disciplines – they are not always the US News top 20 schools; for instance, Rutgers and Pitt in Philosophy, or Univ Cincinnati in Classical Archaeology.) Among those faculty, are they active in their disciplines? Publishing in top journals and presenting at conferences? </p>

<p>Then, what are the opportunities for student engagement – undergrad research, small upper-level classes, etc.? Selectivity and admitted-student test scores are not really on our list of priorities. As neither we nor our kids are STEM types, we aren’t concerned about acclaimed faculty who may be running labs rather than lecturing – top-notch History, Poli Sci, and Philosophy profs are generally in the classroom during the year unless they are on leave. They may teach only upper-level courses and skip the intro level, but that’s not something we worry about as long as students are able to take classes from them at the upper level. </p>

<p>Another consideration that matters to us is administrative effectiveness. We lost interest in our in-state flagship when we encountered repeated examples of ineffective communication and bumbling bureaucracy, not just in admissions but in substantive programs. Other flagships were not that way, and now that we have a student enrolled at another public flagship, that attention to detail and efficiency is apparent in the way the entire institution runs. All things being equal, I would rather deal with an institution that is well organized and works. Now, that is NOT a way to judge the overall quality of an institution, but it certainly can matter to parents when dealing with the Bursar, etc. </p>


<p>A student doing an initial college search would use his/her GPA/rank/scores to compare against schools’ admission selectivity and find out which schools are realistic in terms of admission. There is no sense in spending a lot of time researching schools where there is no chance of admission. It may seem weird to some posters here who are fixated on elite, highly selective colleges and disdain all other schools, but there are students whose first choice is a safety*, so they need to put in just one application and be done.</p>

<ul>
<li>A real safety with 100% chance of admission and 100% chance of affordability, not a near-safety with subjective criteria or holistic admission that some posters refer to when they write “safety”.</li>
</ul>

<p>This. An old saying in the football coaching field: “A great coach can beat you with his players, then switch players with you and beat you with your own players.” College is not unlike that. It’s not how great the players are (coming in), but how great the coach is… that is, how much better the players are after he has coached them.</p>

<p>The greatest changes that occur in students during college come from DIRECTED exploration. Anybody can be self-taught. Many large universities with esteemed faculty are heavily weighted, at the undergraduate level, toward unguided self-learning. At which universities are faculty the most hands-on? That is expensive, but it is also the most effective way to facilitate real learning.</p>


<p>Really? If you are interested in anything other than the top 200 schools or so (and even for many of those), CC is a terrible place to find that info. Many of the individual college boards have very few posts, and questions go begging for months. Even parents who are experienced in the college game find this an intimidating and often misleading place. </p>

<p>Can it be useful? Yes, but the rankings probably feed the activity here more than anything, as they make people aware of schools that would have been completely off the radar before the rankings existed. I can recall looking at Barron’s 35 years ago and wondering why most people would even want to go to most of the schools in the book; they sounded so small and obscure. Now people know why they want to go to them and are seeking more information than ever.</p>

<p>@MrMom62:</p>

<p>If you’re looking outside the top 200 or so schools, the rankings are going to be of even less help. There likely will be big differences between school #200 and school #225, but not in terms of perceived national cachet (neither will have much of that); the differences are much more about fit and local reputation.
Just for kicks, I decided to look at schools #200 and #225 on Forbes’ list.</p>

<p>#200 is Ohio Wesleyan</p>

<p>#225 is Auburn</p>

<p>Two very different schools, and if you’re deciding between the two based on some ranking, you have a fundamentally misguided view of higher education in this country.</p>


<p>Again, we’re talking about the quality of the SCHOOL, not the student. </p>

<p>And FWIW, the toaster has no control over which bread it lets in. </p>

<p>Outcome-oriented metrics are great in theory. Unfortunately, there is no Common Data Set for outcomes. The existing CDS tells us little or nothing about post-graduate outcomes. The outcome metric that I like best (despite its problems) is PhD production, for the following reasons: </p>

<ol>
<li>It measures an outcome that is directly related to the academic subjects colleges actually teach.</li>
<li>It measures an outcome that is plausibly related to the treatment effects of undergraduate education. Colleges with the highest PhD production rates are not necessarily the very most selective colleges. Colleges with the highest rates do seem to share certain features in common (such as small average class sizes and a reputation for academic rigor, which in my opinion have a plausible bearing on this outcome).</li>
<li>It measures an outcome that, in most cases, occurs within 10 years of college graduation (unlike lifetime achievements such as senior leadership positions or Nobel prizes, which presumably reflect the effects of many factors besides undergraduate academic quality).</li>
<li>It can be measured either for broad interdisciplinary areas (e.g. science and math) or, in many cases, for individual disciplines.</li>
<li>Its data sources are not sparse (unlike Nobel or Rhodes production data, for example).</li>
<li>The NSF has collected many years of data and made it searchable on the internet through its WebCASPAR database.</li>
<li>It can be expressed easily as a number that has clear, objective meaning (unlike, say, “faculty quality”, “student engagement”, or “administrative effectiveness” – all of which are admittedly very important).</li>
</ol>

<p>Problems with this metric:</p>

<ol>
<li>The NSF reports baccalaureate origins of PhDs in absolute numbers, without normalizing either for institution size or program size. Consumers/researchers who make these adjustments can’t cite authoritative sources and methods for their adjustments (a rough per-capita adjustment is sketched after this list).</li>
<li>Although the outcomes can be attributed plausibly to treatment effects, these effects cannot easily be separated from confounding selection effects of student choices. An excellent college may have low production rates simply because many of its students choose other career paths. It may enroll many students in engineering, nursing, or other fields that do not frequently lead to doctoral degrees. Socioeconomic selection effects may be confounded with academic treatment effects.</li>
<li>The percentage of PhDs earned in strong vs. weak programs cannot easily be measured and compared across colleges.</li>
<li>It isn’t an outcome that many people strongly desire for themselves or their children.</li>
</ol>
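<p>To make the first problem above concrete, here is one rough way a consumer might normalize the NSF’s absolute counts. The figures below are invented placeholders, not actual NSF or IPEDS data, and the choice of denominator (bachelor’s degrees awarded over a matching window) is itself a judgment call with no authoritative source behind it.</p>

```python
# Rough per-capita normalization of PhD-production counts (illustrative only).
# Both dictionaries below hold invented numbers; real counts would come from
# NSF baccalaureate-origin tables and institutional degree totals.

phd_counts = {          # PhDs earned by each school's alumni over some window
    "College A": 310,
    "College B": 290,
    "College C": 1150,
}

bachelors_awarded = {   # bachelor's degrees awarded over the matching window
    "College A": 4200,
    "College B": 9800,
    "College C": 61000,
}

# PhDs per bachelor's degree, sorted from highest to lowest rate
rates = {name: phd_counts[name] / bachelors_awarded[name] for name in phd_counts}

for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {rate:.1%} of graduates went on to a PhD")
```

<p>Note how the ordering of schools can flip once size is accounted for, which is exactly why the lack of an authoritative normalization matters.</p>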

<p>I just graduated from UCLA and am going to Stanford for med school next year… Based on what I’ve read in this thread, I wanted to share my experience, because I actually did partially base my college decision on some irrelevant factors.</p>

<p>In retrospect, I wish I had made my decision focused on: scholarships, the programs the school offers (proximity of research centers and medical centers for premeds), unique student organizations (i.e., only UCLA offers a mobile clinic, which is a great premed org), counseling for your intended career (i.e., UCLA premed counseling is nonexistent), and other career-related factors. I wish I had not factored in: campus aesthetics (how “good” the campus looks; when I was at UCLA on a daily basis, no matter how good the campus looked – I just didn’t care. Bottom line, the nice aesthetics didn’t improve my day-to-day UCLA experience whatsoever), food quality, dorm quality, and Greek life/party scene. Even if the food is bad (UCLA food is #1, though), the dorms are shabby (UCLA is average on this), and there’s nothing to do on Saturdays (UCLA’s party scene is great, though the same can be said about other city colleges like USC), I found from my experience and my friends’ experiences (they attended USC, Princeton, Columbia, UCSD) that most college students can adapt or find ways around these things easily. Conversely, some student organizations, programs, or career opportunities can be highly unique to certain schools and are invaluable for career development (i.e., UCLA has no premed counseling).</p>

<p>Self-reported satisfaction is not a terrible “outcome-oriented metric.” </p>

<p>^ What’s a good approach to measuring student satisfaction (self-reported or otherwise)?
Forbes uses student evaluations from RateMyProfessors.com (17.5%), freshman-to-sophomore retention rates (5%), and student evaluations from MyPlan.com (5%).</p>
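<p>As a small illustration of how such weights combine, here is a sketch of the satisfaction portion of a composite score using the percentages named above. The component scores are hypothetical, and Forbes’ actual normalization and aggregation may differ.</p>

```python
# Sketch of a weighted "student satisfaction" subscore using the weights
# mentioned above (17.5% / 5% / 5%). Component scores (0-100) are made up.

weights = {
    "rate_my_professors": 0.175,
    "retention_rate":     0.05,
    "myplan_evaluations": 0.05,
}

example_scores = {      # hypothetical normalized component scores, 0-100
    "rate_my_professors": 72,
    "retention_rate":     91,
    "myplan_evaluations": 68,
}

# Weighted contribution toward the overall ranking score; these three
# components account for 27.5% of the total.
subscore = sum(weights[k] * example_scores[k] for k in weights)
print(f"Satisfaction contribution: {subscore:.2f} points "
      f"out of a possible {sum(weights.values()) * 100:.1f}")
```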