US News College Rankings 2012

<p>Dartmouth tops Undergraduate Teaching for the third year in a row :)</p>

<p>US News “Counselor” rankings flop with counselors.</p>

<p><a href="http://chronicle.com/blogs/headcount/u-s-news-keeps-courting-high-school-counselors/28678">Head Count: ‘U.S. News’ Keeps Courting High-School Counselors - The Chronicle of Higher Education</a></p>

<p>If Morse were to spend a bit of time on the CC forums, where students and parents discuss the contributions of guidance counselors, he might understand how pathetic this latest change is. What’s next? Polling the gym teachers and bus drivers?</p>

<p>While there are a few guidance counselors who have their finger on the pulse of the admissions world, their number must be incredibly small. And, of course, it is very doubtful that more than a few have knowledge that extends beyond the local and regional colleges.</p>

<p>Asking those people to rank a few hundred universities and colleges is simply asinine. If the knowledge of provosts and deans can be disputed, what is there to say about high school counselors, who already represent the weakest link in the entire process? Is there anyone who doubts that the source of the GC’s knowledge is last year’s USNews magazine, or one from years past? For that matter, I doubt that more than 5 percent of those GCs could successfully name the top 30 universities and top 20 LACs in the country and place them in the correct state.</p>

<p>What a joke!</p>

<p>xiggi: You are 100% correct on this issue. I am in total agreement.</p>

<p>I think this also reflects how flawed the general PA criterion is. If you think the participation rate for counselors is low, check out the college response rate. Even more troubling, it usually isn’t the president, provost, or whoever filling it out, but someone like a secretary.</p>

<p>The reason is alluded to in the article: time. 250 colleges is a lot. How can someone find the time to rank them all? If it’s a big time commitment for a high school counselor, then surely it would be the same for a provost or president (if not, I would question their commitment to their job).</p>

<p>I am simply saying that the whole use of it is flawed. They get away with it by claiming it’s “statistically significant.” It’s true that, going purely by the number of respondents, it technically is. Yet the biggest problem, which makes it anything but significant, is selection bias. In short, the results aren’t valid and would never pass any statistical or scientific test.</p>
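As a sketch of that claim (hypothetical numbers, not anything from USNWR): a self-selected sample can produce a large t statistic, i.e. pass a conventional significance test, while still estimating the wrong quantity.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical "true" peer-assessment opinions for one school, on the
# 1-5 scale USNWR uses: the population mean is 3.0 by construction.
population = [random.gauss(3.0, 0.8) for _ in range(2000)]

# Self-selected respondents: suppose raters with higher opinions are
# more likely to answer, skewing the sample upward (made-up model).
sample = [x for x in population if x > 2.8 and random.random() < 0.5]

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)

# One-sample t statistic against the true population mean of 3.0.
t = (mean - 3.0) / (sd / math.sqrt(n))

print(n, round(mean, 2), round(t, 1))
# n is comfortably over 30 and t is large ("significant"), yet the
# sample mean overstates the true population mean.
```

The point of the sketch: a big n makes the test sensitive, but it cannot repair a sample that was skewed before the test was ever run.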

<p>@buzzers: What do you mean they claim it’s statistically significant? Is there a link to what you are referring to? People throw around the term ‘statistically significant’ from stats, but it doesn’t make sense without context. Do you mean there’s a significant correlation between rankings/other indicators and PA scores?</p>

<p>Selection would threaten the validity of the scores, but it might also weed out people who know nothing about a lot of schools and would otherwise guess at/malign the quality of some schools based on anecdotal evidence.</p>

<p>Filling out 250 bubbles would probably be a lot in one sitting, but I would think they could just break it up, or skip the majority of schools they don’t know.</p>

<p>Lol, someone told me once that if a kid is brilliant enough at finance to go to Stern, they shouldn’t go.</p>

<p>Buzzers, the college response rate was 43%. The HS counselors’ response rate was 13%.</p>

<p>College officials have more of a vested interest. That would explain the higher response rate.</p>

<p>I don’t think you need to completely fill in a bubble to vote. Most public surveys just use check marks or X’s.</p>

<p>It would take me less than 15 minutes to complete the survey for research universities. Most would be marked “Don’t know.”</p>

<p>Another thought on the surveys…counselors are probably asked to complete both the research university survey and the liberal arts college survey, whereas research universities only rate research universities and LACs only rate LACs.</p>

<p>For those of you looking for the printed copy of the College issue, it’s on sale at Costco, 30% off retail.</p>


<p>@JohnBlack and UCB:</p>

<p>It’s statistically significant with regard to the law of large numbers (i.e., >30 participants).</p>

<p>But it’s NOT statistically significant once you account for bias (that is, the results are questionable). So if you ran a t-test and looked at the p-value (those of you who have taken stats will know what I’m talking about), the numbers/conclusions would be skewed.</p>

<p>The bias relates more to econometrics but also to regression analysis. Going through the math is pointless so I will explain it in simple terms:</p>

<p>When doing a study, or even a survey, it has to be randomized. That is, you can’t just actively select schools and then accept the responses of whoever decided to answer.</p>

<p>UCB’s point proves this–yes, more academics responded, but there are two issues:</p>

<p>First, are they really academics? Plenty of evidence suggests that secretaries and the like respond to these surveys, not the academics.</p>

<p>And second, it raises the question of who DOES respond. Think about it: this is a huge, long survey. With only 13% of counselors and 43% of “academics” responding, clearly the majority of individuals don’t have time for it and/or don’t know how to respond correctly.</p>

<p>Which means those that DO respond have a different mindset (e.g. they are more biased in their opinions, they may have passed it on to someone else, they could have filled it out quickly, they could have simply ranked competitors lower and put IDK on the rest, they could have copied rankings from last year, they could have been predisposed toward graduate programs, etc.).</p>

<p>This creates something in statistics known as “selection bias” (and its subset, sampling bias), which is a huge problem when drawing conclusions about the broader population (you could argue other kinds of bias are at play too: publication bias, confirmation bias, exclusion bias, etc., but the same flawed statistics apply). This is especially true given that this data is used on a broader scale to rank colleges.</p>
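The selection-bias argument can be illustrated with a toy simulation (the numbers and the response model are entirely made up): compare a self-selected sample against a simple random sample of the same size drawn from the same population.

```python
import random
import statistics

random.seed(1)

# Hypothetical population of 1,769 counselors, each with a latent
# 1-5 rating for some school; the true average is about 3.0.
population = [min(5.0, max(1.0, random.gauss(3.0, 0.9)))
              for _ in range(1769)]

# Self-selection: counselors with more positive opinions are more
# likely to return the survey (a crude model of response bias).
self_selected = [x for x in population
                 if random.random() < (x / 5.0) ** 2]

# A simple random sample of the same size, drawn without that bias.
random_sample = random.sample(population, len(self_selected))

true_mean = statistics.mean(population)
biased_mean = statistics.mean(self_selected)
random_mean = statistics.mean(random_sample)

print(round(true_mean, 2), round(biased_mean, 2), round(random_mean, 2))
# The self-selected mean drifts well above the true mean, while the
# random sample of the same size stays close to it.
```

Under this (assumed) response model, the self-selected estimate is off no matter how many people respond; the equally sized random sample is not.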

<p>The results create distortions and call into question the validity of such methods. This applies to BOTH the high school and the academic survey. The number who respond doesn’t matter, because the process is inherently biased/statistically flawed.</p>

<p>So the fact that colleges have a “vested interest,” or that certain groups/outliers are “excluded,” is itself a type of bias. Indeed, with people also putting “IDK” on a lot of entries, many schools are potentially being rated by very few individuals (plenty of people rate schools like Harvard and the really low-tier schools, but everything in between is skewed: a different number of people are responding for different colleges).</p>

<p>This is why I have a problem with the PA score and the high school counselor ratings. Both fall into this trap (of course, the people at USNWR have no need to address this because, in general, the average person is really bad at statistics).</p>

<p>Buzzers: An acceptable sample or a large n doesn’t mean something is statistically significant. That conclusion is test specific (a t-test, correlation, regression, etc.). Randomization only helps control for factors affecting the dependent variable (the PA results in this example), but that method would limit the sample size in survey methodology. In this case randomization would do more harm than good. You want to gather as much data as possible.</p>

<p>Are there flaws with the survey methods of the PA/HS counselor surveys? Yes. Do these flaws jeopardize the external validity of the study? Also yes.</p>

<p>Still, they are informative, and a 40%+ response rate is impressive.</p>

<p>What evidence is there that secretaries fill out this survey?</p>

<p>@JohnBlack The Central Limit Theorem is a prerequisite for something to be statistically significant. Sorry, I should have clarified (the CLT as it relates to the law of large numbers).</p>

<p>Randomization would never do more harm to a test than good. I don’t know any statistician who would agree with that. It’s pretty easy to randomize a survey like this, especially on the high school counselor side, by randomizing who you deliver it to.</p>

<p>The point is that data doesn’t matter if it’s inherently biased. Randomization solves the bias and can still generate over 30 respondents (give me a break on randomization leading to fewer participants: you have roughly 1,769 and 300 people responding, respectively; do you seriously think randomization would lower this below 30? Heck, if that’s the case, then the sample size is already too small).</p>

<p>My point is that you can find 40% impressive, but keep in mind that the people responding already skew the results. It’s akin to extrapolating the judgments and opinions of people on College Confidential to the entire nation. It’s silly and inherently biased. (See Urban Dictionary for hilarious and accurate definitions of College Confidential.)</p>

<p>@UCB:</p>

<p>Here you go:</p>

<p><a href="http://www.insidehighered.com/news/2009/08/19/rankings">News: Reputation Without Rigor - Inside Higher Ed</a></p>

<p>What’s sad is that the actual chancellors, presidents, and so on who do respond also have their own biases, which proves my point above:</p>

<p>To quote the article:</p>

<p>"The presidents and/or provosts of 15 of the 18 universities rated their institutions “distinguished,” from Berkeley (no. 21 on last year’s list) to the University of Missouri at Columbia (No. 96).</p>

<p>At Berkeley in 2008, the chancellor rated other “top” publics – including the University of Virginia, the University of Michigan at Ann Arbor and the University of North Carolina at Chapel Hill – “strong.” However, he rated all of the University of California campuses “distinguished,” with the exceptions of Santa Cruz and Riverside, which were also “strong.” (Merced was not on the list.)"</p>

<p>I’m assuming you mean creating random exclusion criteria? Like only contacting a fourth or a half of the schools? How would that help? With large data sets you’d move toward the same trends/proportions that would be in the complete set. Only now you’d have a smaller n to subdivide and exclude for missing data, which gives your tests less power. So yes, if that is what you mean, randomization would do more harm than good.</p>
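The convergence claim here can be checked with a quick sketch (all ratings are hypothetical): randomly excluding half of a large set barely moves the estimate; it mostly just shrinks n.

```python
import random
import statistics

random.seed(2)

# Hypothetical "full population" of ratings: every official who could
# be surveyed, each giving a score on the 1-5 scale (made-up data).
full = [random.choice([1, 2, 3, 4, 5]) for _ in range(2000)]

# Randomly exclude half of them, as the proposed criteria would.
half = random.sample(full, len(full) // 2)

mean_full = statistics.mean(full)
mean_half = statistics.mean(half)

print(round(mean_full, 2), round(mean_half, 2))
# With a set this large, the half-sample mean lands very close to the
# full-set mean: random exclusion mostly costs you n (and statistical
# power), not accuracy.
```

This is the distinction being argued over: random exclusion of *respondents* does not remove response bias within whoever answers; it only trades sample size for nothing when you could have contacted everyone.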

<p>You might be referring to national surveys that ‘randomly’ call up/contact people? They have to do that since they can’t reach the entire population (the US population). The USNews, however, can reach their entire population (all colleges and universities). A random sampling would not help over their current method.</p>

<p>Every methodology has its flaws; nothing is perfect. Again, “Selection would threaten the validity of the scores, but it might also weed out people who know nothing about a lot of schools and would otherwise guess at/malign the quality of some schools based on anecdotal evidence.”</p>

<p>If you don’t think the scores are valid enough, don’t follow them.</p>

<p>I am not saying to do either. I am not saying to survey the entire population, but rather to draw a random sample of colleges and universities (that is, a sample can represent the population if it is random).</p>

<p>I am not saying that that’s easy to do. It isn’t. But the current methodology has clear bias. The posted article has good examples of those biases.</p>

<p>Very confused about what warrants a 6 point difference between Princeton and Columbia. I would say C has passed both Y and P to be 2nd.</p>

<p>Well, there is the problem that Princeton is more than twice as rich as Columbia is even though it’s a third of the size.</p>

<p>Quote:</p>

<p>"First Year Experiences</p>

<p>Orientation can go only so far in making freshmen feel connected. Many schools, such as those below, now build into the curriculum first year seminars or other programs that bring small groups of students together with faculty or staff on a regular basis.</p>

<p>In spring 2011 we invited college presidents, chief academic officers, deans of students, and deans of admissions from more than 1,500 schools to nominate up to 10 institutions with stellar examples of first year experiences. Colleges and universities that were mentioned most often are listed here, in alphabetical order."</p>

<p>Ranking: <a href="http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/first-year-experience-programs">First Year Experiences | Rankings | US News</a></p>

<p>Bowling Green WOOOO Bowling Green is on the list BOOYAH</p>