Have US News Rankings Improved Higher Education Through Competition?

<p>How many is many? How many such classes would you say a typical student will have to take in his four years at UC-Berkeley?</p>

<p>Fewer than 3? More than 10?</p>

<p>Starbright, in your experience, who fills out the PA portion of the USNWR questionnaire? That seems to be the most contentious component of the ranking. Is the PA another tool of the marketing engine or something legit in your eyes?</p>

<p>hawkette:</p>

<p>fwiw: Dartmouth has a couple of professors who speak unintelligible English. :D</p>

<p>starbright-
I bet college faculty care about the US News rankings when they are searching for a college for their own children.</p>

<p>The most important thing to know about a college: Does it have your intended major?
The second most important thing: How high are the average SAT scores?</p>

<p>Do explain. I’m brimming over with anticipation.</p>

<p>SAT scores are correlated with almost everything that is good about a college…</p>

<p>I’ll do some analysis and get back to you.</p>

<p>EDIT
Wow. I may not always agree with collegehelp, but in this case he(?) is dead-on. SAT Math 75th-percentile scores have roughly a 0.8 correlation with retention rate, graduation rate, and research $/FTE among a sample of schools with SAT Math 75th percentile > 640 and at least 20 chemistry bachelor’s grads.</p>
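<p>For what it’s worth, here is roughly how one could reproduce that check. This is a minimal sketch, assuming a CSV export of IPEDS data; the file name and every column name below are placeholders I made up, not actual IPEDS field names:</p>

```python
import pandas as pd

# Hypothetical CSV export of IPEDS data; file and column names are made up.
df = pd.read_csv("ipeds_sample.csv")

# Restrict to the sample described above: SAT Math 75th percentile > 640
# and at least 20 chemistry bachelor's graduates.
sample = df[(df["satm_75"] > 640) & (df["chem_bachelors"] >= 20)]

# Pearson correlation of SAT Math 75th percentile with each outcome.
for outcome in ["retention_rate", "grad_rate", "research_per_fte"]:
    r = sample["satm_75"].corr(sample[outcome])
    print(f"SATM 75th pct vs {outcome}: r = {r:.2f}")
```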


<p>Every summer, the college presidents of the Ancient Eight get together in the Hamptons and come up with ways to keep the non-ivies down. Then every winter, the presidents of HYP get together in Aspen to come up with ways to keep the non-HYP ivies down. <em>rolls eyes</em></p>

<p>If the PA system is “DESIGNED” to “help the rich get richer,” who, pray tell, is the “intelligent designer”?</p>

<p>noimagination-
How did you come up with that so quickly? Did you use the IPEDS database or some other source?</p>

<p>I used IPEDS figures and a spreadsheet.</p>

<p>hawkette,
The use of TAs at Berkeley is exactly the same as at a school like Dartmouth:
<a href="http://talk.collegeconfidential.com/1062255044-post11.html">http://talk.collegeconfidential.com/1062255044-post11.html</a>
<a href="http://talk.collegeconfidential.com/1062255443-post12.html">http://talk.collegeconfidential.com/1062255443-post12.html</a></p>

<p>(i.e., TAs lead discussions and labs, not the lectures)</p>

<p>In fact, Berkeley does better than what slipper claims happens at Dartmouth: at Berkeley, profs teach Math 1A, not TAs.</p>

<p>I don’t understand why you think TAs at a school like Berkeley are unintelligible. In my experience, I didn’t have any problems with my TAs. These students are top apprentices in their fields at one of the best graduate schools in the world. You yourself have claimed you learn best from your peers… well, a grad student is a closer peer to an undergrad than a crusty old prof.</p>


<p>What does a Peer Assessment score actually measure? What objective basis does a peer have for the number s/he submits? The PA scores likely reflect, to some extent, the volume and quality of research production (journal articles, etc.) in the peer’s own field. I’m afraid they also are likely to reflect a certain “halo effect” of the overall reputation, based on hearsay or previous rankings. Probably the best objective evidence a peer has for the overall quality of instruction (not scholarship or reputation) is the quality of recent graduate students arriving from the target school. However, in most cases, these students will be very few in number and not a representative sample.</p>

<p>We simply do not have any reliable, universal metric for the quality of undergraduate instruction. And we’d get a more objective, up-to-date measurement of scholarly research output by replacing subjective PA scores with scores computed directly from the measured volume of publications and citations. That number could be combined with class size measurements to begin to convey (imperfectly) the quality of instruction, to the extent that good instruction may tend to come along with highly productive scholars in a small class environment.</p>
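<p>To sketch what that substitute could look like: a toy composite that standardizes publication, citation, and class-size figures and averages them. Everything here (file name, column names, and the equal weights) is an illustrative assumption on my part, not anyone’s actual methodology:</p>

```python
import pandas as pd

# Hypothetical input; all file and column names are illustrative assumptions.
df = pd.read_csv("school_metrics.csv")

def zscore(s: pd.Series) -> pd.Series:
    """Standardize a column so schools are compared on a common scale."""
    return (s - s.mean()) / s.std()

# Bibliometric score: publications and citations per faculty member,
# standardized and averaged (equal weights chosen only for illustration).
df["research_score"] = 0.5 * zscore(df["pubs_per_faculty"]) + \
                       0.5 * zscore(df["citations_per_faculty"])

# Class size as a rough, inverted proxy for the instructional environment.
df["instruction_proxy"] = -zscore(df["avg_class_size"])

# Combine the two, as the paragraph above suggests.
df["composite"] = 0.5 * df["research_score"] + 0.5 * df["instruction_proxy"]
print(df.sort_values("composite", ascending=False)[["school", "composite"]])
```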

<p>Inside Higher Ed had a very informative article on March 15 about the increasing impact of rankings on universities (<a href="http://www.insidehighered.com/views/2010/03/15/baty">Views: Ranking Confession - Inside Higher Ed</a>). The author said that because of that impact, there is an obligation to make rankings as rigorous and balanced as possible. The area of most concern was the PA score and how subjective and biased it is. Times Higher Ed is reworking its ranking methodology to mitigate the influence of inferior PA data, and he summarizes the new approach in his article. (They rank international universities.) USNWR should take note and rework its American college ranking system.</p>

<p>^^^^Yet UCB has a very high PA score, and it is ranked way too low by the Times. So I assume eliminating that criterion would improve Cal’s ranking?</p>

<p>“We simply do not have any reliable, universal metric for the quality of undergraduate instruction.”</p>

<p>If a good way could be found to collect the data, something objective (if not reliable) is out there: </p>

<p>Compare normalized SAT/ACT freshman entrance scores to normalized GRE/LSAT/MCAT/etc. senior exit scores. The greater the difference, the more a school is doing for its students. Applicants could then target schools with appropriate entrance scores to try to maximize their improvement during the undergrad years.</p>

<p>The value of test scores is clearly limited, but they are objective and somewhat universal.</p>
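<p>To make that concrete, here is a rough sketch of the value-added calculation, assuming each school reports mean entrance and exit scores. The file name, the column names, and the percentile-rank normalization are all illustrative assumptions of mine:</p>

```python
import pandas as pd

# Hypothetical input: one row per school with mean entrance and exit scores.
df = pd.read_csv("entrance_exit_scores.csv")

def to_percentile(s: pd.Series) -> pd.Series:
    """Convert raw scores to 0-100 percentile ranks across schools."""
    return s.rank(pct=True) * 100

df["entry_pct"] = to_percentile(df["mean_sat"])  # normalized entrance score
df["exit_pct"] = to_percentile(df["mean_gre"])   # normalized exit score

# The proposal: the larger the exit-minus-entry gap, the more the school is
# doing for its students. Applicants would compare schools with a similar
# entry_pct rather than across the whole range.
df["value_added"] = df["exit_pct"] - df["entry_pct"]
print(df.sort_values("value_added", ascending=False)[["school", "value_added"]])
```

<p>Percentile ranks are used here only because they put different tests on a roughly comparable 0-100 scale; any other normalization would work the same way.</p>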

<p>^That’s like giving teachers merit pay based on how much their kids improve in a year. The fact of the matter is that kids in the 99th percentile of SAT scores don’t have that much farther to go, but kids lower down the line do. That would disadvantage colleges that accept the already brilliant.</p>

<p>^ No, because schools that accept the already-brilliant are being compared with each other, not with schools with lower entrance averages. That’s why it’s important for applicants to target schools with appropriate entrance scores.</p>

<p>It’s easy enough to adjust for the fact that some schools start closer to the statistical ceiling. What’s more important about that kind of measure is that it assumes the GRE/LSAT/MCAT are indicators of having successfully done something in college. You may have a decent case for the MCAT because it tests knowledge, but most students could take the GRE and LSAT without going to college and do essentially as well, so long as they picked up a book and studied for it.</p>

<p>Analogies and pre-calculus-level math on the GRE and logic/brain teasers on the LSAT are not what I trained to do in college and are not good measures of what I did with my time. They’re not good even as general measures of what everyone does with their time.</p>

<p>If such a test existed, and it were administered in freshman year and again in senior year, then you’d have a case. Some researchers, like Kevin Carey, have been calling for this, but the data that exists right now is no good for this purpose.</p>

<p>“but the data that exists right now is no good for this purpose.”</p>

<p>Ok, if there are data showing that GRE/LSAT/etc. scores are essentially the same at college entrance and exit time, I’ll agree.</p>

<p>Yes, we should in principle be able to measure the effects of a relatively good instructional environment, but I’m afraid no pair of tests exists (one for entry, one for exit) that reflects the most important things students are learning. So I agree with modestmelody here. And I think that designing a generalized entry/exit knowledge testing system would be next to impossible, because we’d have so much trouble agreeing about what knowledge is most worth testing.</p>