Is Peer Assessment in USNWR Rankings based on Undergrad or Grad Reputation?

<p>xiggi-
The peer assessment is a subjective ranking, but it is far from random; it is quite valid. It IS scientific in the sense that PA is predictable from hard data. The judgments of the raters reflect real differences among schools. Reputations are earned and deserved, and the PA raters' judgments reflect that. If PA were random, it wouldn't be so closely related to other, objective factors.</p>

<p>Igellar-
The link I gave you was to illustrate that the R-squared for predicting PA can be much higher than the .71 you quoted. I identified a model with a 94% R-square for predicting PA based on US News data alone. Trust me. If I get a chance, I will post it.</p>
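<p>For anyone who wants to sanity-check a claim like this, here is a minimal sketch of fitting an ordinary least-squares model and reading off the R-squared. This is NOT collegehelp's actual model; the four factor columns and all of the numbers are synthetic placeholders for whatever US News data you can actually assemble.</p>

<pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for US News sub-factors (NOT real school data):
# e.g., graduation rate, SAT midpoint, faculty resources, financial resources.
n_schools = 120
X = rng.normal(size=(n_schools, 4))

# Synthetic peer assessment scores driven mostly by the same factors plus
# noise; this only exercises the mechanics, it proves nothing about PA.
true_weights = np.array([0.5, 0.3, 0.1, 0.1])
pa = X @ true_weights + rng.normal(scale=0.25, size=n_schools)

model = LinearRegression().fit(X, pa)
print(f"R-squared: {model.score(X, pa):.2f}")   # coefficient of determination
print("Fitted weights:", np.round(model.coef_, 2))
</code></pre>

<p>A high in-sample R-squared only demonstrates the mechanics; by itself it says nothing about whether the hard data drive the reputation or merely travel with it.</p>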

<p>Hawkette-
Emory, Rice, Vanderbilt, Georgetown, Tufts, Wake Forest, and William and Mary have closed the SAT gap on the Ivies by about 30 points in the last 40 years. It's surprising that they have not closed the gap more when you consider that the college-going population has more than doubled in the same time period. But maybe the increase in the college-going population has been more confined to lower-achieving students.</p>

<p>All these claims that the PA reflects graduate program quality are simply without basis. About 6% of the PA can be explained by NRC rankings; the rest can be explained by the data in US News. PA SHOULD be related a little to grad school quality because undergrads benefit from faculty scholarship and research opportunities.</p>

<p>
[quote]
I identified a model with a 94% R-square for predicting PA based on US News data alone. Trust me. If I get a chance, I will post it.

[/quote]
The only way to predict PA based on U.S. News data alone is to find the U.S. News scores without PA and then simply compare them to the PA. U.S. News doesn't release all the statistics it uses to compile the rankings (to my knowledge), so there's no way for you to use a different weighting system that arrives at a 94% R-squared.

[quote]
Emory, Rice, Vanderbilt, Georgetown, Tufts, Wake Forest, William and Mary have closed the SAT gap on the Ivies by about 30 points in the last 40 years. It's surprising that they have not closed the gap more when you consider that the college-going population has more than doubled in the same time period.

[/quote]
A lot of the schools you're talking about get better ACT scores than SAT scores. For example, here are some 75th percentile ACT scores:
34 Dartmouth, Princeton, Rice
33 Brown, Columbia, Emory, Georgetown, Penn, Vanderbilt
32 Cornell</p>

<p>"The peer assessment is a subjective ranking but it is far from random; it is quite valid. It IS scientific in the sense that PA is predictable from hard data."</p>

<p>There are plenty of outlying schools that get bumped or penalized by peer assessment. Come on, it's basically the same system that college football used to use to rank programs, and it was so flawed that they made an attempt to correct it using the BCS. Is the BCS perfect? No way. Is it an improvement over a 100% subjective ranking? Hell ya! Why has US News not made attempts to continually improve this metric?</p>

<p>lgellar (post #60),
you miss my point. The goal of public universities in raising tuition would not necessarily be to make the cost of college more need-sensitive, but merely to inflate their notional "expenditures per student" in a way that was neutral with respect to the actual cost to students and their families, so as to boost their US News rankings. </p>

<p>And the impact could be enormous. Say a public university has 25,000 students and it raises tuition by $10,000/student. That's an annual revenue increase of $250 million, the equivalent of the annual revenue stream you'd get from an endowment of $5 billion. You then recycle that $250 million right back into financial aid, dollar for dollar refunding to each student the $10,000 in increased tuition. The cost to each student is exactly the same as before the tuition increase, but the university gets to claim an additional "expenditure" of $250 million per year in the form of financial aid. That translates (surprise!) into an increase in its annual expenditures-per-student of $10,000, possibly enough to vault the school ahead several places in its US News ranking. And it's nothing but an accounting gimmick. The school's no better off really, except for getting a reputational bump by rising in the US News rankings. The students are no better off, but they're no worse off, either. </p>
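<p>Spelling out the arithmetic in the example above (the 5% endowment payout rate is an assumption used to back out the "equivalent endowment"; the post only states the result):</p>

<pre><code># Worked arithmetic for the tuition-recycling example above. The 5% payout
# rate used to equate annual revenue to an endowment is an assumption.
students = 25_000
tuition_increase = 10_000                 # dollars per student per year

extra_revenue = students * tuition_increase            # $250 million per year
payout_rate = 0.05
equivalent_endowment = extra_revenue / payout_rate     # $5 billion

# All of it is recycled into aid, so reported "expenditures" rise by the same
# amount while each student's net cost is unchanged.
print(f"Extra revenue: ${extra_revenue:,.0f} per year")
print(f"Endowment needed for that revenue stream: ${equivalent_endowment:,.0f}")
print(f"Reported extra expenditure per student: ${extra_revenue / students:,.0f}")
</code></pre>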

<p>Sounds silly and deceptive, no? But isn't this in effect what the privates are already doing by charging much higher nominal tuitions, then offering steep discounts in the form of financial aid that they get to count as "expenditures"? To my mind, this reveals the utter bankruptcy of the US News ranking system, which is stacked in favor of the high-priced private institutions.</p>


<p>CH, I fully understand your point about the validity of the data that stems from your GENERIC regression analysis. As such, I do not disagree with the statements above, save and except for one glaring difference: the OVERALL correlation of ALL your data points does NOT rule out that the correlation is a lot weaker for a number of outliers. And, in the context of the discussions in THIS forum, you VERY well know that the weakest correlation is represented by public schools such as Berkeley and Michigan. That is why their overall rankings are much lower than their PA indexes. By burying the specific data for some schools in a meaningless hodgepodge, you're simply trying to hide the weakness of your conclusion for EXACTLY the schools that are at the heart of the conversation. </p>

<p>In so many words, while it is partially true that "It IS scientific in the sense that PA is predictable from hard data," the science falls flat on its face when looking at the hard data for Berkeley and its stratospheric PA. By the way, this is the same argument I have presented several times about Harvey Mudd. </p>

<p>Fwiw, you could take 99 extremely well correlated data points, add one poorly correlated point, and still claim 99.5% support for your conclusion. This is exactly what you do with your well-noted outliers. In this case, nobody cares how high the correlation between the hard data and the PA is for Harvard or Princeton ... since they are obviously legitimate. We care about the ones we feel are not highly justifiable. And, for the record, there is absolutely no way that the hard data used by USNews (when removing the PA itself) supports the level of the PA, especially when tracking the undergraduate level alone. </p>

<p>Absolutely NO WAY! </p>
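<p>xiggi's 99-points-plus-one-outlier argument is easy to demonstrate with a quick simulation; the numbers below are synthetic and purely illustrative, not real school data:</p>

<pre><code>import numpy as np

rng = np.random.default_rng(1)

# 99 schools where "hard data" and PA line up almost perfectly (synthetic)...
hard_data = rng.uniform(0, 100, size=99)
pa = hard_data + rng.normal(scale=2.0, size=99)

# ...plus one outlier whose PA sits far above what its hard data predict
# (think of the Berkeley example debated above).
hard_data = np.append(hard_data, 55.0)
pa = np.append(pa, 95.0)

r = np.corrcoef(hard_data, pa)[0, 1]
print(f"Overall correlation: {r:.3f}")                        # still very high
print(f"Residual for the outlier: {pa[-1] - hard_data[-1]:+.1f} points")
</code></pre>

<p>Even with one school off by 40 points, the overall correlation stays in the high .90s, which is why an aggregate R-squared cannot settle an argument about individual outliers.</p>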

<p>
[quote]
All these claims that the PA reflects graduate program quality are simply without basis.

[/quote]
</p>

<p>That is entirely debatable!</p>

<p><em>Sigh</em></p>

<p>Distinguished academic programs, Xiggi, distinguished academic programs!</p>

<p>Berkeley: top undergrad engineering and business programs...physical science, social science and humanities programs are top too.</p>

<p>Same can be said for Michigan.</p>

<p>
[quote]
UCBChemEngineer-
Take a look at this article from 2000, when CalTech was ranked #1 by a huge margin. It highlights a lot of the statistically dubious practices of USNews, and how they selectively use standardization to make the methodology conform to a desired list rather than letting a neutral methodology generate the list.

[/quote]

Yes, objective data can be manipulated more easily than subjective opinion...</p>

<p>
[quote]
<em>Sigh</em>
Distinguished academic programs Xiggi, distinguished academic programs!

[/quote]
</p>

<p>UCB, I hope you can see that your last contribution is not germane to my discussion with CollegeHelp. I am disputing his contention that the hard data of USNEWS MIGHT support the PA for Berkeley. </p>

<p>For what it is worth, I DO recognize that distinguished academic programs SHOULD contribute to a higher PA. This is, however, different from the point of contention.</p>

<p>Xiggi, when comparing data, it should be apples to apples. For example, when comparing Michigan's admit stats to a LAC or a small university with only Arts and Sciences and Engineering programs, Michigan is obviously going to come out on the losing end. Also, when you compare Michigan SAT ranges to the SAT range at a school that superscores SAT results, again, you are giving the school that superscores the advantage.</p>

<p>Same goes for class size. Are we comparing apples to apples? The USNWR gives out the percentage of classes with fewer than 20 students or with more than 50 students. What if one school has a huge chunk of classes with 15-19 students whereas another school has a huge chunk of classes with 20-24 students? Is the former necessarily better than the latter? The USNWR does not distinguish. </p>
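<p>A tiny illustration of the bucketing problem; the class-size distributions below are invented, not taken from any real school:</p>

<pre><code># Hypothetical class-size distributions for two schools (invented numbers).
# USNWR only reports the share of classes under 20 and over 50, so School A
# (lots of 15-19 student classes) looks far better than School B
# (lots of 20-24 student classes) even though the averages differ only slightly.
school_a = [15, 16, 17, 18, 19] * 20 + [30] * 10   # 110 classes
school_b = [20, 21, 22, 23, 24] * 20 + [30] * 10   # 110 classes

def usnwr_buckets(sizes):
    under_20 = sum(1 for s in sizes if s < 20) / len(sizes)
    over_50 = sum(1 for s in sizes if s > 50) / len(sizes)
    return under_20, over_50

for name, sizes in [("School A", school_a), ("School B", school_b)]:
    under_20, over_50 = usnwr_buckets(sizes)
    print(f"{name}: {under_20:.0%} under 20, {over_50:.0%} over 50, "
          f"average class size {sum(sizes) / len(sizes):.1f}")
</code></pre>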

<p>So it is important to break down the stats into numbers that get to the heart of the issue rather than loosely gauging a university with statistics that are completely useless.</p>

<p>^ Well then, I agree...Sorry...Carry on!...Haha... :o</p>

<p>I know Berkeley is comparatively weak in most metrics used by USNews and is the outlier in collegehelp's analysis.</p>

<p>Alexandre, those are points well taken. </p>

<p>However, allow me to pose you a question: Do YOU consider Berkeley's selectivity component, expressed as the percentage of students from the top 10% of their high school class, comparable to that of most of the schools ranked above it? Is a number skewed by a system of admission with numerical cutoffs (read Texas, California, or Florida) comparable or even ... useful in this regard? Are the ranks of public schools in Texas or California comparable to the typical high schools that feed the highly selective schools of the United States? </p>

<p>So, do we also pick the apples and the oranges we'd like to compare?</p>

<p>But Xiggi, your comparison is somewhat flawed. Nobody takes class rank seriously. Everybody seems to focus entirely on the SAT/ACT averages. But to answer your question, I don't think class rank is an accurate factor. I would much rather see an accurate measure of academic record...a mix of unweighted GPA, class rank, and curriculum toughness. But again, that would require complete cooperation from universities and a sound auditing system. It is not going to happen anytime soon.</p>

<p>
[quote]
You then recycle that $250 million right back into financial aid, dollar for dollar refunding to each student the $10,000 in increased tuition. The cost to each student is exactly the same as before the tuition increase, but the university gets to claim an additional "expenditure" of $250 million per year in the form of financial aid.

[/quote]
You're assuming, though, that everyone applies for financial aid. The people who don't apply for financial aid will not be getting the $10,000 back. This means that you will more likely have about 12,500 students getting $20,000 back, which is a much more equitable system because the money's going where it's actually needed.</p>
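<p>Spelling out that arithmetic (the 50% aid-application rate is an assumption implied by the "about 12,500" figure, not a stated fact):</p>

<pre><code># Arithmetic behind the reply above. The 50% aid-application rate is an
# assumption implied by the "about 12,500" figure, not a stated fact.
students = 25_000
tuition_increase = 10_000
extra_revenue = students * tuition_increase        # $250 million per year

aid_applicants = students // 2                     # about 12,500 students
grant_per_applicant = extra_revenue / aid_applicants
print(f"Each aid applicant would receive ${grant_per_applicant:,.0f}")   # $20,000
</code></pre>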

<p>Alexandre-
That standardizing of curricula, or a system that compares curricula as if they were standardized, couldn't happen, because the Constitution doesn't require the federal government to provide for education, leaving it up to the states. Now, if your plan were 100% in-state, it would work. In fact, when I was in high school, class rank was, I think, meaningful: my school graded on a 0-100% scale, and there was a "Regents curriculum" that was somewhat standardized, with statewide standard final exams at the end of each Regents class.</p>

<p>On an unrelated note, thank you for being the only one who agrees with me in acknowledging US News' lack of data standardization. This is a practice that is borderline nefarious, so much so that I predict either it will soon change or a school will at some point be able to prove major damage and sue US News for a ka-jillion dollars for libel.</p>

<p>I have been saying it for some time Tom. The USNWR is presenting data very much like a baker presents his bread or a butcher presents her meats. Superscored vs single score, Engineering students vs nursing students etc... There is nothing standardized about the USNWR supposedly "objective" data. It's a free-for-all. The information is not audited and universities pretty much present information any which way they please. Then, to make matters worse, the USNWR purposely targets little differences and blows them way out of proportion. I suppose that's the only way it can come up with a ranking that would please its core market, the Northeast. </p>

<p>On the other hand, I think it is possible to weigh curriculum, even if the federal government in the US is not involved in high school curricula. The way to go about it would be to actually evaluate students' courses at the individual level and assign them a total score based on certain criteria (AP courses, IB courses, honors classes etc...). If you have a score for each enrolled student, you can have an average score for the entire school.</p>
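<p>A minimal sketch of the kind of curriculum scoring Alexandre describes; the point values and course categories are entirely hypothetical, since the post doesn't specify any weights:</p>

<pre><code># Hypothetical point values for course rigor; Alexandre's post doesn't
# specify any weights, so these numbers are placeholders.
COURSE_POINTS = {"AP": 3, "IB": 3, "honors": 2, "regular": 1}

def student_score(courses):
    """Total rigor score for one student's course list."""
    return sum(COURSE_POINTS[level] for level in courses)

def school_score(students):
    """Average rigor score across all enrolled students."""
    return sum(student_score(c) for c in students) / len(students)

# Toy example: two students with different course loads.
students = [
    ["AP", "AP", "honors", "regular"],
    ["IB", "regular", "regular", "regular"],
]
print(f"School curriculum score: {school_score(students):.2f}")   # 7.50
</code></pre>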

<p>I don't think USNews is going to change anytime soon. Institutions have gone around and around with USNews on a number of issues. Ultimately USNews answers to the bottom line of sales, not to higher education. As long as the major consumers of that issue don't know (or sufficiently care) about problems with the measures, USNews has little incentive to change their general methodology or measures--even if wiser souls believe they are flawed.</p>

<p>Want to bet a law suit will change their tune?</p>

<p>Collegehelp,
Your comment that Emory, Rice, Vanderbilt, Georgetown, Tufts and Wake Forest have closed the SAT gap vs the Ivies by only 30 points is inaccurate on several levels. </p>

<ol>
<li>The changes among this group of colleges have been quite varied.<br></li>
<li>Likewise, the Ivy colleges themselves have hardly seen a lockstep move in their SAT statistics.<br></li>
<li>Finally, many of these colleges have closed the SAT gap by A LOT more than 30 points.<br></li>
</ol>

<p>Here are the facts as taken from USNWR:</p>

<p>Changes Since 1998</p>

<pre><code>Change in Mid-Point SAT from 1998 to 2008 , College

100 , Tufts
85 , Vanderbilt

65 , U Penn
60 , Georgetown
55 , Columbia
45 , Emory

20 , Dartmouth
15 , Wake Forest

0 , Rice
0 , Cornell
</code></pre>

<p>Changes Since 1991 (before re-centering of SAT testing)</p>

<pre><code>Change in Mid-Point SAT from 1991 to 2008 , College

na , Tufts
185 , Vanderbilt

145 , U Penn
163 , Georgetown
160 , Columbia
195 , Emory

140 , Dartmouth
na , Wake Forest

113 , Rice
105 , Cornell
</code></pre>

<p>Maybe.</p>

<p>Law is not my field--although it seems you judge me so devoid of logic that perhaps nothing should be my field, heh, but I digress--but I wonder if universities would have a difficult time proving some measurable amount of harm was done to them. If I were Bob Morse, I'd point to Reed and say it's possible to thrive even when the rankings are based on proxies that are by nature inaccurate and make the methodology unflattering to the institution. It'd be an odd defense, to say that your own ranking of an institution was deeply flawed, but it'd be one way they might go about it.</p>

<p>Don't get me wrong--I think there are many things about the ranking and measures that are problematic. I'm just not sure USNews is legally vulnerable for it. I'd be interested to hear the angles, though.</p>

<p>
[quote]
I have been saying it for some time Tom. The USNWR is presenting data very much like a baker presents his bread or a butcher presents her meats. Superscored vs single score, Engineering students vs nursing students etc... There is nothing standardized about the USNWR supposedly "objective" data. It's a free-for-all. The information is not audited and universities pretty much present information any which way they please. Then, to make matters worse, the USNWR purposely targets little differences and blows them way out of proportion. I suppose that's the only way it can come up with a ranking that would please its core market, the Northeast. </p>

<p>On the other hand, I think it is possible to weigh curriculum, even if the federal government in the US is not involved in high school education curriculae. The way to go about it would be to actually evaluate students courses at the individual level and assign them with a total score based on certain criteria (AP courses, IB courses, honors classes etc...). If you have a score for each enrolled students, you can have an average score for the entire school.

[/quote]
</p>

<p>Again, you present a number of interesting observations.</p>

<p>However, while USNews deserves criticism (and I believe that the nature of this criticism will not be universal) for its shortcomings, should we not also applaud the effort (often pioneering) to present arcane data in a well-organized manner? Should we not also remember that USNews might very well have been the driving force behind the creation of the Common Data Set organization? Fwiw, I believe that most students and parents would MUCH rather spend a few bucks on the USNews products than slave through the NCES or IPEDS listings. For 10 or 15 dollars, there is simply NOTHING that comes close to USNews' price/quality/reward ratio. </p>

<p>This said, I wish they would expand one category, and that is the notorious Peer Assessment. Expand by developing and DISCLOSING the sub-categories but also unpoison the objective rankings by decoupling the PA from the rest of the data. Divide to conquer, so to speak.</p>

<p>As far as auditing, I'm not sure if that is needed for objective data. On the other hand, the subjective data (oh, we know what it is) would greatly improve from being made ... open to all eyes. No need to audit, but a great need for added transparency. Unfortunately, as I wrote before, that is wishful thinking of the highest order. One of the biggest attractions of the secretive "ballots" remains the avowed ability to help your friends, punish your foes, and most importantly remain able to answer questions without rhyme or reason. </p>

<p>Turning to the HS curriculum, isn't that where the College Board is mostly active? It is obvious that one of Gaston's most cherished wishes would be to orient K-12 education even further. Looking at the development of the AP boondoggle, one can only shudder when weighing the results of allowing testing companies to orient the curriculum debate. The mere fact that an obscure program such as the IB could become so popular in the United States shows the affinity of our educators for gimmicks, as opposed to realizing how bad BASIC education has become in the United States. Our problem is not so much about curriculum as it is about increasing the time spent in school and finding a way to bring competent and dedicated teachers in front of the students. </p>

<p>And, as a final comment, publishers are "trying" to orient curriculum through "measurements" with ... equally disastrous results. To see the depth of the lunacy, just read Newsweek and the moronic "rankings" created by Jay Matthews.</p>