A Review of the USNWR Approach: What is Valuable?

<p>hawkette and xiggi, I am in total agreement with you two. Just wanted to reinforce that.</p>

<p>midatlmom, you say that the PA is "very useful". How can anyone give much value to something that is so subjective and therefore exposed to great bias? Doesn't that concern you? Being honest is not the issue. The issue is being OBJECTIVE. Most people have a hard time doing that. A juror is removed from a jury if they have a personal bias. The same happens with judges, who cannot rule in a case involving a family member, friend, or business associate. Well, college administrators are not held to the same standard. </p>

<p>These university presidents are human and therefore subject to jealousy, envy, greed, and everything else, just like every other member of society.</p>

<p>Yeah, we've had the BCS discussion before (to the extent that the Coaches' poll / ranking is similar to the Peer Assessment portion of the USNWR when it comes to the BCS ranking):</p>

<p>
[quote]
<a href="http://talk.collegeconfidential.com/...65#post3283965%5B/url%5D"&gt;http://talk.collegeconfidential.com/...65#post3283965&lt;/a&gt;&lt;/p>

<p>I don't think that anyone is questioning the level of intelligence (or resumes) of those who vote for the Peer Assessment.</p>

<p>I think a relevant analogy is the NCAA Football Coaches' poll for the BCS Championship. Each coach puts in his vote, and this poll is a critical part of the BCS rankings (it isn't the entire BCS ranking, but it is a critical component, much like the peer score is a critical component of the USNWR ranking).</p>

<p>Now, there is an inherent bias in this poll. Coaches have their own agendas when they vote (whether it be to boost their own strength of schedule or to boost their own conference members). That is why the OSU football coach declined to vote in the most recent Coaches' poll: he felt there was a direct "conflict of interest" (i.e. voting between Michigan vs. Florida). Further, one of the other criticisms of this poll is that no active D1-A head coach is going to find the time to watch and analyze every Top 25 team in the country --> in point of fact, they are rarely looking at anything but film on the upcoming opponent (e.g. Michigan's coach declined to comment on Florida's team because he just "hasn't seen them play") --> and yet, these coaches are asked to rank the Top 25 every week.</p>

<p>The point? No one will dispute that these coaches understand the game inside and out, better than the average person ever will. Hundreds of hours of experience and film. But so what? That doesn't mean that these coaches won't be affected by personal / professional bias --> they are rational people and will vote in a manner that best benefits them. Period.</p>

<p>So in much the same way, the folks who vote in the peer score will vote with their own personal bias. They can have a resume a mile long, but that doesn't give me any comfort on why their opinions matter on the relative merits of a Dartmouth vs. a University of Wisconsin.</p>

<p>
[quote]
I'm not saying that I profess to know more about the quality of academic departments and faculty at Colby vs. Bates, but it's not the job of the people USNWR is asking to evaluate these things to know this either, so this system is just predisposed to containing a lot of ignorance. To me, much of that peer assessment is hearsay and prior reputation when it is no longer the reality.

[/quote]
</p>

<p>Exactly. And this is the fundamental problem I have with the peer assessment. It's not a knock against the intelligence or experience of those participating; simply speaking:</p>

<p>1) It's not their job to know the differences between hundreds of colleges
2) Even if it was their job, there would be an inherent bias anyway</p>

<p>Furthermore, I think another fundamental problem with the Peer Score is the lack of transparency:</p>

<p>1) Who are the people actually voting? Why don't they disclose who they are, and more importantly,
2) How they voted?
3) i.e. Why don't they make these peer rankings public? i.e. who ranked them and how they ranked the colleges (I have a strong suspicion that if these votes/rankings were made public and each vote had a name attached to it, the raters would either decline to be involved or the outcome would be different)
4) Since there is no transparency, this is the ultimate "X" / "fudge" factor --> adding / subtracting a couple of 1/10ths of a decimal point here and there until you get the list you like (i.e. ensuring not only some variance year-over-year, but that you are effectively in control of that variance). Alex, as you say, it takes many years for schools to make substantial changes, which is why it makes absolutely no sense that a school jumps (or declines) 10+ spots in any given year, which has happened nearly every year this list has been published (there is a separate post on this very topic that I wrote).
WBB to discuss...

[/quote]
</p>

<p>xiggi:
Yeah, whoops, I was agreeing with bluebayou. I don't mind seeing a peer review component to rankings, but it should not be weighted so heavily, IMO, given all the aforementioned problems with how it is designed and carried out.</p>

<p>Is the SAT "objective"? Are class GPAs and rank "objective"? In both cases the answer is pretty much no, not really. Even class size, which is an objective number, has not been shown to be very important in higher education.</p>

<p>barrons,
In several posts here and elsewhere, you imply that I believe that research talent and classroom talent are mutually exclusive. That is NOT my view. However, I don't automatically assume that if one is a research talent, then said professor is automatically a star in the classroom (if he/she even teaches undergraduates). </p>

<p>A student survey would either reconfirm the academic view or provide a different view that more fully reveals that professor's contributions to the undergraduate student. I really think you just want to avoid any outside opinion on professors beyond their institutional research role. That is a core difference in our views. The more that you and others represent the views of the institutions and the professors while dismissing the interests of the other stakeholders, the clearer it becomes that the views of the other stakeholders, and especially the students, need their own role in the ranking and the assessment.</p>

<p>Also, please, enough of the now 12-year-old memo from the Stanford person, which you regularly cite as something that carries great weight. Hauling out such ancient comments only shows a desire to maintain the status quo and ignore all thoughts and arguments to the contrary. The world changes, but God forbid that such change ever finds its way into academia.</p>

<p>If you want to get a useful opinion from California about undergraduate education, then I suggest you leave the Stanford campus and go deeply into Silicon Valley and ask the folks who funded those companies and who built those companies. Now those comments would have real value.</p>

<p>US News gives weights (percentages) to each factor to come up with a total score, but it is possible to determine statistically which factors contribute most to that total. The statistical approach allows you to control for redundancy in the information. Many of the factors that contribute to the US News total score are highly correlated with each other. For example, graduation rate, peer assessment, and SAT scores are highly correlated: if you know what one tells you, you mostly know what the others tell you. Conceptually they may seem different, but in the nature of higher education they are closely linked.</p>

<p>Using data for the top 130 national universities, I did the following types of analysis (rough code sketches follow below):
(1) principal component analysis with varimax rotation
(2) multiple linear regression (all possible models)
(3) Pearson correlation</p>
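<p>(For anyone who wants to replicate this, here is a rough sketch of the correlation step in Python with pandas. The file name and column names are placeholders I made up, since the raw spreadsheet isn't posted:)</p>

[code]
import pandas as pd

# Hypothetical file/column names standing in for the USNWR data described above.
df = pd.read_csv("usnwr_top130.csv")
factors = ["total_score", "sat_75th", "peer_assessment", "grad_rate", "retention"]

# (3) Pearson correlation: pairwise r between every factor. Highly correlated
# inputs (e.g. grad rate vs. SAT 75th) carry largely redundant information.
print(df[factors].corr(method="pearson").round(2))
[/code]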

<p>The results can be expressed in percentages, like the percentages assigned by US News, except these are determined mathematically. The most important factors are counted first, and you can assign a percentage to the "value added" of subsequent factors.</p>

<p>Here are some things I found:</p>

<p>SAT 75th percentile is 85% of the total score by itself.
Peer Assessment adds 10%.
Together, SAT 75th percentile and Peer Assessment account for 95% of the total score.
If you exclude Peer Assessment, the most powerful combination is actual graduation percentage and SAT 75th percentile. Together they account for 90% of the total score.
All of the other data in the US News report only accounts for a total of 5% of the total score.</p>
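<p>(If it helps make "value added" concrete: the "all possible models" step just fits every subset of predictors and compares R-squared values. Again a sketch, with the same invented file and column names as above:)</p>

[code]
from itertools import combinations

import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("usnwr_top130.csv")  # hypothetical file name
predictors = ["sat_75th", "peer_assessment", "grad_rate"]
y = df["total_score"]

# (2) "All possible models": fit every subset of predictors, record R^2.
for k in range(1, len(predictors) + 1):
    for subset in combinations(predictors, k):
        X = df[list(subset)]
        r2 = LinearRegression().fit(X, y).score(X, y)
        print(subset, round(r2, 2))

# The "value added" of Peer Assessment is then
# R^2(sat_75th + peer_assessment) - R^2(sat_75th), e.g. 0.95 - 0.85 = 0.10.
[/code]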

<p>If you are wondering what goes into the peer assessment score, retention seems to be the most important factor. It accounts for 61% of peer assessment by itself. Graduation rate and SAT 75th percentile each account for about 58% of peer assessment by themselves.
SAT 75th percentile and the percentage of classes over 50 together account for about 75% of peer assessment scores.</p>

<p>I used principal component analysis to look for patterns in the US News data and to simplify it. 82% of the information in the US News data for national universities is contained in four general components.</p>

<p>The first component (Academic Quality) accounts for 59% of all the information by itself. It consists of peer assessment, graduation rate data, faculty resources, percent of classes under 20, student-faculty ratio, selectivity rank, SAT scores, percentage of the class in the top 10%, acceptance rate, and financial resources rank.</p>

<p>The second component (full-time faculty and large classes) consists primarily of just two factors, the percentage of faculty who are full-time and the percentage of classes over 50. This component is 12%.</p>

<p>The third component (graduation rate over- and under-performance) consists primarily of graduation rate over- and under-performance. I think of this as a value-added component. It tells you how well a college does with the students it enrolls and how much the college adds to the initial quality of its student body. This component is 6%.</p>

<p>The fourth component (alumni satisfaction) consists primarily of alumni giving, but also of over- and under-performance in the graduation rate. My interpretation is that the more the college contributes to student success, the more loyal the alumni are. This component accounts for 5% of the total information.</p>

<p>Altogether, these four components account for 82% of all the information used by US news to do their ranking of national universities. Almost all of the data can be boiled down to these four components.</p>
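<p>(One more sketch for anyone replicating this: scikit-learn's PCA doesn't do varimax rotation out of the box, so the code needs a small rotation routine of its own. Everything below builds on the same made-up file name as before:)</p>

[code]
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, max_iter=100, tol=1e-6):
    """Standard orthogonal varimax rotation of a factor-loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        if s.sum() < total * (1 + tol):
            break
        total = s.sum()
    return loadings @ R

df = pd.read_csv("usnwr_top130.csv")            # hypothetical file name
X = StandardScaler().fit_transform(df.values)   # standardize every factor

# (1) Four components; their summed explained variance is the "82%" figure.
pca = PCA(n_components=4).fit(X)
print(pca.explained_variance_ratio_.sum())

# Rotate the loadings so each component aligns with a cleaner subset of
# factors ("Academic Quality", "full-time faculty and large classes", etc.).
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(pd.DataFrame(varimax(loadings), index=df.columns).round(2))
[/code]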

<p>By the way, I dictated this post using a microphone and voice recognition software called Dragon Naturally Speaking. The wonders of modern technology...</p>

<p>"Fwiw, what schools do you believe are the peers of Caltech UNDERGRADUATE? What schools do you think Caltech consider to be peers?</p>

<p>And, by the way, the challenge was about comparable elements."</p>

<p>Xiggi, you asked why do Caltech and HMC have different PAs. One of the reasons I gave is that a different set of peers rated those two schools. Caltech is rated by research university representatives (those who rate Columbia, Stanford, Yale etc...) and HMC is rated by LAC representatives (those who rate Davidson, Haverford, Oberlin etc...). </p>

<p>And Xiggi, you should know by now that I do not believe in such a thing as measuring the quality of undergraduate education, because education is a completely personal venture. I do, however, believe in the reputation of a university. Looking back on my statement above, I admit I was unclear. Caltech is not better than Harvey Mudd, but it is more reputable, which is what the PA measures. </p>

<p>"A top graduate school does not automatically make the undergraduate program a TOP undergraduate. A program can be excellent but still not what you call a top. That is the difference you REFUSE to accept."</p>

<p>I am not sure I follow you. What am I refusing to accept?</p>

<p>Alexandre, I am quite certain you knew I was not looking for a simple answer. I am also quite certain that you knew how aware I am of the different sets of academics who rank universities and LACs. Obviously I knew that the Presidents or Provosts of mega-universities such as Michigan State, Florida State, or South Dakota State University are supposedly the best assessors of Caltech. </p>

<p>What I asked was to draw a set of comparables between Caltech and Harvey Mudd, using STRICTLY quality of education at the undergraduate level, and then use the comparables to explain and justify the differences in their ... Peer Assessment. </p>

<p>This --obviously leading-- challenge was meant to demonstrate the utter weakness of the peer assessment to measure much except ... a fuzzy factor such as what people BELIEVE to be a reputation. Were you to scratch the surface and truly make an analytical comparison of the criteria that SHOULD be part of a bona fide "education quality" analysis, you might be surprised at the close parallels between Caltech and Harvey Mudd. </p>

<p>In the end, two leading engineering schools are assessed differently not so much because of a factual or intelligent analysis, but mostly because of a lack of familiarity, knowledge, or probity of the voters.</p>

<p>Peer Assessment seems to be an accurate assessment of retention rates, graduation rates, SAT scores, acceptance rate, financial resources, and percent of class in top 10%. Somehow, the peer assessment subjective "gut feeling" accurately reflects a lot of important hard data.</p>

<p>
[quote]
I don't think that that is correct, Hoedown.

[/quote]
</p>

<p>Nope. It's been years since I was in a stats class, but there is most definitely a formula for finding a specific percentile in any grouped frequency data. And it most definitely CAN yield a number in units smaller than the original data (i.e. with decimals for integer data, or a number like 1431 for the SAT). </p>

<p>I don't know why a school would be using grouped frequencies for the calculation (relying on grouped CIRP data, for example, in the absence of submitted scores? I dunno), or even if it should do so, but my original point still stands--you could report a percentile score that was not a multiple of ten without it immediately being a screaming red flag that something untoward was going on. Imputation (using regression to calculate scores for people who didn't send them) could yield something similar.</p>

<p>FWIW, Alexandre, I thought at least one school president had spoken out strongly against the PA pretty early on in USNews' history. I don't have the old issue in front of me; I'll have to go look for it.</p>

<p>I think the onus is on you to demonstrate the 'parallels' between Caltech and HMC before you assert their ranking is a counterexample to PA's validity.</p>

<p>collegehelp--that's very interesting. </p>

<p>Do you have under/over graduation rate in both factor 3 and factor 4?</p>

<p>xiggi:</p>

<p>I have fun talking about the BCS and, being a Pac-10 fan, its obvious shortcomings. (Everyone vote for Mack Brown now, to save us the tears! :) )</p>

<p>
[quote]
Nope. It's been years since I was in a stats class, but there is most definitely a formula for finding a specific percentile in any grouped frequency data. And it most definitely CAN yield a number in units smaller than the original data (i.e. with decimals for integer data, or a number like 1431 for the SAT).</p>

<p>I don't know why a school would be using grouped frequencies for the calculation (relying on grouped CIRP data, for example, in the absence of submitted scores? I dunno), or even if it should do so, but my original point still stands--you could report a percentile score that was not a multiple of ten without it immediately being a screaming red flag that something untoward was going on. Imputation (using regression to calculate scores for people who didn't send them) could yield something similar.

[/quote]
This is wrong. When one has the entire set of data (the rank order of all the scores at a school), then the 75th percentile is found by looking for the score that 25% of the students scored above.</p>

<p>Of course, if one didn't have the entire list of data, but knew the distribution was of a certain form (Gaussian) along with its average and standard deviation, then the 75th percentile can be estimated.</p>
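<p>(Concretely, that estimate is just an inverse-normal lookup. A two-line sketch with scipy, using made-up numbers:)</p>

[code]
from scipy.stats import norm

# If scores were roughly Gaussian with this (invented) mean and SD, the 75th
# percentile sits about 0.674 standard deviations above the mean.
mean, sd = 1310, 160
print(norm.ppf(0.75, loc=mean, scale=sd))  # ~1417.9
[/code]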

<p>
[quote]
This is wrong.

[/quote]
</p>

<p>Man, I swear I have memories of doing this calculation. Is my memory that bad? </p>

<p>Is it only for the median that you can do this? I thought it could be done for any percentile, not just the 50th. I know for the 50th it's this: Median =
L + I * (N/2 - F)/f</p>

<p>Where L = lower limit of the interval containing the median
I = width of the interval containing the median
N = total number of respondents
F = cumulative frequency corresponding to the lower limit
f = number of cases in the interval containing the median </p>
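<p>(In code, that formula generalizes to any percentile by replacing N/2 with pct*N. A sketch with invented SAT bins, to show how it can land on a value, like the 1431 mentioned earlier, that no student actually scored:)</p>

[code]
def grouped_percentile(intervals, freqs, pct):
    """L + I * (pct*N - F) / f, interpolated within grouped frequency data.

    intervals -- (lower, upper) bin edges in ascending order
    freqs     -- number of cases in each bin
    pct       -- percentile as a fraction (0.5 = median, 0.75 = 75th)
    """
    N = sum(freqs)
    target = pct * N
    F = 0  # cumulative frequency below the current bin
    for (lower, upper), f in zip(intervals, freqs):
        if f and F + f >= target:
            return lower + (upper - lower) * (target - F) / f
        F += f
    raise ValueError("pct out of range")

# Made-up score bins purely for illustration:
bins = [(1200, 1300), (1300, 1400), (1400, 1500), (1500, 1600)]
counts = [40, 120, 90, 50]
print(grouped_percentile(bins, counts, 0.75))  # ~1472.2, not an attainable score
[/code]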

<p>Even if I've completely whiffed that, and you can't do it for other percentiles, is it not true that they could have done imputation for nonreporters?</p>

<p>
[quote]
Man, I swear I have memories of doing this calculation. Is my memory that bad?

[/quote]
I just read the website you got it from. I'd reckon this is not the common definition. They say as much in the opening paragraphs.</p>

<p>
[quote]
Is it only for the median that you can do this? I thought it could be done for any percentile, not just the 50th. I know for the 50th it's this: Median =
L + I * (N/2 - F)/f</p>

<p>Where L = lower limit of the interval containing the median
I = width of the interval containing the median
N = total number of respondents
F = cumulative frequency corresponding to the lower limit
f = number of cases in the interval containing the median

[/quote]
Looking at this, it can be done for any percentile. It does give slightly more information by showing the bias of the data around a shared middle point.</p>

<p>
[quote]
Even if I've completely whiffed that, and you can't do it for other percentiles, is it not true that they could have done imputation for nonreporters?

[/quote]
I think this is likely not an issue.</p>

<p>My stats class wasn't online (god help me, it predated widespread internet use). I snagged the formula from a website because that's not the kind of thing I have floating about in my head with the useless college trivia and the budget numbers & enrollment stats I'm expected to spout on short notice... but we did discuss it in class, and I remember doing the thing on paper. </p>

<p>It was a social sciences-based stats class, though--and in my experience some of the things used for social sciences research aren't as widely used (or, in some cases, accepted) in the hard sciences & math.</p>

<p>They must be calculating the median differently than when I was a kid. I thought they did it like this.</p>

<p><a href="http://www.mathsisfun.com/median.html%5B/url%5D"&gt;http://www.mathsisfun.com/median.html&lt;/a&gt;&lt;/p>

<p>If you do it like the above, you can have a median different from any particular score and different from any score that can be achieved.</p>

<p>So now take this example...
4, 8, 16, 20, 24, 28</p>

<p>I was taught the median in the above example is 18. The 2 middle scores added together and divided by 2.</p>

<p>However, now this thread has me thinking. I know this isn't conventionally done, but can it be argued that the median can be any number between 16 and 20?</p>
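<p>(You can actually watch the conventions disagree by asking numpy for the same median under different interpolation rules. Note the method= keyword assumes a recent numpy; older versions called it interpolation=:)</p>

[code]
import numpy as np

scores = [4, 8, 16, 20, 24, 28]
for method in ("linear", "lower", "higher", "midpoint"):
    print(method, np.percentile(scores, 50, method=method))
# linear and midpoint give 18.0, lower gives 16.0, higher gives 20.0 --
# the convention, not the data, decides where in [16, 20] the median lands.
[/code]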

<p>
[quote]
Mr. Payne, this is a rare occasion where I actually agree with you entirely. Your post has been the most accurate on this particular thread, particularly regarding faculty resources. It seems like universities have managed to leap 20-30 spots in one year in this particular criterion.

[/quote]
I'm glad to have met the Alexandre standard of excellence.</p>

<p>That's a great post, collegehelp,

[quote]
SAT 75th percentile is 85% of the total score by itself.

[/quote]

This is why I hate college rankings. Everything is judged by the SAT score. </p>

<p>If a ranking doesn't correlate with SAT scores, it's deemed worthless. If you post the Gourman Report, out come the "It's worthless! That ranking is out of this world!" comments from posters. Try posting Business Week; the same thing happens. People are just too obsessed with that test, and the US News ranking is essentially that list with slight variations, as the percentages show.</p>