<p>
[quote]
But again, it only gives us a part of the equation. You can't just say, "X school has top-10 faculty productivities in XYZ departments, so XYZ are top-10 overall," when that isn't necessarily true.
[/quote]
</p>
<p>That's exactly what I said. What are you trying to do, reiterate my point and spin it as if this isn't a worthy source to use when benchmarking program against program, the way you would with USNews?</p>
<p>I never made that distorted claim. I NEVER SAID "X school has top-10 faculty productivity in XYZ departments, so XYZ are top-10 overall." That goes the extra mile by asserting that one dimension must correlate with the million other dimensions that constitute program quality. I don't believe that; I believe it's one of many critical dimensions for analyzing program quality.</p>
<p>
[quote]
And what about the parts where it just doesn't list any data? Where there are no %s?
[/quote]
</p>
<p>"If one or more variables are not used in the calculation of faculty productivity, that part of the equation is removed and the point scale reduced accordingly. So if honors are not included, the total possible score is reduced to 90 from 100. Institutions that pay for the data have the ability to reweight the variables in any category, according to their preferences. Starting with FSP 2006-07, subscribers to Academic Analytics will also have the option to obtain the complete dataset for disciplines of interest to them, so they can use the raw data as they please."</p>
<p>
[quote]
It tries to measure that in citations. Why else would they have them?</p>
<p>And if it isn't striving to include impact, then it's even more flawed. Who cares how much is produced, if none of it really matters (in other words, has an impact)?
[/quote]
</p>
<p>The citation count is not there to measure impact. Impact is extraordinarily different from how many times a piece of research is cited. My definition of impact is how far it effects change, and change is variable and may be viewed differently from person to person. These are quantitative measures, not qualitative measures. You would need a system such as a PA score, which measures the amount of influence. Yes, you can be productive, amass huge volumes of publications, have only a small percentage of them cited, and still have those constitute a large portion of this ranking, but how insightful and how valuable/influential is that work? That is something totally different from what this survey aims to measure. It says so on the website itself: it relies on strictly quantitative measures in the program review process.</p>
<p>
[quote]
That's not what I'm saying. I'm saying, why are they including the Fulbrights from 2002 to 2006 only? But for Nobel prizes, they go back 50 years? That's arbitrary. It leads to error.
[/quote]
</p>
<p>Um, yeah, I've explained and I'll explain again: people in this business know that Nobel Prize winners don't come along as often as Fulbright winners. A single school can have a huge pack of Fulbright winners in one year, yet may not see a single Nobel Prize winner in 10-15 years.</p>
<p>
[quote]
Fulbrights aren't that common, either.
[/quote]
</p>
<p>vs. Nobel Prize winners? How often do we see groundbreaking studies that reveal the fundamentals of a system we have never understood before? Nobel Prizes are given for truly fundamental work of significant value that can change the way we look at the world. Fulbrights, on the other hand, are grants for international educational exchange for really smart people.</p>
<p>Are you saying that you are smarter than the people who created this survey/ranking? I wouldn't want to dispute the ranking methodology and dare call it flawed when there are professors and PhDs working on this type of stuff.</p>
<p>This is certainly more scientific than the arbitrary weighting system USNWR has in place, since this is purely objective, whereas USNWR has a subjective component to it that is easily gamed.</p>