<p>hawkette-
I read the article linked in your post #61. The author is trying to make a case for enhancing graduate programs and faculty research at Emory. The article was published in the Academic Exchange, which, I believe, is not a peer-reviewed journal.</p>
<p>The author did basically the same thing I did. He correlated peer assessment (PA) with measures of graduate program strength and with measures of undergraduate program strength, then compared the two sets of correlations. He concluded that the correlations between PA and graduate/research strength were higher than the correlations between PA and measures of undergraduate strength.</p>
<p>However, he is wrong about this. My analysis showed that peer assessment was very highly correlated with measures of undergraduate strength. The author said he didn't find any correlations between PA and undergraduate strength higher than .67, but I found a correlation of .79 between PA and SAT.</p>
<p>The author also commits a fundamental error in the conclusions he draws. He suggested that his correlations implied a causal relationship between PA and graduate ranking, but correlation alone can't support that conclusion. It could simply be that PA is an index of undergraduate strength, while graduate/research strength and undergraduate strength tend to occur together. In other words, PA could reflect undergrad strength directly, and grad/research strength would correlate with PA only because grad and undergrad strength are so closely tied.</p>
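<p>To illustrate the confounding point: here is a minimal simulation (all numbers hypothetical) in which PA is constructed to respond ONLY to undergrad strength, yet still correlates strongly with grad strength because a shared latent factor drives both.</p>
<pre>
import random
import statistics

random.seed(42)
n = 5000

# Latent overall institutional quality drives both undergrad and grad strength.
quality = [random.gauss(0, 1) for _ in range(n)]
undergrad = [q + random.gauss(0, 0.5) for q in quality]
grad = [q + random.gauss(0, 0.5) for q in quality]

# Suppose, by construction, PA responds ONLY to undergrad strength.
pa = [u + random.gauss(0, 0.3) for u in undergrad]

# PA still correlates strongly with grad strength, despite no causal link.
r_pa_grad = statistics.correlation(pa, grad)
r_pa_undergrad = statistics.correlation(pa, undergrad)
print(f"PA vs grad: {r_pa_grad:.2f}, PA vs undergrad: {r_pa_undergrad:.2f}")
</pre>
<p>A high PA-grad correlation comes out of this model even though grad strength has zero causal influence on PA, which is exactly why the author's causal inference doesn't follow.</p>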
<p>His conclusions clearly show a lack of objectivity, a lack of research skill, and a lack of understanding of the scientific method. He was biased.</p>
<p>Why did he fail to find a higher correlation between PA and measures of undergrad strength? It could be due to an error in statistical methodology known as "restriction of range" (sometimes called truncated range). He chose only the top 30 schools, whereas I used the top 100 or so. Restricting the range of a variable can substantially underestimate correlations.</p>
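<p>Restriction of range is easy to demonstrate. Here is a sketch (the pool size and population correlation are hypothetical): PA tracks SAT with a population correlation near .9, but the correlation computed on only the top slice of schools comes out much lower.</p>
<pre>
import random
import statistics

random.seed(1)
n = 300  # hypothetical pool of ranked schools

# True relationship: PA tracks standardized SAT with correlation near .9.
sat_z = [random.gauss(0, 1) for _ in range(n)]
pa = [0.9 * s + random.gauss(0, 0.436) for s in sat_z]  # 0.436 = sqrt(1 - 0.9**2)

ranked = sorted(zip(sat_z, pa), key=lambda t: t[0], reverse=True)

def corr(pairs):
    xs, ys = zip(*pairs)
    return statistics.correlation(list(xs), list(ys))

r_all = corr(ranked)          # full range
r_top100 = corr(ranked[:100]) # top 100, like my analysis
r_top30 = corr(ranked[:30])   # top 30, like the author's: range restricted
print(f"all: {r_all:.2f}, top 100: {r_top100:.2f}, top 30: {r_top30:.2f}")
</pre>
<p>The top-30 sample squeezes out most of the variation in SAT, so the observed correlation shrinks even though the underlying relationship is unchanged.</p>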
<p>Why did he find such a high correlation between PA and graduate program rank? As I recall, the author used a composite measure of graduate program rank and obtained a correlation of about .83. If I had used a composite measure of undergraduate program strength, I estimate my correlation between PA and that composite would have been about .9.</p>
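<p>The reason a composite tends to correlate higher than any single measure is that averaging several noisy indicators cancels out measurement noise. A quick sketch (five hypothetical indicators, all noise levels invented):</p>
<pre>
import random
import statistics

random.seed(3)
n = 300
strength = [random.gauss(0, 1) for _ in range(n)]           # true undergrad strength
pa = [0.9 * s + random.gauss(0, 0.436) for s in strength]   # PA tracks strength

# Five noisy individual indicators of undergrad strength (all hypothetical).
indicators = [[s + random.gauss(0, 0.8) for s in strength] for _ in range(5)]
composite = [sum(vals) / 5 for vals in zip(*indicators)]    # noise averages out

r_single = statistics.correlation(indicators[0], pa)
r_composite = statistics.correlation(composite, pa)
print(f"single indicator: {r_single:.2f}, composite: {r_composite:.2f}")
</pre>
<p>So comparing the author's .83 (composite) against single-measure undergrad correlations is not an apples-to-apples comparison.</p>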
<p>By the way, the correlation between PA and UNDERGRADUATE college rank in US News is .91 (out of a maximum possible 1.0), which is very high considering PA is weighted only 25% in that ranking. This is higher than the correlation the author of the article referenced in post #61 found between PA and GRADUATE program ranks.</p>
<p>but a high correlation between PA and undergrad rank should be expected given the 25% weighting -- no other factor has a higher weighting in the USNWR rank -- it is THE one proxy for the USNWR ranking if you could only take one number to extrapolate the rankings. </p>
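<p>To make the weighting point concrete, here is a sketch (the .85 correlation between PA and the other ranking inputs is an invented assumption): a factor weighted only 25% can still correlate above .9 with the overall score, because the other inputs are themselves correlated with it.</p>
<pre>
import random
import statistics

random.seed(7)
n = 1000
pa = [random.gauss(0, 1) for _ in range(n)]

# Assume the other 75% of ranking inputs themselves correlate ~.85 with PA.
other = [0.85 * p + random.gauss(0, 0.527) for p in pa]  # 0.527 = sqrt(1 - 0.85**2)

# Overall score: PA weighted 25%, everything else 75%.
score = [0.25 * p + 0.75 * o for p, o in zip(pa, other)]

r = statistics.correlation(pa, score)
print(f"PA vs overall score: {r:.2f}")
</pre>
<p>So a .91 correlation between PA and the overall rank is roughly what the weighting plus overlap with the other inputs would predict, not evidence that PA secretly drives the ranking.</p>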
<p>the relatively lower correlation between SAT scores and PA goes to show that there is serious discrepancy IMO.</p>
<p>
[quote]
The author said he didn't find any correlations between PA and undergraduate strength higher than .67
[/quote]
</p>
<p>He did say that selectivity correlated .837 with PA. He did not report all the other undergrad characteristics he investigated, so his results may not be that different from yours.</p>
<p>Note, Collegehelp, that he looked at USNews graduate program RANKINGS, not the NUMBER of graduate students, which was the hypothesis of this thread. This is essentially a PA for graduate programs. It hardly seems surprising that the graduate program rankings would correlate with the undergraduate program rankings. The grad rankings are based on what faculty think of the graduate programs at these universities, which is likely largely determined by what they think of the faculty. Since the same faculty teach both grad and undergrad students, one would expect the by-program rankings for undergrad and grad to be correlated. </p>
<p>Since Hicks computed average grad program rankings, he in effect estimated the reputation of the university faculty across all fields. The result would be expected to follow the undergrad PA.</p>
<p>From the grad student numbers provided by Hawkette, it looks like there is a high correlation between grad ranking and PA, but a much lower correlation between grad student numbers and PA.</p>
<p>His methods certainly leave his results vulnerable to restriction of range, but for his purposes (comparison of Emory to other elite colleges) this made sense. If he had included lots of much lower ranked places, he would have been open to the criticism that Emory should not be compared to such colleges.</p>
<p>In short, he says that a university cannot be much better than its faculty. High faculty reputation attracts top students. He makes a pretty good argument that in order to raise its overall reputation Emory will have to improve its graduate programs. But the only way to do this is to raise the reputation of its faculty.</p>