<p>It is interesting that graduation percent tops the list. I wonder if Nobel laureates and memberships in the NAS and NAE would improve the correlations for some schools, in particular narrowing the discrepancies for Berkeley and Chicago.</p>
<p>If I saw diplomas on the wall of my ECU, that would concern me. ;)</p>
<p>I choose my doctors based on word-of-mouth from other patients, and where they went to school NEVER comes up in the conversation. More to the point, if I needed a heart surgeon, I’d call people who had heart surgery and ask them to recommend doctors. And I’ll sleep very well at night using that process.</p>
<p>Would you ask someone at Lexus about the quality of cars at Mercedes or would you want word-of-mouth from those actually driving the car?</p>
<p>StillGreen-
Faculty honors might add new information to this model. There is still 7.5% of the variance in PA unaccounted for. The 10 variables I included account for 92.5% of the variance in PA.</p>
<p>The current model starts with graduation percent because it accounts for the largest portion of PA by itself. Other variables add new information in small increments because they overlap with (are correlated with) graduation percent.</p>
<p>You could start with PA or SAT, they are both big factors behind PA. But they are themselves related to each other, so the second one you add doesn’t contribute that much new information. Adding factors in the following sequence builds up to 92.5% (a small code sketch of the stepwise build-up follows the list):</p>
<p>72.7% graduation percent
+5.97% SAT math 75th squared
+8.05% freshman class size
+2.87% SAT math 75th
+0.71% public or private
+0.61% endowment per fte natural log
+0.33% yield
+0.39% admit percent natural log
+0.46% admit percent
+0.40% research $ percent squared</p>
<p>I hope that adds to 92.5%.</p>
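<p>For anyone curious about the mechanics, below is a minimal sketch of the kind of forward stepwise entry described above. The variable names, the random data, and the printed increments are hypothetical stand-ins; the actual IPEDS/USNews dataset and the real increments listed above are not reproduced here.</p>

```python
# Greedy forward stepwise selection: at each step, add the predictor that
# raises R^2 the most, and report the incremental gain (cf. the list above).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def forward_stepwise(X, y, names):
    remaining = list(range(X.shape[1]))
    chosen = []
    last_r2 = 0.0
    while remaining:
        best_gain, best_j = 0.0, None
        for j in remaining:
            cols = chosen + [j]
            model = LinearRegression().fit(X[:, cols], y)
            gain = r2_score(y, model.predict(X[:, cols])) - last_r2
            if gain > best_gain:
                best_gain, best_j = gain, j
        if best_j is None:      # no remaining variable improves the fit
            break
        chosen.append(best_j)
        remaining.remove(best_j)
        last_r2 += best_gain
        print(f"+{best_gain:6.2%}  {names[best_j]:<22}  cumulative R^2 = {last_r2:.1%}")
    return chosen

# Hypothetical example with random data, purely to show the mechanics:
rng = np.random.default_rng(0)
names = ["grad_pct", "sat_math_75", "freshman_class_size", "endowment_per_fte_ln"]
X = rng.normal(size=(200, len(names)))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.3, size=200)  # toy "PA"
forward_stepwise(X, y, names)
```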
<p>A correction is needed here. This line of thinking is rampant among a FEW members of CC. Members who have tried for years to “explain” the unexplainable and “justify” the questionable via the extreme massage of selected and self-serving numbers. And all of that for the sole purpose of serving a very concrete and narrow agenda. Only people who are pleased by the pre-established outcome of the scientific “method” find it interesting. Others know better!</p>
<p>Oh what a surprise! Aren’t the criteria and the weights especially selected to yield a perfect or close to perfect correlation between the real and projected PA of those schools?</p>
<p>Interesting. How do the SAT scores of Berkeley represent BIG factors in its PA? Among the 50 highest-ranked schools (by USNews), how high is the PA and how high are the SAT scores for Berkeley? Oh yes, not all SAT components are relevant! Only the ones that “support” the cause.</p>
<p>^ Berkeley doesn’t superscore, xiggi. :D</p>
<p>^^–^^</p>
<p>That is indeed what we have been told, UCB! :D</p>
<p>On a more serious note, does any of this really, really make a difference? If (IMHO) Berkeley is the poster child for all that is wrong with the infamous Peer Assessment, the real question remains whether the school is ranked too highly. Does it really matter that survey respondents deviate from the instructions and reward Berkeley for its superior graduate schools? </p>
<p>The unfortunate part of all the discussions on the subject is that it’s easy to forget that one could have serious issues with the integrity of a process but remain happy with the outcome. </p>
<p>And, fwiw, I remain convinced that the public would be much better served by being able to read two rankings, with one that describes its subjective and non-mathematical methodology with a bit more honesty. I do not think that anyone would object to a line that says that the intangibles used to determine the PA include several elements that are NOT directly relevant to the experience of … undergraduates but directly relevant to the school’s overall reputation. </p>
<p>Oh well, this is an issue for the Class of 2014 to debate!</p>
<p>Actually XIGGI is the only one who seems to know better. A one man band.</p>
<p>Makes it easier not to blame my compadres, Barrons! And, speaking about knowing better, perhaps you and I should stick to debating admissions and yield rates in Madison. You never know, I could be lucky again. And if not, I’ll promise to join your drumline.</p>
<p>Of course, but the real issue IMO is that it is impossible to NOT reward every Uni with “superior graduate schools.” Even Stanford Junior University’s PA is based in part (or in whole?) on its grad programs, as is the PA for the rest of HYPS et al. Does anyone think that Caltech and MIT offer the same UNDERGRADUATE education as Harvey Mudd? Do the Nobelists at any of them even teach undergrads?</p>
<p>Of course showing how PA can be well predicted by certain variables does nothing to validate the PA.
An example is class size and the emphasis on research: big negatives in many folks’ evaluations of what a quality undergraduate education looks like, but of course necessary to USN&WR if the big state research universities aren’t to be drummed out of the top 50 and 100. I would nominate Rice as the poster child for the university PA shaft.
At least the “National University” PA bias toward size and graduate research doesn’t feel quite so personal as the biases on the LAC side. Regionalism, political orientation, and reverence for the faded remaining “Seven Sisters” kick in big time, along with that old bugaboo, size. Nominee for the poster child of the LAC PA shaft: Harvey Mudd College.</p>
<p>The fact that the PA can be predicted so well by hard data DOES validate the PA. If the experts doing the PA ratings were picking numbers out of a hat, then the statistics would show a big zero instead of near-perfect correlation. This is exactly what validity means: one phenomenon is related to another phenomenon as expected. The PA ratings are subjective but this shows that the experts doing the ratings have derived their subjective assessment from reality, not fantasy.</p>
<p>This model isn’t perfect but that doesn’t change the basic point that the PA is valid. If I were to add more of the right variables I could improve the model.</p>
<p>And don’t forget that pure measures of undergrad education quality such as graduation rate and SAT scores do a very good job of predicting PA by themselves. PA is almost entirely a reflection of undergrad education quality. Things like research expenditures contribute very little.</p>
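<p>To make concrete what “validated by correlation” means here: the sketch below correlates hypothetical predicted and actual PA scores and lists the residuals. All numbers are invented for illustration and are not taken from the model above.</p>

```python
# Validity check sketch: correlate model-predicted PA with actual survey PA,
# then inspect residuals. Every number below is hypothetical.
import numpy as np

schools      = ["School A", "School B", "School C", "School D", "School E"]
actual_pa    = np.array([4.7, 4.9, 3.8, 4.1, 3.5])   # made-up survey scores (1-5 scale)
predicted_pa = np.array([4.4, 4.8, 3.9, 4.0, 3.6])   # made-up model predictions

r = np.corrcoef(actual_pa, predicted_pa)[0, 1]
print(f"correlation r = {r:.3f}, variance explained R^2 = {r**2:.1%}")

# Residuals (actual minus predicted) flag schools the model under- or over-predicts;
# a large positive residual would be a Berkeley/Chicago-style outlier.
for name, resid in zip(schools, actual_pa - predicted_pa):
    print(f"{name}: residual {resid:+.2f}")
```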
<p>The fact that the PA mail-in survey has a response rate of 43% to 60% (a very good response rate for mail-in surveys) over a sufficiently large sample size AND a high correlation with hard data definitely helps validate the PA aspect of the rankings… </p>
<p>The only reason universities tout USNews in their publications and brochures is because it’s like the NRC peer-reviewed rankings, except that USNews is not 15 years old like the NRC rankings :-P</p>
<p>Everything in academia is peer reviewed these days. Committees of renowned physicians and scientists are charged with reviewing countless research grant proposals for project funding. Federal research grants are highly competitive peer-reviewed competitions for research money :)</p>
<p>It is one thing to repeat the same argument, and quite another to recognize that the entire explanation provided above does NOT work very well for the cases some of us have pointed out over and over. Your own examples include Berkeley, where the “main” ingredients of your formula (graduation rates and SAT) do not support your conclusions.
Of course, we also know that another example (always ignored, as we know) is the difference between schools such as Smith and Harvey Mudd. Actually, in THIS case, your SAT prediction could not be more wrong since Harvey Mudd shows both the highest SAT and one of the lowest PAs among highly selective LACs. </p>
<p>The fact that your correlation works in the case of Berkeley is more a testament to “picking the right” variables than a validation of the PA for THAT SCHOOL. Pretending that the model relies on the SAT scores to a great extent is belied by the fact that Berkeley does NOT have a very high SAT score … unless a subscore was weighed to achieve the anticipated result. </p>
<p>Oh well.</p>
<p>Xiggi,</p>
<p>"72.7% graduation percent
+5.97% SAT math 75th squared
+8.05% freshman class size
+2.87% SAT math 75th
+0.71% public or private
+0.61% endowment per fte natural log
+0.33% yield
+0.39% admit percent natural log
+0.46% admit percent
+0.40% research $ percent squared</p>
<p>I hope that adds to 92.5%. "</p>
<p>Xiggi, collegehelp used SAT MATH SCORES, not SAT total scores.</p>
<p>Students at Berkeley have pretty high math scores. </p>
<p>Plus, SAT math scores were not the only variable either.
So a lower SAT score can be made up by the other variables.</p>
<p>I don’t get the 72.7% graduation percent variable, but whatever.</p>
<p>I found your following statement fascinating.</p>
<p>“The fact that your correlation works in the case of Berkeley is more a testament of “picking the right” variables than establishing a validation of the PA for THAT SCHOOL.”</p>
<p>When ranking anything, or establishing a “validation”, picking the right variables to get the result you want is always important, isn’t it?</p>
<p>Even if I wanted to use objective data only to prove a point, I’m still making a subjective opinion anyway, aren’t I?</p>
<p>When you state you want the rankings of US NEWS to separate objective data from subjective data, you are making a subjective decision aren’t you?</p>
<p>I keep reading about percentages all the time as if they are the valid objective data.</p>
<p>So… is this objective data? If a school has more students with math SAT scores over 750 than any other school, is that data objective or subjective?</p>
<p>That data is objective, isn’t it? Now how I use the data to compare schools is subjective, isn’t it?</p>
<p>Same as percentage of students with SAT scores over 750 is objective, and how I interpret that data is subjective. </p>
<p>So, if I think over 10,000 students at a school with math SAT scores over 700 is a very positive thing for a school and students, and you don’t, I’m not wrong, am I? And you’re not wrong either, are you?</p>
<p>Makes a singular ranking that everybody can use look pretty silly, doesn’t it?</p>
<p>Dstark, I know you’re trying to help, but you should know I would have been perfectly happy to just glance at the list of criteria used by CH and not worry in the least about how he cooked his broth. Were we supposed to analyze the reasons behind the various percentages and weights? I don’t think so! </p>
<p>Where my problem started was with the statement, “You could start with PA or SAT, they are both big factors behind PA.” Even after discarding the nonsensical part, it remains that the statement is visibly meant to establish that the SAT is a big factor behind the PA. Well, this is obviously NOT true in the case of the “close to perfect correlation” of Berkeley. CH at best uses one small subscore of the SAT (the only one that fits his purpose) but adds a misleading explanation regarding the SAT being a LARGE part of the PA. </p>
<p>Pfft!</p>
<p>I believe there is a typo here. I presume it should read “You could start with GRADUATION RATE or SAT…”</p>
<p>As dstark pointed out, CH used “SAT MATH scores” … UCB’s math score range is 630-760.
The correlation is not “close to perfect” for UCB … the predicted PA is 4.4, compared to the actual 4.7.</p>
<p>Interesting debate, interesting results, and a tremendously valuable contribution by collegehelp. I’m not surprised there are a few (very few, it seems) outliers. One would expect that. But the existence of a few outliers does not in any way invalidate collegehelp’s findings; that’s just normal in any data set.</p>
<p>In this case, I think there’s a fairly obvious explanation for the two most prominent outliers, UC Berkeley and the University of Chicago—both with significantly higher actual PA than predicted by collegehelp’s model. The reason is that in evaluating their own schools and how they stack up against their peers, college and university administrators pretty much begin and end with the faculty. And no one in their right mind would dispute that UC Berkeley and the University of Chicago have outstanding faculties, easily among the top 10 (and in Berkeley’s case, probably much higher). </p>
<p>Why the narrow focus on faculty? Well, it’s what college provosts and presidents—in the vast majority of cases drawn from the ranks of working academics and deans—know best. It’s also perhaps the single factor most under their control. Yes, they can knock themselves out coming up with creative strategies to raise money and attract the best students, and some are better at these things than others. But bottom line, the core strength of any college or university is its faculty. A strong faculty will, if properly marketed, attract the best students; and a strong school as reflected in strong students and a strong faculty should, if properly marketed, attract money. A school with lots of money and strong students may, over time, be able to attract strong faculty, but that’s a somewhat riskier proposition because it requires academics (a risk-averse group, by and large) to take a flier on the prospect of an upward trajectory in institutional standing as measured by the yardstick that matters most to academics, namely prestige among their peers. </p>
<p>The dominant, tried-and-true approach, then, is to build faculty strength first, and use that strength to lure the money and the students necessary for an institution to move up into the top ranks. College and university presidents watch the movements of faculty like hawks. They know whose careers are on the upswing, who’s stagnating, who’s declining. They know who’s gone where, what kind of team that institution is putting together in that field, which institutions are gaining and which declining in what fields. (And this is not about the “strength of graduate programs,” by the way, except derivatively, insofar as the strength of graduate programs naturally tends to follow the strength of faculties). And they know where their own institution stacks up in the overall pecking order.</p>
<p>I take collegehelp’s data as broad confirmation of the wisdom of this strategy. “Build it (in this case, a great faculty) and they will come.” The money and the students will follow. There are a few outliers. It may well be the case that notwithstanding its superb faculty, UC Berkeley is so thinly resourced and so heavily tilted towards graduate education that on traditional measures of excellence as an undergraduate institution it falls short of institutions that are its peers (or less) in faculty quality. And it may be the case that due to peculiar features of geography and institutional culture the University of Chicago has never become as fashionable a place to go for an undergraduate degree as the overall strength of its faculty might suggest. But to my mind these are not damning indictments of the dominant “faculty-first” institution-building strategy, nor are they damning indictments of the validity of the PA score as a predictor of a particular institution’s place in the academic pecking order. Overall, the strategy seems to work; as collegehelp finds, in most cases the other pieces seem to fall into place. So UC Berkeley and the University of Chicago stand as cautionary tales at the margins: don’t rely exclusively on faculty quality, because there’s more to the story. But to those on CC who would entirely dismiss faculty quality as a factor to be considered in evaluating academic institutions, I would say simply that’s nonsense. You’re missing the central point of the academic enterprise, which is to build concentrations of academic excellence where breakthrough thinking and discovery can occur. From that, we all benefit. And towards that, over time, the smart money and the smart students will naturally gravitate in the majority of cases.</p>
<p>GoBlue81-
Yes, that was a typo in post 43. It should have read (as you said): “You could start with graduation rate or SAT.” Sorry about that. I was trying to make the point that the first variable you enter stepwise into multiple regression analysis accounts for the most PA variability and subsequent variables are used to account for whatever is unexplained by the previous variables. There is a “law of diminishing returns” that applies to the sequence in which variables are entered.</p>
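<p>A tiny synthetic illustration of that “law of diminishing returns”: when a second predictor is highly correlated with the first, it adds only a small increment of R^2 on top of it. The variables, coefficients, and data below are invented, not drawn from the actual model.</p>

```python
# Diminishing returns from correlated predictors: synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
grad_rate = rng.normal(size=300)
sat = 0.9 * grad_rate + 0.3 * rng.normal(size=300)             # deliberately correlated with grad_rate
pa = 0.7 * grad_rate + 0.2 * sat + 0.3 * rng.normal(size=300)  # toy "PA"

def r2(*predictors):
    X = np.column_stack(predictors)
    model = LinearRegression().fit(X, pa)
    return r2_score(pa, model.predict(X))

r2_grad = r2(grad_rate)
r2_both = r2(grad_rate, sat)
print(f"graduation rate alone: R^2 = {r2_grad:.1%}")
print(f"adding the correlated SAT variable: only +{r2_both - r2_grad:.1%} more (total {r2_both:.1%})")
```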
<p>bclintonk-
Faculty quality could very well be the missing piece of the puzzle. It could account for some of the missing 7.5%. Do you know if there is anything in IPEDS that might indicate faculty quality? Anything from any source that I could enter into my data for these 200 universities?</p>