<p>I can’t resist joining this conversation, especially since Xiggi has attacked Smith. Xiggi and I go way back in this debate over the validity of PA and the ranking of all-women colleges.</p>
<p>First, it cracks me up that PA, a factor always viewed as subjective, has been quantified by an algorithm. I share Hawkette’s concerns that it may not be predictive. If you can’t use it to differentiate among blind data sets, then it doesn’t work. Theoretically, if a school’s average GPA changes, its PA should remain stable, simply because the perception of a school doesn’t change quickly.</p>
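<p>(To make the “blind data sets” test concrete, here is a minimal sketch of one way to run it; the file pa_data.csv and the column names are hypothetical stand-ins, not CH’s actual procedure: fit the model on half the schools and score it on the unseen half.)</p>
<pre><code># Minimal sketch of the "blind data" test: fit a CH-style model on half
# the schools, then predict PA for the held-out half. The file
# pa_data.csv and all column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("pa_data.csv")
X = df[["sat_75th", "accept_rate", "grad_rate"]]
y = df["pa_score"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# If the out-of-sample R^2 collapses, the model describes the current
# ratings without actually predicting them.
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
</code></pre>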
<p>PA doesn’t have to be rooted in the quantitative for it to be valid, although obviously it needs a value if it is going to be a factor in rankings. I’ve argued this for years. The perception of an institution and its graduates by other institutions weighs heavily in graduate school admissions and in recruitment of top faculty. It often is self-perpetuating. For instance, when my daughter, who is a Smith undergraduate, talks to a Princeton professor and says she went to Smith, she gets his immediate attention. That attention is visible. Why? Well, not only does Smith have a long tradition of first-rate education, but Smith alumnae who have gone on to graduate school have proven themselves to be well-prepared. The Princeton professor may not personally know a Smith grad (although most seem to), but he knows that the students from Smith are known to have certain qualities. Or maybe he knows a colleague at Smith. PA attempts to measure end result – that is, the intellectual quality of its graduates and faculty. It attempts to measure what the admissions stats (SAT, selectivity) do not: the quality of the education received once the students arrive. It reflects what happens when a graduate admissions committee sees an application.</p>
<p>This is what PA is – not a mathematical formula. </p>
<p>As for Smith’s SAT scores, Xiggi knows that Smith is dedicated to bringing in diversity, particularly students from economically disadvantaged families. Of course the SAT scores are lower. Low-income students with good grades still tend to score lower on the SAT, and those scores offset the ones in the 700s. Also, despite high academic achievement (as measured by grades), girls have historically scored lower than boys on standardized tests like the SAT, so, if you compare an entire college of high-achieving women to a college of predominantly male high achievers (Harvey Mudd), you’ll get a substantial gap. I suspect that’s why CH’s formula seems to be more accurate when it gives an edge to all-female LACs; it is merely correcting for the difference in SAT scores.</p>
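<p>(A sketch of that “correcting for the SAT gap” idea, again with a hypothetical file and column names: add a women’s-college indicator to the regression and see whether its coefficient stays positive once SAT is controlled for.)</p>
<pre><code># Sketch: does a women's-college indicator absorb the SAT gap?
# pa_data.csv and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pa_data.csv")

# Model 1: PA from SAT alone.
m1 = smf.ols("pa_score ~ sat_75th", data=df).fit()

# Model 2: add an indicator (1 = all-women LAC). A positive, significant
# coefficient would suggest the SAT-only model underrates those schools.
m2 = smf.ols("pa_score ~ sat_75th + womens_college", data=df).fit()

print(m1.rsquared, m2.rsquared)
print(m2.params["womens_college"], m2.pvalues["womens_college"])
</code></pre>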
<p>If you’re looking for other correlates, I would think there might be some effect of lagged past rankings on the current ranking, a sort of “reputational hangover” effect. And the lag might not be limited to recent years, either: a lot of people probably form their impressions of colleges during their own college hunt.</p>
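<p>(One way to test the hangover idea, sketched with hypothetical data: regress this year’s PA on last year’s PA alongside the current statistics. If the lagged PA dominates, the scores are mostly echoing themselves.)</p>
<pre><code># Sketch of a "reputational hangover" test. pa_two_years.csv and its
# columns (pa_2008, pa_2007, sat_75th, accept_rate, grad_rate) are
# hypothetical stand-ins for two consecutive survey years.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pa_two_years.csv")

model = smf.ols(
    "pa_2008 ~ pa_2007 + sat_75th + accept_rate + grad_rate", data=df
).fit()

# A coefficient on pa_2007 near 1, with little weight on the current
# stats, would point to inertia rather than fresh evaluation.
print(model.params)
</code></pre>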
<p>Do PA scores really benefit from the inertia of past reputation? I want to point out that the data I used was all from 2007-2008. Current data explains the current PA rating. </p>
<p>Personally, I don’t think colleges can rest on their laurels for very long. There are too many people watching the data. Word gets out. Furthermore, I don’t think the parameters used to benchmark colleges change very quickly. They are embedded in a complex social/economic system.</p>
<p>Am I really attacking Smith here? Or am I “attacking” the model that seemingly cannot explain the differences between a school such as Smith and a school such as Harvey Mudd? It is wrong to believe that rejecting the validity of the “mathematical” model behind the PA amounts to attacking Berkeley or Smith.</p>
<p>Bingo! And that is why attempts to correlate the results of the PA to verifiable data are completely dependent on the chosen criteria and weights. The PA is PURELY subjective and represents the tabulated result of a rather simplistic and poorly formulated question. People who support the use of the PA love to mention the fact that experts are answering the question, but reject the voices of the many experts who have exposed the PA’s vacuity and have refused to complete the PA part of the survey. </p>
<p>MWFN, that would be fine, except for the fact that CH’s model is predominantly based on …</p>
<p>Yes, the PA is subjective, but subjective impressions don’t form in a vacuum. Ratings like PA are created from cumulative exposure to many facts and experiences in the real world. Our impressions form as a result of complex natural processes, not whim. The PA ratings come from people who live in the world of higher education and have many experiences on which to base their impressions. Just because people can’t put their finger on the exact reasons why they feel a certain way doesn’t mean their feelings are incorrect. On the contrary, sometimes such informed intuition is at the core of “wisdom” and “good judgement” and “common sense”.</p>
<p>My model proves that the PA ratings are not capricious.</p>
<p>CH,
It would be very interesting to see whether your analysis works with the same variables but for a different time period. The PA ranks have hardly budged in the last decade, and yet the underlying data has changed significantly for some schools. If your prediction program somehow implies that “voters” actually consider these factors in their votes, then wouldn’t that also have been the case in 1999 or 1989?</p>
<p>That would be interesting indeed. The IPEDS data on the web only goes back so far. Perhaps I could dig up an old US News Best Colleges and try using their data. I don’t have time to do that right now but I will keep it in mind. It is a good idea.</p>
<p>PA scores do really benefit from the inertia of past reputation. One of the simplest reasons is that the results of last year’s ranking seem to be the most visible guide for this year’s respondents!</p>
<p>If there were a way to analyze the correlation of the PA to the five prior years, and to do this on a rolling basis since it became a feature of USNews, it would show how much the PA is anchored in its past. However, such a scientific display is not even necessary … all that is needed is to place the PA of the past decades in a spreadsheet to realize how little the PA responds to changes in admissions, graduation, and selectivity.</p>
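<p>(The rolling comparison described here is straightforward to sketch; pa_history.csv, with one PA column per year, is a hypothetical layout.)</p>
<pre><code># Sketch: correlation of each year's PA with each of the five prior
# years, on a rolling basis. File layout is hypothetical: one row per
# school, columns pa_1998 ... pa_2008.
import pandas as pd

df = pd.read_csv("pa_history.csv", index_col="school")

for year in range(2003, 2009):
    for lag in range(1, 6):
        r = df[f"pa_{year}"].corr(df[f"pa_{year - lag}"])
        print(f"pa_{year} vs pa_{year - lag}: r = {r:.3f}")
</code></pre>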
<p>There is indeed plenty of data to watch, but in the case of the US News rankings, it takes quite a bit for changes in the data to provoke changes in the ranking. Look at the example of Chicago … they only jumped up when they realized they could report their data differently. Since the PA is at the center of this discussion, everyone who looks at the rankings, and especially at the changes, realizes how much of a playing-field “equalizer” it offers to a number of schools. In addition, schools that have experienced rapid increases in selectivity might not see great benefits, since the penalty in the “expected graduation” rate seems to negate higher admissions standards.</p>
<p>As far as watching the data, I maintain that if there is one set of numbers the public should be able to see, it is the entire USNews survey, including the individual answers to the PA and the names of the respondents.</p>
<p>Peer assessment is one of 18 indicators used by U.S. News to measure an institution’s composite weighted score and rank. The peer assessment indicator is based on interviews conducted by U.S. News with top academics (e.g. presidents, provosts, and deans of admission) in which respondents are asked to rate peer schools’ academic programs on a scale from one (marginal) to five (distinguished). Those who don’t know enough about a school to evaluate it fairly are asked to mark “don’t know.”</p>
<p>Xiggi, it’s ridiculous to make each person’s survey public.</p>
<p>Maybe we should make everyone’s votes public too.</p>
<p>PA is subjective.</p>
<p>If I state that SAT scores should be worth 25% of a school’s ranking…is that subjective?</p>
<p>It’s all subjective, Xiggi.</p>
<p>What SAT scores mean is subjective, Xiggi.</p>
<p>Deciding on what class sizes to use in rankings is subjective, Xiggi.</p>
<p>How financial resources are derived is subjective, Xiggi… at least the way USNWR does it. ;)</p>
<p>What is included in and what is not included in the rankings is subjective, Xiggi.</p>
<p>Another point is that the PA rating used by US News is an average of many individual ratings. There is wisdom in the collective judgement of experts. The average rating is likely to be a truer rating than any individual’s.</p>
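<p>(A toy illustration of that averaging argument, with made-up numbers: simulate noisy individual raters around a “true” quality and compare one rater’s error to the error of the mean.)</p>
<pre><code># Toy simulation: averaging many noisy expert ratings.
# The "true" quality of 3.8 and the noise level are made up.
import random

random.seed(0)
true_quality = 3.8
ratings = [true_quality + random.gauss(0, 0.5) for _ in range(100)]

one_rater_error = abs(ratings[0] - true_quality)
average_error = abs(sum(ratings) / len(ratings) - true_quality)

# The mean's error shrinks roughly with 1/sqrt(n) raters.
print(f"one rater off by {one_rater_error:.2f}")
print(f"average of 100 off by {average_error:.2f}")
</code></pre>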
<p>CH, the PA is indeed not capricious … in general. Analyze the PA of 50 schools and your results will indeed show a number of undeniable correlations. The problem is that this does not do anything to minimize the incongruity of the PA for a number of schools. And it so happens that such schools are the ones that benefit from the inclusion of … intangibles. </p>
<p>At the risk of repeating a point I have made often, I do believe that the PA has a place in the rankings. Actually, I believe it should represent THE predominant ranking, and that there should be a secondary ranking based on the data that makes up the other 75% of the current rankings.</p>
<p>However, for it to have more validity, the PA should be expanded to 10-12 specific questions. If the PA reflects the reputation of the graduate school, so be it! That is a very valid element, but make IT known. Right now, the PA is a variable that is NOT clearly defined … it is whatever someone wants it to be. Even USNews changes its “definition” from one year to another, or even between the printed and online versions.</p>
<p>In so many words, give US a PA that is clear, well-defined, and more complete. Make it public to eliminate attempts at cronyism and gamesmanship, and to have the respondents stand by their responses and demonstrate their integrity.</p>
<p>Dstark, may I ask if you understand MY issue with the PA, or its use in the rankings, or the rankings in general? Do you think I have an issue with the PA’s subjectivity … per se?</p>
<p>While I don’t agree that transparency will help the understanding of PA, you are correct in saying that the PA numbers will change according to the type and the specifics of the questions asked. That’s always the case.</p>
<p>I’m not a fan of the USNWR rankings because they make subjectively weighted data seem absolute. When I see students on CC saying that someone is silly for choosing, say, a number 12 college over a number 4, it is downright ridiculous. Numbers blind people to the differences among equally excellent colleges.</p>
<p>Momwaitingfornew wrote… “I’m not a fan of the USNWR rankings because they make subjectively weighted data seem absolute.”</p>
<p>That’s what I’m saying, Xiggi.</p>
<p>The rankings are bs, Xiggi.
I don’t appreciate your attacking UCB as if that is the one big mistake in the rankings.
The rankings are bs, Xiggi.</p>
<p>Xiggi: welcome to East Coast bias. In one of its earliest issues, US News rated several publics far higher than the blue bloods. The vast majority of magazine buyers live in the East. USNews makes money by selling magazines. The rating criteria changed. I’ll let you fill in the rest.</p>
<p>IMO: Just like 'SC gets all the (predominantly eastern and midwestern) sportswriters’ votes in football for the west, Cal-Berkeley takes the academic votes for the west (or shares them with a Junior University on a Farm). And, yes, the PA absolutely is based in part (or in whole) on the rep of grad schools – there is no way to separate them. Fortunately for Berkeley, it just happens to have its own place on the Periodic Table of Elements, which will be forever.</p>