The Wisdom of US News Peer Assessment Rating

<p>What collegehelp did isn’t too difficult to do. I have no idea why no one has done this before. Probably because no one cares enough to. The toughest part is collecting all of the data and putting it into the program.</p>

<p>All collegehelp has really shown is that colleges with high PA ratings also tend to be colleges with high SAT scores, good graduation numbers, strong endowments, etc. This basically means that the presidents (as a whole) aren’t pulling numbers out of their asses, and when a school has a high peer assessment rating, it tends to be a good school with strong students. If PA had weak correlations with all of these things and no good multiple regression could be made, then that would mean that the PA numbers were independent of things like SAT scores or graduation rates, and probably had no relation to objective data about how “good” a school is.</p>
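<p>For anyone who wants to try this at home, here is a minimal sketch in Python of the kind of multiple regression being described. This is not collegehelp’s actual SAS analysis; the file name and column names are hypothetical placeholders:</p>

<pre>
# Minimal sketch: regress peer assessment on objective school-level data.
# NOTE: "ipeds_sample.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ipeds_sample.csv")

# Assumed predictors: SAT scores, graduation rate, endowment per student
X = sm.add_constant(df[["sat_75th", "grad_rate", "endowment_per_fte"]])
y = df["peer_assessment"]  # the 1-5 PA score being predicted

fit = sm.OLS(y, X).fit()
print(fit.rsquared)  # a high R-squared means PA tracks the objective data
print(fit.params)    # how much each factor contributes to predicted PA
</pre>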

<p>bc,
Thank you for the effort you put into your last post. Let me be clear: I am not here to defend the USNWR methodology. I think that many of the flaws pointed out here by you and others (for both the objective data and the PA scoring) are mostly legitimate concerns. There’s no way to know how widespread this is, but I don’t want to go as far as casting aspersions on the motives/actions of various schools that might have outperformed. </p>

<p>The key point for me is that the four pillars of assessing a college’s undergraduate environment are still valid, regardless of a particular company’s ranking methodology. Personally I focus on these and the datapoints that underlie them (some of which are in USNWR, many of which are not). </p>

<p>My longtime suggestion to prospective students is to only consider those individual elements that are important to you and relevant to a school, e.g., why should anyone care about UC Davis’s Alumni Giving rate? That is why I created that series of threads on the various USNWR datapoints. Give the reader the information and let him/her decide its usefulness. I also believe that reliable information about how schools may be creating/calculating/presenting their data is good. More information and disclosure is definitely better than the alternative.</p>

<p>momwaiting,
The issue of “quality” and “type” of teaching is a tough one. I think you and I probably know it when we see it, and there are lots of data sources that provide some insight into this. If you go through enough of them, certain patterns repeat, and that is how I’ve developed my sense of which colleges are more committed to a great classroom experience for undergrads. The same criticisms of subjective judgments that many of us have with PA would apply here. However, at least in judging “teaching,” we’re not arguing about what is being graded.</p>

<p>Momwaitingfornew-
Yes, it is a pretty amazing result. It shows that the PA rating is valid to a degree that is rare in the social sciences. </p>

<p>Venkat89-
I actually did this a couple of years ago using US News data plus NRC ratings and obtained a very high R-squared. This time I used IPEDS data, with even stronger results.</p>

<p>All this talk about how defective the US News ratings are is way out of proportion to the actual problems. There is very little randomness in them; they seem highly systematic and orderly, with no evidence of a systemic problem or systematic bias.</p>

<p>“The key point for me is that the four pillars of assessing a college’s undergraduate environment are still valid, regardless of a particular company’s ranking methodology. Personally I focus on these and the datapoints that underlie them (some of which are in USNWR, many of which are not).”</p>

<p>Hawkette, if you think these 4 pillars are valid, they are valid (I’m not kidding).
However, as Bclintok points out so well in post #217, this doesn’t mean that USNWR’s rankings data or the rankings themselves are accurate. Just because you like small classes doesn’t mean the data in USNWR that you use are accurate. It doesn’t mean that one school is better than another based on USNWR’s information. I know you don’t really think that exact point, but your posts come across that way.</p>

<p>The other thing that USNWR cannot do, and no other publication can either, is show how individuals at the same school can have very different experiences. I can take Chem 1 from one professor at a school, another student can take the same course from another professor, and we can have two very different experiences. I also may prefer larger classes or more individual study; another person may like small classes or studying in small groups. One professor may do a better job teaching than another. There is no one-size-fits-all. Not even close. </p>

<p>At UC Berkeley, the students are not all having the same educational experience, nor do they want the same educational experience. So many variables. There are small classes there (70% are classes with 35 or fewer students, and yes, these numbers may not be accurate). There are honors classes at the school. There are research opportunities, seminars, and individual opportunities at the school. You’re limited by your own abilities, time, and individual preferences. USNWR does not capture this, and it can’t. The same with other schools.</p>


<p>I don’t believe your methodology is much different from what actually happens, except that university presidents, being firmly rooted in academia, know peer institutions much better than many people here on CC give them credit for. I’ve seen in-university reports that recognize which competitors are on a par, and which are above and therefore institutions to emulate. Non-academics may be surprised by how well upper administration knows other schools. CCers have no trouble imagining that a regional admissions officer knows the rigor and education level of specific high schools within his applicant pool, so why do people question whether college presidents have a similar knowledge of peer institutions? </p>


<p>I agree with your advice, although I do have an explanation for why the Alumni Giving rate may be important: alumni with positive undergraduate experiences are more likely to give back to the university. Those who felt ill-served or who were generally unhappy tend to graduate and run.</p>


<p>Dear MWFN, would you maintain the same opinion after reviewing the following results of a different PA model for LACs?</p>

<p>Please note that, to remain consistent with the current discussions about schools, I will point to a few schools in particular, especially since there were questions raised about the “discrepancies” in the PA between certain schools. </p>

<p>The variance between Mudd and Smith was more than a full point on a five-point scale.</p>

<p>Rank  School             Model  PA   Difference (PA − model)
4     Harvey Mudd        4.41   4.1  −0.31
12    Wellesley College  4.15   4.5  +0.35
42    Smith College      3.60   4.3  +0.70</p>

<p>And then look at Reed’s result (according to model) … 3.43! Doesn’t that make a few fans of Reed cringe? </p>


<p>Different models; different strokes!</p>

<p>xiggi: it’s a guy thing! :D</p>

<p>xiggi-
It doesn’t surprise me in the least that the two models have somewhat different results, because they used different input variables. The model presented in this thread is better at explaining PA. It uses input data from IPEDS; I believe the older model used data from US News. The current model predicted Reed’s PA exactly.</p>

<p>Smith and Wellesley are both Seven Sisters schools with a pedigree that easily predates the Claremont Colleges becoming significant players. That’s why they continue to enjoy a high PA. They were, and remain to many, the female version of the Ivy League. The number of famous alums from both schools easily supports their PA.</p>

<p>Reed is weird and does not play well with others.</p>

<p>Xiggi: Of course, there’s a difference between an algorithm that quantifies the current PA and one that changes it.</p>

<p>If Smith were ranked at #42 on someone’s list, so be it. It doesn’t change the education or the ability of alumnae to be accepted into grad programs.</p>

<p>collegehelp, it seems that the error for your PA predictions is about +/- 0.4. Do you think that is a bit high, given that PA is out of 5 points? How do the residuals look? (God, I’ve taken too much stat.)</p>

<p>collegehelp, how did your numbers from a few years ago look? Does PA correlate better with other data like SAT scores and graduation rate now than in the past?</p>


<p>Right now, Smith and Wellesley are the nets right below the Ivy League for many women. Because (1) it’s tougher than ever to get into an Ivy League school and (2) it’s even tougher for women to get into the Ivy League, many on those campuses had Ivy-comparable stats but were not accepted by the tippy-top schools. Some turned down Ivies because they either wanted a smaller campus or were offered merit awards. This creates a student body of high-achieving, ambitious women.</p>

<p>Venkat89-
Here is the link to the old thread.</p>

<p><a href="http://talk.collegeconfidential.com/college-search-selection/412606-how-calculate-universities-peer-assessment-score.html?highlight=peer">http://talk.collegeconfidential.com/college-search-selection/412606-how-calculate-universities-peer-assessment-score.html?highlight=peer</a></p>


<p>MWFN, I hope you realize that both algorithms were created by CollegeHelp; this means that it is the same “someone’s list.” </p>

<p>The older model used the following factors:</p>

<p>

</p>

<p>The latest one used </p>

<p>

</p>

<p>And as far as “It doesn’t change the education or the ability of alumnae to be accepted into grad programs” goes: by now, you should know that I could not agree with you more. And I believe that you know the basis for my opinion about Smith … I simply believe what the parents have been saying for years about their experience. </p>

<p>I maintain that the explanation for the PA of schools such as Smith has NOTHING to do with natural logs of the number of freshmen (?) and squares or cubes of SAT scores! It has everything to do with what someone just called a “predating” pedigree and a healthy dose of … intangibles. </p>

<p>Intangibles and data are strange bedfellows!</p>

<p>Xiggi, last try on this thread…</p>

<p>What is on the PA survey that you find objectionable?</p>

<p>Is the survey public anywhere?</p>

<p>I don’t think collegehelp was trying to show that PA has anything to do with the number of freshmen or the SAT score. He was just showing that you could predict how prestigious a school is based on SAT scores, graduation rate, endowment, and other objective factors. If you think about it: Harvard has the #1 PA. It also has one of the highest SAT scores, the highest endowment, one of the highest graduation rates, etc. </p>

<p>It is interesting that there is a significant difference in the PA scores of public vs. private schools. I’m not familiar with how SAS handles categorical variables. Does that mean an equivalent public school has a PA 1.7 lower than an equivalent private?</p>
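<p>For what it’s worth, the usual way this is handled in a regression (in any package, not just SAS) is a 0/1 “dummy” variable, whose coefficient is then the adjusted PA gap between otherwise-equivalent publics and privates. A rough sketch, with hypothetical file and column names:</p>

<pre>
# Sketch of dummy-coding a public/private flag in a PA regression.
# NOTE: the data file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ipeds_sample.csv")
df["public"] = (df["control"] == "public").astype(int)  # 1 = public, 0 = private

X = sm.add_constant(df[["sat_75th", "grad_rate", "public"]])
fit = sm.OLS(df["peer_assessment"], X).fit()

# The coefficient on "public" is the predicted PA difference for a public
# school versus a private school with the same values on the other inputs.
print(fit.params["public"])
</pre>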

<p>collegehelp, have you tried seeing how differences in class size might affect PA differently for publics and privates?</p>

<p>Haha - Dstark, you won’t let that one go! </p>

<p>Here are the Directions for Overall Ratings:</p>

<ol>
<li><p>Please rate the academic quality of undergraduate programs at the following schools in the liberal arts colleges category. These schools are primarily undergraduate colleges that award fifty percent or more of their baccalaureate degrees in the liberal arts.</p></li>
<li><p>Please review the entire list first, considering each program’s scholarship record, curriculum, and quality of faculty and graduates.</p></li>
<li><p>Using a black pen, rate each school with which you are familiar on a scale from marginal (1) to distinguished (5) by marking an “X” in the corresponding box. If you are not familiar with a school’s faculty, programs, and graduates, please mark “don’t know.”</p></li>
</ol>

<p>For instance, the lists of schools for a few states are (or were in 2006):</p>

<p>CALIFORNIA
California State University – Monterey Bay 032603
Claremont McKenna College 001170
Harvey Mudd College 001171
The National Hispanic University 025184
Mills College 001238
Occidental College 001249
Pitzer College 001172
Pomona College 001173 (35)
San Diego Christian College 012031
Scripps College 001174
Thomas Aquinas College 010448
University of Judaism 002741
Westmont College 001341
Whittier College 001342</p>

<p>MASSACHUSETTS
Amherst College 002115
College of the Holy Cross 002141
Gordon College 002153
Hampshire College 004661
Massachusetts College of Liberal Arts 002187
Mount Holyoke College 002192
Pine Manor College 002201
Smith College 002209
Wellesley College 002224
Wheaton College 002227
Williams College 002229</p>

<p>MISSISSIPPI
Millsaps College 002414
Tougaloo College 002439</p>

<p>Venkat89-
I have not tried to look at class size or public vs. private in this analysis, but it would be interesting.</p>

<p>The residuals were generally within .3 and a plot of residuals showed they were uniformly distributed across almost the entire range of PA except that the model tended to overestimate the PA a little in the lowest ranges of PA (the bottom 20 or 30 of the 200 schools in the sample).</p>
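<p>To make that check concrete, the residual plot described above can be produced with a few lines of Python. This is only a sketch assuming the same hypothetical data and predictors as the earlier regression example, not the actual analysis:</p>

<pre>
# Sketch of the residual check described above: plot residuals against
# predicted PA and look for a flat, even band; points drifting below zero
# at the low end would indicate overestimation there.
# NOTE: the data file and column names are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ipeds_sample.csv")
X = sm.add_constant(df[["sat_75th", "grad_rate", "endowment_per_fte"]])
fit = sm.OLS(df["peer_assessment"], X).fit()

plt.scatter(fit.fittedvalues, fit.resid)  # residual = actual - predicted
plt.axhline(0, color="gray")
plt.xlabel("Predicted peer assessment")
plt.ylabel("Residual")
plt.show()
</pre>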

<p>Xiggi, thanks.</p>

<p>Any more info you can share?</p>

<p>Are you aware of any place that this can be seen publicly?</p>

<p>The factors I used are highly intercorrelated. For example, SAT scores were highly correlated with graduation rates. I wondered if I could simplify the data by identifying categories of factors. I used a statistical technique called “principal components analysis with varimax rotation”. The results don’t give you a precise, definitive answer; you have to interpret them.</p>
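<p>For readers who want to see the mechanics, here is a rough sketch of that kind of analysis in Python: extract principal components from the standardized factors, then apply a varimax rotation to the loadings to make them easier to interpret. This is not the actual analysis; the data file, column names, and number of components are all assumptions:</p>

<pre>
# Sketch of principal components analysis with varimax rotation.
# NOTE: the data file, column names, and n_components are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.multivariate.factor_rotation import rotate_factors

df = pd.read_csv("ipeds_sample.csv")
cols = ["sat_75th", "grad_rate", "retention", "admit_rate",
        "endowment_per_fte", "peer_assessment", "n_freshmen"]

Z = StandardScaler().fit_transform(df[cols])  # standardize before PCA

pca = PCA(n_components=4).fit(Z)
loadings = pca.components_.T                  # rows = variables, cols = components

rotated, _ = rotate_factors(loadings, "varimax")

# High loadings show which variables cluster on which component; the
# components themselves still have to be named/interpreted by hand.
print(pd.DataFrame(rotated, index=cols).round(2))
</pre>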

<p>My interpretation is that you can boil all the data I used down into:</p>

<ol>
<li><p>academic quality which includes
SAT scores
graduation rate
retention after one year
admissions rate
endowment per FTE (shared)
peer assessment</p></li>
<li><p>size which includes
number of freshmen
number of bachelors degrees awarded</p></li>
<li><p>distinguished faculty which includes
National Academy of Science members
endowment per FTE (shared)</p></li>
<li><p>spending priorities which includes
research expenditures percent versus
instructional expenditures percent</p></li>
<li><p>yield</p></li>
<li><p>academic support expenditures percent</p></li>
<li><p>highest degree awarded</p></li>
</ol>

<p>What I am saying is that the above categories are relatively independent of one another but the individual factors are highly interrelated within each category.</p>