What does the US News Peer Assessment score really measure?

<p>Which universities do you think are underrated, Alexandre?</p>

<p>According to the Peer Assessment score, I think the majority of underrated universities are the small, non-research type schools like Georgetown, BC, Tufts, Rochester, Rice etc...</p>

<p>Alexandre:</p>

<p>That only makes sense. Virtually all college administrators are trained academics. The very nature of academia these days is so focused on research that it would be difficult not to weight that aspect heavily in the peer assessment scores.</p>

<p>Consumers of the USNEWS data can avoid a lot of pitfalls by not paying attention to small differences in ratings. Use the peer assessments as a "broad brush" -- a 4.5 is better than a 3.5, which is better than a 2.5. But ignore minuscule differences like a 3.5 versus a 3.6.</p>

<p>I agree. I always say that small differences in the Peer Assessment score are not worth noting. But what indicators do you suggest should be used to rate a university?</p>

<p>Alexandre:</p>

<p>Truthfully, I don't believe that any universal rating system is valid. To me, the whole game is to identify which (of many) characteristics of a school are important to an individual student and then find schools that match those characteristics.</p>

<p>For example, how can you "rank" Williams versus UMich? Both are as good as it gets, but the immense difference in size makes the undergrad experience so different that it would be difficult even to compare them. One student would justifiably rate Michigan much higher; the next would be equally justified in rating Williams much higher. In fact, if you love one, it would be almost unthinkable not to dislike the other. And both conclusions would be equally valid.</p>

<p>To me, the starting point is the A-level characteristics -- size is a good place to begin. Visit representative large schools, medium schools, and small schools. Visit an urban school, a suburban school, and a rural school.</p>

<p>I think that a parent's role here is to play devil's advocate to make sure that fair consideration of the strengths of each type of school leads to an informed preference. In my family, I was the devil's advocate for large state universities, both in making sure that we visited several really good ones and in highlighting the unique benefits of these schools -- breadth of resources, true diversity of interests in the student body, amazing campus and college-town "energy". Another family might be well-served by a devil's advocate for small "boutique" undergrad colleges, again just to promote informed choices.</p>

<p>Start to narrow the search accordingly as preferences emerge. Then, at least, you can start comparing apples to apples. </p>

<p>Statistics that I would then consider include, but are not limited to:</p>

<ul>
<li>Peer assessment ranking</li>
<li>Per-student endowment (or spending)</li>
<li>Median SATs</li>
<li>Diversity stats</li>
<li>% qualifying for financial aid</li>
<li>% varsity athletes</li>
<li>% fraternity/sorority membership</li>
<li>% earning PhDs</li>
<li>Binge drinking rate</li>
</ul>

<p>Some of these are "either/or" considerations depending on the priorities of the individual student. For example, a high percentage of frats could move a school up one applicant's list and down another's. Same thing with a high rate of PhD production -- a positive indicator of "college culture" for some applicants, negative for others (high geek index). Some of these metrics may not matter at all to a particular student; some of them may be very important. Diversity is an example of a metric that may be a mandatory feature for one applicant and totally irrelevant to another.</p>

<p>No single measure in isolation tells much of a story. But combinations of these statistical measures start to paint a picture of the priorities of the school, the type of students the school attracts, etc. A school with high diversity stats that produces a lot of PhDs, doesn't have a football team, and has a low binge drinking rate is going to have a very different "feel" than an identically "ranked" school that is 87% white, has 60% frat membership and only 30% receiving financial aid, and produces mostly lawyers and MBAs. One is not better than the other, but each would certainly be more or less comfortable for a particular student.</p>

<p>I think the biggest mistake college hunters here make is diving into relatively small details (like trying to rank undergrad Psych departments) before they step back and consider the "big picture" characteristics of very different types of schools.</p>

<p>At the end of the day, if I ended up with a dead-tie between two similar schools at the top of my list, I'd pick the one with the larger per student endowment (or per student spending). It is no coincidence that the wealthiest schools tend to be the most "prestigious".</p>

<p>But interesteddad, the only constants are the three I mentioned above...quality of academics (peer assessment score), resources (endowment, operating budget, per student spending, class size etc...) and quality of the student body (% graduating in the top 10% of their class, mean unweighted GPA, rigor of HS course selection, and mean SAT/ACT scores).</p>

<p>The other points you put forth should not be part of a ranking. Those points should be separate and up to the individual student to determine.</p>

<p>I agree that fit is very important. In fact, fit is the most important factor in choosing a school. But since fit is personal, it should be determined by the individual student, not by some ranking.</p>

<p>Alexandre:</p>

<p>I agree with your three categories, but unfortunately there aren't measures that are universally applicable.</p>

<p>You could come pretty close to an absolute ranking by looking purely at per undergrad spending. Even though consumers don't really recognize it, the reason they flock to the most "prestigious" schools is that these schools offer the biggest discounts and best value. For example, a very heavily endowed private school at the top of the US News charts may be spending $75k to $80k per student. So, even if I'm paying $40k, I'm getting a heck of a deal.</p>
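<p>To put numbers on that "discount," here is a minimal sketch of the arithmetic, using the rough figures from the paragraph above (the $78k midpoint and the $40k sticker price are illustrative, not audited data):</p>

```python
# Back-of-the-envelope "implicit subsidy" calculation.
# Figures are illustrative midpoints from the post, not audited data.
spending_per_student = 78_000  # midpoint of the $75k-$80k estimate
sticker_price = 40_000         # what a full-pay student actually pays

subsidy = spending_per_student - sticker_price
discount = subsidy / spending_per_student
print(f"implicit subsidy: ${subsidy:,} (~{discount:.0%} below true cost)")
# -> implicit subsidy: $38,000 (~49% below true cost)
```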

<p>The huge per student spending buys a lot of really attractive stuff (fancy science labs, diversity, great professors, etc.) and over time, it is that fancy stuff that has built prestige at the top schools. Great students flock to these schools adding additional value through strong peer effects and now you have the complete package.</p>

<p>If you don't care about fit, just go to the school with the highest per student endowment you can get into. Just follow the money.</p>

<p>Unfortunately, it's not quite that cut and dried. Per student spending for undergrads is usually impossible to determine for universities, because the operating budgets include all kinds of things that have nothing to do with undergrads or are only partially applicable to undergrads. For example, how do you isolate the spending for the Medical School? Or the research revenues/expenses? </p>

<p>So you often end up with schools that have the biggest endowments in the world, but can only afford to hire grad students instead of professors to lead discussion sections. The only conclusion to be drawn is that the money is not being proportionally spent on undergrads. What does a university-wide per student spending number really mean in that context?</p>

<p>*"Actually, I think that the Peer Assessment score should count for 50% of the total ranking, with 25% going to resources (class size, faculty availlability, spending and endowment/student etc...) and 25% to the quality of the student body." *</p>

<p>50%? Why stop there, let's push it all the way to ... 100%. This way, USNews could sell two distinct reports. The first one could rank all the schools according to the cherished PA. My recommendation for the report's name: the AFFA Report, standing for the Absolutely Fabricated and Full of Air Report. The second edition could contain the "other" common statistics that have been known to be somewhat less subject to the abject manipulation of USNews and their accomplices.</p>

<p>"People do not understand this, but the peer assessment score is actually quite accurate and very telling. I do believe that the peer assessment score should be more seriously regulated, but it is the best indicator of academic excellence.'</p>

<p>Alexandre, people DO understand that the PA is very telling. However, you find it quite accurate because it supports your own conclusions. The reality is that a variance of .1 in the peer assessment influences the ranking more than substantial changes in the remaining categories. A school could jump from a 30% admission rate to 80%, drop its average SAT from 1300 to 1000, and see no change in the rankings.</p>

<p>Other people who are less biased tend to pay attention to the frequent criticism of USNews. There have been sufficient reports about the cynical manipulation of the numbers by college officials and the absolute lack of integrity in the surveys. </p>

<p>The USNews rankings are a joke mainly BECAUSE of the Peer Assessment and, to a smaller extent, because of some faults in their methodology. The only value of the USNews report is that the compilation of the objective data is a great time saver.</p>

<p>Xiggi, like you I think the USNWR is a joke. I think the peer assessment score must be taken with a grain of salt. It is not an accurate measure, to say the least. But at least it measures something meaningful...the opinion of academe. That's important because anybody who wishes to apply to graduate school is going to be judged in part by which undergraduate school they attended. Is it any surprise that 40%-80% of graduate students at elite graduate programs come from the top 1% of universities? But people are too bogged down with the little details. Is there a difference between #1 and #30? Is Harvard truly far superior to Wake Forest or William and Mary? Is Amherst truly far superior to Reed?</p>

<p>Little differences should be ignored. That is why I like Fiske. He lumps roughly comparable universities into large groups.</p>

<p>Alexandre, I agree with your points. </p>

<p>I do think that the peer assessment COULD have its place in the report. For instance, the sections that rank the best business schools for particular majors are relevant. The result is a small ranking generated by polling business schools. From those rankings, a student can gather that Babson's reputation stems from a leading program in entrepreneurship, that Texas and UIUC lead in accounting, etc.</p>

<p>I also would not have a problem if USNews were to print a "ranking" based solely on peer assessment. My problem is that they mix apples and oranges, since they use -- predominantly -- a highly subjective criterion to establish a ranking that is supposed to follow a solid scientific methodology.</p>

<p>I think that USNews would show a lot more integrity by telling unsuspecting students that they can and do manipulate the rankings by changing the methodology and weighting the elements to follow their yearly desires. If USNews wants Princeton to be first in 2004 and third in 2006, they can accomplish that easily. If USNews insists on maintaining Wellesley in a position just behind AWS, they simply have to massage the criteria a bit to bury Wellesley's much lower selectivity numbers. If USNews wants to "punish" a school with rising selectivity, higher SAT scores, and grade deflation, how hard is it to play games with a criterion such as the expected graduation rate?</p>

<p>The sad reality is that there are MANY students who decide on their final application lists by simply taking the top schools "ranked" by USNews.</p>

<p>What does the US News Peer Assessment score really measure? It seems to primarily measure selectivity (and therefore all the things that determine selectivity). I picked 25 National Universities more or less at random, a few from each tier, that ranged in peer assessment score from 4.9 to 1.8. The correlation between peer assessment and 75th percentile SAT was nearly perfect (.95). Since r-squared is .95² ≈ .90, you could say that 90% of the variation in peer assessment is tied to things related to selectivity.</p>
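<p>For anyone who wants to check that arithmetic, here is a minimal sketch of the calculation; the numbers below are placeholders, since the actual 25-school sample isn't listed in the post:</p>

```python
# Pearson correlation between peer assessment and SAT, then r-squared.
# The data are hypothetical placeholders, not the actual 25-school sample.
from statistics import correlation  # Pearson's r; requires Python 3.10+

peer_score = [4.9, 4.5, 3.9, 3.2, 2.6, 1.8]        # hypothetical PA scores
sat_75th   = [1580, 1510, 1400, 1290, 1160, 1030]  # hypothetical 75th-pct SATs

r = correlation(peer_score, sat_75th)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")  # r^2 is the ~90% shared-variance figure
```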

<p>Correlation doesn't mean causation.</p>

<p>Or maybe the best students are attracted to the best faculty.</p>

<p>"What does the US News Peer Assessment score really measure? It seems to primarily measure selectivity (and therefore all the things that determine selectivity). I picked 25 National Universities more or less at random, a few from each tier, that ranged in peer assessment score from 4.9 to 1.8. The correlation between peer assessment and 75th percentile SAT was nearly perfect (.95). You could say that 90% of peer assessment is based on things related to selectivity."</p>

<p>Try that on the LAC rankings, and let us know how the correlation works for the top 15. Then, for fun, plug the Harvey Mudd and Wellesley numbers for selectivity into your model. So, do you still believe that 90% of the PA relates to selectivity?</p>

<p>Also, the problem is not between a 4.9 and a 1.8, but between a 4.9 and a 4.0. That is where the games are played.</p>

<p>I tried the top 15 and got a moderate correlation of .57. However, it was still statistically significant. I want to make the following points, however: (1) 15 is a small number for calculating a correlation (a single pair of numbers is influential), (2) there are conceptual and statistical problems when you truncate the true range of numbers, and (3) a low correlation can artificially result from limited variability (8 of the 15 peer scores fall in the narrow range of 4.1 to 4.3). What's more, Smith College is in the top 15. Smith is an anomaly, with a high peer assessment (4.3) but a relatively low SAT (1370). When you remove Smith from the top 15, the correlation increases to .75 (very high). I plugged the entire top 50 LACs into my software and came up with a correlation of .82 (very high). Then, if you add a few (more or less random) colleges from the 2nd, 3rd, and 4th tiers with peer scores down to 1.8, the correlation increases to .93 (nearly perfect). Peer Assessment measures characteristics related to selectivity among LACs, too.</p>
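<p>That leave-one-out check is easy to reproduce. A minimal sketch follows; the numbers are placeholders (only the Smith-like 4.3/1370 pair comes from the post), but it shows how much a single school can move a small-sample correlation:</p>

```python
# Leave-one-out sensitivity of a small-sample Pearson correlation.
# Placeholder (peer score, 75th-pct SAT) pairs; only the "Smith-like"
# outlier (4.3, 1370) is taken from the post itself.
from statistics import correlation  # requires Python 3.10+

schools = {
    "A": (4.9, 1560), "B": (4.8, 1550), "C": (4.6, 1540),
    "D": (4.3, 1510), "Smith-like": (4.3, 1370),
    "E": (4.2, 1500), "F": (4.1, 1480), "G": (4.1, 1470),
}

peer = [p for p, _ in schools.values()]
sat = [s for _, s in schools.values()]
print(f"all schools: r = {correlation(peer, sat):.2f}")

# Drop each school in turn; with n this small, one point can swing r a lot.
for name in schools:
    rest = [v for k, v in schools.items() if k != name]
    r = correlation([p for p, _ in rest], [s for _, s in rest])
    print(f"without {name}: r = {r:.2f}")
```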

<p>Women's colleges should be eliminated from the analysis, as they are not typical.</p>

<p>Well, of the 217 liberal arts colleges, women's colleges comprise about 60 (Bryn Mawr is included even though it grants PhDs, and that's a whole other issue: what is a LAC?). So 60/217 is more than a fourth -- not that atypical, is it?</p>

<p>And that is exactly my point:</p>

<ol>
<li><p>Even in the presence of a valid argument to consider some schools atypical, US News does not make any adjustments for non-coed schools. For what it is worth, I find the argument hollow, especially since girls earn better grades in HS (and so must have higher class ranks) and the differences in standardized test scores are trivial.</p></li>
<li><p>For Wellesley and Smith, there is little correlation between selectivity and peer assessment. However, if there is a near-perfect correlation when using a larger pool, the only conclusion is that the peer assessment for the named schools is atypical. It is up to the reader to find the most plausible cause for this "atypical" score. As far as I am concerned, it is blatantly obvious, as Barney's song just started to play in my mind.</p></li>
</ol>

<p>" I love you, you love me
We're a happy family"</p>

<p>[quote]For Wellesley and Smith, there is little correlation between selectivity and peer assessment.[/quote]</p>

<p>There is if you accept the fact that the peer assessment is a lagging indicator and represents the collective view of the academic community 30 years ago.</p>

<p>The declining selectivity of the women's colleges has nothing to do with a decline in the quality of their product. Rather, they had their market pulled out from under them when every "good ol' boy" school in the country went coed. The women flocked to Harvard, Williams, and Claremont McKenna, and the women's colleges were left looking for customers. They were no longer the beneficiary of an artificial market. Fewer applicants = declining selectivity.</p>

<p>The declining selectivity will eventually reduce their peer assessment scores, but not for a while because it is such a lagging indicator.</p>

<p>For now, the women's colleges are probably the best "admissions values" around. Terrific schools that are relatively easy to get into.</p>

<p>I believe that with its ritzy Boston-suburb location and billion-dollar endowment, Wellesley would be the most selective LAC in the country by a mile IF it had successfully gone co-ed. Of course, that's a hypothetical. I can't think of a women's college that has successfully gone co-ed. Vassar probably comes the closest.</p>

<p>On another note: I think USNEWS woefully understates the selectivity and ranking of Harvey Mudd. They should forget the acceptance rates and just rank selectivity by SAT scores. It would provide a more accurate snapshot of how hard schools are to get into.</p>
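<p>Mechanically, that alternative ranking is just a sort on a different key. A sketch, with made-up school names and numbers purely for illustration:</p>

```python
# Rank "selectivity" by 75th-percentile SAT instead of acceptance rate.
# School names and figures are made up for illustration.
schools = {
    "School A": {"sat_75th": 1560, "accept_rate": 0.28},
    "School B": {"sat_75th": 1520, "accept_rate": 0.09},
    "School C": {"sat_75th": 1480, "accept_rate": 0.35},
}

# Sorting by SAT ignores how many applicants a school happens to attract.
by_sat = sorted(schools, key=lambda name: schools[name]["sat_75th"], reverse=True)
for rank, name in enumerate(by_sat, start=1):
    print(rank, name, schools[name]["sat_75th"])
```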

<p>Different schools use different measures to accept students. Some schools disregard standardized test scores or put them on the back burner; others use them as one of the main two or three reasons to accept or reject an applicant.</p>