<p>As a purely broad statement about the USNWR list, I think if the college you like is either already rated high or is moving up in the rankings, then the metrics are fine. If, however, the college you like is either already lower in the rankings or dropping, then the metrics suck. I have seen so many people brag about how high their college is ranked until it begins to drop. Once that happens, the same people will talk about how you can’t trust the rankings. </p>
<p>The one thing that I do agree with is that the Peer Assessment needs to be changed. The fact that 25% of the ranking is based upon a measure that most people agree is not the most scientific, and can be easily manipulated, makes the PA dubious at best.</p>
<p>^^ Only if a large percentage of them aren’t full time and don’t have PhDs (some former govt higher-ups do). Even then these two factors together only count for 4%. The PA difference matters more to the ranking. I doubt it has much to do with anti-Catholic prejudice. Berkeley is a major research university in 20 or 30 fields with lots of discoveries, patents, inventions, publications, and prizes to its name. That must carry some weight in the Peer Assessments.</p>
<p>“The one thing that I do agree with is that the Peer Assessment needs to be changed. The fact that 25% of the ranking is based upon a measure that most people agree is not the most scientific, and can be easily manipulated, makes the PA dubious at best.”</p>
<p>You just assume that the so-called objective numbers are accurate as well. Those figures are easier to manipulate than the PA score.</p>
<p>Easier to manipulate than the PA score? As in e-a-s-i-e-r? You have got to be kidding! While statistics MIGHT belie the truth, there is never any doubt that pure fabrications ALWAYS do! </p>
<p>Even if the numerical data were easier to manipulate, traces of the manipulations cannot be erased forever. It is hard to report fictitious data year after year, as well as to submit different data to different recipients, especially when some of the recipients are government agencies.</p>
<p>On the other hand, except for the public embarrassment that happens when one of these august officials gets caught red-handed, there are no repercussions for the outright lies and the incredibly misleading and ignorant replies that adorn the PA surveys. </p>
<p>For some reason, I think you must believe that the Clemson official who filled out the PA survey in such a nonsensical manner (along with the many others who were also exposed) was … actually correct!</p>
<p>The PA scores from a single individual might be skewed, but the PA scores reported by US News represent the averaged collective professional judgments of many individuals. </p>
<p>PA scores seem to capture just about everything that is important about a college. </p>
<p>PA scores are almost completely predictable by hard data. If PA scores were random, they would not be so predictable.</p>
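<p>(For what it’s worth, here is a minimal sketch of how one might test that predictability claim; the data file and column names are hypothetical placeholders, not anything US News publishes in this form.)</p>
<pre>
# Sketch: how predictable are PA scores from the "hard" inputs?
# The CSV path and column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("usnews_hypothetical.csv")
X = df[["sat_75th", "grad_rate_6yr", "spending_per_student"]]   # hard data
y = df["peer_assessment"]                                       # PA score on the 1-5 scale

model = LinearRegression().fit(X, y)
print(f"R^2 of hard data vs. PA: {model.score(X, y):.2f}")
# A high R^2 would support the "not random" argument; schools with large
# residuals would be the outliers that cut against it.
</pre>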
<p>^^^ Well said, xiggi. The PA comments are nothing but bloviating baloney, designed to help their friends at the top of the heap and to keep down the schools that seek to be included in the elite ranks. It’s beyond sickening. But we also all know that academics are some of the most skilled at manipulating the truth about everything, and how political they are with each other when seeking tenure, etc. Academia is anything but fair and balanced.</p>
<p>"Does Georgetown not have any professors in other disciplines? Are their math professors ex-CIA as well? " This Could Be Heaven.</p>
<p>Well, Georgetown, being where it is, has always employed many high-level government officials from State, the CIA, DOD, and the WH. It’s the location, not anything else. </p>
<p>And Georgetown is also the favorite college for children of government officials. Some people are attracted to that environment and some are put off by it.</p>
<p>The PA score is easy for individuals to manipulate but there are over 1000 respondents. So it would have to be quite an elaborate conspiracy to move enough of them in unison to significantly affect the ranking of a school. However, I think a bigger problem is the potential for mass “halo effect” psychology even without deliberate manipulation.</p>
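<p>(A quick back-of-the-envelope sketch of that dilution effect; the respondent count and ratings below are assumptions, since the exact figures aren’t published.)</p>
<pre>
# Sketch: how far one bad-faith response can move a school's PA average,
# assuming roughly 1,000 respondents rating on the 1-5 scale (illustrative numbers).
respondents = 1000
consensus_rating = 3.5     # hypothetical honest average for the school
sabotage_rating = 1.0      # one rival official rates the school as low as possible

skewed_avg = (consensus_rating * (respondents - 1) + sabotage_rating) / respondents
print(f"shift caused by one response: {consensus_rating - skewed_avg:.4f}")
# ~0.0025 on a 5-point scale, far below the one-decimal precision of published
# PA scores, which is why moving a score takes many respondents acting in unison.
</pre>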
<p>Your analysis is wrong. It is certainly easy for college officials to falsify their own PA responses, but having any real impact requires collusion with other schools. The other data, by contrast, remain within the college’s own control at all times.</p>
<p>I don’t think the implication was that the data would actually be falsified. Rather, a college can adopt policies that move those figures without producing any tangible impact.</p>
<p>That isn’t to say that the PA system is good; I am not a big fan. However, it does seem to serve its intended purpose fairly well.</p>
<p>USNWR has totally changed the college admissions system. Colleges now admit students in order to increase their rankings. With higher admission rates for ED students, they increase their yield because more of the accepted students attend the school. They encourage unqualified students to apply by saying “There’s no real formula to get into X school!” so that dummies will apply and can easily be rejected. In the faculty count, they include TAs and part-time faculty, so what good is it to have an 8:1 student/faculty ratio if half of your teachers are incompetent or never there?</p>
<p>Noimagination, I believe this is the second time in a row that you’re missing my point entirely. Reading what I actually quoted in my post might help next time!</p>
<p>Our Michigan fanboy wrote that it is EASIER to manipulate the so-called objective data than to manipulate the PA. Whether that manipulation is effective or not is NOT relevant here, nor is the need for collusion … only that the statement made by rjkofnovi is impossible to substantiate. That was my point.</p>
<p>“However, it does seem to serve its intended purpose fairly well.”</p>
<p>No doubt that it works as intended. Morse and his acolytes make no secret of the intent of the PA and the reason why a 25% weight is needed. It is called leveling the playing field for … intangibles. Of course, someone has some circular data to prove that the PA can be correlated to the hard data as well. All is well, we shall assume! Until one starts pointing at the outliers that defy the beautiful world of averages!</p>
<p>I think you have to consider some sort of efficacy in order to make claims on this topic. The question is whether they can manipulate (not merely try to manipulate) the PA score more easily than the other data.</p>
<p>I do agree that none of us are really able to decide which metric can be most easily altered.</p>
<p>As for the first instance (I assume you mean my post #23), my point was that your position was not consistent with hawkette’s. Since the post you quoted was entirely directed at hawkette’s argument, it struck me as an important distinction. Hope that clears things up.</p>
<p>Endowment is more a RESULT of academic quality than a CAUSE of academic quality.</p>
<p>There is a lot of redundancy among the US News factors. As I recall, selectivity is the single most important factor contributing to reputation.</p>
<p>And reputation (read: PA) is BY FAR the single most important factor contributing to the final US News ranking. As for the correlation between selectivity and reputation, it is far from universal, as simple examples such as West Coast LACs versus non-coed LACs clearly underscore. To be precise, are Wellesley or Smith more selective than Pomona or Harvey Mudd? Hardly!</p>
<p>I did write “yeah,” didn’t I? Doesn’t that indicate agreement?</p>
<p>If the 6-year graduation rate accounts for 21% of the ranking, I honestly don’t give a **** how many categories it’s broken down into. I know the breakdown makes for a neat rhetorical effect for you, oh holy one, but it really makes no difference.</p>
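<p>(For what it’s worth, the arithmetic backs that up; the way the 21% is split and the school’s score below are purely illustrative.)</p>
<pre>
# Sketch: splitting a weight into sub-categories does not change its total pull
# on the composite score, as long as the sub-weights sum to the original.
weight_single = 0.21                 # one "graduation rate" bucket
weights_split = [0.16, 0.05]         # the same 21% broken into two sub-categories
school_score = 0.85                  # hypothetical normalized graduation-rate score

print(f"single bucket: {weight_single * school_score:.4f}")
print(f"split buckets: {sum(w * school_score for w in weights_split):.4f}")
# both print 0.1785, so the breakdown is cosmetic as far as total weight goes
</pre>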