USNWR Circular Ratings

<p>You should all know that the prestige schools have a vested interest in retaining their spots, and they have NO intention of losing them. Thus, they will continue to downgrade other schools and give themselves the big thumbs up. Is there any collusion going on? Hmmm…not sure about that…but would it surprise me if we found evidence of it? No.</p>

<p>Rankings are for the rankings-obsessed and for 18-year-olds who want a simple and easy way to earn “prestige points” among their friends and neighbors.</p>

<p>In a perfect mathematical world of equal rights for all colleges, perhaps, once you were determined to be college eligible by some method, everyone would be put into a giant computer lottery and “assigned a school.” Harvard would then have the same number of kids with an 1100 SAT score as with a 1600, and the same percentages as any other school, thus spreading the smart kids all across the nation and mixing in the middle kids and the marginal kids. The single biggest reason that kids don’t apply to lower-tier schools is that they don’t want to be in a classroom full of kids whom they perceive to be less intelligent and less prestigious. If you fix that mathematically, the problem goes away. Call it an “SAT-socialist computer model.”</p>

<p>Of course, that will never happen (nor am I suggesting it should; I am very UN-socialistic).</p>

<p>On the other hand, I am a big proponent of supporting second- and third-tier schools, because each school has a mission statement and serves its community and its graduates very well, and thus society as a whole. A degree from a low third-tier school is better than no degree at all and a lifetime of low-wage jobs.</p>

<p>PA measures a survey respondent’s impression of a university’s or college’s strength going back at LEAST 50 years.</p>

<p>Think of the person filling out the PA survey. Let’s say it is a 55-year-old in the Chancellor/President’s office. This person likely first heard about schools in 9th grade, 41 years prior to filling out the survey. They developed first impressions about which schools were at the tippy top, top, middle/top, middle, below middle, etc. They then applied to schools in 11th grade, 39 years ago, refining their perception of the pecking order. Then they’re in college, do well, and apply to graduate schools; again, they revisit the pecking order and firm up their own opinions about it. In grad school, they compare notes with other grad students about their college experiences. They graduate, go on to apply for teaching positions at schools all over the country, and further refine their perceptions and opinions of schools nationwide. Then they move into university administration, first in their department, then for the university. They meet colleagues from all over the country, and refine their opinions and perceptions further still.</p>

<p>So, the survey respondent, in filling out their scores, is drawing upon personal perceptions from 41 years ago (the most powerful – first impressions), then 39 years ago, then 35 years ago, and so on. This aggregation of impressions about rank order is then influenced by published rankings over the past 20 or so years.</p>

<p>Thus, the PA is really a rolling rank order built on perceptions of the respondent that are, at minimum, 41 years in the making.</p>

<p>There are really only a few universities that have moved significantly in PA and USNWR rankings over the past 30 years. The two that immediately come to mind are WashU and USC, and possibly Vanderbilt as well. These schools do in fact prove that it is possible, however unlikely, for schools to move significantly in the rankings over a 25-year period.</p>


<p>Occam’s razor suggests no…</p>


<p>Not necessarily, if you take into account opportunity costs…</p>

<p>It may not be worthwhile for one to give up four or more years in the workforce to get a degree which may only marginally enhance one’s earning potential, if that. This is especially true for certain majors.</p>

<p>“Certainly you would expect PA to remain relatively constant. Why would you expect it to change much? It’s not as though colleges change dramatically year to year.”</p>

<p>That’s bad logic, because it doesn’t account for the possibility of a college actually improving in a short span of time and having that improvement reflected in a rising PA score.</p>

<p>“Occam’s razor suggests no…”</p>

<p>OOOh, I loved Jodie Foster in “Contact” too.</p>

<p>Colleges have hundreds, if not thousands, of faculty; relatively fixed buildings and other physical assets (libraries, labs, etc.); and pretty fixed student bodies in the short term, so there is little reason for their rankings to change significantly over even a few years. Even high faculty turnover might be 5-10% in any year, with most colleges far below that. Colleges just don’t change overnight, even though some may seem to think they do. The facts don’t support that.</p>

<p>barrons, what does the above have to do with the research findings? It’s a question, not a put-down! :)</p>

<p>I think the researcher mentioned in post #1 misinterpreted his findings. He drew the wrong conclusions from his observations.</p>

<p>Which conclusions do you mean? A correlation is not a conclusion.</p>

<p>I am referring to his conclusion that peer assessment ratings only measure past rankings and not other independent factors. This is clearly untrue.</p>

<p>That’s not their conclusion; they found one correlation, and not others. You’re suggesting that their arithmetic is wrong. It’s possible, but I think unlikely.</p>

<p>“If you want to know what your income will be you can get a pretty good idea from the last year’s income. Shocking.”</p>

<p>Horrible Analogy</p>

<p>Barrons, I’m disappointed. You usually have better arguments than this. Salary is based on quantifiable data. Peer assessment is based on opinion at best, and published work has shown that it is really based on bias.</p>

<p>Long ago a pecking order was established in the academic world. Back then the “insiders” had all of the information (even if all of the information was not widely spread within their own world) and the opinions of the public could be determined by what the academics told them. </p>

<p>Today the internet has changed all of that, with huge access to all kinds of data points and a much more involved universe of people (like us! :) ) evaluating the data and drawing conclusions. And it’s not just the greater scrutiny that is brought to the process, but also different values from many in academia, who traditionally have worshiped at the research altar and given short shrift to the average undergraduate student.</p>

<p>When the establishment ruled American society and women, Jews, and minorities were hard to find at many top colleges, the PA might once have been an accurate reflection of prestige within academia. It was the classic Old Boy network, completely impervious to outside opinion and influence. But today it is a relic, and the product of a process that is unquestionably corrupt.</p>


<p>No. The research DID NOT find this at all. Do you know what the word correlation means? </p>

<p>The research found that the HIGHEST correlation was with last year’s score. Guess what? There are pretty high correlations with lots of things. In fact, quite a few studies have suggested the PA be dropped because it can be almost completely predicted from other measures in the USNWR survey.</p>

<p>“Highest” correlation might be correct, but the authors’ wording leaves open the possibility that they found no or negative correlation with the other factors (correlation is between -1 and 1, and can be zero, though exactly zero is unlikely).</p>
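<p>For reference (this is the standard definition, not something quoted from the article), the Pearson correlation between two series \(x\) and \(y\) is</p>

\[
r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}
\]

<p>which by construction always lies in \([-1, 1]\), with 0 meaning no linear relationship.</p>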

<p>Your poor reading. A change in the non-PA factors doesn’t strongly affect PA, which, as I’ve already mentioned, is exactly what you’d expect: PA is necessarily measuring something that is both more static and based on factors that have nothing to do with student-to-faculty ratio or improving selectivity. However, using those other numbers you can still pretty accurately predict PA values. I could prove that with Excel in three minutes, and it’s been done a thousand times before.</p>
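<p>Here is a minimal sketch of that prediction exercise (in Python rather than Excel, and with entirely invented data, since the actual USNWR inputs aren’t reproduced in this thread):</p>

<pre>
# A rough illustration, not the study's actual method: fit ordinary
# least squares on made-up "USNWR-style" factors and check how well
# they predict a simulated PA score.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of schools

# Invented stand-ins for non-PA factors: selectivity, student-faculty
# ratio, spending per student (all standardized, all fake).
X = rng.normal(size=(n, 3))

# Simulate PA as mostly driven by those factors plus a little noise.
pa = X @ np.array([0.6, -0.3, 0.4]) + rng.normal(scale=0.2, size=n)

# Fit OLS with an intercept column, then compare predicted vs. "actual" PA.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, pa, rcond=None)
pred = A @ coefs

r = np.corrcoef(pred, pa)[0, 1]
print(f"R^2 between predicted and actual PA: {r**2:.3f}")
</pre>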

<p>Regardless, people who think this is news, or even damaging, are ignoring the expressed purpose of PA, and ignoring the fact that it’s actually a good metric precisely when it changes independently of the other features, because that means something unique is being captured.</p>

<p>In other news, if you have eaten a hamburger in the past week, you’re more likely to eat one this week. Researchers and CC posters were shocked to learn that eating cereal and drinking iced tea had no effect on hamburger-eating behavior.</p>


<p>Agreed…</p>

<p>It seems as if most of the posters in this thread have either not read the original article or not understood it.</p>

<p>^ Agreed. From the article, for those not clicking the link:</p>

<p>The point of the income analogy, which you totally missed, is that income will not change much year over year, and neither should PA scores. You have nearly the same college one year later as the prior year. No college brings in 25% great new profs, or a significantly better library, campus, or student body, year to year. All change very gradually over time, as should a PA score. You can look to measurables too: total faculty awards, research awards, NAS members, student awards, etc., and there is little change in, say, a rolling 5-year average. Change in colleges is near glacial.</p>
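<p>To make the rolling-average point concrete, here is a tiny sketch with invented numbers (the award counts below are purely hypothetical):</p>

<pre>
# A toy rolling 5-year average of one "measurable" (invented data).
import numpy as np

# Hypothetical yearly faculty-award counts for one school over 15 years.
awards = np.array([12, 14, 11, 13, 15, 14, 13, 16, 15, 14, 17, 15, 16, 18, 17])

window = 5
rolling = np.convolve(awards, np.ones(window) / window, mode="valid")
print(np.round(rolling, 2))  # moves slowly year to year: "near glacial"
</pre>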