<p>bclintok, I’d advise you to stop conjecturing. There is no sample size bias (n-factor) across the board like this for the top law schools. The year in question is 2011-2012 as stated in JHU’s report (published in March of 2013 before the admissions cycle was over). This is a one to one comparison to Berkeley’s 2012 law school admissions cycle. </p>
<p>Davis accepted 8 of 11 JHU applicants, well above Berkeley’s acceptance rate to UC Davis. </p>
<p>From my perspective and others’, you’re not great at logical number analysis or inference. Remember when you “conjectured” that Michigan’s out-of-state admissions rate was well below its average acceptance rate?</p>
<p>JHU’s rejected applicants to Davis had close to a 159 LSAT if you back calculate the number, which suggests Davis isn’t more selective for JHU - the <em>majority</em> of the applicant pool to Davis from JHU was stronger than the aggregate admit pool from Berkeley to begin with. Same logic applies to JHU applicants to USC (where again, JHU has a higher acceptance rate vs Berkeley) - JHU rejects to USC had an extremely low average of 159 LSAT and a 2.9 GPA.</p>
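<p>For anyone wondering how the back-calculation above works: if you know the average stat for all applicants and for the admits, the rejects’ average falls out of the weighted-mean identity. The numbers below are hypothetical placeholders, not the actual figures from JHU’s report; they just show the mechanics.</p>

```python
# Recover a rejected group's average from the overall and admitted averages,
# using the weighted-mean identity:
#   overall_avg * n_total = admit_avg * n_admit + reject_avg * n_reject

def reject_average(overall_avg, admit_avg, n_total, n_admit):
    """Average stat of the rejected applicants, given group averages."""
    n_reject = n_total - n_admit
    return (overall_avg * n_total - admit_avg * n_admit) / n_reject

# Hypothetical example (NOT the real JHU report numbers):
# 11 applicants to Davis, 8 admitted.
overall = 162.0  # hypothetical mean LSAT of all 11 applicants
admits = 163.0   # hypothetical mean LSAT of the 8 admits
print(reject_average(overall, admits, 11, 8))  # -> 159.33...
```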
<p>Why would TLS admissions strongly favor JHU applicants over Berkeley applicants (to the point of admitting JHU applicants with lower GPAs and scores)? Hopkins and Berkeley are not far apart in selectivity/rank/prestige.</p>
<p>Using that same reasoning, the Ford dealer should tell us the best cars on the road. :-/ I’m sorry, but it just doesn’t work that way. The best people to rate products or services are disinterested third parties or the consumers themselves. </p>
<p>In this case, yes, we parents and students on CC are in a better position to rank these institutions than an administrator or a professor. Believe it or not.</p>
<p>I’m in semiconductors. Does that mean I can tell you who the best micro-controller company out there is? Well, in this case I can. It’s us!!! ;-)</p>
<p>But seriously, when you’re working for one of the entities being ranked you get a distorted view. You’re busy doing your job, not working all day comparing features and benefits of competitors. But your customers are doing just that. In fact, they do it non-stop. Sometimes they can tell you more about the nooks and crannies of your own company/school than you know yourself.</p>
<p>In our company, we have a dedicated department that does nothing but that. They hire consultants and graduate students, give them our IDE and dev boards, buy them our competitors’ IDEs and dev boards, give them an assorted list of tasks to perform, then ask for an honest assessment regarding features and benefits of the hardware, ease of use of the IDE, etc, etc, etc… We spend quite a bit of money on that. Why? Because we recognize the fact that we’re too close. It is IMPOSSIBLE for us to rate ourselves. It is IMPOSSIBLE for us to rate our competitors.</p>
<p>No, the logic that administrators somehow have an unbiased, all-seeing eye and can not only accurately place their own institutions, but rank all their peers more accurately than anyone else, is just more of the same… self-serving elitist hogwash.</p>
<p>^Agree. And it’s even more implausible in the case of small LACs that are focused on undergraduate teaching instead of research (yet USNWR rates LACs too). The faculty’s priority is working with students, not getting published in prestigious journals year after year. It’s unlikely that an administrator at Whitman College has any substantive knowledge of Middlebury, unless he or she has had some personal experience with the school. And if he/she is going on personal experience, we’re still right back where we started–the PA is meaningless and skewed toward personal bias.</p>
<p>That was an impressive bit of circular reasoning. What you’re saying is the rankings should never change. Schools which are currently accepted as top schools should stay on top of the ranking and be accepted in the future as top schools, because they are currently accepted as top schools. :-/</p>
<p>I fear you’ll have your way. I don’t see the USNews ranking system ever changing. Whatever method USNews uses, it needs to keep HYP, etc… on top. These schools don’t need the ranking system. They are household names. Any ranking system which would not list these household names on top would immediately be suspect and dismissed by a lot of the sheep… um… er… I mean “public”… as payscale is dismissed by many on this very thread.</p>
<p>Notice the vehemence it was dismissed with. The ones who didn’t like the results didn’t give a rational argument, like the confidence interval was wide. They just wholesale rejected it. Even a survey with a wide confidence interval has some value when it’s reporting on that large a population. It at least gives food for thought. All the error is highly likely not to be in favor of the USNews ranking system. ;-)</p>
<p>^ If you only want to see “outcomes” (such as average alumni salaries) factored into the ranking, then such a ranking already exists. Just go with Forbes.</p>
<p>Personally, I think the NSSE assessments were a move in a good direction (although they were not outcomes-based). Unfortunately, participation was limited and they’ve resisted making the scores public.</p>
<p>You are proving my point with this example. Your company is VERY interested in how its products compare with others and how to beat the competition, so it makes a point of undertaking the research to get an idea. You can bet colleges do this too. College officials have many sources of information and feedback–a lot more than a bunch of know-it-all parents and kids on CC, many of whom think they’ve learned everything they need to know to judge schools with anecdotes, hearsay and prejudices. Administration sources include their own faculty, customers and relations with their peers at different schools–there is plenty of networking going on. They have a perfect idea of what departments are strong at which schools and hence the overall general strength of the institution. If you think this is totally subjective and impossible to know, then how do prospective undergraduates make rational choices about the quality of programs on attributes other than prestige, let alone prospective graduate students?</p>
<p>First of all, survey respondents aren’t “ranking” other schools, they are just rating them on what comes out to be a 1-5 scale. But the other part of your argument is that school officials are so biased and manipulative in rating peer schools as to make their judgement worthless. Well, that’s your opinion, and I’ll never convince you otherwise, but I believe that, while there may be minor biases in play, most respondents will tend to answer these surveys honestly out of professionalism, among other reasons. If everybody rated everybody else negatively, then why would any schools get a high score? And you seem to have a chip on your shoulder regarding elite schools, but those that partake in the survey are from all types and tiers of institutions, not just the top-ranked.</p>
<p>So now you’re implying the peer assessments are based on organized evaluations, using scientific methods, done by all the institutions? …and not just their opinions of their peers?</p>
<p>Honesty and professionalism? OMG! Yes, I forgot these academics are all altruistic saints.</p>
<p>I think we should do away with consumer reports and just have the car makers rank each other. The best cars will, of course, rise to the top, because everyone will answer their surveys with honesty and professionalism.</p>
<p>Oh c’mon. I feel smoke being blown all around my bottom. ;-)</p>
<p>About Forbes as a ranking system… It is as subjective as USNews and about as self-serving. Let’s take a look at their metrics:</p>
<p>Student Satisfaction (22.5%) = mostly opinion
Under this category they also ding institutions for students transferring out. Since currently accepted top schools get the top students and, hence, have the highest graduation rates, they also have the lowest transfer rates.</p>
<p>Post-Graduate Success (37.5%)
They weight compensation data at only 15%. The rest (22.5%!!!) is a subjective weighting of various life achievements. Considering that these high-achievers make up quite a small percentage of the overall graduate population, and that many of them get their “break” because of family connections / family money and not because of their school, IMHO, it’s an abomination that this is included at all.</p>
<p>Student Debt (17.5%)
'nuff said. Is anyone here going to argue that these schools have an overwhelming percentage of affluent families? This is another hidden boost for the top schools.</p>
<p>Graduation Rate (11.25%)
:-/ Top schools get top students, which translates to high graduation rates.</p>
<p>Nationally Competitive Awards (11.25%)
This is about the only one I can agree with. Unfortunately, the specter of grade inflation dulls the shine of these accolades for students of top schools. </p>
<p>Most of these metrics will just serve the status quo. They bolster the top schools and ding the rest.</p>
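<p>For what it’s worth, the Forbes methodology above reduces to a simple weighted sum of category scores. A minimal sketch of the arithmetic, using the weights quoted above (the per-category scores here are made-up placeholders, not real data for any school):</p>

```python
# Forbes-style composite: a weighted sum of component scores on a
# common 0-100 scale. Weights are the ones quoted above; they sum to 1.0.

WEIGHTS = {
    "student_satisfaction": 0.225,
    "post_grad_success": 0.375,
    "student_debt": 0.175,
    "graduation_rate": 0.1125,
    "national_awards": 0.1125,
}

def composite(scores):
    """Weighted composite of per-category scores (0-100 each)."""
    assert set(scores) == set(WEIGHTS), "must score every category"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A hypothetical school scoring 80 in every category composites to 80:
print(composite({k: 80.0 for k in WEIGHTS}))  # -> 80.0
```

Since several of the heavily weighted inputs (satisfaction, debt, graduation rate) correlate with the affluence and strength of the incoming class, a weighted sum like this mechanically favors the schools that already enroll top students.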
<p>Without question, the highest-ranked, most selective schools do tend to enroll many affluent students. They also generally award quite a bit of FA. Over 60% of Harvard College students are on financial aid.</p>
<p>The Forbes student debt measurement applies not to all students, but only to those who borrowed in the first place.</p>
<p>The IPEDS site is a good source of details about financial aid, including the net price (after aid) to attend colleges for students at various family income levels. Practices vary among selective, highly-ranked schools. Some are comparatively generous with need-based aid to relatively high-income families (> $110K); others are comparatively generous to the lowest-income families.</p>
<p>“Why would TLS admissions strongly favor JHU applicants over Berkeley applicants (to the point of admitting JHU applicants with lower GPAs and scores)? Hopkins and Berkeley are not far apart in selectivity/rank/prestige.”</p>
<p>I’d disagree with this. </p>
<p>In terms of rank - Berkeley is renowned for its graduate programs (with several above Hopkins and all other top privates). This is something people know and acknowledge. The undergraduate quality of Berkeley is more subjective. Undergraduate rankings for the last decade have placed Hopkins well above Berkeley. </p>
<p>Selectivity - Large difference. Berkeley’s SAT scores for fall freshmen (1220-1490) are somewhat close to Hopkins’ (1310-1510), with a large discrepancy at the lower end. The difference for fall freshmen is more pronounced in ACT scores (26-33 for Berkeley versus 31-34 for Hopkins). But you need to take into account the fact that Berkeley also admits spring-entrant freshmen with lower stats than those cited above (stats conveniently left out, since only fall freshmen are reported for ranking purposes). Taking Berkeley’s full enrolled freshman profile into account, the disparities would be larger, making the selectivity differential more than “not far apart”. </p>
<p>Prestige - Subjective. I’d venture to say Berkeley is more prestigious at the graduate level, but it’s less clearcut at the undergraduate level. For what it’s worth, there are tons of California students at the east coast privates who likely could have gone to Berkeley for lower cost yet chose not to. Berkeley’s out-of-state yield is something on the order of 20%; its overall yield (given that it’s trying to win more out-of-state students with other options) has also been dropping precipitously, while Hopkins’ has been increasing to record highs.</p>
<p>The current USNWR rankings place Hopkins at #12, Berkeley at #20.
In the past 20 years, Berkeley has ranked as high as 16th, Hopkins as low as 22nd. Berkeley does get a boost from its PA score, which may be influenced by its research reputation. However, Hopkins’ PA scores also run high relative to its overall USN rankings.</p>
<p>To measure selectivity, US News looks at average GPA, average test scores, and admit rates. JHU’s “student selectivity rank” is 21. Berkeley’s is 24. </p>
<p>As for prestige, yes, it’s subjective.<br>
Of course, perception is everything here. What TLS admissions people <em>believe</em> (and act on) is what matters. Maybe they do think JHU is sufficiently superior to justify a thumb on its side of the scale. In that case, since 171 undergraduate institutions are represented in Harvard’s current 1L class (most of them far less prestigious than Berkeley), the average stats for the top ~12 colleges must be quite a bit lower than the average for all of the rest.</p>
<p>^You’re kind of grasping at straws there. What’s more relevant, the past 20 years or the past 10? Tell me the specific year when JHU was ranked below 20, and the last time Berkeley was ranked above the top 20 - then tell me whether it was recent.</p>
<p>“To measure selectivity, US News looks at average GPA, average test scores, and admit rates. JHU’s “student selectivity rank” is 21. Berkeley’s is 24.” </p>
<p>Again, you’re failing to see that only fall-enrolled freshmen are counted for Berkeley in the rankings, instead of all enrolled freshmen.</p>
<p>If that was what I was saying, that’s what I would have said. What I actually said is that the USN ranking is simply designed to reflect the actual prestige of the various universities, and it does a good job of doing that. </p>
<p>As to whether and how those rankings can change - yes, prestige change is slow. Schools which are on top will tend to stay on top, because prestige is self-perpetuating. (Note: not “should” but “will tend to”) But prestige can grow or erode over time. In the 20 years I’ve been paying attention to this stuff, I’ve seen USC’s academic reputation grow. It’s moved from the 40-50 level to the mid 20’s in the USNews ranking. I think that’s probably a pretty accurate read of the college’s increased academic prestige.</p>
<p>I think anyone with a good handle on collegiate education is aware that any given student can get an excellent, good, fair or poor education at pretty much any college. The actual educational “value added” of one school versus another is virtually impossible to predict on an individual level. Variables such as the individual student’s talents, goals and resources make a macro-analysis almost impossible. Graduates of Ivy League schools probably make more money on average than graduates of less prestigious schools - but those same students probably would have done so regardless of where they went to college. All of the “objective” factors suffer from that same flaw. But you can measure prestige. And people care about prestige.</p>
<p>So what’s the fuss over the USNews rankings? Two things: (1) people like to think there actually is an objective way to say that one University is “better” than another in some way, and more importantly (2) people don’t like the reality of which schools are more prestigious than others, and want them ranked differently.</p>
<p>I think both objections are based on wishful thinking. I think the USN rankings correctly assess a valid and objective criterion (prestige) and that doing so is a valuable service.</p>
<p>Quite fair, Kluge. Others that have moved: WashU, NYU, Rochester, Vanderbilt, etc. NYU seems to have been hot and peaked. Vandy is on the move. Just enough movement to reflect reality.</p>
<p>That’s an interesting research question.
What holds more weight in shaping biases?
The most recently acquired input, or inputs accumulated over many formative years?</p>
<p>B.S. It is no more valid, or valuable, than developing a ranking system for America’s “best surgeons”, or “best automobiles”, or “best restaurants”. Best for what? Best for whom?!</p>
<p>The term “ranking” is the underlying problem here… a rating system would be far more productive IMO, and encourage colleges to improve in healthier ways (rather than viciously fighting for a tiny number of spots using data manipulation and various other corrupt tactics).</p>
<p>Hopkins rates high on USNWR because of its #1 financial resources ranking. Money that is mainly used for medical research and the Applied Physics Lab…hardly an undergraduate-centric metric.</p>
<p>For apples to apples, financial resources for Cal should include LBNL and UCSF.</p>