If these rankings came out every 10 or even 5 years, there might be some validity to the ascents and descents of individual schools. But a “new” ranking every year is just nonsense. Unless a school experienced a natural disaster that leveled the campus or a sudden financial calamity that destroyed its endowment (or a massive donation that greatly increased it), not much is going to change from one year to the next. It takes 5-10 years to really see a meaningful movement in the quality of a school. The top schools in the 1980s, when USNWR first started this game, are still, more or less, the top schools and will likely be so 30 years from now. Only a handful of schools have either risen or fallen dramatically in that time.
@Much2learn said: “Well, that is another funny thing. If [US News] were to say, ‘We don’t care about salary, we are all about grad school admissions’… if you shift the criteria that way, then I think they will still be stuck behind your Maroons!”
That’s, indeed, EXACTLY what US News is doing, and it’s why UPenn is 5 spots behind UChicago in the rankings now. US News values average SAT scores, and Chicago is always ahead on that - precisely because they care more about getting people ready for grad school admissions (where this sort of aptitude can be a plus). If US News weighted this even more heavily, Chicago would move up and Penn would move down (see the toy sketch at the end of this post).
They could turn around and use salary data, but I suspect they don’t because that data would jigger the rankings too much - I’d imagine Caltech, CMU, MIT, Harvey Mudd, Georgia Tech, etc. would zoom up, and the ranking would look like it was leaning too heavily toward STEM-focused schools.
Optics matter to US News - they want a ranking that is as digestible as possible. If every year CMU is #8 and Brown is #32, or Harvey Mudd is #1 and Williams is #14, they’d lose traction. US News needs to reinforce general perception, or lightly challenge/provoke it - not turn it on its head - to stay relevant.
(I think one year Caltech was #1 when US News modulated its formula, and before you knew it, they went back to the formula that had Harvard or Princeton on top. Caltech has never topped the list again. The most surprising thing at the top before Chicago was Duke at #3 one year - and the very next year Duke fell. My biggest surprise has been seeing Chicago at #3 for two consecutive years - I think that’s misaligned, and something US News would have “fixed,” given its past history with Caltech and Duke.)
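To make that reweighting point concrete, here’s a toy sketch in Python. The scores and the weights are entirely made up for illustration - this is not US News’s actual formula - but it shows how shifting weight toward one criterion (test scores) can flip two schools’ order:

```python
# Toy composite ranking with made-up, normalized inputs (NOT US News's formula).
schools = {
    # name: (hypothetical SAT score, hypothetical "everything else" score)
    "Chicago": (0.98, 0.90),
    "Penn": (0.93, 0.95),
}

def composite(sat_weight):
    """Blend the two criteria into one score using the given SAT weight."""
    return {
        name: sat_weight * sat + (1 - sat_weight) * rest
        for name, (sat, rest) in schools.items()
    }

for w in (0.2, 0.6):
    ranked = sorted(composite(w).items(), key=lambda kv: -kv[1])
    print(f"SAT weight {w}: {ranked}")

# At weight 0.2, Penn edges ahead; at 0.6, Chicago comes out on top.
```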
@pupflier “Both Harvard and Stanford have great reputations and highly ranked business programs; if you plan to get an MBA and have been admitted to either one of them, it makes sense to take that offer, because these schools do admit a significant percentage of their undergrad alums in their MBA programs (choosing between them is purely a personal decision at this point)”
Neither Harvard nor Stanford has an undergrad business school, so you’d have to eliminate them if you’re considering an undergrad business major.
“With Princeton you are competing as an outsider for every MBA school, albeit with a very strong brand. That is why it is in the same tier as Duke and Northwestern which have good undergrad reputations and solid MBA programs as well”
Princeton is not in the same tier as Duke or Northwestern with respect to MBA programs and undergrad business. For business schools, Fuqua and Kellogg are much more prestigious brands than Princeton. Princeton is superb, but if you want to get a feel for how business works, you’re better off attending Duke or Northwestern. For colleges with undergrad b-schools, you’re better off attending Notre Dame, Penn, Michigan, or Berkeley.
“Once you go past the M7 super elite schools for MBA, you are in a different tier although Yale might break into this tier in a decade or so, displacing Columbia, given all the changes happening there.”
You seem very influenced by arbitrary groupings and rankings, which is not how people who hire MBAs think. They really don’t care about the M7, or Ivy League undergrad and private universities, the way you do, and the fact that there are no public schools in the M7 should make you think a little about its credibility.
Your advice is very misguided on this, and that’s being kind. Top business schools look for work experience, especially if you have managerial experience or have grown in your career. They want to make sure, of course, that your academics are in order, but the main reason b-schools admit you is what you can contribute to your classmates.
FWIW, and I know it’s a sample of one, but a young adult female at my place of employment, who graduated from Stanford as an undergrad, was rejected by Stanford’s business school. Fortunately, Harvard’s business school accepted her, and she moved back to Boston a few weeks ago.
There have been a number of references to the “logarithmic adjuster” that USNWR added in 2000 to move Princeton back atop the rankings and move Caltech back into 4th place. This was originally called out in a Slate article (http://www.slate.com/articles/news_and_politics/crapshoot/2000/09/cooking_the_school_books_yet_again.html). The article is interesting, but I think many people are led to believe that using a logarithmic adjustment is inherently inappropriate, when in fact it is not. A log transformation is commonly used to correct/normalize a highly skewed distribution (see, e.g., https://www.r-statistics.com/2013/05/log-transformations-for-skewed-and-wide-distributions-from-practical-data-science-with-r/), which is precisely how it was used in this case (for spending per student).
Another example might be illustrative: Many people find the yield-to-admit ratio (YTAR) to be a useful metric for ranking schools. But by just comparing the raw values for that measure, Stanford (YTAR last year of 18.55; see http://talk.collegeconfidential.com/college-admissions/1898418-2020-yield-to-admit-ratios-for-colleges-ytar.html) would be over twice as good a school as Chicago (YTAR of 8.35). To correct for this, taking the log of each gives new values (1.268 and 0.922, respectively) that are much more usable in making numerical comparisons.
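As a quick sanity check on those numbers, here’s a minimal Python sketch; the output matches the figures above, which also suggests they are base-10 logs of the raw YTAR values:

```python
import math

# Yield-to-admit ratios from the linked CC thread
ytar = {"Stanford": 18.55, "Chicago": 8.35}

for school, value in ytar.items():
    # log10 compresses the skewed raw ratios into a narrower, more comparable range
    print(f"{school}: raw YTAR = {value}, log10 = {math.log10(value):.3f}")

# Output: Stanford -> 1.268, Chicago -> 0.922, matching the figures above
```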
So the real issue is not the use of the logarithmic adjuster per se, but rather whether it was done disingenuously for the purpose of altering the ranking to get the desired result.
“So the real issue is not the use of the logarithmic adjuster per se, but rather whether it was done disingenuously for the purpose of altering the ranking to get the desired result.”
You answered your own question.
After howls from the displaced Ivies… nothing says “objective” like an “adjuster” that’s “logarithmic.”
Caltech, btw, also has the smartest student body in the country, with the highest SAT scores. But of course they are an “outlier,” and no one wants to go there - especially STEM students from the Bay Area, apparently.
The second smartest student body… well, that’s Chicago, of course, with the second highest SAT scores in the country.
But of course they are an “outlier” and they “market too much”… please add your simplistic rationalization here…
Don’t worry, I’m sure USNWR tweaks their “adjuster” (in addition to all their manufactured ties - I have not seen another university ranking system with this many ties, btw) all the time so the results come out just right :)
Since this discussion is perpetuated by folks who are unhappy with the “fairness” of this specific ranking system, why not just ignore it?
The way to beat this is to seek out great colleges for our kids regardless of their rise, fall, or relative ranking, and let the rest of the gullible fight it out with their 20-30 rankings-based applications.
sbballer has been known to breathe, sleep (and dream), eat, and fart Stanford 24/7. He will not stop complaining about USNWR (or any other ranking system that doesn’t place Stanford at #1, for that matter) until Stanford is ranked #1, although that won’t ever happen. Each year, the release of the new USNWR rankings is the most excruciatingly agonizing time for him. He needs to get a life outside Stanford for his own mental health and well-being…
@foosondaughter I understand log-normal distributions and how one can often get a better statistical “fit” to a “family” of data using that distribution. But my question (possibly a really dumb one) is: why is USNWR fitting data at all? Each university has its own spending-per-student value, which is straightforward to calculate. If Caltech’s spending-per-student value is completely out of whack with those for other universities (i.e., the other data points in the family), so what?? Raw data values should be used in the formula that establishes the ranking, not anything derived from a statistical fit.
@pupflier “Both Harvard and Stanford have great reputations and highly ranked business programs; if you plan to get an MBA and have been admitted to either one of them, it makes sense to take that offer, because these schools do admit a significant percentage of their undergrad alums in their MBA programs (choosing between them is purely a personal decision at this point)”
@theloniusmonk “Your advice is very misguided on this, and that’s being kind. Top business schools look for work experience, especially if you have managerial experience or have grown in your career. They want to make sure, of course, that your academics are in order, but the main reason b-schools admit you is what you can contribute to your classmates.”
The thing is that you are both right about this.
Yes, "Top business schools look for work experience, especially if you have managerial experience or have grown in your career."That is true.
However, it is also true, at least it was when I looked a couple of years ago, that the MBA programs at Harvard, Stanford, and Penn especially admit the most students from their own undergrad, and then from the other two schools. That will not get you in by itself, but it does help. They seem to do a bit of “You scratch my back, I’ll scratch yours.”
Caltech and Chicago have the highest average SAT scores because they don’t recruit athletes.
@hzhao2004 “Caltech and Chicago have the highest average SAT scores because they don’t recruit athletes.”
Partially, but at least in the case of Chicago, they weight test scores a lot more heavily than most schools do in the admissions process. That is why they don’t release GPA data in their CDS. In contrast, Penn weighs GPA more heavily but is more flexible on test scores. I think, on average, Penn’s approach makes better admission decisions, but Chicago is getting a bump in the rankings.
I am curious about what spending numbers are used for Caltech by USNWR. Is that data available to people who pay to unlock some data?
Caltech’s operating expenses for 2015-2016 were about $2.5 billion all in, but that includes $1.87 billion for JPL expenses, which comes directly from the same amount granted to JPL by the US Gov’t. (For 2017-18, word is JPL will get $2.1 billion.)
http://finance.caltech.edu/documents/473-fs_15_16.pdf
Certainly, the presence of JPL is of value to Caltech undergrads in a number of ways, even beyond easy access to undergrad research there. But, perhaps the inclusion of JPL in Caltech’s budget in some ways justifies the “logarithmic adjuster.”
If you subtract out the JPL budget, Caltech’s spending per student (undergrad+grad) is about $287,000/year.
Perhaps Berkeley’s budget includes the budget for Lawrence Livermore National Lab, and so on for other national labs that are managed by universities. And, as mentioned, the budgets of many universities (but not Berkeley) include medical schools.
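For anyone who wants to check the ~$287,000 figure above, here’s the back-of-the-envelope math as a tiny Python sketch. The ~2,200 total enrollment (undergrad + grad) is my assumption for illustration, not a number from the financial statements, so the exact result shifts with the headcount you use:

```python
# Back-of-the-envelope check of the spending-per-student figure above.
# The enrollment number (~2,200 undergrad + grad) is an assumption for
# illustration, not taken from Caltech's financial statements.
total_expenses = 2.5e9    # 2015-16 operating expenses, all in
jpl_expenses = 1.87e9     # JPL portion, passed through from the US Gov't grant
enrollment = 2_200        # assumed total undergrad + grad headcount

per_student = (total_expenses - jpl_expenses) / enrollment
print(f"${per_student:,.0f} per student per year")  # -> about $286,000
```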
My kid is really looking forward to it, and leaves in 2 days. =((
@whatisyourquest Actually, although often used together, using a data transformation isn’t the same as doing a curve fit. The purpose of a transformation is to alter the distribution to facilitate use, analysis, understanding, and/or interpretation of the data, while curve fitting is used primarily for modeling and extrapolation.
Log transformations are commonly applied to percentages and ratios (such as the aforementioned spending/student metric) because those are often non-normal and values don’t evenly reflect differences (especially at the “tails”). Standard inferential statistical tools and concepts (e.g., “z-scores”) are not appropriate (or are difficult to interpret) for non-normal distributions.
If a ranking system based on a maximum score of 100 points indicates 20% of its system is based on a particular source of information, that is somewhat deceptive, because it doesn’t explain how that source’s information is apportioned within the associated 20 score points. So (to use my YTAR example again), if I just apportioned the 20 points in the same manner as the raw YTAR values, Stanford would get the maximum 20 while Chicago would get only 9 points. Most other schools (with YTAR values less than 1) would receive 0 or 1 point. All of this doesn’t mean the YTAR is useless – only that its distribution should be altered before being directly leveraged.
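Here’s a small sketch of that apportionment, assuming the simplest possible scheme - scale every school against the top value so the leader gets all 20 points. That scheme is my assumption, not necessarily what USNWR does, and the “Typical U” value is made up:

```python
import math

# Two real YTAR values from the thread plus one hypothetical typical school
ytar = {"Stanford": 18.55, "Chicago": 8.35, "Typical U": 1.10}

def apportion(scores, points=20):
    """Give the top school the full allotment; scale everyone else proportionally."""
    top = max(scores.values())
    return {school: points * s / top for school, s in scores.items()}

raw_points = apportion(ytar)
log_points = apportion({s: math.log10(v) for s, v in ytar.items()})

for school in ytar:
    print(f"{school}: raw = {raw_points[school]:.1f} pts, log10 = {log_points[school]:.1f} pts")

# Raw scale: Stanford 20.0, Chicago 9.0, Typical U 1.2 - the tail gets crushed.
# Log scale: Stanford 20.0, Chicago 14.5, Typical U 0.7 - the top gap narrows.
# (Schools with YTAR below 1 would get negative logs, so a real scheme would
# shift or min-max rescale the transformed values first.)
```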
[edited to remove a link to Wikipedia’s “Data transformation (statistics) § Reasons for transforming data” that didn’t render correctly]
According to USNWR, they utilized a set of 15 unique ranking criteria, but no school appears to have more than 5 shown on its respective page - without the complete data matrix, this is no more valid than the black box that Forbes utilizes for its rankings.
@foosondaughter Got it. USNWR is using a data transformation, not a curve fit, which I had mistakenly assumed. I guess what I still don’t understand is why a data transformation is justified because the data are non-normal. Even though a curve fit is not being used, it seems that the data are being altered to fit a normal distribution. Regarding your YTAR example, I don’t see the problem with Stanford getting 20, Chicago 9, and most other schools getting 0 or 1. Let the chips fall where they may.
Anyway, I’d have to dig into it more to understand, and I don’t want to hijack the thread, so I’ll not pester you with more questions. However, if you have a link that provides details of the USNWR data transformation and its effect on rankings, that would be very helpful. Thanks very much for the explanation!
@hzhao2004 @Much2learn Also, don’t put too much weight on Caltech and Chicago having the highest test scores. Theirs are higher because they superscore, whereas a school like Yale does not.
@Ynotgo Schools have to report three categories of spending per FTE to IPEDS. US News likely uses this data or something very similar.
You can search for the IPEDS data yourself, but you can also use the link below to compare schools. It is useful, if a bit dated (2014 data), for comparing several schools side by side. Look under the “Funding and Faculty” tab.
Note that Stanford spends about the same amount as Caltech per FTE.
http://www.■■■■■■■■■■■■■■■■■■/search1ba.aspx?institutionid=110635,186131,243744,110404
Is that log 10 or log e?