I agree with you that these rankings aren’t useful at all. They sell because too many of us prefer the simplicity of rankings over the messy work of figuring out the best fit for each one of us.
Journals, at least the top ones, do a better job of reviewing. However, acceptance rates at many top conferences are really low. So I would say that selectivity at top journals is better, not higher.
It is a meaningless model created by people who know that it’s meaningless, but are trying to sell it anyways.
Just because a ranking is imperfect doesn’t make it “meaningless”. Both of those correlate enough with hiring patterns and CS rigor that they’re worth looking at. (I’ve looked at OS finals at a few schools, including a couple of large flagships, and the more rigorous flagship is higher up in the salary rankings even though some people say the second flagship is “a good CS school”.) Obviously, a difference of a few places or a few thousand dollars may not mean much, but a difference of 50 places, or tens of thousands in median salary below the top end (where there may be idiosyncrasies), especially between larger programs in the same geographic locale, is meaningful, IMO.
Some of them probably aren’t the ones that most people think of; for example, we recruit a lot of students from the University of Washington, Michigan, Georgia Tech, UIUC and Waterloo.
Most people? Most people in software definitely think they are great schools. I suppose most people in museum curation may not think of them as great schools but they aren’t hiring CS majors, most likely.
A model which is based on crappy data and on unsupported assumptions is not useful, even if it predicts a small number of cases correctly. It’s like the proverbial broken clock.
The data is a biased sample out of an unreliable dataset. No matter how much a person wants to derive something meaningful from such a dataset with such a sampling method, they cannot. Just because their conclusions sound right to you does not mean that they are actually an accurate reflection of reality.
It’s Garbage In => Garbage Out, and just because they use all sorts of sciency terminology, it doesn’t mean that what they did is actual science.
I wrote exactly what they would have needed in order to make the claims that they made.
Again, just because a ranking system ranks colleges in a manner which seems reasonable to you, that doesn’t mean that the ranking system is meaningful in any real sense. Worse, a ranking system which fits the preconceived notions of a bunch of people, but is not based on a solid model with good data, was likely developed backwards: they started with the rankings and then built a model which would take whatever data they got and produce such rankings.
Finally, in order to make the claim that a ranking system “correlated with hiring patterns”, you need to have a good data set of hiring patterns. Furthermore, their model is supposedly based in part on what they claim are hiring patterns, so claiming that it correlated to hiring patterns is a tautology, and therefore absolutely meaningless. “Our model, which ranks people by height and weight, demonstrates a very high correlation between a person’s rank and their height”.
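To make the circularity concrete, here is a minimal sketch (all numbers and weights invented purely for illustration) showing that a ranking built partly from a variable will always “correlate” with that variable:

```python
# Illustration only: a rank derived from height (and weight) trivially "correlates" with height.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
height = rng.normal(175, 10, size=100)   # hypothetical heights in cm
weight = rng.normal(75, 12, size=100)    # hypothetical weights in kg

# "Model": rank people by a score built from height and weight.
score = 0.7 * height + 0.3 * weight
rank = score.argsort().argsort()         # 0 = lowest score

rho, _ = spearmanr(rank, height)
print(f"Spearman correlation between rank and height: {rho:.2f}")
# The strong correlation is baked in by construction; it validates nothing about the model.
```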
And if I cared about people’s height, that would be fine by me. Look, you can say “garbage in, garbage out” about any model or ranking. You can potentially improve any model or ranking. But there is no perfect ranking or model in this world. In the end, you have to make a judgement call about all of these rankings.
And if I cared about people’s height, that would be fine by me. Look, you can say “garbage in, garbage out” about any model or ranking.
Some studies have actually found that taller people are more successful in job markets: Standing tall pays off, study finds
The problem is less that a model is inaccurate and much more that it tells you something that isn’t correct. If the model calculates hiring based on a database which is biased, and therefore shows college A placing more graduates than college B even though college B actually places more, the resulting ranking, which ranks A higher than B, is not inaccurate, but wrong.
Even bad data is sometimes useful. So, if the LinkedIn data shows that a college which graduates 200 CS/engineering students a year has 2,000 graduates working in top tech companies, that means that it has good placement in top tech companies. However, we cannot say that it has better placement in top tech companies than a school with a similar number of graduates which has 1,800 graduates in top tech companies (see the sketch below).
We certainly cannot claim that the first school has better overall placement at tech companies in general than the second school.
But these rankings make exactly these assumptions.
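A toy sketch of the LinkedIn point above (the head-counts and coverage rates are invented; in reality the coverage rates are unknowable, which is the whole problem):

```python
# Toy numbers: why raw LinkedIn head-counts can't rank two schools' placement.
# Assume each school's alumni show up on LinkedIn at an unknown, school-specific rate.
observed = {"School A": 2000, "School B": 1800}   # profiles found at "top tech companies"
coverage = {"School A": 0.90, "School B": 0.70}   # assumed adoption rates, unobservable in practice

for school, count in observed.items():
    implied_true_total = count / coverage[school]
    print(f"{school}: observed {count}, implied true total ~{implied_true_total:.0f}")
# School A: observed 2000, implied true total ~2222
# School B: observed 1800, implied true total ~2571
# If adoption rates differ like this, the 2000-vs-1800 gap reverses; without knowing
# the rates, the raw counts settle nothing.
```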
The only thing that we know for sure is that the number of unemployed engineers is pretty small, and the number of unemployed CS graduates is even smaller. This indicates that almost every school has really good placement. Even graduates of online diploma mills like the University of Phoenix are getting hired.
Trying to tease apart which exact school is better than the others is just an activity for a bunch of people who like creating ranked lists, feeding the egos of some college administrators and helping them further increase the number of applicants that they will reject…
The ranking that opened the thread is biased against schools that don’t offer PhDs. It leaves out two great programs, Cal Poly and Harvey Mudd.
In the other, the PayScale data is WAY off.
The most “objective” ranking, from someone’s perspective, is the one that conforms to his/her own views.
But I’m different, I choose my ranking based on purely objective factors.
You believe me, don’t you?
I sure do.
much more that it tells you something that isn’t correct
But that’s true with all models/rankings.
Which goes back to what I was saying: ultimately, you have to make judgement calls. To someone who understands statistics and sampling, that means that schools being a few slots apart in the rankings doesn’t mean much, but being something like 50+ slots apart, or one school’s stated earnings being double another’s, does.
I still have more of a problem with rankings that simply omit some schools or place some of them nowhere near where industry wisdom suggests. And you can always blend rankings to your heart’s content.
In the other, the PayScale data is WAY off.
In that case, how do you know the data is WAY off?
I know the PayScale data from my pet school ( ) for CS isn’t close to the school’s reported data and is about 50% lower than what I know to be true from my son’s classmates.
Well, the PayScale data is collected cumulatively over many years, and CS salaries have risen quite a bit in recent years. But for comparative purposes, only the ranking/difference between schools matters. The College Scorecard would have more recent data. I’d recommend combining those two with the LinkedIn ranking, that one other ranking that also looked at LinkedIn and salary data, and any other outcomes-based rankings you may be able to find, because yes, there will be idiosyncrasies, bad data issues, etc.
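If you do blend sources, a minimal sketch of one way to do it (school names and rank positions below are placeholders, not real data) is simply to average each school’s position across whichever rankings you trust:

```python
# Minimal sketch: blend several outcomes-based rankings by averaging rank positions.
# School names and positions are placeholders, not real data.
rankings = {
    "payscale":  {"School A": 12, "School B": 40, "School C": 25},
    "scorecard": {"School A": 18, "School B": 35, "School C": 22},
    "linkedin":  {"School A": 10, "School B": 50, "School C": 30},
}

schools = {s for ranking in rankings.values() for s in ranking}
blended = {
    s: sum(r[s] for r in rankings.values() if s in r)
       / sum(1 for r in rankings.values() if s in r)
    for s in schools
}

for school, avg_rank in sorted(blended.items(), key=lambda kv: kv[1]):
    print(f"{school}: average rank {avg_rank:.1f}")
```

Small differences in the blended number still shouldn’t be read as meaningful, for the same sampling reasons discussed above.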
the PayScale data is collected cumulatively over many years
The PayScale report linked above says it was updated in 2020. Knowing what I know for 2019 grads from HMC, Stanford, and Cal Poly, it seems grossly low. Maybe it’s old, or maybe it’s biased because it’s all self-reported (although I’d expect that to bias high). In any case, it doesn’t seem to pass the sniff test.
“Updated for 2020” may simply mean 2020 data points were added to older ones. If all numbers are consistently low, then that means it’s still fine for comparative purposes.
For it to have any value, you have to assume it’s not complete garbage in, garbage out. Go look at just how many data points there are, and then adjust for it being 20 years of data. The number of data points is simply so low that if you treated this like actual research, I would bet the confidence interval would span something like $30-70K, rendering it useless for comparison. It’s just too little data with too much variance.
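To put a rough number on that, here is a back-of-the-envelope sketch (sample size and salary spread are invented, purely to show how wide the interval gets with only a few self-reports):

```python
# Rough sketch: a 95% t-interval for mean salary from a small, noisy sample.
# Sample size and spread are invented; the point is how wide the interval gets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 25                                            # e.g., a couple dozen self-reports for one school
salaries = rng.normal(110_000, 35_000, size=n)    # assumed mean and spread, illustrative only

mean = salaries.mean()
sem = stats.sem(salaries)                         # standard error of the mean
low, high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"mean ${mean:,.0f}, 95% CI ${low:,.0f} to ${high:,.0f} (width ${high - low:,.0f})")
# With n=25 and a ~$35K spread, the interval is roughly +/- $14K wide, and that's before
# you account for the reports being spread across 20 years of pay levels.
```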
LinkedIn is more interesting because nearly everyone actually uses it. Few use the salary feature, but for seeing where grads work it’s a lot better and a much fuller data set, thanks to LinkedIn’s pseudo-role as a resume.
I have yet to find a CS salary source that isn’t garbage in, other than school-specific surveys. You can wish for the data to exist, but it’s just not there right now. Simply put, very few people use PayScale or LinkedIn’s salary feature.
In this era of data collection, I wouldn’t share my salary with a third party either. In fact, I would argue that well-educated CS students would probably be less likely to use those features, due to better awareness of data privacy.
I have yet to find a CS salary source that isn’t garbage in, other than school-specific surveys. You can wish for the data to exist, but it’s just not there right now. Simply put, very few people use PayScale or LinkedIn’s salary feature.
This is true for any degree really.
I do like the Where They Work feature in LinkedIn. It’s very helpful to see the top 5 or 10 employers for any given school and major.