Academic Influence Rankings...Thoughts?

In a different thread @LeeMajors shared this Forbes article on the Academic Influence ranking system. The Academic Influence team uses artificial intelligence to identify academic influencers by looking at metrics such as how often someone’s work is cited over a 10-year period. It then links each individual to their alma mater (for alumni) or to their institution of employment (for faculty). Famous people are excluded from the results. For more info, I would refer you to the Forbes article or AI’s own methodology page.

What are people’s impressions of this alternative ranking? What holes do you note in its methodology? Do you think these ratings should be better known? Do the rankings seem accurate for your own state? (They also have state-by-state rankings, as well as rankings for private schools, public schools, liberal arts colleges, research universities, Christian colleges, etc.)

To give a sense of the results, I will share part of its list of top 50 colleges & universities with liberal arts colleges and research universities mixed together.

  1. Cal Tech
  2. Harvard
  3. MIT
  4. Stanford
  5. U. of Chicago
  6. Princeton
  7. Columbia
  8. Yale
  9. Johns Hopkins
  10. Duke
  11. Swarthmore
  12. Northwestern
  13. Amherst
  14. Rice
  15. Carnegie Mellon
  16. Cornell
  17. Sarah Lawrence
  18. Reed
  19. U. Penn
  20. Brown
  21. Barnard
  22. Brandeis
  23. UC – Berkeley
  24. Wesleyan
  25. Vassar

Go check out the link for the remainder of the top 50!

2 Likes

Sarah Lawrence #17

Certainly in the news but I suspect some flaw in the methodology.

2 Likes

I think one needs to pay careful attention to WHAT is being ranked. Does it have an influence on undergraduate education? Probably not.

2 Likes

You bring up an example that illustrates a point from AI’s methodology section:

Influence Isn’t Always Pretty

Because our ranking methodology is not just objective but also driven by algorithms and databases, its results emerge from a computational process and not from direct human intervention. This is a strength of our approach, removing the hand-waving and “gameability” that are so common in higher-education ranking approaches. But this absence of direct human intervention can also lead to counterintuitive results. When an influential person or institution ranks highly at AcademicInfluence.com, it’s because our InfluenceRanking engine has picked up on some signal indicating influence. But sometimes it’s not immediately clear what that signal is. Fortunately, we are often able to “look under the hood” of our InfluenceRanking engine and see why it delivered the results that it did.

Our machine-learning approach to academic rankings is necessarily morally neutral. In the case of influential individuals, some exhibit significant impact in a field despite an apparent lack of proper education or socialization or achievement in it. But even in such cases, our InfluenceRanking engine is detecting a real signal (it’s not just noise). For more on this important issue, see our article ”Influence, Infamy, and the Case of Osama bin Laden.”

With our InfluenceRanking engine, the onus is on all to use their common sense and practical wisdom in interpreting its results. Think of the results from our InfluenceRanking engine as a starting point and not as an end point for inquiry. An otherwise influential school may still leave you with an unhappy educational experience. And a widely influential thought leader may engage in thinking that is now archaic. In the end, the users of our InfluenceRanking engine remain the arbiters of the persons and institutions whose influence they must gauge in light of their own system of values. Simply put, influence as presented at AcademicInfluence.com is objective, but its value and relevance to you is not.

2 Likes

I would think most humans would not view repeated mentions of a “sex cult” in association with an institution of higher learning as reflecting positively on the student experience 😀

2 Likes

If you read the methodology (or the short description I skimmed), it’s not clear to me that the SL sex case could be what’s driving this. They are not simply “scraping the web” (their words) for news mentions; otherwise the top results would all be sports powerhouses! SL does have a lot of famous alumni, particularly in the entertainment world, and I’m sure their proximity to NYC means they have a lot of influential, well-known faculty, even if they aren’t full-time. I mean, it’s counterintuitive/unlikely that they’d be #17, but I don’t think it’s impossible, given their location and attractiveness to part-time faculty.

2 Likes

Hmmm. I will have to look deeper into this, but I have some initial concerns that the results are another reflection of the wealthy and well-connected buying access.

Many elite colleges (particularly those that give a strong legacy boost) are places that draw heavily from a short list of elite private high schools and certain competitive public high schools, generation after generation. Graduates get jobs in part through alums and the parents of classmates. The children of the well-connected are also well-connected.

This results in better connections to funding for projects and publicity. Particularly if publication editors are alums themselves.

I am wondering how much these rankings reflect the independent efforts of the institution and how much reflects historic trends in admissions from certain segments of society (the academic equivalent of the society pages). And I am wondering how equally the “influence” of an institution benefits graduates from all backgrounds.

If a school isn’t ranked very highly in terms of influence, what relationship does that have to an undergraduate’s learning there? Or employability? Is getting into an elite institution effectively a way to buy yourself a better job, unrelated to any learning that actually occurs?

It is difficult to differentiate schools without ANY kind of ranking system. Yet all ranking systems seem flawed. I dunno — I guess getting more data points of different kinds is better than not. More info to inform decisions.

3 Likes

Having looked a little at the list, I think there are some surprising results. Like Evergreen State College appearing in the top five for Washington.

It has many “known” alums, but has been undergoing a lot of upheaval and bad press in recent years. As a state resident, I don’t know a single alum (to my knowledge), nor any current student who has applied there. To the extent its “influence” is real, I don’t know that I would perceive it as the kind of influence that would translate to benefits for current students. :woman_shrugging:t2:

I think any poll or ranking system that mixes the top LACs with the top RUs is fun to look at.

Interestingly, Hampshire College, with a current enrollment of 522, ranked well. However, the analysis may have used Hampshire’s historical enrollment rather than its current figure.

@eyemgh, since I’m president of AcademicInfluence(dot-com), you might be surprised that I agree with your implication that ranking a school’s influence doesn’t say much about undergraduate education. Our ranking method, as @AustenNut pointed out, sums the influence of a school’s alumni and professors/administrators (and, in the case of the list that begins with Cal Tech, divides by the total undergraduate population to get Concentrated Influence™). So there’s nothing in the method that would say much about undergraduate education.
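
To make that aggregation concrete, here is a minimal sketch in Python. The per-person scores, field names, and function name are all invented for illustration; this is only my reading of the general idea (sum influence, then normalize by undergraduate enrollment), not AcademicInfluence’s actual pipeline:

```python
# Hypothetical sketch of a "concentrated influence" style score:
# sum per-person influence for a school's affiliated people
# (alumni, professors, administrators), then divide by the
# school's undergraduate enrollment. All data is made up.

def concentrated_influence(people, undergrad_enrollment):
    """people: list of dicts, each with an 'influence' score.
    Returns total influence per enrolled undergraduate."""
    total = sum(p["influence"] for p in people)
    return total / undergrad_enrollment

school_people = [
    {"name": "Alum A", "influence": 12.0},
    {"name": "Prof B", "influence": 30.0},
]
score = concentrated_influence(school_people, 1000)
print(score)  # 0.042
```

This kind of per-capita normalization is what lets a tiny school outrank a flagship whose raw (unnormalized) influence total is far larger.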

That said, we feel we are ranking what other companies try to rank, but doing it in a less gameable, more objective way. As Ben Nelson, founder of Minerva University and former Snapfish CEO, said in a recent interview (Dr. Drew, ben-nelson-episode-526), most of us think of “prestige” when we think of “best” in university rankings, and prestige generally comes from how much influential research is being done at the university, though we try to incorporate other forms of influence.

I hope this helps!

1 Like

It wasn’t a complaint, just an observation regarding what we are looking at.

Forever, the USNWR engineering ranking has been based on peer assessments and nothing more. This is a de facto ranking of graduate programs, because departments know their peers through the research they publish. Yet, it is used over and over by students and parents to assess undergraduate institutions.

I admire what you are attempting to do, and the intended objectivity behind it. You won’t get nearly as many clicks though if you overtly clarify what it is actually attempting to rank.

1 Like

Yes, @eyemgh, this is probably true, sadly. By the way, did you check out the way we rank schools based on “desirability”? For that ranking, we analyze where students choose to attend when they have choices between two or more schools. Granted, their choice of where to attend is only as wise as the information THEY obtain when they are making their decision. But perhaps in aggregate, it points to schools that are doing a good job with undergraduate education. What do you think?
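
If the desirability ranking works the way I describe, a toy version is a head-to-head win rate: each time an admitted student enrolls at one school over another, the chosen school records a “win.” The sketch below uses invented data and names and is only a guess at the general shape of such a metric, not our actual implementation:

```python
# Hypothetical sketch of a "desirability" style ranking built
# from head-to-head enrollment choices. Each record is the
# school a student chose plus the other schools they turned
# down. Schools are ranked by win rate. Data is invented.

from collections import Counter

def desirability_ranking(choices):
    """choices: list of (chosen_school, [declined_schools]).
    Returns school names sorted by head-to-head win rate."""
    wins, games = Counter(), Counter()
    for chosen, declined in choices:
        for other in declined:
            wins[chosen] += 1      # chosen school beats 'other'
            games[chosen] += 1
            games[other] += 1
    return sorted(games, key=lambda s: wins[s] / games[s], reverse=True)

data = [
    ("School X", ["School Y"]),
    ("School X", ["School Z"]),
    ("School Y", ["School Z"]),
]
print(desirability_ranking(data))  # ['School X', 'School Y', 'School Z']
```

A real system would need far more care (sample sizes, strength-of-opponent adjustments along the lines of Elo or Bradley–Terry), but the raw signal is this kind of revealed preference.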

I didn’t find any reference to desirability. I’m happy to look if you link it.

My gut feeling on this is that it doesn’t make any difference, because so many students and their families rely on rankings which are specious to make the assessment in the first place.

Take a student who is agonizing between two T5 engineering programs in CA, UCB and Caltech. By the rankings, they are similar, yet they couldn’t be more different experiences. Add to that the fact that most rankings either don’t rank or separately rank schools that don’t have doctoral programs. That leaves out two great CA undergraduate institutions, HMC and Cal Poly.

I did look at engineering in a liberal arts environment. Claremont McKenna ranked 9th. They don’t offer engineering.

I don’t think it’s very good.

How many of y’all know of Henry Manney III? I’ve never heard of him.

He’s apparently the third most influential person from Duke.

How about Tommy Kearns?

When you think of Carolina Basketball, Tommy Kearns is who comes to mind, right? Not Michael Jordan, or Dean Smith, or James Worthy, or Roy Williams, or any number of other basketball luminaries. No, it’s “Tommy Kearns” who is an influential “American Basketball Player” from UNC-Chapel Hill.

And why Tommy Kearns, an “American Basketball Player,” would be listed as an “Academic Influence,” I don’t know.

Why is Kizzmekia Corbett, who helped develop the treatments and vaccine for COVID, not mentioned? Or H. Holden Thorp, who is the editor of Science?

The methodology might be over-penalizing large state flagships. Ohio is a good example: tOSU is #9 in the state. I think there needs to be some consideration for absolute impact, not just impact relative to student body size.

1 Like

Yea, pretty much useless. It’s based on prior perceptions which are heavily influenced by USNWR. Desirability is really just a proxy for that.

As for the accuracy of the rankings, I looked at Engineering rankings, Mechanical specifically, based on influence. It has Harvard at #2, ahead of MIT and Stanford. Chicago is #6. They don’t offer engineering.

I’ve seen enough to know it’s worthless for ranking schools. No surprise, as no one has published one that I find valuable for undergrads. I do find it interesting and probably useful for individuals though.

3 Likes

I think all these rankings simply reaffirm the top 10 schools, maybe in slightly different orders depending upon the survey? What additional useful info is being provided here?

1 Like

That’s not good.

1 Like