The barbarians at the gate

I asked the AI to write a summary of a research topic that I’m familiar with (and active in), and it was :rofl:. Very obviously clueless. The field I asked it to write about is small but not tiny, and what it produced was word salad with zero understanding. Then again, I’ve read papers in the literature that looked like they were written by an AI, so I’m not sure what my little experiment proved… I swoon over a well-researched, well-written scientific paper, and this writing sample was not even in the same ballpark. I would recommend that students continue to hone their writing skills.

Edited to add: In its own words on the impact of AI on intellectual endeavors.

Artificial intelligence (AI) has the potential to greatly impact intellectual endeavors across a wide range of fields. In education, AI can be used to personalize learning and provide students with tailored resources and feedback. In research, AI can assist in analyzing large data sets and identifying patterns that may not be immediately obvious to human researchers. In industries such as finance and healthcare, AI can be used to analyze complex data and make predictions that can inform decision-making.

However, it’s also important to consider the potential downsides of AI. As AI becomes more advanced, there is a risk that it could replace human workers in certain fields, leading to job loss and economic disruption. Additionally, if AI is not developed and implemented ethically, it could perpetuate or even amplify existing biases and inequalities.

Overall, while AI has the potential to greatly benefit intellectual endeavors, it’s important to approach its development and implementation with caution and consideration for its potential consequences.

As I wrote - I don’t think so.

AI can really only find patterns that are derived from very large databases. Much of medicine is derived from descriptions of diseases in small numbers of people, generally from groups with very little biological diversity. Moreover, physicians learn much of the “art” of diagnosing from the experience of their teachers, their own experience, and “gut feeling.” Those experiences are rarely recorded and so are not available to AI, while “gut feeling” tends to rest on a large set of data collected by the doctor’s senses and not always run through the conscious mind: the color and hue of skin, facial expressions, smell, sounds, etc.

Since these data are not run through the conscious mind before (or after) their analysis, they are never entered into any medical database and are unavailable for use by AI.

AI will likely be useful for diagnosis only if computers have a way to collect the data that humans collect with their senses, and can do so for hundreds of thousands of people for each of the biological variants that affect how symptoms manifest. Alternatively, AI could work from its own set of signals: colors that humans cannot see, odors that humans cannot detect, or sounds outside the range of human hearing. Dogs seem to be able to tell whether a person has some types of cancer by odors that humans cannot detect. However, we are likely decades away from having computers that can collect that kind of data, and decades more from collecting it from enough people.

In short - internists will not be replaced by AI in the near future.

AI isn’t magic, and it isn’t a “brain”. It isn’t actually “intelligence” either. It can find patterns in large databases that are beyond human capability. But these are still computers, and the concept of GIGO (garbage in, garbage out) holds. Without a very large set of solid data, AI will not produce anything reliable, or in many cases even comprehensible. If the data set is biased, the AI will produce biased results.
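To make the GIGO point concrete, here is a toy sketch in Python (every number, group, and “marker” in it is invented for illustration): a one-marker “diagnostic” trained on records drawn almost entirely from one group looks accurate for that group and fails badly for everyone else.

```python
import random

random.seed(0)

def make_record(group):
    # Invented ground truth: the marker tracks the disease in group A
    # but is just background noise in group B.
    if group == "A":
        marker = random.random() < 0.8
        disease = marker
    else:
        marker = random.random() < 0.5
        disease = random.random() < 0.1
    return group, marker, disease

# Biased training set: 95% of the records come from group A.
train = [make_record("A" if random.random() < 0.95 else "B")
         for _ in range(10_000)]

# "Training": estimate P(disease | marker), ignoring group entirely.
with_marker = [r for r in train if r[1]]
p_disease = sum(r[2] for r in with_marker) / len(with_marker)

def predict(marker):
    return marker and p_disease > 0.5

# The same model, checked separately on each group.
for group in ("A", "B"):
    test = [make_record(group) for _ in range(2_000)]
    errors = sum(predict(m) != d for _, m, d in test)
    print(f"group {group}: error rate {errors / len(test):.1%}")
```

The overall error looks fine because the test mix mirrors the training mix; only the group-wise check exposes the roughly 50% error rate for group B. That is “biased data in, biased results out” in miniature.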

Internal medicine, unfortunately, does not yet have a good, unbiased data set, and likely won’t for many years to come.


My son tried out the software and sent me a sample. In one sentence, “it’s” was used when it should have been “its.” :sweat_smile:

The technology is advancing by leaps and bounds every 2-3 years these days. I’m not sure the pace of improvement can continue, based on conversations with people better informed than me. But in 10 years, it could look much more capable.

In the past decade, AI got much better at translating technical documents and patents because the terminology and structure of those documents are fairly standard. There are still major hiccups, though, so a careful scan by a native speaker of the target language is always helpful. That said, AI is light-years away from producing Shakespearean writing, so none of us, or even our grandchildren, will see it reach that level.

As for AI replacing PCPs… many rural areas still don’t have good Internet access. That’s what needs to happen first, and it is not happening fast enough. But we can dream… :slight_smile:

There is not enough money there; that is the likely reason for the slow progress in replacing rural PCPs. The fundamental advances are happening now, but someone still needs to fix the last-mile problem, i.e., adapt the general technology to the specific use case.

I said “many”, not “all” or even “nearly all”. Those at the top of their profession, almost any profession, will likely do fine. But many, by definition, aren’t in that position. For them, the risk of obsolescence is greater in a profession that relies mostly on mental capability than in one that relies mostly on physical ability.


Although I wonder whether this really replicates mental capability, or merely produces a result whose presentation approximates something that was produced with mental capability.

In fact, I’m doubtful whether the hyped/marketed “I” in A.I. truly IS.

We have long had language translation sites/programs that do a remarkable job analyzing the structure and context of sentences in one language, and produce equivalent, correctly structured sentences in a completely different language.
So now we front-end this with a search engine that pulls together public information based on certain terms, likewise analyzes sentence structure and context, looks for commonalities and unique items across all the texts, and produces a correctly structured mash-up rather than a 1:1 translation.

I realize the shortcoming of my example, but: a “dumb” fax machine is very good at producing a contract bearing a signature that looks as if a human hand had placed it there - when in reality the machine just outputs dots on paper. It hasn’t really “replicated” my “capability” to understand and approve the contract.
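In the same spirit as the fax analogy, here is a minimal Python sketch (the three-sentence corpus is made up) of how purely statistical word-following rules can emit fluent-looking mash-ups with nothing resembling understanding behind them:

```python
import random
from collections import defaultdict

random.seed(1)

# Tiny corpus; a real system would ingest billions of words.
corpus = (
    "the model finds patterns in the data . "
    "the data contains patterns the model can find . "
    "patterns in the data are not understanding ."
).split()

# Record which words have ever followed which (a bigram table).
follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

# Generate: repeatedly pick any word ever seen after the current one.
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))
```

Swap the bigram table for a neural network with billions of parameters and the output becomes vastly more convincing, but the mechanism is still statistical stitching of observed patterns, which is exactly the doubt about the “I” raised above.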


Of course, many physical jobs that are easy to automate have already been automated.

But then what does the future of society hold when the many people who are non-elite at both physical and mental tasks have no value to employers? Will the elites who still have jobs or capital be willing to support a “useless class” that may be much larger than today’s (perhaps growing to a majority of the population)?

Reflecting on the English-language prowess of “its” creator…


We are already a fair way along on this count. Here is the output from ChatGPT:

In the US, the bottom 50% of wage earners paid 3.4% of all federal income taxes in 2018.

Only.

That!

A human will have the ability to look beyond the “input”.
They will also give thought to whether an outlier is a promising new concept/approach - or garbage.

… and they had been produced before there was non-human AI :wink:


Federal income tax is not the only tax that the bottom 50% of wage earners pay. Payroll taxes and (state) sales taxes make up a lot of the tax burden they pay.

Of course, what will happen in society when they no longer have jobs or income at all?

The term AI has probably always been overused (and misused) in marketing materials. There are many different levels and flavors of AI that are worlds apart in sophistication. Too many products claim to use AI or machine learning but barely scratch the surface of the field. AI isn’t data science, even though data science often uses some of the most basic machine-learning techniques these days. Products like ChatGPT from OpenAI or AlphaGo from DeepMind are different. They aren’t just outputting something from vast quantities of input data; they can learn quickly from a relatively “limited” amount of input data and/or generate data themselves through trial and error, as we humans do when we grow up.

Only the highly repetitive physical jobs (like factory assembly jobs) have been.

Then they can’t afford to pay for AI, and ChatGPT will be laid off?

(Wish Asimov were still around to explore those scenarios…)


I believe the data is not relatively limited. ChatGPT has been trained on the entire internet, with Microsoft providing it pretty much open-ended amounts of compute.

Yes, that’s why I put “limited” in quotes. By “limited” I mean a product like ChatGPT isn’t limited to the data it has access to; it can generate its own content. AlphaGo, on the other hand, wasn’t fed the entire library of possible moves in Go. It learned quickly by playing against itself, and it made moves that had never been seen before and puzzled some of the world’s best Go players.
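For anyone curious what “playing against itself” looks like mechanically, here is a minimal self-play sketch in Python. It uses tabular Q-learning on the simple take-away game Nim rather than Go (Go needs the deep networks and tree search AlphaGo actually combined with self-play; all the constants here are arbitrary choices):

```python
import random

random.seed(0)

STONES = 21            # starting pile; remove 1-3 stones per turn,
ACTIONS = (1, 2, 3)    # and whoever takes the last stone wins
ALPHA, EPSILON, EPISODES = 0.5, 0.1, 50_000

# Q[(stones_left, action)] = value of that move for the player about
# to move. Both "players" share this one table: that is the self-play.
Q = {}

def q(s, a):
    return Q.get((s, a), 0.0)

def legal(s):
    return [a for a in ACTIONS if a <= s]

def best(s):
    return max(legal(s), key=lambda a: q(s, a))

for _ in range(EPISODES):
    s = STONES
    while s > 0:
        # Mostly play the best known move, occasionally explore.
        a = random.choice(legal(s)) if random.random() < EPSILON else best(s)
        nxt = s - a
        if nxt == 0:
            target = 1.0   # we took the last stone: a win
        else:
            # The opponent moves next, so their best outcome is our worst.
            target = -max(q(nxt, b) for b in legal(nxt))
        Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))
        s = nxt

# Nobody told it the known winning rule ("leave a multiple of 4"),
# but self-play rediscovers it from nothing but wins and losses.
for s in (5, 6, 7, 9, 10, 11):
    print(f"{s} stones left -> take {best(s)}")
```

AlphaGo’s self-play loop is this idea scaled up, with a deep network in place of the lookup table and Monte Carlo tree search in place of the one-step max, which is how it found moves no human had ever shown it.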


Which, for the first 60 years of programming, was something to be debugged - now it’s become a feature…