ChatGPT Answers Patients’ Questions Better Than Doctors: Study (gizmodo.com)
Wow!
!!!
That’s not really surprising at all. Doctors, even excellent ones, are influenced by the bias of limited exposure to their own patient population. Algorithms would be free of that particular bias. I wouldn’t want an algorithm advising me in making my healthcare decisions, however.
An AI isn’t overworked, stressed, and at the end of a 15-hour shift.
They compared ChatGPT to Reddit posts. Reddit posts are not equivalent to actual physician consultations with their own patients. The study doesn’t define “quality”. It also says the “evaluators did not assess the chatbot responses for accuracy or fabricated information.”
AI algorithms are famously full of bias. The maxim GIGO (garbage in garbage out) applies here. If the AI gets inputs that are full of bias (this is the case), it will give outputs that are full of bias (this is also the case).
AI might give decent answers to very simple problems, but it will fail compared to a qualified human for complicated matters. So it will underperform a typical expert opinion for things like health issues that aren’t totally routine.
For funsies, one of my co-authors put a couple of prompts into ChatGPT for a scientific paper we were writing. The topic wasn’t esoteric or anything. It spat out technically well-written sentences that were mostly correct. But the information was so rudimentary and simplistic that it would only have been appropriate in the first paragraph of a 5,000-word paper. So not very helpful. Also, it confidently fabricated sources.
I suppose ChatGPT could do okay for super simple healthcare stuff. But I bet everyone else feels the same as I do when they’re trying to get customer service for something even slightly non-routine and they get met with a gauntlet of AI chatbots and phone trees, when a human could solve it in 2 minutes.
ChatGPT contains bias from two different sources - the data it was trained on, and the humans who were in the loop in an attempt to achieve alignment with human moral and ethical principles. The other thing to keep in mind is that ChatGPT is a consumer-oriented model intended to demonstrate capabilities. It is flawed, but more importantly it is just a baby - just the beginning of what can and will be achieved. I don’t know if an AI will ever evolve without bias, but then again, all humans have their own biases - why should we expect an AI not to?
I think in the next year or so, we will see AIs with medical domain expertise that will greatly surpass your experience with GPT-4. When an AI has been trained on the entirety of the world’s medical corpus, I will not be surprised when it can outperform humans in medicine. Humans simply cannot read and retain that amount of research; AIs think faster, never get tired, and are not distracted by personal concerns. Combine AIs with gene sequencing and customized medicines and we will be in a new era for health care.
Chatbots and decision trees pale in comparison to ChatGPT. Domain-specific AIs in many fields will soon be the norm.
All fun and dandy for raising capital. When it comes to reality, making a drug that gets FDA approval and gets the job done involves much more than reading a gene sequence and figuring out what the product of that gene does. I’m not dismissing the fact that computational biology can speed up the process, but we are far from not needing the mice and cynos and human trial volunteers.
Wait a minute, so we can just add an AI feature to our EMRs and have it answer the ridiculous emails, with responses “better” than the ones we spend time on and don’t bill for? Sign me up!
Raising capital for novel ideas is always challenging, but I think more and more people are thinking about how generative AI can be monetized. The more fact-based the domain, the easier it will be to utilize AI.
Yeah, I know. I’ve also been a self-driving car skeptic all along (and my predictions have come true).
AIs do some stuff really well. Including the generic stuff that ChatGPT can write. Super useful for certain tasks! But the human brain is the most complex computer ever to exist, and we are nowhere even remotely close to replicating it in most categories. I count health care as one of those. AIs don’t “think” in the way that humans need to in order to actually innovate in science, medicine, many humanities, ethics, etc.
People monetize AI for all sorts of stuff, and it saves corporations money when they can replace a person with a computer. Though it saves the company money, it doesn’t always translate to a better result for the consumer.
I know many people disagree with me and that’s okay. We can have different opinions. I think it’s laughable that AI could write my scientific paper because it’s simply not capable of the creative thought and synthesis necessary to get past even the first paragraph. It only regurgitates what it’s already been told in various forms. It can’t write my paper, because my paper requires me to present novel thoughts integrating complex topics and data.
Some of those thoughts are difficult to process even for highly trained experts. That’s why we write, and why we read each others’ work. It takes a room full of expert brains to come up with some of this, and any single one of us is light years ahead of the AI so far.
Non-routine medicine is the same, especially new medicine. This is why doctors get together to discuss challenging cases. This is why tumor boards exist.
I’m a skeptic. But other people don’t have to be.
Well, it doesn’t ignore patients’ concerns based on their gender or race, and that’s helpful…
Conversely, as @Rivet2000 mentioned, it’s biased towards its training data. So if its training data is mostly White men who are older than 40 (for example), it could miss many diagnoses for young people, women, and people of other racial and ethnic groups. While the actual differences in symptoms between racial groups may be small, culture has a strong effect on how a person behaves, what they eat, and how they talk about their symptoms. It also affects life experiences, which have a strong impact on health and risk factors.
We should always remember that AI is based on a human-created set of algorithms trained on a human-collected database. So any bias that was in those humans will be included in the AI.
We all need to be skeptical. ChatGPT is just a baby. Wait for the powerful domain specific models.
I recall going back and forth with you about Neuralink on the Musk thread and I suspect we have the same foundational differences of opinion on brains vs tech in this category as well.
Lol, we probably disagree on many things - AI, autonomous driving, robotics and more! That’s ok.
I follow AI/ML very closely and find it fascinating. When it comes to medicine, I think it has promise to make better medicine available to more people. A noble possibility.
I know my brain still has massive issues from doctors telling me my issues were all stress 10 years ago when a brain tumor was at play. I’d like to think I’m over it because it was long ago and I can (and have) handled many issues in my life, but now White Coat Syndrome is definitely a thing for me and all the same feelings came roaring back when I did a checkup last year.
So I’m back to living with symptoms and just figuring Que Sera, Sera. It’s much less stress for me.
I wonder how it all might have changed if a computer had taken me seriously and suggested some tests that could have ruled things in or out.
I’m rooting for the AI I think. Let the Bot figure things out, then let the doctors take over if needed when “real” things need to be addressed.
Sigh… your insurance will likely deny any of these tests because… horses, not zebras. AI SchmAI… someone needs to develop a machine that will let the doc experience the pain the patient comes to them to complain about. That will lead to the right diagnosis quickly!
Fortunately we don’t have to deal with insurance - we’re with a health share program that essentially pays for anything deemed necessary, as long as it’s a covered problem (most problems are covered). The brain tumor and some other things were covered at 100%. I’m incredibly thankful, because if we had stuck with the insurance we’d had before, we’d have been out at least $25K out of pocket, plus monthly premiums that were higher than what we pay with the health share.