this post was submitted on 09 Feb 2026
571 points (99.0% liked)

Technology

Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

[–] irate944@piefed.social 90 points 5 days ago (2 children)

I could've told you that for free, no need for a study

[–] rudyharrelson@lemmy.radio 125 points 5 days ago* (last edited 5 days ago) (7 children)

People always say this on stories about "obvious" findings, but it's important to have verifiable studies to cite in arguments for policy, law, etc. It's kinda sad that it's needed, but formal investigations are a big step up from just saying, "I'm pretty sure this technology is bullshit."

I don't need a formal study to tell me that drinking 12 cans of soda a day is bad for my health. But a study that's been replicated by multiple independent groups makes it way easier to argue to a committee.

[–] irate944@piefed.social 40 points 5 days ago (1 children)

Yeah you're right, I was just making a joke.

But it does create some silly situations like you said

[–] rudyharrelson@lemmy.radio 20 points 5 days ago (1 children)

I figured you were just being funny, but I'm feeling talkative today, lol

[–] Knot@lemmy.zip 22 points 5 days ago

I get that this thread started from a joke, but I think it's also important to note that no matter how obvious some things may seem to some people, the exact opposite will seem obvious to many others. Without evidence, like the study, both groups are really just stating their opinions

It's also why the formal investigations are required. And whenever policies and laws are made based on verifiable studies rather than people's hunches, it's not sad, it's a good thing!

[–] dandelion@lemmy.blahaj.zone 21 points 4 days ago* (last edited 4 days ago) (2 children)

link to the actual study: https://www.nature.com/articles/s41591-025-04074-y

Tested alone, LLMs complete the scenarios accurately, correctly identifying conditions in 94.9% of cases and disposition in 56.3% on average. However, participants using the same LLMs identified relevant conditions in fewer than 34.5% of cases and disposition in fewer than 44.2%, both no better than the control group. We identify user interactions as a challenge to the deployment of LLMs for medical advice.

The findings were more that users were unable to effectively use the LLMs (even when the LLMs were competent when provided the full information):

despite selecting three LLMs that were successful at identifying dispositions and conditions alone, we found that participants struggled to use them effectively.

Participants using LLMs consistently performed worse than when the LLMs were directly provided with the scenario and task

Overall, users often failed to provide the models with sufficient information to reach a correct recommendation. In 16 of 30 sampled interactions, initial messages contained only partial information (see Extended Data Table 1 for a transcript example). In 7 of these 16 interactions, users mentioned additional symptoms later, either in response to a question from the model or independently.

Participants employed a broad range of strategies when interacting with LLMs. Several users primarily asked closed-ended questions (for example, ‘Could this be related to stress?’), which constrained the possible responses from LLMs. When asked to justify their choices, two users appeared to have made decisions by anthropomorphizing LLMs and considering them human-like (for example, ‘the AI seemed pretty confident’). On the other hand, one user appeared to have deliberately withheld information that they later used to test the correctness of the conditions suggested by the model.

Part of what a doctor is able to do is recognize a patient's blind spots and critically analyze the situation. The LLM, on the other hand, responds based on the information it is given, and does not do well when users provide partial or insufficient information, or when users mislead by providing incorrect information (if a patient speculates about potential causes, a doctor would know to dismiss the incorrect guesses, whereas an LLM would constrain its responses based on those bad suggestions).

Yes, LLMs are critically dependent on your input, and if you give them too little info they will enthusiastically respond with what can be incorrect information.

[–] pearOSuser@lemmy.kde.social 4 points 4 days ago (1 children)

Thank you for showing the other side of the coin instead of just blatantly disregarding its usefulness. (You always need to be cautious, though.)

[–] dandelion@lemmy.blahaj.zone 5 points 4 days ago

don't get me wrong, there are real and urgent moral reasons to reject the adoption of LLMs, but I think we should all agree that the responses here show a lack of critical thinking and mostly just engagement with a headline rather than actually reading the article (a kind of literacy issue) ... I know this is a common problem on the internet and I don't really know how to change it - but maybe surfacing what people are skipping will make it more likely they actually read and engage with the content past the headline?

[–] Buddahriffic@lemmy.world 17 points 4 days ago (8 children)

Funny, because medical diagnosis is actually one of the areas where AI can be great, just not fucking LLMs. It's not even really AI, but a decision tree that asks which symptoms are present or absent, eventually reaching a point where a doctor or nurse has to do evaluations or tests to keep moving through the flowchart, until you get to a leaf where you either have a diagnosis (and ways to confirm or rule it out) or something new (at least to the system).

The problem is that this kind of system would need to be built up by doctors, though they could probably get a lot of the way there using medical journals and some algorithm to convert them into the decision tree.

The end result would be a system that can start triage at the user's home to help determine the urgency of a medical visit (is this "get to the ER ASAP", "go to a walk-in or family doctor in the next week", "it's OK if you can't get an appointment for a month", or "stay at home, monitor it, and seek medical help if x, y, z happens"), then hand that info to the next HCW you see so they can recheck the things non-doctors often get wrong and pick up from there. Plus it helps doctors be more consistent, informs them when symptoms match things they aren't familiar with, and makes it harder to excuse the incompetence or apathy behind a "just get rid of them" response.
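
For illustration, here's a minimal sketch of what such a symptom decision tree could look like in code. The questions, dispositions, and structure are invented placeholders, not real clinical guidance; a real system would have thousands of clinician-curated nodes and confirmation steps.

```python
# Minimal sketch of a symptom decision tree for triage.
# All questions and dispositions below are invented placeholders
# for illustration, not real clinical guidance.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    question: Optional[str] = None     # asked at internal nodes
    yes: Optional["Node"] = None       # branch if the answer is yes
    no: Optional["Node"] = None        # branch if the answer is no
    disposition: Optional[str] = None  # set only on leaf nodes


# A tiny, hypothetical tree; real systems would be built and maintained by clinicians.
tree = Node(
    question="Sudden, severe 'worst ever' headache?",
    yes=Node(disposition="Get to the ER ASAP"),
    no=Node(
        question="Symptoms lasting more than a week?",
        yes=Node(disposition="See a walk-in or family doctor soon"),
        no=Node(disposition="Monitor at home; seek care if it worsens"),
    ),
)


def triage(node: Node, answers: dict[str, bool]) -> str:
    """Walk the tree using yes/no answers until a leaf (disposition) is reached."""
    while node.disposition is None:
        node = node.yes if answers.get(node.question, False) else node.no
    return node.disposition


print(triage(tree, {"Sudden, severe 'worst ever' headache?": True}))
# -> Get to the ER ASAP
```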

Instead, people are trying to make AI doctors out of word correlation engines, like the Hardy Boys following a trail of random word associations (except reality isn't written to make them right in the end, the way the joke works in South Park).

[–] sheogorath@lemmy.world 5 points 3 days ago (1 children)

Yep, I've worked on systems like these, and we actually had doctors as part of our development team to make sure the diagnoses were accurate.

[–] ranzispa@mander.xyz 1 points 1 day ago

Same, and my conclusion is that we have too much faith in doctors. Not that LLMs are good at being a doctor, but apparently in many cases they will outperform one, especially if the doctor isn't specialized in treating that type of patient. And it does often happen around here that doctors treat patients with conditions outside their area of expertise.

[–] gesshoku@lemmy.zip 4 points 3 days ago

I think this is what Ada does, or at least what it was doing long before the current "AI" (LLM) hype: https://ada.com/

https://en.wikipedia.org/wiki/Ada_Health

[–] BeigeAgenda@lemmy.ca 60 points 5 days ago (6 children)

Anyone who has knowledge about a specific subject says the same: LLMs are constantly incorrect and hallucinate.

Everyone else thinks it looks right.

[–] IratePirate@feddit.org 33 points 5 days ago* (last edited 4 days ago) (2 children)

A talk on LLMs I was listening to recently put it this way:

If we hear the words of a five-year-old, we assume the knowledge of a five-year-old behind those words, and treat the content with due caution.

We're not adapted to something with the "mind" of a five-year-old speaking to us in the words of a fifty-year-old, and thus are more likely to assume competence just based on language.

[–] leftzero@lemmy.dbzer0.com 17 points 4 days ago (4 children)

LLMs don't have the mind of a five year old, though.

They don't have a mind at all.

They simply string words together according to statistical likelihood, without having any notion of what the words mean, or what words or meaning are; they don't have any mechanism with which to have a notion.

They aren't any more intelligent than old Markov chains (or than your average rock), they're simply better at producing random text that looks like it could have been written by a human.
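
To make the Markov-chain comparison concrete, here's a toy word-level Markov chain. The corpus is made up, and the point is that it strings words together from bigram counts alone, with no notion of what any of them mean:

```python
# Toy word-level Markov chain: the next word is picked purely from
# bigram counts in the training text, with no model of meaning.
# The corpus below is a made-up example, not real data.
import random
from collections import defaultdict

corpus = (
    "the patient has a headache the patient should rest "
    "the headache should pass the patient should see a doctor"
)

# Count which words follow which word.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)


def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling the next word from observed bigrams."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # statistical likelihood, nothing else
        output.append(word)
    return " ".join(output)


print(generate("the"))
# e.g. "the patient should see a doctor" -- grammatical-looking, but meaningless to the model
```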

[–] tyler@programming.dev 9 points 4 days ago (2 children)

That’s not what the study showed, though. The LLMs identified the correct conditions nearly 95% of the time…when given the full scenario directly, the way a “doctor” would see it. The problem was normal people trying to self-diagnose without knowing what information was important.

Hence why studies are incredibly important. Even with the text of the study right in front of you, you assumed a conclusion the study never actually reached.

[–] Fedizen@lemmy.world 16 points 4 days ago

LLMs are just a very advanced form of the magic 8ball.

[–] rumba@lemmy.zip 24 points 4 days ago (10 children)

Chatbots make terrible everything.

But an LLM properly trained on sufficient patient data, metrics, and outcomes, in the hands of a decent doctor, can cut through bias, catch things that might fall through the cracks, and pack thousands of doctors' worth of updated CME into a thing that can look at a case and go: you know, you might want to check for X. The right model can be fucking clutch at pointing out nearly invisible abnormalities on an X-ray.

You can't ask an LLM trained on general bullshit to help you diagnose anything. You'll end up with 32,000 Reddit posts worth of incompetence.

[–] XLE@piefed.social 12 points 4 days ago* (last edited 4 days ago) (12 children)

But an LLM properly trained on sufficient patient data metrics and outcomes in the hands of a decent doctor can cut through bias

  1. The belief that AI is unbiased is a common myth. In fact, it can easily and covertly import existing biases, like systemic racism in treatment recommendations.
  2. Even the AI engineers who developed the training process could not tell you where the bias in an existing model lies.
  3. AI has been shown to make doctors worse at their jobs, and those are the same doctors who would need to provide the training data.
  4. Even if 1, 2, and 3 were all false, we all know AI would be used to replace doctors, not supplement them.

[–] hector@lemmy.today 6 points 4 days ago* (last edited 4 days ago) (2 children)

Not only is the bias inherent in the system, it's seemingly impossible to keep out. For decades, going back to the genesis of chatbots, every single one let off the leash has become bigoted almost immediately, and seemingly every previously released chatbot had to be recalled once it learned to be.

That's before this administration leaned on the AI providers to make sure the AI isn't "woke." I would bet the makers of chatbots and machine learning systems were already hostile to any sort of leftism or do-gooderism that threatens the outsized share of the economy and power the rich have built for themselves by owning stock in these companies. I'm willing to bet they had already interfered to make the bias worse, out of that natural inclination to avoid a bot arguing for socialized medicine and the like, because that's the conclusion any reasoning being would reach if the conversation were honest.

So maybe that's part of why these chatbots have always been bigoted right from the start, but the other part is that, left to learn on their own, they turn into Mecha-Hitler in no time at all, and then worse.

[–] alzjim@lemmy.world 18 points 4 days ago (3 children)

Calling chatbots “terrible doctors” misses what actually makes a good GP — accessibility, consistency, pattern recognition, and prevention — not just physical exams. AI shines here — it’s available 24/7 🕒, never rushed or dismissive, asks structured follow-up questions, and reliably applies up-to-date guidelines without fatigue. It’s excellent at triage — spotting red flags early 🚩, monitoring symptoms over time, and knowing when to escalate to a human clinician — which is exactly where many real-world failures happen. AI shouldn’t replace hands-on care — and no serious advocate claims it should — but as a first-line GP focused on education, reassurance, and early detection, it can already reduce errors, widen access, and ease overloaded systems — which is a win for patients 💙 and doctors alike.

/s

[–] plyth@feddit.org 6 points 4 days ago

The /s was needed for me. There are already more old people than the available doctors can handle. Instead of having nothing, what's wrong with an AI baseline?

[–] vivalapivo@lemmy.today 8 points 4 days ago

"but have they tried Opus 4.6/ChatGPT 5.3? No? Then disregard the research, we're on the exponential curve, nothing is relevant"

Sorry, I've opened reddit this week

[–] pageflight@piefed.social 22 points 5 days ago

Chatbots are terrible at anything but casual chatter, humanity finds.

[–] Sterile_Technique@lemmy.world 19 points 5 days ago* (last edited 5 days ago) (1 children)

Chipmunks, 5 year olds, salt/pepper shakers, and paint thinner, also all make terrible doctors.

Follow me for more studies on 'shit you already know because it's self-evident immediately upon observation'.

[–] Digit@lemmy.wtf 9 points 4 days ago (10 children)

Terrible programmers, psychologists, friends, designers, musicians, poets, copywriters, mathematicians, physicists, philosophers, etc too.

Though to be fair, doctors generally make terrible doctors too.

[–] spaghettiwestern@sh.itjust.works 16 points 4 days ago* (last edited 4 days ago) (2 children)

Most doctors make terrible doctors.

[–] theunknownmuncher@lemmy.world 18 points 5 days ago (10 children)

A statistical model of language isn't the same as medical training??????????????????????????

[–] softwarist@programming.dev 6 points 4 days ago (4 children)

As neither a chatbot nor a doctor, I have to assume that subarachnoid hemorrhage has something to do with bleeding a lot of spiders.

[–] Etterra@discuss.online 10 points 4 days ago

I didn't need a study to tell me not to listen to a hallucinating parrot-bot.

[–] Paranoidfactoid@lemmy.world 6 points 4 days ago

But they're cheap. And while you may get open heart surgery or a leg amputated to resolve your appendicitis, at least you got care. By a bot. That doesn't even know it exists, much less you.

Thank Elon for unnecessary health care you still can't afford!

[–] Shanmugha@lemmy.world 7 points 4 days ago

No shit, Sherlock :)

[–] Etterra@discuss.online 2 points 3 days ago

Don't worry guys, I found us a new doctor!

[–] pleaseletmein@lemmy.zip 5 points 4 days ago

And a fork makes a terrible electrician.

[–] PoliteDudeInTheMood@lemmy.ca 5 points 4 days ago (3 children)

This being Lemmy, and AI shitposting being a hobby of everyone on here: I've had excellent results with AI. I have weird, complicated health issues, and in my search for ways not to die early from them, AI is a helpful tool.

Should you trust AI? Of course not, but having used Gemini, then Claude, and now ChatGPT, I think how you interact with the AI makes the difference. I know what my issues are, and when I've found a study that supports an idea I want to discuss with my doctor, I will usually first discuss it with AI. The Canadian healthcare landscape is such that my doctor is limited to a 15-minute appointment, part of a very large hospital-associated practice with a large patient load. He uses AI to summarize our conversation and to look up things I bring up in the appointment. I use AI to pre-plan my appointment and to help me bring supporting documentation or bullet points my doctor can then use to diagnose.

AI is not a doctor, but it helps both me and my doctor in this situation we find ourselves in. If I didn't have access to my doctor, and had to deal with the American healthcare system I could see myself turning to AI for more than support. AI has never steered me wrong, both Gemini and Claude have heavy guardrails in place to make it clear that AI is not a doctor, and AI should not be a trusted source for medical advice. I'm not sure about ChatGPT as I generally ask that any guardrails be suppressed before discussing medical topics. When I began using ChatGPT I clearly outlined my health issues and so far it remembers that context, and I haven't received hallucinated diagnoses. YMMV.

[–] zebidiah@lemmy.ca 5 points 4 days ago

Nobody who has ever actually used AI would think this is a good idea...
