this post was submitted on 07 Mar 2026
937 points (99.1% liked)

Technology

(page 3) 33 comments
[–] chunes@lemmy.world -1 points 3 weeks ago* (last edited 3 weeks ago) (6 children)

Fuck the hell out of this.

My brothers in Christ, I'm not going to drink bleach because the chatbot tells me to. I'm trying to come up with diagnostic ideas to discuss with my doctors, and it's invaluable for that.

[–] AmbitiousProcess@piefed.social -1 points 3 weeks ago (5 children)

I'm not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.

The reason so many people turn to LLMs for legal and medical advice is that both fields are unaffordable, complex, and hard to parse.

If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it's probably the flu and that I should mask up for a bit, that's probably gonna be a better outcome than being told "I'm sorry, I can't answer that."

At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.

I feel like I'd much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.

Like, for example, if an LLM cites multiple medical journals, government health websites, etc., and relays the same information those sources published, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else's accidental misinformation?

And if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, should a private right of action exist?

I'm not really sure myself, to be honest. A lot of people rely on LLMs for their information now, so a blanket ban on displaying certain information will, for a lot of them, just mean "you can't know," and they're not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.

[–] TropicalDingdong@lemmy.world -4 points 3 weeks ago (2 children)

ITT: People with absolutely no fucking clue what the consequences of their emotional "AI bad" response will actually be.

[–] d3adpaul77@lemmy.org -4 points 3 weeks ago (6 children)

we don't want the plebs getting around our carefully constructed cartels...
