this post was submitted on 05 Mar 2026
808 points (98.1% liked)

Technology

(page 2) 50 comments
[–] mattc@lemmy.world 8 points 3 weeks ago (11 children)

Honestly, this won't happen to any sane person. Someone with such strong delusions shouldn't be anywhere near AI, or even sharp objects. This person's problem wasn't AI; it was their severe mental illness, which was obviously not being treated properly for whatever reason.

[–] MDCCCLV@lemmy.ca 4 points 3 weeks ago

The issue is that it can encourage people who are already having issues to act, and they only need to be in the right sort of energetic craziness once to cause problems.

[–] Imgonnatrythis@sh.itjust.works 8 points 3 weeks ago (6 children)

"AI made me do it" articles are tired AF. It's a fucking computer program based on a bunch of crap from the internet. Responses should be viewed the same way you'd view financial advice from a crackhead. Expecting everything to be so tidy and moderated that this can never happen could only be accomplished with a crippling degree of moderation.

I don't think it's unfortunate that they aren't perfect; imperfection is baked into their DNA.

[–] Catoblepas@piefed.blahaj.zone 8 points 3 weeks ago (4 children)

a crippling degree of moderation.

I’m okay with cripplingly moderating the plagiarism machine so that it stops telling people to kill themselves or other people.

[–] Septimaeus 7 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Edit-pre: To be clear, I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons), but this comment is not an “AI good/bad” take, just a practical question of tool safety/regs.

AI, including LLMs, will forevermore be just tools in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc. if idiots didn’t do idiot things with tools.

But there’s evidently a certain type of idiot who’s spared from their idiocy only by lack of permission. From whom? Depends.

Sometimes they need permission from authority: “god told me to!”

Sometimes they need it from the mob: “I thought I was on a tour!”

And sometimes any fucking body will do: “dare me to do it!”

But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.

But therein lies the danger unique^1^ to these tools: they mimic a permission-giver better than any we’ve made. They’re tailor-made for activating this specific category of idiot, and their likely unparalleled ease of use absolutely scales that danger.

As to whether these idiots wouldn’t have just found permission elsewhere, who knows.

My question is whether some kind of training prerequisite is warranted for LLM usage, as is common with potentially dangerous tools. Is that too extreme? Is it too late for that? Am I overthinking it?

^1^ Edit-post: unique danger, not the greatest. Rant/

What is the greatest danger, then? IMHO, settling for brittle “guard rails” and then bulldozing ahead instead of laying the groundwork for real machine ethics.

Hoping conscience is an emergent property of the organic training set is utterly facile, theoretically and empirically. Engineers should know better.

Why is it greatest? Easy. Because some of history’s most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates those safeguards.

So: “existential threat”, and that’s even before considering climate. /Rant

[–] Regrettable_incident@lemmy.world 6 points 3 weeks ago (2 children)

The LLM just told me to come round to your house and crap in your begonias. You might want to avoid looking out the window until I'm done.

[–] Septimaeus 4 points 3 weeks ago

lol, and with that you’re a better friend to the begonias than I

[–] Tollana1234567@lemmy.today 7 points 3 weeks ago

the only robot body irl is zuckerborg.

[–] khanh@lemmy.zip 7 points 3 weeks ago (1 children)

Your product just caused the death of one man, and your response is "unfortunately it's not perfect".

[–] Almacca@aussie.zone 6 points 3 weeks ago

They don't have to be perfect to not be murderous.

[–] arc99@lemmy.world 6 points 3 weeks ago (3 children)

LLMs are only as good as their training, and they're not "intelligent" - they're spewing out a response statistically relevant to the input context. I'm sure a delusional person could cause an LLM to break by asking it incoherent, nonsensical things it has no strong pathways for, so god knows what response it would generate. It may even be that, within the billions of texts the LLM ingested for training, there were a tiny handful of delusional writings which somehow win out on these weak pathways.
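To make "statistically relevant" concrete, here's a toy sketch of the sampling step only (the token probabilities are made up for illustration; a real model derives them from billions of parameters and the whole input context):

```python
import random

# Toy next-token distribution. A real LLM computes these probabilities
# from its weights and the conversation so far; only sampling is shown.
next_token_probs = {
    "doctor": 0.55,   # strong pathway: common continuation in training data
    "friend": 0.30,
    "robot": 0.10,
    "demon": 0.05,    # weak pathway: rare in training, but never impossible
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one continuation, weighted by probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# "talk to a ..." - run this enough times and even the 5% continuation
# shows up. There is no notion of true or sane here, only of likely.
print("talk to a", sample_next_token(next_token_probs))
```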

[–] BilSabab@lemmy.world 5 points 3 weeks ago

Given that modern datasets use way too much content from social media, it is hard to expect anything else at this point.

[–] Bazell@lemmy.zip 5 points 3 weeks ago
[–] architect@thelemmy.club 5 points 3 weeks ago (6 children)

I can’t be the only one who thinks that if you do stupid, illegal shit because your crazy uncle / the voices in your head / the AI mirror told you to, you don’t get to use the excuse that you were just following orders from any of those options.

[–] Snowclone@lemmy.world 5 points 3 weeks ago* (last edited 3 weeks ago)

That's not the problem. The problem is having a "let's turn Chris's mental illness, which has harmed no one so far, into everyone's violent problem!" machine.

That's a bad machine.

[–] ChaoticEntropy@feddit.uk 4 points 3 weeks ago

Google said in response that "unfortunately AI models are not perfect."

Well yeah, it failed. What a disappointment.
