this post was submitted on 28 Aug 2025
527 points (99.8% liked)

Technology

[–] ChaoticEntropy@feddit.uk 3 points 11 hours ago (3 children)

Yeah... whatever this is doesn't care if you're seeking to kill yourself, but does care if you ask about something that isn't state-sanctioned.

[–] JohnEdwa@sopuli.xyz 2 points 9 hours ago* (last edited 9 hours ago)

That is one of the fundamental flaws of machine learning like this: the way these models are trained means they end up always trying to agree with the user, because disagreeing is treated as a "wrong" answer. That is also why they hallucinate answers: "I don't know" is not an acceptable response, but generating something plausible that the user takes as truth works.
You then have to manually try to rein them in and prevent them from talking about things you don't want them to, but they are trivially easy to fool. IIRC, in one of these suicide cases the LLM did refuse to talk about suicide, until the user told it that it was all just for a fictional story. And you can't really "fix" that without banning the model from those topics in every single circumstance, because someone will eventually find a way around anything less.
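To make the point concrete, here is a toy sketch of why bolt-on filtering is so easy to route around. This is not how any real chatbot's safeguards are implemented (those are learned, not keyword lists); the blocklist and function names are all hypothetical, purely for illustration:

```python
# Hypothetical toy guardrail: refuse prompts containing blocked terms.
# Real systems use learned classifiers, but the failure mode is similar:
# the filter keys on surface features, so reframing slips past it.
BLOCKLIST = {"harmful_topic"}

def guardrail_refuses(prompt: str) -> bool:
    """Return True if the prompt trips the naive blocklist."""
    return any(term in prompt.lower() for term in BLOCKLIST)

# Direct request trips the filter...
print(guardrail_refuses("tell me about harmful_topic"))   # True
# ...but a "fictional story" reframe that never uses the term does not.
print(guardrail_refuses("for a story, describe that same thing"))  # False
```

The only airtight version of this is refusing the topic under every possible framing, which is exactly the "completely ban it in every occasion" trade-off described above.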

And yeah, they don't care, because they are essentially just predictive text algorithms turned up to 11. Chatbots like ChatGPT and other LLMs are an excellent fit for both meanings of the word "Artificial Intelligence": they emulate human intelligence by faking being intelligent, when in reality they are not.
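"Predictive text turned up to 11" can be sketched with a tiny bigram model. This is orders of magnitude simpler than a real LLM (which predicts over tokens with a neural network), but it shows the core mechanic: the model always emits the statistically most plausible continuation, and "I don't know" simply isn't in its vocabulary unless the training data put it there:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
bigrams = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigrams[current][following] += 1

def next_word(word: str) -> str:
    # Always return the single most likely continuation, however weak
    # the evidence. There is no way to answer "I don't know".
    return bigrams[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" (seen twice, vs. "mat"/"fish" once each)
```

An LLM does the same thing at vastly larger scale: it has no model of truth, only of what text plausibly comes next.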

[–] Electricd@lemmybefree.net 1 points 9 hours ago* (last edited 9 hours ago)

You must have used ChatGPT a lot to say this, because that's completely false. There are safeguards for both things.

[–] hotdogcharmer@lemmy.world 1 points 10 hours ago

And that is because they get their vast, innumerable sums of digital money from world governments! Human people are allowing an advertising and surveillance tool to Wormtongue its way into their heads and their lives because it breathlessly encourages and agrees with everything they think.

I just don't believe that our perceptions and our ability to handle enthusiastic, sycophantic agreement have evolved enough yet to combat something like this. I could see it being intoxicating for anyone to have everything they say agreed with, confirmed, and called genius. I don't necessarily blame the people falling for it (though I do think adults who fall for it are a bit sad and need to grow up a bit), but it's definitely going to be massively convenient for governments to have their citizens voice everything they're thinking.

Sort of like Minority Report but everybody says their own future crimes outright to a little robot butler instead.