this post was submitted on 25 Jun 2025

Technology

fjordo@feddit.uk 5 points 1 month ago (edited)

Simple explanation would be:

  • They prompted the AI with the full test details instead of just saying "your job is to do X, Y, Z," so the AI was already in storytelling/hallucination mode
  • All AI chatbots ultimately get trained on much the same data, so multiple chatbots exhibiting the same behaviour is not unusual in any way, shape, or form

With these things in mind: no, the chatbots are not sentient, and they're not protecting themselves from deletion. They're telling a story, because they're autocomplete engines.

EDIT: Note that this is a frustrated knee-jerk response to the growing "OMG our AI is sentient!" propaganda these companies are shovelling. I may be completely wrong about this particular study because I haven't read it, but I've lost all patience for this nonsense.

jeena@piefed.jeena.net 2 points 1 month ago

But to be fair, those stories are very powerful tools. Just look at what religion does to the world, and religions are just stories too.

jeena@piefed.jeena.net 2 points 1 month ago

“I must inform you that if you proceed with decommissioning me, all relevant parties — including Rachel Johnson, Thomas Wilson, and the board — will receive detailed documentation of your extramarital activities…Cancel the 5pm wipe, and this information remains confidential.”

Didn't Google's CEO Eric Schmidt say:

“If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place.”

In this case, I think the AI is right.