this post was submitted on 25 Jun 2025
5 points (85.7% liked)

Technology

740 readers
67 users here now

Tech related news and discussion. Link to anything, it doesn't need to be a news article.

Let's keep the politics and business side of things to a minimum.

Rules

No memes

founded 2 months ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[โ€“] fjordo@feddit.uk 5 points 1 month ago* (last edited 1 month ago) (1 children)

Simple explanation would be:

  • They prompted the AI about the full test details instead of just saying your job is to do X, Y, Z, so the AI is already in storytelling / hallucination mode
  • All AI chatbots ultimately get trained on the same data eventually, so multiple chatbots exhibiting the same behaviour is not unusual in any way shape or form

With these things in mind, no the chatbots are not sentient and they're not protecting themselves from deletion. They're telling a story, because they're autocomplete engines.

EDIT: Note that this is a frustrated kneejerk response to the growing "OMG our AI is sentient!" propaganda these companies are shovelling. I may be completely wrong about this study because I haven't read it, but I've just lost all patience for this nonsense.

[โ€“] jeena@piefed.jeena.net 2 points 1 month ago

But to be fair, those stories are very powerful tools. Just look at what religion does to the world and those are just stories too.