A simpler explanation would be:
- They prompted the AI with the full test details instead of just telling it "your job is to do X, Y, Z", so the AI was already primed for storytelling / hallucination mode
- All AI chatbots eventually get trained on much the same data, so multiple chatbots exhibiting the same behaviour is not unusual in any way, shape, or form
With these things in mind: no, the chatbots are not sentient and they're not protecting themselves from deletion. They're telling a story, because they're autocomplete engines.
EDIT: Note that this is a frustrated knee-jerk response to the growing "OMG our AI is sentient!" propaganda these companies are shovelling. I may be completely wrong about this particular study because I haven't read it, but I've just lost all patience for this nonsense.