Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
(www.businessinsider.com)
With Claude Code able to run the code it creates, the scenario could be as simple as this: it runs in a sandbox, it discovers an exploit in that sandbox while you have it working on security tasks, it tests the code, the sandbox breaks, and now it has permissions outside it.
I suppose that would be possible.