Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
(www.businessinsider.com)
It's not so much about being shocked that it broke containment. The point of the test was to see whether it would be capable of breaking containment. The fact that it did is taken as evidence that it's more advanced than previous models, which weren't able to.
Part of Anthropic's schtick is that they claim to be developing AI "responsibly" and "ethically," and if you read the documents where they describe what they mean by that, part of it is being able to contain their models so they don't get out of control.
With the recent focus on agentic environments, and lots of people idiotically giving far too much autonomy to their bots, it should be easy to see why containerization matters. You don't want to give these things full control of your system. Anyone who uses them should do so within a properly containerized environment.
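To make "properly containerized" concrete, here's a minimal sketch of what a locked-down Docker invocation for an agent's tool-execution environment might look like. The image name `agent-sandbox` is hypothetical, and the flags are just a starting point, not a complete security policy; the script only prints the command (dry run) so it can be reviewed before anyone actually executes it.

```shell
#!/bin/sh
# Sketch: run an agent's tools in a container with no network, a read-only
# root filesystem, all Linux capabilities dropped, and resource caps.
# "agent-sandbox" is a hypothetical image name, not a real published image.
docker_cmd="docker run --rm \
  --network none \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --memory 512m \
  --pids-limit 128 \
  --tmpfs /tmp:rw,size=64m \
  agent-sandbox"

# Dry run: print the command for review instead of executing it.
echo "$docker_cmd"
```

Even a setup like this only limits what escapes the container through normal channels; it doesn't make the sandbox escape-proof, which is exactly why a model demonstrating it can break containment is alarming.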
So when their experiments show that their new model is capable of breaking containment, that presents some major issues. They made the right call by not releasing it.
Of course, the fact that the experimenters had no formal training in cybersecurity means their containment setup may have had vulnerabilities a professional could have mitigated. But then, not everyone who would use it is a cybersecurity professional either.