Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
(www.businessinsider.com)
It does show their general style of work, e.g. no checks of the source at all, complete ignorance of the capabilities of language models, and lots of pleas not to hack the user when they ask a question. With that leak I'm not surprised they think a model is "too dangerous". They could barely stop the old one.
Oh, I completely agree with that; it's just that the jump to "a flawed model leaked" goes too far. There's already enough crap to mock, no need to make up additional stuff.