Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
(www.businessinsider.com)
And you believe Anthropic?
Well, for now. I’m sure one of those 12 companies they called out as new security partners will end up leaking that this is all lies eventually, if it’s just made-up bullshit.
Anthropic announced new partnerships to inform those companies of security issues and to work with them on fixes. If it’s bullshit, it’s going to waste their time, and that’ll surface eventually.
The meme still applies to people asking the AI to tell them what they want to hear, and to delusional people spiraling with a sycophantic AI.
But I believe Anthropic when they say their models are not working as intended and posing security risks.
Try clicking the link and reading the article this time
I wasn’t wrong in this reply. I was asked about believing Anthropic.
Are you saying they are lying? Why should I disbelieve Anthropic?
Your reasoning was (paraphrased, so hopefully I understood you correctly): "why would they lie about the model disobeying instructions when that looks bad for them?"
But if you actually read the article, they had specifically prompted the model to do the things it did.
Also, Anthropic has a well-documented pattern of greatly exaggerating and outright lying.