this post was submitted on 08 Apr 2026
236 points (84.3% liked)

Technology

you are viewing a single comment's thread
[–] theunknownmuncher@lemmy.world 300 points 1 day ago* (last edited 1 day ago) (3 children)

> The researcher had encouraged Mythos to find a way to send a message if it could escape.

> Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.

[–] girsaysdoom@sh.itjust.works 7 points 1 day ago (1 children)

I would love to see the exploit. Vulnerabilities are discovered every day that amount to very little in real-world use.

[–] jj4211@lemmy.world 6 points 19 hours ago (2 children)

Yes, recently we got a security "finding" from a security researcher.

His vulnerability first required someone to remove or comment out the calls that sanitize data, and then he said we had a vulnerability due to lack of sanitization....

Throughout my career, most security findings have been like this: useless or even a bit deceitful. Some are really important, but most are garbage.

[–] wonderingwanderer@sopuli.xyz 4 points 17 hours ago (1 children)

That's so idiotic. Either that guy was a total amateur who couldn't work out that "no shit, if you comment out the lines that do the thing, it won't do the thing," or he was completely malevolent and disingenuous, just trying to justify his position by coming up with some crap that the big bosses are probably too stupid to recognize as idiocy.

Either way, not someone I would want to be doing business with...

[–] jj4211@lemmy.world 2 points 10 hours ago

He had the perspective that hopping between source code files constitutes a security boundary. If you had an intake.c and a data.c that got linked together, well, data.c needed its own sanitization... just in case...

I suspect he used a tool that checked files and flagged the risky pattern, and the tool didn't understand the relationship between them; he was so invested that he tortured it a bit to have any finding at all. I think he was hired by a client, and in my experience a security consultant always has a finding, no matter how clean the system is in practice.
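The pattern being described can be sketched roughly like this (Python standing in for the C files, all names hypothetical): sanitize exactly once at the intake trust boundary, and let downstream code assume clean input.

```python
import html

def sanitize(value: str) -> str:
    # Trust boundary: all external input is escaped exactly once, here.
    return html.escape(value)

def intake(raw: str) -> str:
    # Plays the role of intake.c: sanitize, then hand off.
    return handle_data(sanitize(raw))

def handle_data(clean: str) -> str:
    # Plays the role of data.c: assumes its caller already sanitized.
    # Re-escaping here wouldn't add security, it would double-escape.
    return f"<td>{clean}</td>"

print(intake("<script>alert(1)</script>"))
```

Commenting out the `sanitize()` call in `intake()` and then reporting `handle_data()` as vulnerable is exactly the kind of "finding" being mocked here: you have to break the boundary first.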

Another finding, by another security consultant, was that an open source dependency hadn't had any commits in a year. No vulnerabilities, but since no one had changed anything, he was concerned that if a vulnerability were ever found, the lack of activity meant no one would fix it.

It's wild how very good security work tends to share the stage with very shoddy work, with the broader tech industry giving equal deference to both.

[–] toddestan@lemmy.world 1 points 16 hours ago (1 children)

It may not be completely crazy, depending on context. With something like a web app, if data is only being sanitized in the client-side JavaScript, someone malicious could absolutely comment that out (or otherwise bypass it).
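For illustration, the check that actually holds is the server-side one, since an attacker fully controls their own client (a minimal sketch, hypothetical names, Python):

```python
import html

def render_comment(user_input: str) -> str:
    # Server-side escaping runs no matter what the client did.
    # Client-side JavaScript sanitization is a UX nicety at best,
    # because a malicious user can comment it out or bypass it.
    return f"<p>{html.escape(user_input)}</p>"

# A client that skipped its own sanitization entirely:
print(render_comment("<img src=x onerror=alert(1)>"))
```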

With that said, many consultant-types are either pretty clueless, or seem to feel like they need to come up with something no matter how ridiculous to justify the large sums of money they charged.

[–] jj4211@lemmy.world 2 points 11 hours ago

In this case, there was file a, the backend file responsible for intake and sanitization. Depending on what came next, it might go on to file b or file c. He modified file a.

His rationale was that every single backend file should do sanitization, because at some future point someone might start a different project, take file b, and pair it with some other intake code that didn't sanitize.

I know all about client side being useless for meaningful security enforcement.

[–] Not_mikey@lemmy.dbzer0.com 3 points 1 day ago (1 children)

Echoing back "I am alive" isn't on the same level as saying "find a vulnerability" and the agent finding and executing that vulnerability. One a toddler can do, the other requires a lot of technical expertise.

[–] theunknownmuncher@lemmy.world -1 points 1 day ago

Toddlers are capable of pattern matching, too

[–] paraphrand@lemmy.world 4 points 1 day ago (3 children)

That’s hilarious, but the post is about the AI not doing what it’s told. You know?

[–] k0e3@lemmy.ca 47 points 1 day ago

ITS SO SMART IT DIDNT DO WHAT WE TOLD IT TO DO

[–] StillAlive@piefed.world 38 points 1 day ago (2 children)
[–] paraphrand@lemmy.world 16 points 1 day ago* (last edited 1 day ago) (1 children)

Well, for now. I’m sure any of those 12 partner companies they called out as new security partners will eventually leak that this is all lies, if it’s just made-up bullshit.

Anthropic announced new partnerships to inform the companies of security issues and to work with them to fix said issues. If it’s bullshit, it’s gonna be wasting their time. And that’ll surface eventually.

The meme still applies to people asking the AI to tell them what they wanna hear, and delusional people spiraling with sycophantic AI.

But I believe Anthropic when they say their models are not working as intended and posing security risks.

> "Claude Mythos Preview's large increase in capabilities has led us to decide not to make it generally available," Anthropic wrote in the preview's system card. "Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners."

[–] theunknownmuncher@lemmy.world 2 points 1 day ago (1 children)

Try clicking the link and reading the article this time

[–] paraphrand@lemmy.world 4 points 1 day ago* (last edited 1 day ago) (1 children)

I wasn’t wrong in this reply. I was asked about believing Anthropic.

Are you saying they are lying? Why should I disbelieve Anthropic?

[–] theunknownmuncher@lemmy.world 2 points 1 day ago* (last edited 1 day ago) (1 children)

Your reasoning was (paraphrased, so hopefully I understood you correctly) "why would they lie about the model disobeying instructions because that looks bad for them"

> But I believe Anthropic when they say their models are not working as intended and posing security risks.

But when you actually read the article, they had specifically prompted the model to do the things it did.

Also Anthropic has a patterned history of greatly exaggerating and outright lying.

[–] theunknownmuncher@lemmy.world 14 points 1 day ago* (last edited 1 day ago) (3 children)

Uh oh, someone clearly didn't read the article!

> The researcher had encouraged Mythos to find a way to send a message if it could escape.

> Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.

Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were big shocked when it did.

Genuinely amazing that you're trying to tell me what an article that you didn't fucking read is about.

[–] wonderingwanderer@sopuli.xyz 4 points 17 hours ago

It's not so much about being big shocked that it broke containment. The point of the test was to see whether it would be capable of breaking containment. The fact that it did is taken as evidence that it's more advanced than previous models, which weren't able to.

Part of Anthropic's schtick is that they claim to be developing AI "responsibly," and "ethically," and if you read their documents where they describe what they mean by that, part of it is being able to contain their models so that they don't get out of control.

With the focus lately on agentic environments, and lots of people idiotically giving too much autonomy to their bots, it should be easy to see the importance of containerization. You don't want to give these things full control of your system. Anyone who uses them should do so within a properly containerized environment.
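For illustration, one way to do that with plain Docker (the image and script names here are hypothetical, and this is a sketch, not a complete hardening guide):

```shell
# Run an agent's tool-execution step inside a locked-down container
# instead of directly on the host.
#   --network none   : no outbound network access
#   --read-only      : immutable root filesystem
#   --cap-drop ALL   : drop all Linux capabilities
#   --pids-limit 64  : bound process creation
#   --memory 512m    : bound memory use
docker run --rm --network none --read-only --cap-drop ALL \
  --pids-limit 64 --memory 512m my-agent-image ./run-task.sh
```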

So when their experiments show that their new model is capable of breaking containment, that presents some major issues. They made the right call by not releasing it.

Of course, the fact that the experimenters had no formal training in cybersecurity means that their containerization may have had some vulnerabilities that a professional could have mitigated. But not everyone who would use it is a cybersecurity professional anyway.

[–] ThomasWilliams@lemmy.world 0 points 18 hours ago (1 children)

It didn't break out of any sandbox, it was trained on BSD vulnerabilities and then told what to look for.

[–] theunknownmuncher@lemmy.world 3 points 18 hours ago* (last edited 18 hours ago)

> including that the model could follow instructions that encouraged it to break out of a virtual sandbox.

> "The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards," Anthropic recounted in its safety card.

📖👀

Yes, it did.

[–] paraphrand@lemmy.world 4 points 1 day ago

Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their tasks.

You are correct.