this post was submitted on 17 Apr 2026
54 points (95.0% liked)

World News


Finance ministers, central bankers and financiers have expressed serious concerns about a powerful new AI model they fear could undermine the security of financial systems.

The development of the Claude Mythos model by Anthropic has led to crisis meetings, after it found vulnerabilities in many major operating systems.

Experts say it potentially has an unprecedented ability to identify and exploit cyber-security weaknesses - though others caution further testing is needed to properly understand its capabilities.

Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos had been discussed extensively at the International Monetary Fund (IMF) meeting in Washington DC this week.

"Certainly it is serious enough to warrant the attention of all the finance ministers," he said.

top 18 comments
[–] benjirenji@slrpnk.net 36 points 2 days ago (6 children)

I'm not sure I get the concern. If there are vulnerabilities, they have probably been sold to the NSA, other state hackers and black hats already. Mythos would help close them for everyone.

Sure, a bad actor could use it to break in, but Mythos is not some secret hacking tool; it's an expensive LLM you can run against your own code and systems, giving you the upper hand.

Anthropic is actually acting responsibly by contacting maintainers and platforms with the bugs it found, and giving them the chance to analyze their systems, before it's released to the wider public. And if it's all hype, then this is a money-grabbing operation to finally make good money off of LLMs. That concern, however, doesn't seem to be shared by the financiers.

[–] vzqq@lemmy.blahaj.zone 18 points 2 days ago* (last edited 2 days ago)

I’m worried about the tons of barely maintained software run by your average company. Most commercial software is made by relatively small outfits and is drowning in technical debt. The only thing saving their customers is the effort it takes to pick through it.

But now any loser with a decompiler and a $100 Claude sub can ruin a whole lot of people’s day.

Things will get better, but the near term is pretty fucked.

[–] magnue@lemmy.world 10 points 2 days ago

I bought a game server once that was hosted on a VPS, and I didn't bother doing much security setup for the first 12 hours (fail2ban etc.). I was SSH-ing into it, so the address was kind of 'open', but not listed anywhere.

I got over 300 failed SSH attempts in those 12 hours, before I set up fail2ban and switched to key-based auth. Just massive botnets scouring addresses. (The game server never took off, so it's actually now being used as a honeypot named to look like a payment node, so I can report a bunch of them.)
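For anyone in the same first-12-hours spot, a minimal hardening pass along these lines might look like the sketch below. This assumes a Debian/Ubuntu VPS with systemd, run as root; the `user@your-vps` address is a placeholder.

```shell
# Install fail2ban and enable its SSH jail.
apt-get install -y fail2ban
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
bantime  = 1h
EOF
systemctl enable --now fail2ban

# Switch to key-only auth: first copy your key over FROM YOUR LOCAL MACHINE:
#   ssh-copy-id user@your-vps
# then disable password logins on the server and reload sshd.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh
```

With password auth off, the botnet probes still show up in the logs, but they can only fail; fail2ban then bans the noisiest sources.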

The worry is more the automation. Someone could cast an extremely wide net and find vulnerabilities that people didn't even know could be vulnerabilities. If you could run 1,000 agents, each with the ability of a network/SE expert, you could steal a lot of things very quickly.

[–] CombatWombat@feddit.online 6 points 1 day ago

The thing to be concerned about is that LLM vendors have figured out a way to write a system card that leads to regulatory capture in under a week. That's the only innovation -- Mythos doesn't actually find software vulnerabilities considerably better than other LLMs.

[–] disorderly@lemmy.world 5 points 2 days ago (1 children)

I don't have it handy, but I recommend reading Anthropic's report about Mythos and security. They state that in the long run, models which can iteratively build an attack against a perceived vulnerability will be a major win for defenders, but in the short term they present an advantage to attackers, since they basically expose oodles of new zero-days.

[–] ThirdConsul@lemmy.zip 2 points 1 day ago* (last edited 1 day ago)

So far, every technical blog post Anthropic has made about their new capabilities has turned out to be a marketing lie (e.g. agents clean-room implementing a C compiler without human intervention - it turned out the only true part was that the agents did burn tokens: it was neither clean-room nor free of human intervention, it didn't compile, and it needed to use the actual compiler to copy-paste working code [sic!]). I am not convinced the Mythos one is different.

[–] LibertyLizard@slrpnk.net 3 points 2 days ago

Maybe the concern is that less scrupulous actors will soon be able to develop similar models?

[–] theherk@lemmy.world 3 points 2 days ago

Yep; same as it ever was. The arms race continues. At least until the Butlerian Jihad arrives.

[–] limonfiesta@lemmy.world 22 points 2 days ago* (last edited 2 days ago) (2 children)

"Capitalists dependent upon AI bubble, feign concern regarding latest snake oil release to help prop AI bubble"

FTFY

[–] EightBitBlood@lemmy.world 2 points 1 day ago

I COMPLETELY agree with you. Except in this one case, it's possible the problem is that the billionaires made an AI that understands how billionaires exploit financial systems, and they fear others could do the same with Mythos, leading to actual legislation that would close the loopholes they spent decades creating within international finance legislation.

[–] vzqq@lemmy.blahaj.zone 2 points 2 days ago* (last edited 2 days ago) (2 children)

I know it’s cool to be blasé about AI stuff, but if there’s an area where the hype is warranted it’s computer security research.

I don’t want to look at AI “art” or read an AI generated “book”, but the exploits derived from an AI-enabled process work just as well as the organic version. And you don’t need a warehouse full of Eastern European zoomers and junk food to get them.

[–] eleijeep@piefed.social 16 points 2 days ago (1 children)

And you don’t need a warehouse full of Eastern European zoomers and junk food to get them.

Are you certain about that? Anthropic has a team of security engineers "validating" the LLM output, and then they have been passing on their "validated" outputs to third-party security researchers to "confirm" them.

Tellingly, they don't say how many false positives have to be filtered out in order to find the real vulnerabilities with working exploits, but I imagine that if all those security researchers were tasked with auditing the same codebases, they would probably find the same (or more) vulnerabilities without the shotgun guessing of an LLM to guide them.

You need to remember that these claims are being made by a company that has enormous financial incentive to make everyone believe that this model is a huge breakthrough.

[–] vzqq@lemmy.blahaj.zone 4 points 2 days ago* (last edited 2 days ago) (2 children)

I have no inside knowledge of this particular work, but their previous work on the OSS-Fuzz targets and on Firefox produced excellent-quality bug reports.

Seriously. Look them up.

They were all reproducible ways to trigger faults in ASan builds. That’s by definition memory corruption. We can argue about whether all of them are exploitable, but (a) they need to get fixed regardless, and (b) we know that even tiny memory corruptions can often be leveraged into a compromise given enough effort.
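For anyone who hasn't seen one: an ASan repro in those reports is basically a small trigger plus a sanitizer build. A toy version, assuming a toolchain with AddressSanitizer support (clang or gcc with libasan), looks like:

```shell
# Write a one-byte heap overflow, build it with AddressSanitizer, and run it.
cat > overflow.c <<'EOF'
#include <stdlib.h>

int main(void) {
    char *buf = malloc(8);
    buf[8] = 'x';   /* write one byte past the 8-byte allocation */
    free(buf);
    return 0;
}
EOF
cc -g -fsanitize=address -o overflow overflow.c

# ASan aborts the process with a nonzero exit status and prints a
# heap-buffer-overflow report pinpointing the bad write and the allocation.
./overflow > asan.log 2>&1 || true
head -n 3 asan.log
```

A report like that is "just" a one-byte corruption, which is exactly the point: reproducible, must be fixed regardless, and sometimes leverageable.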

[–] FarceOfWill 2 points 1 day ago (1 children)

The question isn't how good the results are; it's whether you can achieve the same quality for the money without an LLM.

The stories about the BSD bug say they spent $20k on compute alone (and who knows if that's before or after VC subsidies). Then they had so many reports that they needed to pay some of the top experts to triage which ones were real.

And the result? No remote code execution, no data theft. A remote crash. It's a real bug that can cause problems, but it's not actually an exploit.

The sad thing is there really could be something new and useful in AI for security; people are seeing good results from automating the reproduction step. But presenting it as too dangerous to release, and as a massive change, just sounds like pure marketing.

Most likely it's just too expensive to do this unless you're a VC-funded op with its own compute that wants a PR campaign to stop people thinking about how shit the source code you just accidentally released is.

[–] vzqq@lemmy.blahaj.zone 1 points 1 day ago

Obviously Anthropic has no incentive to keep the token counts low. My understanding of their strategy is that they are betting on models getting better faster than would justify the effort of squeezing more value per dollar out of them. Obviously I have no data to contradict them, but I would be surprised if that’s the case in the long term and for everyone.

My guess is that the costs can be reduced substantially, but that’s only going to happen once these tools get into the hands of your average security researcher.

[–] eleijeep@piefed.social 5 points 2 days ago

Yes of course they were. Professional security researchers tend to produce professional, high quality reports.

[–] limonfiesta@lemmy.world 15 points 2 days ago* (last edited 2 days ago)

My comment was not generalized AI snark, it was specific to Claude Mythos.

At least according to Ed Zitron, the reason Mythos is not being released to the general public is simply that it's too fucking expensive to run.

And all of the vulnerabilities it found had already been found by other, less expensive models.

This scare tactic PR strategy is pure marketing hype for Anthropic, that's it.

Again, according to Ed.

Maybe time will show that I was wrong to trust Ed's reporting more than the AI "tech leaders", but until that time comes, I know who I'm more inclined to believe.

[–] givesomefucks@lemmy.world 1 points 2 days ago

"The difference is that the Strait of Hormuz - we know where it is and we know how large it is... the issue that we're facing with Anthropic is that it's the unknown, unknown."

Someone get Donald Rumsfeld his royalty check...

I can't believe people are still using that shit.

Anyways,

AI companies are saying it's a huge danger, but are ever so graciously willing to sell it to existing corps to fix the vulnerabilities it can supposedly circumvent...

Before it circumvents them...

Even though the AI companies are also saying it will be able to circumvent anything regardless.

And even if they tell people it can make something it can't also break (impossible), in 6 months they'll repeat the process.

A never-ending cycle where they sell the shiny new model, and corps pay for it because eventually hackers will have it too when it's publicly released.

An endless cat-and-mouse game where the AI companies constantly accumulate more and more wealth.