
Finance ministers, central bankers and financiers have expressed serious concerns about a powerful new AI model they fear could undermine the security of financial systems.

The development of the Claude Mythos model by Anthropic has led to crisis meetings, after it found vulnerabilities in many major operating systems.

Experts say it potentially has an unprecedented ability to identify and exploit cyber-security weaknesses - though others caution further testing is needed to properly understand its capabilities.

Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos had been discussed extensively at the International Monetary Fund (IMF) meeting in Washington DC this week.

"Certainly it is serious enough to warrant the attention of all the finance ministers," he said.

[–] limonfiesta@lemmy.world 22 points 2 days ago* (last edited 2 days ago) (2 children)

"Capitalists dependent upon AI bubble, feign concern regarding latest snake oil release to help prop AI bubble"

FTFY

[–] EightBitBlood@lemmy.world 2 points 1 day ago

I COMPLETELY agree with you. Except in this one case, it's possible the problem is that the billionaires made an AI that understands how billionaires exploit financial systems, and they fear others could do the same with Mythos, leading to actual legislation that would close the loopholes they spent decades creating within international finance legislation.

[–] vzqq@lemmy.blahaj.zone 2 points 2 days ago* (last edited 2 days ago) (2 children)

I know it’s cool to be blasé about AI stuff, but if there’s an area where the hype is warranted it’s computer security research.

I don’t want to look at AI “art” or read an AI generated “book”, but the exploits derived from an AI-enabled process work just as well as the organic version. And you don’t need a warehouse full of Eastern European zoomers and junk food to get them.

[–] eleijeep@piefed.social 16 points 2 days ago (1 children)

> And you don’t need a warehouse full of Eastern European zoomers and junk food to get them.

Are you certain about that? Anthropic has a team of security engineers "validating" the LLM output, and then passes the "validated" outputs on to third-party security researchers to "confirm" them.

Tellingly, they don't say how many false positives have to be filtered through to find the genuine vulnerabilities with working exploits. I imagine that if all those security researchers were tasked with auditing the same codebases, they would probably find the same (or more) vulnerabilities without the shotgun guessing of an LLM to guide them.

You need to remember that these claims are being made by a company that has enormous financial incentive to make everyone believe that this model is a huge breakthrough.

[–] vzqq@lemmy.blahaj.zone 4 points 2 days ago* (last edited 2 days ago) (2 children)

I have no inside knowledge of this particular work, but their previous reports on the OSS-Fuzz targets and on Firefox were all excellent-quality bug reports.

Seriously. Look them up.

They were all reproducible ways to trigger faults in ASan builds. That's by definition memory corruption. We can argue about whether all of them are exploitable, but a) they need to get fixed regardless, and b) we know that even tiny memory corruptions can often be leveraged into a compromise given enough effort.
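
For anyone who hasn't worked with sanitizers: "a reproducible way to trigger a fault in an ASan build" means an input or program that makes an AddressSanitizer-instrumented binary abort with a memory-error report. A minimal, purely illustrative example (not one of the actual reported bugs) looks like this:

```c
/* Illustrative only: the kind of fault an ASan build reports.
 * Build with: cc -g -fsanitize=address asan_demo.c
 * Running the binary makes AddressSanitizer abort with a
 * "heap-buffer-overflow" report, i.e. a reproducible memory-safety fault. */
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(8);   /* 8-byte heap allocation */
    if (!buf) return 1;
    memset(buf, 'A', 16);    /* writes 8 bytes past the allocation:
                                ASan flags this as heap-buffer-overflow */
    free(buf);
    return 0;
}
```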

[–] FarceOfWill 2 points 1 day ago (1 children)

The question isn't how good the results are, it's whether you can achieve the same quality for the money without an LLM.

The stories about the BSD bug say they spent $20k on compute alone (and who knows if that's before or after VC subsidies). Then they had so many reports they needed to pay some of the top experts to triage which ones were real.

And the result? No remote code execution, no data theft. A remote crash. It's a real bug that can cause problems, but it's not actually an exploit.

The sad thing is there really could be something new and useful in AI-assisted security work: people are seeing good results by automating the reproduction step. But the presentation of it as too dangerous to release and as a massive change just sounds like pure marketing.

Most likely it's just too expensive to do this unless you're a VC-funded op with its own compute and want a PR campaign to stop people thinking about how shit the source code you just accidentally released is.
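
To make the cost argument above concrete, here is a rough back-of-the-envelope sketch. The $20k compute figure is the one cited in the comment; the report count, false-positive ratio, triage hours, and expert rate are made-up placeholders, not numbers from Anthropic or anyone else:

```c
/* Back-of-the-envelope sketch of the triage-cost argument above.
 * The $20k compute figure comes from the thread; every other number
 * (report count, false-positive ratio, expert rate, hours per report)
 * is a hypothetical placeholder. */
#include <stdio.h>

int main(void) {
    double compute_usd      = 20000.0; /* compute spend cited in the thread     */
    int    reports          = 300;     /* hypothetical: raw LLM findings        */
    double false_pos_ratio  = 0.95;    /* hypothetical: share that don't pan out */
    double hours_per_report = 2.0;     /* hypothetical: expert triage per report */
    double expert_rate_usd  = 250.0;   /* hypothetical: hourly rate for triage   */

    double triage_usd = reports * hours_per_report * expert_rate_usd;
    double real_bugs  = reports * (1.0 - false_pos_ratio);
    double total_usd  = compute_usd + triage_usd;

    printf("confirmed findings        : %.0f\n", real_bugs);
    printf("total cost                : $%.0f\n", total_usd);
    printf("cost per confirmed finding: $%.0f\n",
           real_bugs > 0 ? total_usd / real_bugs : 0.0);
    return 0;
}
```

Plug in your own numbers; the point is only that triage labour, not compute, can easily dominate the cost per confirmed finding.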

[–] vzqq@lemmy.blahaj.zone 1 points 1 day ago

Obviously Anthropic has no incentive to keep the token counts low. My understanding of their strategy is that they are betting on models getting better fast enough that the effort of squeezing more value per dollar out of them isn't worth it. I have no data to contradict them, but I would be surprised if that's the case in the long term and for everyone.

My guess is that the costs can be reduced substantially, but that’s only going to happen once these tools get into the hands of your average security researcher.

[–] eleijeep@piefed.social 5 points 2 days ago

Yes, of course they were. Professional security researchers tend to produce professional, high-quality reports.

[–] limonfiesta@lemmy.world 15 points 2 days ago* (last edited 2 days ago)

My comment was not generalized AI snark; it was specific to Claude Mythos.

At least according to Ed Zitron, the reason Mythos is not being released to the general public is simply that it's too fucking expensive to run.

And all of the vulnerabilities it found were already found by other, less expensive models.

This scare-tactic PR strategy is pure marketing hype for Anthropic, that's it.

Again, according to Ed.

Maybe time will show that I was wrong to trust Ed's reporting more than AI "tech leaders", but until that time comes, I know who I lean towards believing more.