this post was submitted on 17 Apr 2026
54 points (95.0% liked)

World News


Finance ministers, central bankers and financiers have expressed serious concerns about a powerful new AI model they fear could undermine the security of financial systems.

The development of the Claude Mythos model by Anthropic has led to crisis meetings, after it found vulnerabilities in many major operating systems.

Experts say it potentially has an unprecedented ability to identify and exploit cyber-security weaknesses - though others caution further testing is needed to properly understand its capabilities.

Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos had been discussed extensively at the International Monetary Fund (IMF) meeting in Washington DC this week.

"Certainly it is serious enough to warrant the attention of all the finance ministers," he said.

[–] FarceOfWill 2 points 1 day ago (1 children)

The question isn't how good the results are, it's whether you can achieve the same quality for the money without an LLM.

The stories about the BSD bug say they spent $20k on compute alone (and who knows if that's before or after VC subsidies). Then they had so many reports they needed to pay some of the top experts to triage which ones were real.

And the result? No remote code execution, no data theft. A remote crash. It's a real bug that can cause problems, but it's not actually an exploit.

The sad thing is there really could be something new and useful in AI for security research: people are seeing good results by automating the reproduction step. But presenting it as too dangerous to release, as a massive change, just sounds like pure marketing.

Most likely it's just too expensive to do this unless you're a VC-funded op with your own compute, and you want a PR campaign to stop people thinking about how shit the source code you just accidentally released is.
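(For context on the "automating the reproduction step" point above: the basic idea is to replay each AI-reported crashing input against the target and keep only the reports that actually reproduce. A minimal sketch of that triage loop, with hypothetical names and a placeholder target, might look like this.)

```python
import os
import subprocess
import tempfile


def reproduces_crash(target_cmd, crash_input, timeout=10):
    """Replay a reported crashing input against the target command.

    Returns True if the target dies from a signal (the usual signature
    of a memory-safety crash), False if it exits normally.
    """
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(crash_input)
        path = f.name
    try:
        proc = subprocess.run(
            target_cmd + [path],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=timeout,
        )
        # A negative return code means the process was killed by a
        # signal (e.g. SIGSEGV on POSIX systems).
        return proc.returncode < 0
    except subprocess.TimeoutExpired:
        # Hangs are treated as non-reproducing in this sketch; a real
        # triage pipeline might classify them separately.
        return False
    finally:
        os.unlink(path)


def triage(target_cmd, reports):
    """Split (name, payload) reports into reproducing and non-reproducing."""
    confirmed, rejected = [], []
    for name, payload in reports:
        bucket = confirmed if reproduces_crash(target_cmd, payload) else rejected
        bucket.append(name)
    return confirmed, rejected
```

This only filters out reports that don't reproduce at all; deciding whether a reproducing crash is actually exploitable is the part that still needs the expensive human experts.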

[–] vzqq@lemmy.blahaj.zone 1 points 1 day ago

Obviously Anthropic has no incentive to keep the token counts low. My understanding of their strategy is that they are betting on models getting better fast enough that it isn't worth the effort of squeezing more value per dollar out of the current ones. Admittedly I have no data to contradict them, but I would be surprised if that's the case in the long term and for everyone.

My guess is that the costs can be reduced substantially, but that’s only going to happen once these tools get into the hands of your average security researcher.