this post was submitted on 01 Apr 2026
19 points (88.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
top 4 comments
[–] Denjin@feddit.uk 1 points 39 minutes ago

You know you done fucked up when even the Saudi Royal Family are calling you out.

[–] Voroxpete@sh.itjust.works 1 points 40 minutes ago

Obviously I have concerns - as I think anyone should - about the source of this reporting, but it is very definitely worth a read. All of the claims made appear to be well sourced (I've had no issue independently verifying those that I've checked so far), and the author's conclusions are well founded and consistent with existing research on these subjects.

There are definitely some assumptions being made when it comes to the exact degree to which AI was responsible for the poor decision making going into the conflict. This paragraph -

The evidence increasingly points to a conclusion more alarming than mere failure: the AI did not passively reflect flawed human judgment — it actively reinforced it. By generating fabricated confidence levels, inflating success probabilities, and systematically suppressing risk factors, the systems convinced planners that a swift, decisive victory was not just possible but near-certain. The gap between expectation and reality was not an accident. It was manufactured by machines optimised to tell powerful people what they wanted to hear.

That paragraph is ultimately a hypothesis only, not a provable fact (and to be clear, the author is not making any explicit claim to fact here). We're dealing with a situation where we simply cannot know exactly how these decisions were made, and probably won't know for a very long time. But as a hypothesis it's sufficiently sound that I think we have to at least consider it plausible.

My only real objection would be to how the author frames their conclusions.

Gulf defence planners are already drawing their own conclusions. The Saudi military buildup, the diversification of defence partnerships beyond Washington, and the quiet expansion of diplomatic channels with non-Western powers all reflect a recognition that the era of unquestioning reliance on American strategic judgment may be ending — not because the United States lacks capability, but because the AI tools it now relies upon actively convinced planners that a swift, decisive victory was near-certain.

I think that the language here and in the subsequent paragraph leans too heavily on throwing all the operational failures at the feet of an over-reliance on unproven tools, without any consideration for how the clear ideological impetus and staggering incompetence of the current administration were major factors. This doesn't undermine any of what the author is saying about the danger of these tools, it just runs the risk of eliding the responsibility of the incompetent fascists who were ultimately responsible for the decisions made using those tools.

[–] inari@piefed.zip 2 points 1 hour ago (1 children)

For a second I thought AI psychosis was an extreme form of AI hallucination, but that doesn't seem to be the case.