this post was submitted on 15 Feb 2026
950 points (99.8% liked)

Fuck AI

5765 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

link to archived Reddit thread; original post removed/deleted

[–] Strider@lemmy.world 13 points 1 hour ago

It doesn't matter. Management wants this and will not stop until they run into a wall at full speed. 🤷

[–] nonentity@sh.itjust.works 5 points 58 minutes ago

The output from tools infected with LLMs can intrinsically only ever be imprecise and should never be trusted.

[–] sp3ctr4l@lemmy.dbzer0.com 29 points 3 hours ago* (last edited 3 hours ago)

As an unemployed data analyst / econometrician:

lol, rofl, perhaps even... lmao.

Nah though, it's really fine. My quality of life is enormously better barely surviving off of SSDI, not having to explain data analytics to thumb-sucking morons (VPs, 90% of other team leads), and not having to either fix or cover for all their mistakes.

Yeah, sure, just have the AI do it, go nuts.

I am enjoying my unexpected early retirement.

[–] tover153@lemmy.world 16 points 3 hours ago

Before anything else: whether the specific story in the linked post is literally true doesn’t actually matter. The following observation about AI holds either way. If this example were wrong, ten others just like it would still make the same point.

What keeps jumping out at me in these AI threads is how consistently the conversation skips over the real constraint.

We keep hearing that AI will “increase productivity” or “accelerate thinking.” But in most large organizations, thinking is not the scarce resource. Permission to think is. Demand for thought is. The bottleneck was never how fast someone could draft an email or summarize a document. It was whether anyone actually wanted a careful answer in the first place.

A lot of companies mistook faster output for more value. They ran a pilot, saw emails go out quicker, reports get longer, slide decks look more polished, and assumed that meant something important had been solved. But scaling speed only helps if the organization needs more thinking. Most don’t. They already operate at the minimum level of reflection they’re willing to tolerate.

So what AI mostly does in practice is amplify performative cognition. It makes things look smarter without requiring anyone to be smarter. You get confident prose, plausible explanations, and lots of words where a short “yes,” “no,” or “we don’t know yet” would have been more honest and cheaper.

That’s why so many deployments feel disappointing once the novelty wears off. The technology didn’t fail. The assumption did. If an institution doesn’t value judgment, uncertainty, or dissent, no amount of machine assistance will conjure those qualities into existence. You can’t automate curiosity into a system that actively suppresses it.

Which leaves us with a technology in search of a problem that isn’t already constrained elsewhere. It’s very good at accelerating surfaces. It’s much less effective at deepening decisions, because depth was never in demand.

If you’re interested, I write more about this here: https://tover153.substack.com/

Not selling anything. Just thinking out loud, slowly, while that’s still allowed.

[–] Trainguyrom@reddthat.com 3 points 2 hours ago

How much do you want to bet they also rolled out bonuses based on this bogus data? The one saving grace is they started using the new LLM tooling mid-Q4, so any quarterlies would at least be partially based on real data.

[–] Snowclone@lemmy.world 11 points 3 hours ago

I hope they sue whoever sold it to them. It's not artificial intelligence, it's a machine learning chat bot. They may as well be running their company with a magic eight ball.

I was trying to figure out why the stock market is so high.

[–] untorquer@lemmy.world 38 points 5 hours ago (2 children)

This would suggest the leadership positions aren't required for the function of the business.

[–] sp3ctr4l@lemmy.dbzer0.com 6 points 3 hours ago* (last edited 3 hours ago)

I have been saying for years now that the kind of work LLMs are best suited to replace, which would also by far be their most cost-effective use case from a business standpoint, is...

Well, it's the most expensive employees, the ones who basically just spend most of their time having meetings or writing emails about things they only understand at a bird's-eye level.

You know, C Suite, upper management.

[–] PapaStevesy@lemmy.world 17 points 4 hours ago

This has always been the case, in every industry.

[–] brucethemoose@lemmy.world 3 points 3 hours ago* (last edited 3 hours ago)

Why did mods remove the OP?

Can you literally not badmouth AI on Reddit?

[–] ladicius@lemmy.world 6 points 4 hours ago

Nice. Really, I like it when management is dumb as fuck. It's a world of never-ending joy.

[–] FlashMobOfOne@lemmy.world 33 points 6 hours ago (3 children)

Jesus Christ, you have to have a human validate the data.

[–] BlameTheAntifa@lemmy.world 3 points 1 hour ago

But that would mean paying someone for work. The CEOs want to replace humans.

[–] jacksilver@lemmy.world 8 points 4 hours ago

LLMs can't really do math, so if there is any analysis being done, the numbers will typically be junk. Unless the LLM is writing the code to do the math, but then you have to validate the code.
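
To make "validate" concrete, here's a minimal sketch of that step (the figures and names are made up for illustration): recompute any aggregate the model reports from the raw data instead of trusting the prose.

```python
import math

def validate_total(raw_values: list[float], claimed_total: float,
                   rel_tol: float = 1e-9) -> bool:
    """Recompute the aggregate ourselves and compare it to the model's claim."""
    return math.isclose(sum(raw_values), claimed_total, rel_tol=rel_tol)

# Hypothetical example: the raw data behind a report, and the total the
# LLM put in its summary. The claim reads plausibly but is simply wrong.
q4_sales = [1250.00, 980.50, 1104.25]   # actual total: 3334.75
claimed = 3340.75                        # figure from the generated report

if not validate_total(q4_sales, claimed):
    print(f"Mismatch: report says {claimed}, data says {sum(q4_sales)}")
```

Same idea when the LLM writes the code instead: the generated script still has to be reviewed and spot-checked against known answers before anyone bases a decision on it.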

[–] 474D@lemmy.world 30 points 6 hours ago (1 children)

Exactly, this is like letting Excel autofill finish the spreadsheet and going "looks about right"

[–] FlashMobOfOne@lemmy.world 23 points 6 hours ago (3 children)

And that's a good analogy, as people have posted screenshots of Copilot getting basic addition wrong in Excel.

Whoever implemented this agent without proper oversight needs to be fired.

[–] hector@lemmy.today 22 points 6 hours ago (1 children)

Except the CEO and executives ultimately responsible will blame their underlings, who will be fired, even though it was an executive-level decision. They didn't get to the pinnacle of corporate governance by admitting mistakes. That's not what they were taught at their Ivy League schools; they were taught to lie, cheat, and steal, and to slander their victims to excuse it.

It was bad before the current president set his outstanding example for the rest of the country. See what being a lying cheating piece of shit gets you? Everything. Nothing matters. We have the wrong people in charge across the board, from business to government to institutions.

[–] excral@feddit.org 176 points 9 hours ago (10 children)

I've said it time and time again: AIs aren't trained to produce correct answers, but seemingly correct answers. That's an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can't easily verify the answer. But it seems plausible so you assume it to be correct.
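
You can see why straight from the training objective. A toy sketch (a bigram counter standing in for a real LLM, my own illustration): maximum-likelihood training only rewards matching the training distribution; there is no term anywhere for whether a statement is true.

```python
from collections import Counter, defaultdict

# Toy corpus in which a false claim is common and the true one is rare.
corpus = (
    "the revenue grew by ten percent . " * 9 +  # frequent, false
    "the revenue fell by two percent . "        # rare, true
).split()

# Maximum-likelihood bigram model: estimate P(next | current) from raw counts.
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def p_next(cur: str, nxt: str) -> float:
    total = sum(counts[cur].values())
    return counts[cur][nxt] / total if total else 0.0

print(p_next("revenue", "grew"))  # 0.9 <- the "seemingly correct" answer
print(p_next("revenue", "fell"))  # 0.1 <- the actually correct one
```

Real models are vastly more sophisticated, but the loss they minimize is the same kind of thing: plausibility under the training data, not correctness.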

[–] glance@lemmy.world 8 points 4 hours ago

Even worse is that over time, the seemingly correct answers will drift further away from actually correct answers. In the best case, that's because people come to expect the wrong answers, since that's all they've been exposed to. In worse cases, the answers skew toward whatever specific end the AI maker wants people to think.

[–] dkppunk@piefed.social 11 points 4 hours ago

> AIs aren't trained to produce correct answers, but seemingly correct answers

I prefer to say “algorithmically common” instead of “seemingly correct” but otherwise agree with you.

[–] 0x0f@piefed.social 19 points 5 hours ago* (last edited 5 hours ago) (2 children)

My own advice for people starting to use AI is to use it for things you know very well. Using it for things you do not know well will always be problematic.

[–] jj4211@lemmy.world 8 points 4 hours ago

The problem is that we have a culture where people who don't know things very well control the purse strings relevant to those things.

So we have executives who don't know their work or customers at all and just try to bullshit, while their people frantically try to repair the damage the executive does in order to keep their own jobs. Then they see bullshit-generating platforms, recognize a kindred spirit, and set a goal of replacing those dumb employees with a more "executive"-like entity that can also generate reports and code directly. No talking back, no explaining that the request needs clarification or that the data doesn't support their decision, just a "yes, and..." result agreeing with whatever dumbass request they thought would be correct and simple.

Finally, no one talking back to them, making their life difficult, or casting doubt on their competency. And the biggest billionaires are telling them this is the right way to go, as long as they keep sending money their way.

[–] resipsaloquitur@lemmy.world 7 points 3 hours ago

The problem is, every time you use it, you become more passive. More passive means less alert to problems.

Look at all the accidents involving "safety attendants" in self-driving cars. Every minute they let AI take the wheel, they become more complacent. Maaaybe I'll sneak a peek at my phone. Well, haven't gotten into an accident in a month, I'll watch a video. In the corner of my vision. Hah, that was good, gotta leave a commen — BANG!

[–] cecilkorik@piefed.ca 4 points 3 hours ago

They are designed to convince people. That's all they do. True or false, real or fake, doesn't matter, as long as it's convincing. They're like the ultimate, idealized sociopath and con artist. We are being conned by software designed to con people.

[–] pankuleczkapl@lemmy.dbzer0.com 30 points 6 hours ago (2 children)

Thankfully, AI is bad at maths for exactly this reason. You don't have to be an expert on a very specific topic to be able to verify a proof and - spoiler alert - most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models.
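
Side note: that produce/verify asymmetry is also why machine-checked proofs are such a good filter for LLM output. A trivial Lean sketch (my own toy example, nothing ChatGPT produced):

```lean
-- Proof *checking* is mechanical even when proof *finding* is hard.
-- Lean accepts this because `Nat.add_comm` really does prove the goal:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A plausible-sounding but bogus step (say, closing this goal with `rfl`)
-- would simply fail to typecheck, with no human expertise required.
```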

[–] jj4211@lemmy.world 8 points 3 hours ago

I've been through the cycle of AI companies repeatedly saying "now it's perfect", then only admitting it was complete trash when they release the next iteration and claim "yeah, it was broken, we admit, but now it's perfect", so many times now...

The problem being there's a massive marketing effort to gaslight everyone, so if I point it out in any vaguely significant context, I'm just "not keeping up" and must only have dealt with the shitty ChatGPT 5.1, not the more perfect 5.2. Of course, my company is all about the Anthropic models, so there it's Opus 4.5 versus 4.6 instead. Even demonstrating the limitations by trying to work with 4.6 gives Anthropic money, and at best I earn an "oh, those will probably be fixed in 4.7 or 5 or whatever".

Outsiders are used to traditional software that has mistakes, but those are straightforward to address, so close-but-imperfect software can hit the mark in updates. That LLMs don't work that way doesn't register. They use the same version-number scheme, after all, so expectations carry over.

[–] amorpheus@lemmy.world 3 points 2 hours ago (1 children)

> most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models

Both of those can be true.

I mean yeah, but they specifically mentioned its amazing performance in tasks requiring reasoning.

[–] Lucidlethargy@sh.itjust.works -1 points 1 hour ago

The person who posted this, along with everyone else who knew they were using LLMs like this, is an incompetent idiot who should lose their job.

[–] wonderingwanderer@sopuli.xyz 32 points 7 hours ago

Dumbasses. Mmm, that's good schadenfreude.
