this post was submitted on 15 Feb 2026
866 points (99.9% liked)

Fuck AI

5765 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

link to archived Reddit thread; original post removed/deleted

48 comments
[–] privatepirate@lemmy.zip 9 points 5 hours ago (1 children)

What dumbass decided to implement an experimental technology and not test it for 5 minutes to make sure it's accurate before giving it to the whole company and telling them to rely upon it?

[–] sukhmel@programming.dev 47 points 7 hours ago (2 children)

Joke's on you, we make our decisions without asking AI for analytics. Because we don't ask for analytics at all

[–] PhoenixDog@lemmy.world 24 points 5 hours ago

I don't need AI to fabricate data. I can be stupid on my own, thank you.

[–] ivanafterall@lemmy.world 28 points 7 hours ago (2 children)

I feel like no analytics is probably better than decisions based on made-up analytics.

[–] jj4211@lemmy.world 9 points 5 hours ago (1 children)

Yep, without analytics you're at least likely going on an anecdotal feel for things, which, while woefully incomplete, is at least probably based on actual indirect experience: the number of customers you've spoken with, how happy they've seemed, how employees have been feeling, etc.

Could be horribly off the mark without actual study of the data, but it's at least roughly directed by reality, rather than a random narrative made up by a word generator that has nothing to do with your company at all.

[–] sukhmel@programming.dev 4 points 4 hours ago (2 children)

I'm not sure, because, you see, I'm far from C-level, but I feel the decisions in such cases are made based on an imaginary version of the clients, and on what the top brass feel the clients want (that is, what they think they would want if they were the clients).

And they may guess right or wrong, though I agree that they may be more likely to guess right than an LLM, being humans and all.

[–] cronenthal@discuss.tchncs.de 68 points 9 hours ago (7 children)

I somehow hope this is made up, because rolling this out without checking it and catching the obvious errors is insane.

[–] rozodru@piefed.world 25 points 6 hours ago (1 children)

As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.

AI's sole purpose is to provide you with a positive solution. That's it. That positive solution doesn't even need to be accurate, or even to exist. It's built to present a positive "right" solution without taking the steps to get to that "right" solution, so the majority of the time that solution is going to be a hallucination.

You see it all the time. You can ask it something tech-related and, in order to get to that positive "right" solution, it'll hallucinate libraries that don't exist, or programs that don't even do what it claims they do. Because, logically, to the LLM that is the positive "right" solution, WITHOUT taking any steps to confirm that the solution even exists.

So in the case of OP's post, I can see it happening. They told the LLM they wanted three months of analytics, and rather than take the steps to get to an accurate answer, it skipped those steps and produced a positive solution.

Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all built these things to hand you "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.

[–] jj4211@lemmy.world 19 points 5 hours ago* (last edited 5 hours ago) (2 children)

Probably the skepticism is about someone actually trusting the LLM this hard, rather than about the LLM doing it this badly. To that I'll add: based on my experience with LLM enthusiasts, I believe that part too.

I've talked to multiple people who recognize the hallucination problem but think they've solved it because they're good "prompt engineers". They always include a sentence like "Do not hallucinate" and think that works.

The gaslighting from the LLM companies is really bad.

[–] Quacksalber@sh.itjust.works 53 points 8 hours ago (1 children)
[–] Rothe@piefed.social 1 points 2 hours ago

It is a thing that happens, but the incident in the OP probably didn't, since it's just a Reddit post.

[–] HaraldvonBlauzahn@feddit.org 24 points 7 hours ago

Use of AI in companies would not save any time if you were checking each result.

[–] fizzle@quokk.au 9 points 7 hours ago (2 children)

Yeah.

Kinda surprised there isn't already a term for submitting/presenting AI slop without reviewing and confirming it.

[–] whotookkarl@lemmy.dbzer0.com 27 points 7 hours ago

Negligence and fraud come to mind

[–] hitmyspot@aussie.zone 7 points 6 hours ago

Slop flop seems like it would work. He’s flopped the slop. That slop was flopped out without checking.

[–] AA5B@lemmy.world 0 points 2 hours ago* (last edited 2 hours ago) (2 children)

Most AI stuff I use includes a list of relevant sources next to the results. Do you never click in?

For me it's critical to confirm, for example, the details of that vendor API I want to use. Even then, though, any hallucinations would mostly just waste my time, since if it doesn't work it won't get released.

You're telling me that people make actual business decisions without ever checking sources?

[–] stoy@lemmy.zip 80 points 9 hours ago (3 children)

I suspect this will happen all over the place within a few years: AI was good enough at first, but over time reality and the AI started drifting apart.

[–] jj4211@lemmy.world 21 points 5 hours ago (1 children)

They haven't drifted apart; they were never close in the first place. People have grown more confident in the models because they've sounded increasingly convincing, but the connection to reality has been tenuous all along.

[–] Kirp123@lemmy.world 78 points 9 hours ago (1 children)

AI is literally trained to produce the right-looking answer, not to actually perform the steps that get you to the answer. It's like those people who trained dogs to carry explosives and run under tanks: they thought they were doing great, until in the first battle they used them in they realized the dogs would run under their own tanks instead of the enemy ones, because that's what they had been trained on.

[–] Spezi@feddit.org 34 points 9 hours ago (1 children)

And then, the very same CEOs that demanded the use of AI in decision making will be the ones that blame it for bad decisions.

[–] whyNotSquirrel@sh.itjust.works 34 points 9 hours ago (3 children)

while also blaming employees

[–] resipsaloquitur@lemmy.world 1 points 2 hours ago

What employees?

[–] Junkers_Klunker@feddit.dk 19 points 8 hours ago

Of course, it is the employees who used it. /s

[–] sundray@lemmus.org 60 points 9 hours ago
[–] MedicPigBabySaver@lemmy.world 1 points 3 hours ago

Fuck Reddit and Fuck Spez.

[–] tangeli@piefed.social 26 points 9 hours ago (1 children)

But don't worry, when it comes to life or death issues, AI is the best way to help

[–] FinjaminPoach@lemmy.world 22 points 9 hours ago (2 children)

Haha, "chat, how do I stop the patient's nose from bleeding"

"Cut his leg off."

"Well, you're the medicAI. Nurse, fetch the bonesaw"

[–] I_Jedi@lemmy.today 12 points 8 hours ago (1 children)

"Hello doctor."

"Hello doctor."

"Hello doctor."

"I don't believe his head is medically necessary."

"We should remove his head."

"I concur."

"I concur."

"We should then use his head as a soccer ball."

"Yes."

"For medical reasons, of course."

"That sounds fun."

"Off with his head."

Source

[–] SuperNovaStar@lemmy.blahaj.zone 1 points 6 hours ago

That was great, thanks for sharing!

[–] Kolanaki@pawb.social 15 points 9 hours ago (1 children)

"Drain all their blood" would technically stop their nose bleed.

[–] FinjaminPoach@lemmy.world 3 points 9 hours ago* (last edited 9 hours ago) (1 children)

Yeah, and from the AI's point of view you've made a profit of one leg without spending any resources.

[–] madejackson@lemmy.world 8 points 9 hours ago* (last edited 9 hours ago) (1 children)
[–] FinjaminPoach@lemmy.world 2 points 6 hours ago

Okay so "by the AI's calculations"

[–] BroBot9000@lemmy.world 6 points 9 hours ago

Bwahahahahahahha 😂

[–] MoonManKipper@lemmy.world 4 points 9 hours ago (6 children)

If true, they're all idiots, but I don't believe the story anyway. All the data-question-answering setups I've seen use an LLM to write SQL queries against your databases and then wrap the output in a summary, so the summary is easy to check and very unlikely to be significantly wrong. AI/ML/statistics and code are tools: use them for what they're good for, don't use them for what they're not, and treat hype with skepticism.
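
A minimal sketch of that pattern, for anyone who hasn't seen it. `ask_llm` is a hypothetical stand-in for whatever model API you use, and none of the names here come from the original post:

```python
import sqlite3


def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError


def answer_question(db_path: str, schema: str, question: str) -> str:
    # Step 1: the model only *writes* the query...
    sql = ask_llm(
        f"Schema:\n{schema}\n\n"
        f"Write a single read-only SQLite query answering: {question}\n"
        "Return only the SQL."
    )
    # Step 2: ...the numbers come from actually executing it against
    # real data, so every figure in the summary can be audited.
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()
    # Step 3: the model summarizes rows it was handed, not invented.
    return ask_llm(
        f"Question: {question}\nQuery: {sql}\nRows: {rows}\n"
        "Summarize the result in plain English."
    )
```

In that setup the data comes out of the database, not the model, so the OP's story only makes sense if the execution step was skipped or silently broken.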

[–] skisnow@lemmy.ca 9 points 6 hours ago

Writing a syntactically correct SQL statement is not the same as doing accurate data analytics.
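
To illustrate with a made-up example (hypothetical orders/refunds schema, nothing from the post): both of these parse and execute fine, but they answer different questions.

```python
# Two syntactically valid queries; only one is a sane answer to
# "what was net revenue this quarter?".
plausible_but_wrong = """
    SELECT SUM(o.amount)
    FROM orders o
    JOIN refunds r ON r.order_id = o.id   -- inner join silently keeps
    WHERE o.created_at >= '2025-10-01';   -- only orders that had refunds
"""

closer_to_right = """
    SELECT SUM(o.amount) - COALESCE(SUM(r.amount), 0)
    FROM orders o
    LEFT JOIN refunds r ON r.order_id = o.id  -- keeps all orders (still
    WHERE o.created_at >= '2025-10-01';       -- naive about multiple refunds)
"""
```

A model that only ever has to produce something that parses will happily hand you the first one.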

[–] jj4211@lemmy.world 6 points 5 hours ago

I'm on the fence, but I will say that if, for whatever reason, it was never actually connected to the data, or the connection had some flaw, I could totally believe it would just fabricate a report that looks consistent with what was asked for. Maybe it never conveyed that an error occurred. Or maybe it did report the lack of data, and the user, thinking he could just tell the AI to fix the problem without understanding it himself, triggered it into generating a narrative consistent with a fix it couldn't actually make.

Sure, if you do a sanity check it should fall apart, but that assumes they bother. Some people have crazy confidence in LLMs and didn't even check.

[–] drosophila@lemmy.blahaj.zone 11 points 6 hours ago* (last edited 6 hours ago)

I am reminded of this story:

https://retractionwatch.com/2024/02/05/no-data-no-problem-undisclosed-tinkering-in-excel-behind-economics-paper/

Heshmati told the student he had used Excel’s autofill function to mend the data. He had marked anywhere from two to four observations before or after the missing values and dragged the selected cells down or up, depending on the case. The program then filled in the blanks. If the new numbers turned negative, Heshmati replaced them with the last positive value Excel had spit out.

Of course, that guy didn't need fancy autofill to act like an idiot; he used good old-fashioned autofill.
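
For the curious, the "mending" described in the quote amounts to roughly this, in pandas terms. A hedged reconstruction of the described method with made-up numbers, not the paper's actual data:

```python
import pandas as pd

# Made-up observations with gaps, standing in for the missing values.
obs = pd.Series([4.1, 3.8, None, None, 2.9, 3.1])

# Fill the gaps from neighbouring points -- roughly what dragging
# Excel's autofill handle over a marked range does.
mended = obs.interpolate()

# "If the new numbers turned negative, replace them with the last
# positive value": blank out negatives, then carry the previous
# value forward.
mended = mended.mask(mended < 0).ffill()
```

The result looks like a clean series, but some of its points were never observed, which is exactly the problem.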

[–] mayabuttreeks@lemmy.ca 30 points 8 hours ago (2 children)

Honestly, I was leaning toward "funny but probably fake" myself until I checked out OP's post history, which mentions "startups" and namedrops a few SaaS tools used heavily in marketing. If you've worked with marketers (or a fair few startup bros, honestly), you'll know this isn't beyond the bounds of reason for some of them 😂

[–] deadbeef79000@lemmy.nz 12 points 8 hours ago

If you’ve worked with marketers

Oh boy. Yeah. SNAFU City.

Marketing just hallucinate their numbers anyway.

[–] MoonManKipper@lemmy.world 4 points 6 hours ago

I did leave myself a "could be idiots" get-out clause.

[–] Blackmist@feddit.uk 8 points 8 hours ago (2 children)

The problem is you've got people using the tools who don't understand the output or the method used to get there.

Take the Excel Copilot function. You need to pass in a range of cells for the slop prompt to work on, but it's an optional parameter. If you don't pass it in, it returns results anyway. They're just complete bollocks.

[–] TrippyHippyDan@lemmy.world 5 points 4 hours ago

It's even worse than that. The ones who should understand the tools decide that the ease is good enough and just succumb to AI brain rot.

I've watched co-workers go from good co-workers to people I can't trust anything from, because I know they just slapped it at an AI and didn't check it.

What's worse is, when you come to them as an engineer and tell them they're wrong, you have to prove to them that the AI is wrong; they don't have to prove to you that the AI is right.

Moreover, when you refer to documentation, they can't be bothered and say the AI didn't say that, so it must be wrong.

[–] MoonManKipper@lemmy.world -1 points 6 hours ago

At least it'll self-correct in a couple of years: use a tool, look like an idiot, stop using the tool.
