this post was submitted on 15 Feb 2026

Fuck AI

5765 readers
1792 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 

link to archived Reddit thread; original post removed/deleted

[–] MoonManKipper@lemmy.world 3 points 6 hours ago (5 children)

If true they’re all idiots, but I don’t believe the story anyway. All the data-question-answering LLM tools I’ve seen use the LLM to write SQL queries against your database and then wrap the output in a summary, so the summary is easy to check and very unlikely to be significantly wrong. AI/ML/statistics and code are tools: use them for what they’re good at, don’t use them for what they’re not, and treat hype with skepticism.
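For illustration, here's a minimal sketch of that pattern in Python using sqlite3 and a stubbed-out model call. The `fake_llm_sql` function and the `sales` table are invented for the example; a real system would send the schema and the question to an actual LLM:

```python
import sqlite3

def fake_llm_sql(question: str) -> str:
    # Placeholder for the model call: a real system would prompt an LLM
    # with the schema and the user's question and get a query back.
    return "SELECT COUNT(*) FROM sales WHERE region = 'EMEA'"

def answer(question: str, conn: sqlite3.Connection) -> str:
    sql = fake_llm_sql(question)         # 1. the model drafts the query
    rows = conn.execute(sql).fetchall()  # 2. the database, not the model, produces the numbers
    # 3. wrap the raw result in a summary; the SQL and rows remain auditable
    return f"Query: {sql}\nRows: {rows}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT)")
conn.executemany("INSERT INTO sales VALUES (?)", [("EMEA",), ("EMEA",), ("APAC",)])
print(answer("How many EMEA sales?", conn))
```

The point of the design is that the model never invents the numbers: it only writes the query, so a reviewer can check the SQL and the returned rows directly.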

[–] jj4211@lemmy.world 5 points 2 hours ago

I'm on the fence, but I will say that if, for whatever reason, it was never actually connected to the data, or the connection had some flaw, I could totally believe it would just fabricate a report that looks consistent with what the request asked for. Maybe it failed to ever convey that an error occurred. Or maybe it did convey the lack of data, and the user, rather than trying to understand the problem himself, just told the AI to fix it, triggering it to generate a narrative consistent with having fixed the problem without actually being able to.

Sure, a sanity check should make it fall apart, but that assumes they bothered. Some people have crazy confidence in LLMs and don't even check.

[–] skisnow@lemmy.ca 7 points 3 hours ago

Writing a syntactically correct SQL statement is not the same as doing accurate data analytics.

[–] drosophila@lemmy.blahaj.zone 10 points 4 hours ago* (last edited 4 hours ago)

I am reminded of this story:

https://retractionwatch.com/2024/02/05/no-data-no-problem-undisclosed-tinkering-in-excel-behind-economics-paper/

Heshmati told the student he had used Excel’s autofill function to mend the data. He had marked anywhere from two to four observations before or after the missing values and dragged the selected cells down or up, depending on the case. The program then filled in the blanks. If the new numbers turned negative, Heshmati replaced them with the last positive value Excel had spit out.
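That procedure is mechanical enough to sketch. Below is a hypothetical Python reconstruction of the trick as described, linear drag-fill from the marked cells with negatives overwritten by the last positive value; the function name and details are my own, not from the paper:

```python
def mend(known, n_missing):
    """Extend a linear trend from the marked observations (what Excel's
    drag-fill does), then replace any negative result with the last
    positive value produced. Hypothetical reconstruction of the trick."""
    step = (known[-1] - known[0]) / (len(known) - 1)
    out = []
    value = known[-1]
    last_positive = value  # assumes the marked cells end on a positive value
    for _ in range(n_missing):
        value += step
        if value < 0:
            value = last_positive  # the described fix-up for negatives
        out.append(value)
        if value > 0:
            last_positive = value
    return out

# A declining series "mended" this way just oscillates near zero.
print(mend([9.0, 6.0, 3.0], 4))
```

Note that once the clamp kicks in, the "mended" series stops declining entirely and bounces between the clamp value and zero, which is part of why undisclosed tinkering like this matters.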

Of course that guy didn't need fancy AI to act like an idiot; good old-fashioned autofill was enough.

[–] mayabuttreeks@lemmy.ca 28 points 5 hours ago (2 children)

Honestly, I was leaning toward "funny but probably fake" myself until I checked out OP's post history, which mentions "startups" and namedrops a few SaaS tools used heavily in marketing. If you've worked with marketers (or a fair few startup bros, honestly), you'll know this isn't beyond the bounds of reason for some of them 😂

[–] MoonManKipper@lemmy.world 3 points 3 hours ago

I did leave myself a “could be idiots” get-out clause

[–] deadbeef79000@lemmy.nz 12 points 5 hours ago

If you’ve worked with marketers

Oh boy. Yeah. SNAFU City.

Marketing just hallucinate their numbers anyway.

[–] Blackmist@feddit.uk 7 points 5 hours ago (2 children)

The problem is you've got people using the tools who don't understand the output or the method used to get there.

Take Excel's COPILOT function. You're supposed to pass in a range of cells for the slop prompt to work on, but it's an optional parameter. If you don't pass it in, the function returns results anyway; they're just complete bollocks.

[–] TrippyHippyDan@lemmy.world 5 points 2 hours ago

It's even worse than that. The ones who should understand the tools decide the convenience is good enough and succumb to AI brain rot.

I've watched co-workers go from good colleagues to people I can't trust anything from, because I know they just slapped something into an AI and didn't check it.

What's worse is that when you come to them as an engineer and tell them they're wrong, you have to prove to them the AI is wrong; they don't have to prove to you that it's right.

Moreover, when you point them to the documentation, they can't be bothered to read it: the AI didn't say that, so the documentation must be wrong.

[–] MoonManKipper@lemmy.world -1 points 3 hours ago

At least it’ll self-correct in a couple of years: use a tool, look like an idiot, stop using the tool.