this post was submitted on 13 Aug 2025
356 points (95.9% liked)

Fuck AI

3749 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
 

I saw this on Reddit and thought it was a joke, but it's not. GPT-5 mini (and sometimes GPT-5 itself) gives this answer. You can check it yourself, or see somebody else's similar conversation here: https://chatgpt.com/share/689a4f6f-18a4-8013-bc2e-84d11c763a99

[–] MountingSuspicion@reddthat.com 25 points 2 days ago* (last edited 2 days ago) (2 children)

That's not a valid reason to lie. If it does not have the information, it should say so. This post underscores one of the biggest issues with AI: it will confidently say whatever is "statistically plausible" regardless of the actual truth.

Edit: in case there is any confusion, by "should" I mean in an ideal scenario where AI could be used the way people currently think it can. I'm aware that's not really how AI works, hence the "statistically plausible" bit in the rest of the comment. AI makes factual errors on things that could arguably be answered from its current dataset (how many b's are in "blueberry", etc.), and this is not a dataset problem; it's a side effect of the way LLMs work. They are not reasoning machines. They are fancy algorithms. That makes them impractical for several areas where they are already being deployed, and that's a problem.
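The letter-counting failure mentioned above is usually blamed on tokenization: the model never sees individual characters, only subword tokens. A minimal sketch of the idea (the subword split and token IDs here are made up for illustration; real tokenizers vary):

```python
# Hypothetical subword split: the model receives opaque token IDs,
# never the letters inside them, so "count the b's" is not a lookup
# it can perform directly.
tokens = ["blue", "berry"]
token_ids = [4171, 19772]  # made-up IDs for illustration

# A character-level program, by contrast, counts correctly:
count = "blueberry".count("b")
print(count)  # 2
```

The point is only that counting letters is trivial when you operate on characters, and structurally awkward when you operate on statistical patterns over subword tokens.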

[–] npdean@lemmy.today 9 points 2 days ago (1 children)

I agree. The problem is that it does not know it is lying. On the whole, I would say it is our mistake to use it.

[–] leftzero@lemmy.dbzer0.com 1 points 2 days ago (1 children)

That's victim blaming.

The fault is on the scammers selling the faulty product, not on the users who fall for the scam.

[–] npdean@lemmy.today 0 points 2 days ago

No one is a victim. You are blowing it out of proportion.

AI is not a scam. People just don't understand which technology to use where. People using LLMs for financial advice or for accurate data are not well informed about how AI works. Marketing teams take advantage of this to inflate stock prices, but no user is actually duped out of their money.

[–] ddplf@szmer.info 6 points 2 days ago* (last edited 2 days ago) (2 children)

You don't understand how AI works under the hood. It can't tell you it's lying, because it doesn't know the concept of lying. In fact, it doesn't know ANYTHING, literally. It's not thinking, it's predicting: it speculates about what a viable answer would look like based on its dataset.

You don't actually get real answers to your questions; you only get text that the AI determined would seem most fitting for your prompt.
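The "predicting, not thinking" point can be sketched with a toy next-token model: a bigram counter over a tiny made-up corpus. A real LLM replaces the counts with a neural network trained on billions of tokens, but the objective is the same, and notice that truth never enters the picture anywhere:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; the "training data" for our toy model.
corpus = "the sky is blue . the sky is blue . the sky is green .".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def predict(word):
    # Return the statistically most frequent next word.
    # There is no notion of correctness here, only frequency.
    return bigrams[word].most_common(1)[0][0]

print(predict("is"))  # "blue" — because it is frequent, not because it was verified
```

If the corpus had said "green" more often, the model would say "green" with exactly the same confidence. That is the "statistically plausible regardless of truth" behavior in miniature.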

I understand how my comment was unclear, but I was attempting to underscore the fact that the model cannot tell the difference; that's why I included the whole "statistically plausible" bit. My point is that AI as it currently functions is fundamentally flawed for several use cases because it cannot operate as it "should". It just says things, with no ability to determine their veracity.

The first portion of my comment was addressing the suggestion that there was a "reason" to lie. My point is that there is no good justification for providing factually incorrect answers, and currently there is no way to stop AI from doing so. Hope that clears things up.

[–] leftzero@lemmy.dbzer0.com 1 points 2 days ago

Correct.

Therefore, selling it as something capable of reliably answering questions correctly is a criminal scam.