this post was submitted on 07 Aug 2025
317 points (98.8% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] danc4498@lemmy.world 22 points 6 days ago (2 children)

The problem is that AI is very convincing. And it's right more often than it's wrong. So people trust the answers that feel right.

[–] user224@lemmy.sdf.org 5 points 6 days ago

Honestly, a few times I forgot what I was talking to. Like, actually arguing with it while it just kept responding with BS, blaming "user misunderstanding", etc., until I had a moment like Zimsky talking to the voice recorder in The Core (sauce: https://www.youtube.com/watch?v=axlO-SOacXU)

[–] ZDL@lazysoci.al 1 points 6 days ago (2 children)

If you think LLMbeciles are right more often than wrong, then you're either profoundly ignorant or profoundly inattentive.

[–] danc4498@lemmy.world 2 points 6 days ago (1 children)

Are you an AI bot? Or have you literally never used ChatGPT? It's accurate way more than 50% of the time.

[–] somethingsnappy@lemmy.world 2 points 6 days ago (1 children)

Ah, so maybe like advice from a 69% right D+ student?

[–] danc4498@lemmy.world 3 points 6 days ago

A 69% D+ student that writes VERY convincingly. Keep in mind, we live in a world where people buy into pseudoscience and bullshit conspiracy theories because they are convincing. I think it’s just human nature.

[–] shalafi@lemmy.world 1 points 5 days ago

I've found ChatGPT to almost never be wrong; I can't think of an example ATM. Having said that, I have a sense for what it can and can't do, and what sort of inputs will produce a solid answer.

Where it goes hilariously sideways is when you talk to it like a person and keep following up. Hell no. You ask a question that can be answered objectively and stop.

No way the output went straight to "Sure! Bromine's safe to eat." Either he asked a loaded question to get the answer he wanted, or it came after some back and forth.