Fuck AI

I was in a group chat where the bro copied and pasted my question into ChatGPT, took a screenshot, and pasted it into the chat.

As a joke, I said that Gemini disagreed with the answer. He asked what it said.

I made up an answer and then said I did another fact check with Claude, and ran it through "blast processing" servers to "fact check with a million sources".

He replied that ChatGPT 5.1 is getting unreliable even on the $200-a-month plan and that he's considering switching to a smarter agent.

Guys - it's not funny anymore.

[–] Not_mikey@lemmy.dbzer0.com 1 points 3 days ago (1 children)

Why should the one receiving a generated answer when asking a person spend more effort validating it?

Because they're the ones asking the question. You can flip this around and ask why the person should put time into researching and answering the question for OP. If they have no obligation, then an AI answer, if it's right, is better than no answer, since it gives OP some leads to research. OP can always just ignore the AI answer if they don't trust it; they don't have to validate it.

Answering someone unsolicited with LLM-generated blubbering is a sign of disrespect.

Fair enough, but etiquette around AI is new and not universal. We don't know that the person meant to disrespect OP. The mature thing to do would be for OP to say that they felt disrespected by that response, instead of pretending it's fine and reinforcing the behavior, which will lead to that person continuing to do it.

It'd be like if someone used a new or obscure slur: the right thing to do is inform them that it's offensive, not pretend it's fine and start using it in the conversation to fuck with them. If they keep using it after you inform them, then yeah, fuck them, but make sure you're not normalizing it yourself too.

Meanwhile, lying to someone to fuck with them and making stuff up is universally known to be disrespectful. OP was intentionally disrespectful; the other person may not have been.

The rest of your comment seems to be an argument against getting answers from the Internet in general these days. A person doing research is just as likely as an agent to come across bogus LLM content, and a person also isn't getting actual real-world data when they research on the Internet.

[–] cmhe@lemmy.world 1 points 3 days ago* (last edited 3 days ago) (1 children)

Because they're the ones asking the question. You can flip this around and ask why the person should put time into researching and answering the question for OP.

Because they were asked. They don't need to spend time if they don't want to. They can just say "I don't know," or, if they want to be more helpful, "I don't know, but I can ask an LLM, if you'd like." If someone answers, they should at least have something to say.

If they have no obligation, then an AI answer, if it's right, is better than no answer, since it gives OP some leads to research. OP can always just ignore the AI answer if they don't trust it; they don't have to validate it.

No, it's not better. They were asked, not the LLM.

Do you always think like that? Like... if some newspaper started printing LLM-generated slop articles, would you say, "It is the responsibility of the reader to research whether anything in it is true or not"? No, it's not! A civilization is built on trust, and if you start eroding that kind of trust, people will distrust each other more and more.

People who assert something should be able to defend it. Otherwise we are in a post-fact/post-truth era.

Fair enough, but etiquette around AI is new and not universal. We don't know that the person meant to disrespect OP. The mature thing to do would be for OP to say that they felt disrespected by that response, instead of pretending it's fine and reinforcing the behavior, which will lead to that person continuing to do it.

That really depends on the history and the relationship between them. We don't know that, so I will not assume anything. They could have had previous talks where they stated that they don't like LLM-generated replies. But this is beside the point.

All I assumed is that they likely have not agreed to an LLM reply beforehand.

It'd be like if someone used a new or obscure slur: the right thing to do is inform them that it's offensive, not pretend it's fine and start using it in the conversation to fuck with them. If they keep using it after you inform them, then yeah, fuck them, but make sure you're not normalizing it yourself too.

No, this is a false comparison. If I want to talk to a person and ask for their input on a matter, I want their input, not them asking their relatives or friends. I think this is just normal etiquette and a normal social assumption. This was true even before LLMs were a thing.

The rest of your comment seems to be an argument against getting answers from the Internet in general these days. A person doing research is just as likely as an agent to come across bogus LLM content, and a person also isn't getting actual real-world data when they research on the Internet.

Well... no. My point is that LLMs (or the agents you keep going on about, which are just LLMs that populate their context with content from random internet searches) are making research generally more difficult. But it is still possible. Nowadays you can no longer trust the goodwill of researchers; you have to get to the bottom of it yourself: looking up statistics, doing your own experiments, etc. A person is generally superior to any LLM agent, because they can do that. People in a specific field understand the underlying rules and don't just produce strings of words that they make up as they go. People can research the reputation of certain internet sites and look further and deeper.

But I do hope that people will become more aware of these issues and learn the limits of LLMs, so that they know they cannot rely on them. I really wish people would stop copy-pasting LLM-generated content without validating and correcting it.

[–] Not_mikey@lemmy.dbzer0.com 1 points 2 days ago

If some newspaper started printing LLM-generated slop articles, would you say, "It is the responsibility of the reader to research whether anything in it is true or not"? No, it's not! A civilization is built on trust, and if you start eroding that kind of trust, people will distrust each other more and more.

As long as the newspaper clearly says it's written by an LLM, that's fine with me. I can either ignore it completely or take it with a grain of salt. Truth is built on trust, but trust should be a spectrum: you should never fully believe or fully dismiss something based on its source. There are some sources you can trust more than others, but there should always be some doubt. I have a fair amount of trust in LLMs because, in my experience, they are correct most of the time. I'd trust them more than something printed in Breitbart but less than something printed in the New York Times, and even with the New York Times I watch out for anything that seems off.

You, along with most of this sub, seem to have zero trust in LLMs, which is fine; believe what you want. I'm not going to argue with you on that, because I won't be able to change your mind any more than you could change Trump's mind about the New York Times. I just want you to know that there are people who do trust LLMs and do think their responses are valuable and can be true.

If I want to talk to a person and ask for their input on a matter, I want their input, not them asking their relatives or friends. I think this is just normal etiquette and a normal social assumption. This was true even before LLMs were a thing.

I don't think this is universal. That may be your expectation, but assuming it's not something private or sensitive, I'd be fine with my friend asking a third party. If I texted a group chat that I'm having car trouble and asked if anyone knows what's wrong, I would not be offended if one of my friends texted back that their uncle's a mechanic and said to try x. I would be offended if that person lied about it coming from their uncle or lied about their uncle being a mechanic, but in this case the person was very clear about the source of the information and its "credentials". Part of the reason I might ask someone something is that, even if they don't know the answer, they may know someone who does and can forward the question on.

Nowadays you can no longer trust the goodwill of researchers; you have to get to the bottom of it yourself: looking up statistics, doing your own experiments, etc. A person is generally superior to any LLM agent, because they can do that. People in a specific field understand the underlying rules and don't just produce strings of words that they make up as they go. People can research the reputation of certain internet sites and look further and deeper.

I don't think this is true for every person, maybe for experts, but an AI agent is probably just as good as a layman at doing online research. Yes, if you can ask an expert in the field to do the research for you, they will be better than an AI agent, but that's rarely an option. Most of the time it's going to be you by yourself or, if you're lucky, a friend with some general knowledge of the area googling something, looking through the top 3-5 links, and using those to synthesize an answer. An AI agent can do that just as well and may have more "knowledge" of the area than the person. ChatGPT knows more about, say, the country of Bhutan than your average person, probably not as much as a Bhutanese person, but you probably don't know a Bhutanese person you can ask. It can even research the sources themselves or use a tool that rates the trustworthiness of a source to decide which claim to believe when they contradict.