this post was submitted on 05 Nov 2025
Fuck AI
Yeah, and how does that Tamil farmer fact-check their black-box audio interface when it tells them to spray Roundup on their potatoes, or warns them to buy bottled water because their Hindu-hating Muslim neighbors have poisoned their well, or spouts any other garbage it's been deliberately or accidentally poisoned with?
One of the huge weaknesses of AI as a user interface is that you have to go outside the interface to verify what it tells you. If I search for information about a disease using a search engine, and I find an .edu website discussing the results of double-blind scientific studies of treatments for that disease, alongside a site full of anti-Semitic conspiracy theories and supplement ads telling me about THE SECRET CURE DOCTORS DON'T WANT YOU TO KNOW, I can compare the credibility of those two sources. If I ask ChatGPT for information about a disease and it recommends a particular treatment protocol, I don't know where it's getting its information or how reliable that information is. Even if it gives me some citations, I have to check them anyway, because I don't know whether they're reliable sources, unreliable sources, or hallucinations that don't exist at all.
And people who trust their LLM and don't check its sources end up poisoning themselves when it tells them to mix bleach and vinegar to clean their bathrooms.
If LLMs were being implemented as a new interface for gathering information - as a tool to enhance human cognition rather than supplant, monitor, and control it - I would have far fewer problems with them.