this post was submitted on 03 Aug 2025
403 points (86.6% liked)
Fuck AI
3635 readers
1482 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
founded 1 year ago
Sorry but you can deny and hate all you want, it’s not going anywhere
Sadly, I don't have enough money to turn this shit-hose off.
Gen AI is neat, and I use it for personal stuff including code, image gen, and LLM chat; but it is sooooo faaaar awaaaay from being a real game changer -- even while everyone poised to profit off it claims it is -- that it's just insane to call it the next wave. Evidence: all the creative people (photo/art/code/etc.) who are adamantly against it and have laid out their reasoning.
There's another story on my feed about a 10-year-old refactoring a code base with an LLM. Go look at the comments from actual experts, which take into account things like unit tests, readability, maintainability, and security. Humans have more context than any AI ever will.
LLMs are not intelligent. They are patently not. They make shit up constantly, because that is literally what they do. Sometimes, maybe even most of the time, the shit they make up is mostly accurate... but do you want to rely on that?
When a doctor prescribes you the wrong drug, you can sue them as a recourse. When a software company has a data breach, there is often a class action (better than nothing) as a recourse. When an AI tells you to put glue on your pizza to hold the toppings on, there is no recourse, because the AI is not a legal entity and the company disclaims all liability for its output. When an AI denies your health insurance claim for inscrutable reasons, there is no recourse.
In the first two cases, there is a penalty for being wrong, which is in effect an incentive to be correct -- to be accurate, to be responsible.
In the last two, with an AI LLM/agent/fucking-buzzword, there is no penalty and no incentive. The AI is only as good as its input, and half the world is fucking stupid, so if you average all the world's input you get "barely getting by" as a result. A coding AI is at least partly trained on random Stack Overflow posts asking for help. The original code in those posts is wrong!
Sadly, it's not going anywhere. But people who rely on it will trade short-term success for long-term failure, and a society relying on it is doomed. AI depends on the creative works that already exist. If we don't make anything new, AI will stagnate and die. Where will we be then?
There are places where AI/LLM/machine learning can be used successfully and helpfully, but they are niche. The AI bros need to figure out how to quickly meet a specific need instead of trying to meet all needs at once. Think early-2000s Folding@home, how to convince Republicans to wear a fucking mask during COVID, or why we shouldn't just eat the billionaires*.
*Hermes-3 says cannibalism is "barbaric" in most cultures, but otherwise doesn't offer convincing arguments.