See all the errors in that rambling wall of slop (which they posted without even checking, for some reason?)
Trying to use a local LLM… could be worse. But in my experience, small models are just too dumb for anything beyond fully automated RAG or other really focused use cases. They feel like fragile toys until you get up to a 32B dense model or a ~120B MoE.
Doubly so when they’re wrapped in buggy, possibly vibe-coded abstractions.
The other part is that Goose is probably using a primitive, CPU-only llama.cpp quantization. I see they name-check “Ryzen AI” a couple of times, yet it can’t even use the NPU! There’s nothing “AI” about the setup, and the author probably has no idea.
I’m an unapologetic local LLM advocate, in the same way I’d recommend Lemmy/Piefed over Reddit, but honestly, it’s just not ready. People want these one-click agents on their laptops, and (unless you’re an enthusiast/tinkerer) the software simply isn’t there yet, no matter how much AMD and the like try to gaslight people into thinking it is.
Maybe if they spent 1/10th of their AI marketing budget on helping open source projects, it would be…
I have been using gpt-oss:20b to help me with bash scripts, and so far it’s been pretty handy. But I make sure I know what I’m asking for, and that I understand the output, so basically I might have been better off with 2010-ish Google and non-enshittified community resources.
Yeah, that is a great application because you can eyeball your bash script and verify its functionality. It’s perfectly checkable. This is a very important distinction.
It also doesn’t require “creativity” or speculation, so (I assume) you can use a very low temperature.
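Something like this, for instance. A minimal sketch, assuming the model is served by Ollama on its default port (the gpt-oss:20b tag is Ollama-style); the prompt is a made-up example:

```bash
# Ask the local model for a bash snippet at a very low temperature,
# so the output is near-deterministic and easy to eyeball.
curl -s http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:20b",
  "prompt": "Write a bash one-liner that lists .log files older than 7 days under /var/log.",
  "options": { "temperature": 0.1 },
  "stream": false
}' | jq -r '.response'
```

Whatever comes back, you can read the one-liner yourself before running it, which is exactly what makes this use case safe.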
Contrast that with Red Hat’s examples.
They’re feeding it a massive dump of context (basically all the system logs), and asking the LLM to reach into its own knowledge pool for an interpretation.
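For concreteness, that pattern looks roughly like this (a sketch reusing the same hypothetical Ollama setup as above; the journalctl dump and the prompt are illustrative, not Red Hat’s actual commands):

```bash
# Dump-everything-and-ask: stuff a boot's worth of logs into the prompt
# and hope the model's world knowledge sorts it out. Even a trimmed
# journalctl dump can overrun a small model's context window.
LOGS=$(journalctl -b --no-pager | tail -n 500)
jq -n --arg logs "$LOGS" \
  '{model: "gpt-oss:20b",
    prompt: ("What problems do you see in these system logs?\n\n" + $logs),
    stream: false}' \
  | curl -s http://localhost:11434/api/generate -d @- \
  | jq -r '.response'
```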
Its assessment is long and not easily verifiable; note how the blog writer even confessed, “I’ll check if it works later.” It requires more “world knowledge,” and long context is hard for LLMs with few active parameters.
Hence, you really want a model with more active parameters for that… or, honestly, to just reach out to a free LLM API.
The thing is, the Red Hat blogger could probably run GLM Air on his laptop and get a correct answer spit out, but it would be extremely finicky and time-consuming.