this post was submitted on 12 Dec 2025
95 points (98.0% liked)

Fuck AI

4834 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
Red Hat pushing AI (fedoramagazine.org)
submitted 1 day ago* (last edited 1 day ago) by vogi@piefed.social to c/fuck_ai@lemmy.world
 

A user points out in the comments that the LLM recommends APT on Fedora, which is clearly wrong. I can't tell if OP is responding with an LLM as well; it would be really embarrassing if so.

PS: Debian is really cool btw :)

[–] TipsyMcGee@lemmy.dbzer0.com 1 points 21 hours ago (1 children)

I have been using gpt-oss:20b for helping me with bash scripts, and so far it's been pretty handy. But I make sure to know what I'm asking for and make sure I understand the output, so basically I might have been better off with 2010-ish Google and non-enshittified community resources.
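That "understand the output before you run it" habit can be made mechanical. A minimal sketch, where `backup.sh` stands in for a hypothetical model-generated script (the filename and script contents are illustrative, not from the thread):

```shell
# backup.sh stands in for a hypothetical LLM-generated script.
cat > backup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
src="${1:?usage: backup.sh <dir>}"
tar -czf "${src##*/}-$(date +%F).tar.gz" "$src"
EOF

# Parse-only syntax check: catches quoting and syntax mistakes
# without executing anything the model wrote.
bash -n backup.sh
```

`bash -n` only parses, so it is safe to run on untrusted output; actually reading the script (and tools like ShellCheck) still has to catch logic errors.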

[–] brucethemoose@lemmy.world 1 points 19 hours ago* (last edited 19 hours ago)

Yeah, that is a great application because you can eyeball your bash script and verify its functionality. It’s perfectly checkable. This is a very important distinction.

It also doesn’t require “creativity” or speculation, so (I assume) you can use a very low temperature.
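For instance, a sketch of what that low-temperature request could look like, assuming a local Ollama server hosting gpt-oss:20b (the prompt, endpoint, and exact value are illustrative):

```python
import json

# Hypothetical request body for a local Ollama server running gpt-oss:20b.
# A deterministic, checkable task like writing a bash script doesn't need
# "creative" sampling, so the temperature is pinned low.
payload = {
    "model": "gpt-oss:20b",
    "prompt": "Write a bash script that rotates logs older than 7 days.",
    "stream": False,
    "options": {"temperature": 0.1},  # near-greedy decoding
}
body = json.dumps(payload)
# body would be POSTed to http://localhost:11434/api/generate
```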

Contrast that with Red Hat’s examples.

They’re feeding it a massive dump of context (basically all the system logs), and asking the LLM to reach into its own knowledge pool for an interpretation.

Its assessment is long and not easily verifiable; note how the blog writer even confessed "I'll check if it works later." It requires more "world knowledge." And long context is hard for LLMs with few active parameters.

Hence, you really want a model with more active parameters for that… Or, honestly, to just reach out to a free LLM API.


Thing is, that Red Hat blogger could probably run GLM Air on his laptop and get a correct answer spat out, but it would be extremely finicky and time-consuming.