this post was submitted on 27 Mar 2026
165 points (88.7% liked)

Fuck AI

6568 readers
1719 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

It has to be pure ignorance.

I've only used my work's stupid LLM tool a few times (hey, I have to give it a chance and actually try it before I form opinions).

Holy shit it's bad. Every single time I use it I waste hours. Even simple tasks, it gets details wrong. I correct it constantly. Then I come back a couple months later, open the same module to do the same task, it gets it wrong again.

These aren't even tools. They're just shit. An idiot intern is better.

It's so angering that people think this trash is good. Get ready for a lot of buildings and bridges to collapse because of young engineers trusting a slop machine to be accurate on details. We will look back on this as the worst era in computing.

(page 2) 39 comments
[–] rabber@lemmy.ca 5 points 5 days ago (2 children)

I recently used it to install Nvidia L40S drivers on RHEL 9 and pass the GPU through to my Frigate instance. Took me a few minutes; it would have been a lot of reading to find the exact answers manually.

[–] TankovayaDiviziya@lemmy.world 2 points 4 days ago* (last edited 4 days ago)

It's situation specific. For tabulating data, yes. For everything else, probably not. But the thing is, you have to ask the LLM to read the raw data back and confirm it's reading it right, before ordering it to execute more complex commands and tasks. You have to define the parameters one by one, one query at a time.
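The confirm-then-execute pattern described above can be sketched like this. `ask_model` is a hypothetical placeholder for whatever LLM API is in use, stubbed here (it just echoes the data it was shown) so the control flow runs on its own:

```python
# Sketch of the "confirm before executing" pattern: have the model read the
# raw data back, verify the echo, and only then issue the real command.
import json

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; stubbed to echo the JSON it was shown."""
    # A real implementation would call your provider's API here.
    start = prompt.index("[")
    return prompt[start:prompt.rindex("]") + 1]

def tabulate_with_check(raw_rows):
    """Ask the model to read the data back; only proceed if it matches."""
    prompt = f"Read back exactly the rows you see: {json.dumps(raw_rows)}"
    echoed = ask_model(prompt)
    if json.loads(echoed) != raw_rows:
        raise ValueError("Model misread the input; stop before doing anything else")
    # Only now issue the more complex command, one parameter at a time.
    return {"rows": len(raw_rows), "verified": True}

result = tabulate_with_check([["a", 1], ["b", 2]])
```

The point is simply that the verification step is cheap compared to letting the model run a complex task on data it misread.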

[–] HarneyToker@lemmy.world 2 points 4 days ago

For every post I see of people complaining, I have to imagine there are 100 other people that get value out of LLMs quietly.

[–] AA5B@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

While I also don’t see how it’s productive, it can be useful for certain things, certain steps. But it really seems like you need to have the knowledge in question to help it do a good job.

People underestimate how much handholding it needs. You can tell it to do something and it might, but you may not like the results; with a bit of interaction and context-setting, it often gets there. The pretentious are calling it "prompt engineering," but it's really a combination of asking the AI questions and adjusting your terminology until it does what you want.

People also don't seem to understand that AI really puts a premium on evaluation. You didn't see the code being written, but you own it, so you really need to look through the result in detail to understand whether it's what you wanted. I see this a lot where the LLM produces something but a junior developer doesn't have the skill to evaluate it before committing to source control.

[–] CookieOfFortune@lemmy.world 1 points 4 days ago (1 children)

Have you tried using skills/workflows? You can improve its context over time.

[–] jj4211@lemmy.world -1 points 3 days ago* (last edited 3 days ago) (2 children)

I think it really depends on the task.

There are folks who manage to have whole careers that are basically pasting stuff from documentation and Stack Overflow to implement very basic things over and over again, praying it works and doesn't need debugging. They hate coding, but it was heralded as offering doctor/lawyer-level pay while being way easier to get into. LLMs can largely replace that work. These are humans I would never have trusted with anything significant, and I only gave them low-stakes, low-risk stuff to keep them busy because management wants them utilized. Sure, they spend a month to fail at delivering something basic, but management is happy enough.

Then there are folks who mostly live in code that is needed because it truly doesn't already exist. Those folks will find LLMs relatively less useful. Now, those folks do end up with braindead chores on occasion: change from library x to y because, while they both do the same thing, x got discontinued. An LLM can be useful at accelerating that, because it's just so obvious but not quite as simple as search and replace. Or if you want to define some argv parsing, you can let a codegen do that, because it's easy but tedious.
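The "easy but tedious" argv parsing mentioned above looks something like this with Python's argparse (the tool and flag names here are purely illustrative):

```python
# The kind of boilerplate argv parsing that's obvious to write but annoying
# to type out by hand - a natural codegen chore.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Example batch tool")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.txt",
                        help="where to write results (default: out.txt)")
    parser.add_argument("-n", "--workers", type=int, default=4,
                        help="number of worker threads")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable chatty logging")
    return parser

args = build_parser().parse_args(["data.csv", "-n", "8", "--verbose"])
```

Nothing here requires judgment; it's exactly the mechanical scaffolding a codegen handles well, while the logic behind the flags still has to come from you.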

To go back to the days of car analogies: imagine some tech people got the world excited because they created tools to automate engineering in motor sports. People come out saying how it helped them engineer their own vehicle, with stories that the most prolific forms of motor racing are being taken over. You, as an F1 engineer, see it as mostly useless, but everyone keeps talking about how it's going to replace engineers. Turns out everyone is actually talking about go-karts, and it is true that it works OK for that, and that go-karts are way more common than F1.

The problem is that to the world, programming is programming without distinction, and even the people in charge of the F1 type work don't know the difference because they were never technical either.

[–] Azzu@lemmy.dbzer0.com 1 points 4 days ago* (last edited 4 days ago)

LLMs do a terrible job, however many things they're used for are so straightforward or unimportant that a terrible job is still "good enough".

[–] Not_mikey@lemmy.dbzer0.com 1 points 4 days ago* (last edited 4 days ago) (4 children)

Claude with its superpowers / planning features has changed my mind on AI feature development. Iterating on the spec and making it as unambiguous as possible gives good results when you clear context and have it implement the plan. Even if it starts to stray, you can just do a git reset and start a new session with the spec, adjusting it a bit, because time-wise you probably haven't invested much.

It also depends on the code base, if the code base has very clear separation of concerns, good documentation, and good contracts between layers then claude can handle it pretty well. If the code base is full of spaghetti code with multiple ways to do the same thing then AI will struggle with it. In our large legacy monolith repo it doesn't do well, in our micro service repos it does great.

Also time wise it may not seem like a benefit if you just set it and wait for it to complete, the productivity advantage comes from running a couple sessions in parallel.

Also, context is key: having a good claude.md file in the repo to explain patterns helps it avoid pitfalls. If its only context is the prompt you gave it, and you tell it to implement a feature without a plan/spec outlined, it will generate shit code.
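A hypothetical sketch of the kind of claude.md the comment describes — every path and rule below is invented for illustration, not taken from any real repo:

```markdown
# Project context for the agent

## Architecture
- Services live under `services/`, one directory per microservice.
- Shared contracts live in `contracts/`; never duplicate types.

## Conventions
- Use the existing repository pattern for data access; no direct DB calls.
- Every new endpoint needs a test in that service's `tests/` directory.

## Pitfalls
- There are two date utilities; use `utils/dates.py`, the other is deprecated.
```

The value is in writing down the patterns and pitfalls a new human contributor would otherwise learn the hard way.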

[–] homesweethomeMrL@lemmy.world 1 points 5 days ago

Fwiw, when I limit it to creating outlines based on given source docs or summarizing transcripts it does fine.

Definitely not worth what it cost to get there, but useful enough in those strict scenarios.

[–] okwhateverdude@lemmy.world -1 points 5 days ago (1 children)

Is your work paying for dumb robots? Like Atlassian's shit? Or something built in to your industry software (I'd guess some kind of CAD)? These are next to useless. Or is work only paying for basic model access? Also pretty useless for detailed work. The only models that give me consistent, detailed-ish work are the state-of-the-art models. And even then you have to watch them, or have very strong verification/validation they can bash their heads against to eventually get the right result.

I'll say that I am not so much a booster, but more of a pragmatist. After the step change in quality this past fall/winter, I gave the SOTA models a try with hard earned cash. And it was worth it.

My ADHD makes it difficult to really finish personal projects once they get past the fun and interesting learning portion and neck deep into tedium of actually molding the code into the right shape or shaving the various yaks that came up. All motivation ceases. Unfortunately, my job is also my hobby. I don't wanna work after work, yo.

That game I always wanted to finish writing but got stuck at needing to grind out code? Done in an afternoon of carefully directing it. That programming language I spent significant amounts of time thinking and designing and getting the shitty PoC running but now needed to actually make it work? A week to the first version. Another to my first significant application written in that language which revealed flaws in my design for real use cases. Another to the next version with a conformance test suite which was then used along with the spec to do a complete reimplementation in another language. Another project was trying to "grow" a sorting algorithm expressed in a niche esoteric programming language using a genetic algorithm. Stuck at the point of needing to build the tools for analysis, needing a refactor to fix the poor persistence choices, just nothing but yaks to shave. Got it unstuck over a weekend and actually started to DO the damn experiment after spending so much time writing the esoteric lang interpreter and all of the experiment harness.

It is not perfect. It fucks up frequently. I have to really watch it and steer it. It loves mediocrity and shortcuts. All that said...

Like, holy shit. The amount of work I've finished or moved forward in two months is nothing short of miraculous given how many projects like these I have in various states of finished.

All I can say is that my experience aligned with your experience any time I needed to use a bolted-on AI to some product (Atlassian, Lucid, etc) but that does not reflect my experience when using SOTA models for real work.

[–] infinitevalence@discuss.online -4 points 5 days ago (26 children)

It really depends on the task and the tool. Current MoE models that have agentic hooks can actually be really useful for doing automated tasks. Generally, you don't want to be using AI to create things. What you want to do is hand it a very clear set of instructions along with source material, and then tell it to either iterate on, build on, or summarize that material — or in some cases create from it.

I created a simple script with the help of AI to automate scanning files in from an automatic document feeder: convert them to OCR'd PDFs, read through each document properly, title the file based on its contents, create a separate executive summary, and append an entry to a growing master JSON index.

Doing this allowed me to automate several steps that would have taken time, and in the end I'm able to just search through my folders and my PDFs and very quickly find any information I need.
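A minimal sketch of the indexing pipeline described above. The OCR and summarization steps are hypothetical stubs (`ocr_to_text`, `summarize`); in the commenter's setup those would be a real OCR pass and an LLM call:

```python
# Sketch: title each scanned PDF from its contents, summarize it, and
# append an entry to a growing master JSON index.
import json

def ocr_to_text(pdf_path: str) -> str:
    """Stub OCR step; a real version would run the scan through an OCR tool."""
    return f"contents of {pdf_path}"

def summarize(text: str) -> str:
    """Stub executive-summary step; a real version would call an LLM."""
    return text[:40]

def index_document(pdf_path: str, master_index: list) -> dict:
    """Build an index entry for one scanned PDF and append it to the index."""
    text = ocr_to_text(pdf_path)
    entry = {
        "file": pdf_path,
        "title": " ".join(text.split()[:3]),  # crude title from leading words
        "summary": summarize(text),
    }
    master_index.append(entry)
    return entry

index: list = []
index_document("scan_001.pdf", index)
index_document("scan_002.pdf", index)
serialized = json.dumps(index)  # the growing master index, as JSON
with_two = len(index)
```

The searchable payoff comes from the index: once every scan has a title and summary in one JSON file, finding a document is a text search rather than opening PDFs one by one.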

And this is only scratching the surface. I wouldn't have AI write me a resume or write me an email or a book. I might use it to generate an image that I then give to a real artist saying this is kind of what was in my head.

But boring, repetitive stuff — things that really benefit from automation with a little bit of reasoning behind them — that's where we are right now with AI.
