this post was submitted on 29 Jul 2025
29 points (91.4% liked)

I thought of this recently (anti-LLM content within)

The reason a lot of companies and people are obsessed with LLMs and the like is that they can solve some of their problems (or so they think). What I've noticed is that a LOT of the things they try to force an LLM to fix could be solved with relatively simple programming.

Things like better search (SEO destroyed this by design, and Kagi is about the only usable search engine with easy access), organization (use a database), document management, etc.
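As a rough sketch of the "use a database" point: plain SQLite with its FTS5 extension already gives you ranked full-text document search, no model required. The table and sample documents below are made up, and this assumes your Python's SQLite build includes FTS5 (most do):

```python
import sqlite3

# Minimal full-text document search with SQLite's built-in FTS5 extension.
# Table name and documents are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("Q3 report", "Revenue grew while infrastructure costs stayed flat."),
        ("Onboarding guide", "How to set up your dev environment and request access."),
        ("Incident 42", "Database failover caused five minutes of downtime."),
    ],
)

# Rank matches by relevance (BM25): no model, no GPU, a few milliseconds.
for (title,) in conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("database downtime",),
):
    print(title)
```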

People don't fully understand how it all works, so they try to shoehorn the LLM into doing the work for them (poorly), while learning nothing of value.

[–] 30p87@feddit.org 10 points 4 days ago

See how it's apparently newsworthy that a simple chess engine on the C64 can beat ShitSkibidi. It was fucking obvious to us, in the same way that random.randint(0, 10) is obviously much worse at figuring out the sum of 2 and 4 than just calculating 2 + 4. But it was not as obvious to people who don't understand how ML/DL fundamentally works.
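That comparison, as a trivial Python sketch (nothing from the comment itself, just the randint point spelled out):

```python
import random

random.seed(0)

# A "model" that guesses: right only by chance.
guesses = [random.randint(0, 10) for _ in range(10_000)]
hit_rate = sum(g == 2 + 4 for g in guesses) / len(guesses)
print(f"random guess accuracy: {hit_rate:.1%}")  # roughly 9%, i.e. 1 in 11 choices

# Just calculating it: right every time, for free.
print(f"2 + 4 = {2 + 4}")
```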

Similarly, it's sad to see a lot of projects that have to do with machine learning being essentially killed and made worthless by people throwing everything at ShitSkibidi instead of generating or collecting training data themselves and training a purpose-built model that isn't text based. I see that in private life as well as at work. They want to use "AI" in risk management now. Will that mean they'll use all their historical data on customers, the risks they identified and the final outcomes to build two or more specific models? Most likely not. They'll just throw all the data at the internal ShitSkibidi wrapper, expect the resulting data to be usable at all, and then ask it how they should proceed. And then expect humans to actually fact-check everything it returns.
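For contrast, a minimal sketch of the "train a specific model on your own historical data" approach, using scikit-learn; the file name and column names are entirely made up and stand in for whatever a real risk dataset would contain:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical historical risk data; file and column names are invented.
df = pd.read_csv("historical_customer_risks.csv")
X = df[["exposure", "days_overdue", "prior_incidents", "segment_code"]]
y = df["risk_materialised"]  # 0/1 outcome recorded after the fact

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# A small purpose-built model trained on the company's own labelled history.
model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Held-out evaluation gives an actual, checkable error rate,
# which a chat wrapper's free-text answer never provides.
print(classification_report(y_test, model.predict(X_test)))
```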