this post was submitted on 29 Jul 2025
29 points (91.4% liked)

Programming


I thought of this recently (anti-LLM content within).

The reason a lot of companies/people are obsessed with LLMs and the like is that they think it can solve some of their problems. The thing I noticed is that a LOT of the things they try to force the LLM to fix could be solved with relatively simple programming.

Things like better search (SEO destroyed this by design, and Kagi is about the only usable search engine with easy access), organization (use a database), document management, etc.
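To make the "use a database" point concrete: here's a minimal sketch of LLM-free document search using SQLite's FTS5 full-text index (the table, column names, and sample documents are made up for illustration; it assumes your Python's bundled SQLite was built with the FTS5 extension, which is the default in most distributions).

```python
import sqlite3

# In-memory DB for the demo; a real setup would use a file on disk.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
con.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("Invoice policy", "Invoices are archived after 90 days."),
        ("Search tips", "Use quotes for exact phrases when searching."),
        ("Onboarding", "New hires get database access in week one."),
    ],
)

# FTS5 ranks matches with BM25 out of the box: no model, no GPU,
# and the query returns in microseconds on small corpora.
rows = con.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("archived",),
).fetchall()
print(rows)  # [('Invoice policy',)]
```

That's the whole "document management" stack for a lot of use cases: deterministic, inspectable, and it never hallucinates a document that doesn't exist.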

People don't fully understand how it all works, so they shoehorn the LLM into doing the work for them (poorly), while learning nothing of value.

[–] matcha_addict@lemy.lol 14 points 4 days ago (2 children)

The reason is that company decisions are largely driven by investors, and investors want their big investments in AI to return something.

Investors want constant growth, even if it must be shoehorned.

[–] criss_cross@lemmy.world 5 points 4 days ago

Venture Capital Driven Development at its finest.

[–] Canconda@lemmy.ca 2 points 4 days ago* (last edited 4 days ago)

This is true but not the whole picture.

AI is the next space race and nuclear arms race rolled into one. The nation that develops AGI will 100% become the global superpower. Even sub-AGI agents will have the cyber-warfare potential of thousands of human agents.

AI researchers increasingly doubt our ability to verify that these programs are actually adhering to safety protocols. The notion of programming AI with "Asimov's three laws" is impossible. AIs exist to do one thing: get the highest score.

I'm convinced that due to the nature of AGI, it is an extinction level threat.