[–] DarkCloud@lemmy.world 38 points 2 days ago (9 children)

It's funny watching people claim AGI is just around the corner, so we need to be careful with LLMs...

...when LLMs can't even keep track of what's being talked about, and their main risks are covering the internet with slop and propaganda, and contributing to climate change. Both of those are more about how we use LLMs.

[–] thevoidzero@lemmy.world 4 points 1 day ago (5 children)

The risk of LLMs isn't in what they might do. They're not smart enough to find ways to harm us. The risk comes from what stupid people will let them do.

If you put a bunch of nuclear buttons in front of a child/monkey/dog, whatever, it can destroy the world. That seems to be where the LLM problem is heading. People are using them to do things they can't actually do, and trusting them because of how much AI has been hyped over the years.

[–] bss03 4 points 1 day ago (4 children)

LLMs are already deleting whole production databases because "stupid" people are convinced they can vibe-code everything.

Even programmers I (used to) respect are getting convinced LLMs are "essential". 😞

[–] anomnom@sh.itjust.works 2 points 1 day ago

One of my former coders (good, but heavily affected by ADHD) was really into using it in the early iterations, when GPT first gained attention. I think it steadily got worse as new revisions launched.

I'm too far from it to assess its usefulness at this stage, but I know enough about statistics to question most of what it spits out.

Boilerplate works pretty much the same way, and it has usually been vetted by at least a couple of good programmers.
