this post was submitted on 31 Jul 2025
381 points (96.8% liked)

Comic Strips

18481 readers
2059 users here now

Comic Strips is a community for those who love comic stories.


founded 2 years ago
top 23 comments
[–] samus12345@sh.itjust.works 19 points 1 day ago
[–] phoenixz@lemmy.ca 4 points 21 hours ago

Do you remember when we were all wanting to be careful with AI, and not just proliferate the thing beyond any control?

It was only a few years ago, but Pepperidge Farm remembers

[–] kalistia@sh.itjust.works 2 points 21 hours ago
[–] DarkCloud@lemmy.world 38 points 1 day ago (4 children)

It is funny watching people claim AGI is just around the corner so we need to be safe with LLMs

...when LLMs can't keep track of what's being talked about, and their main risks are covering the internet with slop and propaganda, and contributing to climate change. Both of which are more about how we use LLMs.

[–] scratchee@feddit.uk 3 points 22 hours ago

The difference between LLMs and human intelligence is stark. But the difference between LLMs and other forms of computer intelligence is stark too (e.g. LLMs can’t do fairly basic maths, whereas computers have always been superintelligences in the calculator domain). It’s reasonable to assume that someone will figure out how to make an LLM integrate better with the rest of the computer sooner rather than later, and we don’t really know what that’ll look like. And that might require few new capabilities.
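
The "integrate the LLM with the calculator" idea already has a common shape: the model emits a structured tool call and a harness executes the arithmetic exactly, instead of the model guessing digits. A minimal sketch of that pattern (the tool-call format here is hypothetical, not any particular vendor's API; the model side is omitted):

```python
import ast
import operator

# Arithmetic the harness is willing to execute on the model's behalf.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a pure-arithmetic expression without eval/exec."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed syntax in expression")
    return walk(ast.parse(expr, mode="eval"))

def handle_tool_call(call: dict) -> str:
    """Dispatch a (hypothetical) model-emitted tool call."""
    if call.get("tool") == "calculator":
        return str(safe_eval(call["expression"]))
    return "unknown tool"

print(handle_tool_call({"tool": "calculator", "expression": "12*37 + 5"}))
```

The point of the AST walk is that the model's output is treated as untrusted input: only literal numbers and whitelisted operators ever execute, which is exactly the kind of boundary the comment is arguing someone needs to design deliberately.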

The reality is we don’t know how many steps there are between now and AGI. Some people before the big LLM hype were insisting quality language processing was the key missing feature; now that looks a little naive, but we still don’t know exactly what’s missing. So better to plan ahead and maybe arrive early at solutions than to wait until AGI has arrived and done something irreversible before starting to plan for it.

[–] thevoidzero@lemmy.world 4 points 1 day ago (1 children)

The risk of LLMs isn't in what they might do. They're not smart enough to find ways to harm us. The risk stems from what stupid people will let them do.

If you put a bunch of nuclear buttons in front of a child/monkey/dog whatever, it can destroy the world. That seems to be where the LLM problem is heading: people are using it to do things that it can't, and trusting it because AI has been hyped so much.

[–] bss03 4 points 23 hours ago (2 children)

LLMs are already deleting whole production databases because "stupid" people are convinced they can vibe code everything.

Even programmers I (used to) respect are getting convinced LLMs are "essential". 😞

[–] anomnom@sh.itjust.works 2 points 21 hours ago

One of my former coders (good, but seriously affected by ADHD) was really into using it in the early iterations, when GPT first gained attention. I think it steadily got worse as new revisions launched.

I’m too far from it to assess its usefulness at this stage, but know enough about statistics to question most of what it spits out.

Boilerplates work pretty much the same way and have usually been vetted by at least a couple of good programmers.

[–] IndustryStandard@lemmy.world 1 points 21 hours ago (1 children)

They are useful as a replacement for Stack Overflow searches.

[–] bss03 1 points 21 hours ago (1 children)

I've not found them useful even for that. I often just get "lied to" about any technical or tricky issues.

They are just text generators. Even the dumbest Stack Overflow answers show more coherence. (Though they are certainly wrong in other ways.)

[–] IndustryStandard@lemmy.world 1 points 21 hours ago

True, but Stack Overflow frequently lies to me as well.

[–] tacosanonymous@mander.xyz 10 points 1 day ago

Right, but reliance on it is a way to destroy the world in the dumbest way. I don’t mean the robot-apocalypse way, but the collapse of most societies. Without reliable information, nothing can get done. If shitty LLMs get put into everything, there's no government, no travel, no grid/infrastructure, and logistics of every kind are gone.

While it’s fun to think about living in a small, self-sufficient community, we are not prepared for that and certainly not at this pace.

[–] sxan@midwest.social 1 points 1 day ago

Maybe that's the risk. That we design it to be benevolent, but it destroys us through sheer stupidity.

It's one way to get monkey paw wishes. "AI, solve climate change!" "Ok! Eliminating all humans now!"

[–] JohnWorks@sh.itjust.works 23 points 1 day ago

The risk is worth it for bideo games
