this post was submitted on 29 Mar 2026
58 points (96.8% liked)

Technology

top 14 comments
[–] GregorGizeh@lemmy.zip 69 points 3 days ago (2 children)

"Rogue AI," as if it's some sentient evil thing, when it's just an LLM with too many permissions... This timeline is so dystopian, but simultaneously incredibly lame. I hate it.

[–] Hirom@beehaw.org 14 points 3 days ago* (last edited 3 days ago)

It shows LLMs can do significant harm without the capabilities of an AGI.

Overhyping LLMs and overinflating their capabilities make things worse, as people become less skeptical of LLM output.

[–] Kirk@startrek.website 3 points 3 days ago

It's also a pretty big exaggeration of what actually happened, which is that it generated and posted some technically inaccurate information.

[–] Hirom@beehaw.org 31 points 3 days ago* (last edited 1 day ago) (2 children)

According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.

Producing inaccurate technical advice, with a confident tone, at scale.

If that LLM were an employee, it would get a formal reprimand, and then be demoted or fired if it kept it up.

[–] timwa@lemmy.snowgoons.ro 14 points 3 days ago (1 children)

That sounds sweetly naive. "Producing inaccurate technical advice, with a confident tone, at scale" sounds like the perfect credentials for a career in consultancy.

[–] Hirom@beehaw.org 7 points 3 days ago* (last edited 3 days ago) (1 children)

That's a good way to describe LLMs: very bad and very prolific consultants.

[–] bryndos@fedia.io 2 points 2 days ago

Yeah, they'd get promoted for sure where I work.

[–] foxwolf@pawb.social 3 points 3 days ago

Wait till this starts happening in the construction industry.

[–] irelephant@lemmy.dbzer0.com 17 points 3 days ago (1 children)

An AI apocalypse won't come from an AI becoming sentient, but from some idiot putting AI where it shouldn't be.

[–] Kolanaki@pawb.social 10 points 3 days ago (1 children)

"We installed Gemini into all US nuclear silos."

[–] Butterbee@beehaw.org 15 points 3 days ago (1 children)

"Flagrant security lapse causes an incident when software engineer uses inappropriate tool for the job."

[–] sem@piefed.blahaj.zone 6 points 3 days ago* (last edited 3 days ago) (1 children)

"Inappropriate tool also weirdly good at gaslighting engineers and managers"

[–] randomwords@midwest.social 1 points 2 days ago

Let's be real, managers gaslight themselves daily.