this post was submitted on 20 Feb 2026
70 points (100.0% liked)

Technology

top 14 comments
[–] XLE@piefed.social 32 points 3 hours ago

If a person is going to be blamed, it should be the one who mandated use of the AI systems... because that's exactly what Amazon was doing.

[–] Soulphite@reddthat.com 27 points 3 hours ago (1 children)

Talk about an extra slap in the fuckin face... getting blamed for something your replacement did. Cool.

[–] Tharkys@lemmy.wtf 15 points 3 hours ago (1 children)

That's in the SOP for management.

[–] Soulphite@reddthat.com 5 points 3 hours ago (1 children)

True. In this case, these poor saps are being tricked into "training" these AIs to eventually render their own jobs obsolete.

[–] pinball_wizard@lemmy.zip 1 points 2 hours ago

Yes. "obsolete" in that Amazon doesn't give a shit about reliability anymore, so an AI reliability engineer is fine, now. Haha.

[–] pinball_wizard@lemmy.zip 3 points 1 hour ago

described the outages as “small but entirely foreseeable.”

LMAO

Would said employees have voluntarily used the agent if Amazon hadn't demanded it? If not, this isn't on them. They shouldn't be held responsible for the forced use of unvetted tools.

[–] melroy@kbin.melroy.org 4 points 2 hours ago (1 children)
[–] LurkingLuddite@piefed.social 2 points 2 hours ago (1 children)

It's working great at convincing moronic executives to leave Windows when it fucks up majorly due to AI coding, which is a win for everyone.

[–] Powderhorn@beehaw.org 1 points 1 hour ago

I mean, I'll applaud any push toward Linux.

[–] AllNewTypeFace@leminal.space 1 points 2 hours ago

AI can never fail, it can only be failed

[–] Petter1@discuss.tchncs.de 0 points 2 hours ago (2 children)

Well, AI code should be reviewed before being merged into master, the same as any other code merged into master.

We have git for a reason.

So I would definitely say this was human fault: either the reviewer's, or that of whoever decided that no review process (or an AI-driven one) was needed.

If I managed DevOps, I would demand that AI code be signed off at commit time by a human who takes responsibility for it, with the expectation that they review the AI's changes before pushing.
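
A minimal sketch of what such a sign-off gate could look like, as a local commit-msg hook written in Python. The `AI-Generated:` trailer is an invented convention for this example; `Signed-off-by:` is the standard trailer that `git commit -s` adds.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/commit-msg script illustrating the sign-off idea above.
# The "AI-Generated:" trailer name is made up for this sketch; "Signed-off-by:"
# is the usual trailer added by `git commit -s`.
import sys

def main() -> int:
    # Git passes the path to the commit message file as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        lines = f.read().splitlines()

    ai_generated = any(line.lower().startswith("ai-generated:") for line in lines)
    signed_off = any(line.startswith("Signed-off-by:") for line in lines)

    if ai_generated and not signed_off:
        sys.stderr.write(
            "Commit is marked AI-generated but has no Signed-off-by trailer;\n"
            "a human must sign off and take responsibility before it is accepted.\n"
        )
        return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A server-side pre-receive hook or a CI check would be needed to actually enforce this for everyone, since local hooks are opt-in and can be skipped with --no-verify.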

[–] pinball_wizard@lemmy.zip 7 points 1 hour ago* (last edited 1 hour ago)

If I managed DevOps, I would demand that AI code be signed off at commit time by a human who takes responsibility for it, with the expectation that they review the AI's changes before pushing.

And you would get burned. Today's AI does one thing really, really well: create output that looks correct to humans.

You are correct that mandatory review is our best hope.

Unfortunately, the studies are showing we're fucked anyway.

Because whether the AI output is right or wrong, it is highly likely to at least look correct, since creating correct-looking output is where what we call "AI" today shines.

[–] Limerance@piefed.social 4 points 2 hours ago

Realistically, what happens is that the code review is done under time pressure and not very thoroughly.