this post was submitted on 29 Aug 2025
491 points (98.4% liked)

Technology

[–] Prove_your_argument@piefed.social 23 points 3 hours ago* (last edited 3 hours ago) (1 children)

Why would this cause them to rethink anything?

If someone trolls by ordering thousands of something, a worker isn't going to just make it. I get that retail workers are treated like shit and paid shit, so they have zero shits to give. If someone rolls up to the drive-through window asking for their thousands of waters or whatever, the people working there are gonna escalate it to a manager or just tell the guy to go pound sand.

Anybody today can go to any drive-through, ask for whatever, and then simply drive away. I'm certain it happens from time to time, even with legitimate orders when someone discovers they left their wallet at home. If it were a big problem, though, these businesses simply wouldn't offer drive-through service, or would require payment before cooking anything.

[–] finitebanjo@lemmy.world 14 points 3 hours ago (1 children)

Because it cost them money, lol. The suits upstairs gave a quote in the article saying they will withdraw AI from all 500 locations where it was implemented, and it also mentions how McDonald's did the exact same little dance over a year ago.

[–] Prove_your_argument@piefed.social 4 points 3 hours ago (3 children)

The McDonald's thing was because the model they implemented was misinterpreting people and placing orders incorrectly. Yeah, obviously the thing wasn't working right, so they pulled it. Sounds just like early personal assistants on phones and other devices; hell, my wife still struggles with those. They clearly needed more time developing and testing it with a diverse range of customers from all over. I don't know if they trained it on recordings from real drive-throughs from all over, but they should have.

The 18,000 waters example probably didn't cost anyone anything. Regardless of whether it was intentional, it wouldn't have been fulfilled as part of an order. They mention it "crashing the system" - whatever that means in this context. Did it take down all of Taco Bell? Did the LLM stop responding at just this one site? All of them? Did it eventually time out and start working again? We can't know, because the details just aren't there and we have no insight into the system architecture. I'd assume there's a fallback to traditional ordering, where a person listening in while the chatbot talks to the customer can take over and fix the problem. It's not like there aren't drive-through workers still there.

[–] finitebanjo@lemmy.world 2 points 2 hours ago

Even if it's only a receipt for 18,000 waters or it fills up a screen it costs them time and resources.

Every single AI hallucinates; always has and always will. It's useless for this.

[–] Prove_your_argument@piefed.social 2 points 3 hours ago (2 children)

Really, the only cost here is the impact on consumer attitudes toward Taco Bell and AI, because the video and news of this are circulating. One error is whatever, but public perception doesn't typically involve much critical thinking.

People are still irrationally terrified of all manner of technology even when the science backs it up - vaccines, for example.

[–] chonglibloodsport@lemmy.world 1 points 8 minutes ago

What do you mean science backs it up? Science is finding massive social problems with technology all the time. Social media and its negative impacts on mental health (especially for teen and preteen girls), for example. Microplastics everywhere, for another. Climate change anyone?

[–] finitebanjo@lemmy.world -1 points 2 hours ago (1 children)

Unlike vaccines, AI has no use case and is always a net negative.

[–] Prove_your_argument@piefed.social 3 points 41 minutes ago

I just don't agree, man. It won't do what most people want it to do; it doesn't work at all like the science-fiction "AI" we classically think of. It's great at recognizing patterns and helping build models for a specific use case, but when you try to do some real convoluted multilevel thing, it just doesn't.

We've been using ML in a ton of tools in tech for a long time. CrowdStrike, Darktrace, and Abnormal are all very successful at what they do thanks to ML (aka "AI").

OCR has been around for ages and has gotten really fucking good, thanks to ML.

I don't think we're gonna replace humans for thinking, but we can definitely replace them for boring repetitive actions.