this post was submitted on 29 Aug 2025
535 points (98.5% liked)
Technology
you are viewing a single comment's thread
LLMs, with a little coaxing, perform well at returning well-formed JSON.
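Even so, the output still has to be parsed defensively. A minimal sketch, assuming a hypothetical LLM completion string and an order format with a top-level "items" list (neither is from any real system):

```python
import json

def parse_order(text: str):
    """Return the parsed order dict, or None if the text is not valid JSON
    or lacks the expected top-level structure."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or "items" not in data:
        return None
    return data

# Hypothetical raw completion from an LLM asked to return an order as JSON.
raw = '{"items": [{"name": "water", "quantity": 2}]}'
print(parse_order(raw) is not None)  # True: well-formed, expected shape
```

In practice you would retry or fall back to a human when `parse_order` returns None, since "performs well with coaxing" is not the same as "always".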
They do; my concern is more about whether that JSON is correct, not just well-formed.
Also, 18,000 waters might be valid JSON, but it makes for a bad AI cashier.
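That gap between well-formed and sensible is exactly what a sanity check has to catch. A sketch, with an arbitrary quantity cap and the same hypothetical order format as above (not any real POS schema):

```python
import json

MAX_QUANTITY = 50  # arbitrary sanity limit, not from any real ordering system

def sanity_check(order_json: str) -> bool:
    """Reject orders whose quantities are implausible, even if the JSON parses."""
    order = json.loads(order_json)
    return all(0 < item["quantity"] <= MAX_QUANTITY for item in order["items"])

prank = '{"items": [{"name": "water", "quantity": 18000}]}'
normal = '{"items": [{"name": "water", "quantity": 2}]}'
print(sanity_check(prank))   # False: well-formed JSON, but not a reasonable order
print(sanity_check(normal))  # True
```

The point is that this check lives outside the model: you can't prompt your way to correctness, you have to validate it.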
There is a lot more that goes into it than just being correct. 18,000 waters may have been the actual order, because somebody decided to screw with the machine. Any reasonably socially aware human would interpret that as a joke and simply refuse to punch it in. The LLM will likely do whatever the user tells it to, since it has no contextual awareness; it only has the system prompt and whatever interaction it has had with the user so far.
So they just tweak the instructions so it doesn't take joke orders, so it can make more reasonable decisions, like:
"May I take your order?"
"Two double whoppers with extra mayo and a chocolate cherry banana sundae"
"Oh you've GOTTA be joking!"
It's trivial to get LLMs to act against their instructions.