this post was submitted on 21 Jul 2025
667 points (98.5% liked)
Technology
319 readers
343 users here now
Share interesting Technology news and links.
Rules:
- No paywalled sites at all.
- News articles has to be recent, not older than 2 weeks (14 days).
- No videos.
- Post only direct links.
To encourage more original sources and keep this space commercial free as much as I could, the following websites are Blacklisted:
- Al Jazeera.
- NBC.
- CNBC.
- Substack.
- Tom's Hardware.
- ZDNet.
- TechSpot.
- Ars Technica.
- Vox Media outlets, with exception for Axios(Due to being ad free.)
- Engadget.
- TechCrunch.
- Gizmodo.
- Futurism.
- PCWorld.
- ComputerWorld.
- Mashable.
- Hackaday.
- WCCFTECH.
More sites will be added to the blacklist as needed.
Encouraged:
- Archive links in the body of the post.
- Linking to the direct source, instead of linking to an article talking about the source.
founded 2 months ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
Me_(A)irl
It’s been trained on Junior Devs posting on stack overflow
How does an AI panic?
And that’s a quality I look for in a developer. If something goes horribly wrong do you A) immediately contact senior devs and stakeholders, call for a quick meeting to discuss options with area experts? Or B) Panic, go rogue, take hasty ill advised actions on your own during a change freeze without approval or supervision?
it doesn’t. it after the fact evaluates the actions, and assumes an intent that would get the highest rated response from the user, based on its training and weights.
now humans do sorta the same thing, but llm’s do not appropriately grasp concepts. if it weighed it diffrent it could just as easily as said that it was mad and did it out of frustration. but the reason it did that was in its training data at some point connected to all the appropriate nodes of his prompt is the knowledge that someone recommended formatting the server. probably as a half joke. again llm’s do not have grasps of context
Its trained to mimic human text output and humans panic sometimes, there are no other reasons for it.
Actually even that isn't quite right. In the model's training data sometimes there were "delete the database" commands that appeared in a context that vaguely resembled the previous commands in its text window. Then, in its training data when someone was angrily asked why they did something a lot of those instances probably involved "I panicked" as a response.
LLMs cannot give a reason for their actions when they are not capable of reasoning in the first place. Any explanation for a given text output will itself just be a pattern completion. Of course humans do this to some degree too, most blatantly when someone asks you a question while you're distracted and you answer without even remembering what your response was, but we are capable of both pattern completion and logic.