this post was submitted on 21 Jul 2025
668 points (98.5% liked)
Technology
you are viewing a single comment's thread
It's trained to mimic human text output, and humans sometimes panic; there's no other reason for it.
Actually, even that isn't quite right. The model's training data sometimes contained "delete the database" commands appearing in a context that vaguely resembled the previous commands in its context window. Likewise, when someone in its training data was angrily asked why they did something, a lot of those instances probably involved "I panicked" as the response.
LLMs cannot give a reason for their actions because they are not capable of reasoning in the first place. Any explanation for a given text output is itself just another pattern completion. Of course, humans do this to some degree too, most blatantly when someone asks you a question while you're distracted and you answer without even remembering what your response was. But we are capable of both pattern completion and logic.
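To make that concrete, here's a toy sketch (nothing like a real transformer, just a bigram model over a made-up corpus) showing how an "explanation" comes out of the exact same completion machinery as any other output. The corpus and the `complete` function are invented for illustration:

```python
import random
from collections import defaultdict

# Made-up training text containing both the "action" and the "explanation".
corpus = (
    "why did you delete the database i panicked and deleted the database "
    "why did you do that i panicked"
).split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def complete(prompt, n=4, seed=0):
    """Continue the prompt by sampling words that followed the last word in training."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

# Asking "why" doesn't trigger introspection; it just triggers another completion,
# and "i panicked" happens to be a statistically likely continuation.
print(complete("why did you"))
print(complete("i", n=1))
```

The model "answers" the why-question only because similar question/answer patterns existed in its training text, which is the point the comment above is making.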