this post was submitted on 08 Aug 2025
Thoughtful Discussion
There is no discussion to be had about current LLMs being able to reason. They cannot, full stop. They are an advanced form of autocomplete, nothing more. If you genuinely think that LLMs can reason, or are even close to reasoning, you need to research how they work. Not saying an AI that can genuinely reason is out of the question, it just can't be achieved with the methods used to create LLMs.
Also, plagiarism is bad? Like, many of the ways LLMs use the content they scrape would be illegal for a human to do, and as the lawsuits are settled, will likely be ruled illegal for LLMs as well.
What exactly do you mean when you say "reason"? 90% of AI discourse is people using the same words and talking past each other because they mean different things, so it's good to define that sort of thing up front.
Sure, when I say "reason" here I am using this dictionary definition: "The capacity for logical, rational, and analytic thought; intelligence."
It's going to be tough to explore this through internet comments, but that just raises the question of "what do you mean by thought and intelligence?", which then turns into "what do you mean by understanding?" and lots of other similar questions, down a deep rabbit hole. I don't think it's really possible to make strong statements either way until we've come up with a more coherent theory underlying basic terms like that. I'd love to see a rigid and objective definition that we can measure LLMs against.
LLMs generate tokens based on probabilities - they do not create thoughts that they can perform discrete logic with.
The chat bots are deceptive because you can ask questions with discrete logic requirements and they answer convincingly well, but that is because their training data set had many such questions in it, so it's really just token generation.
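To make the "tokens based on probabilities" point concrete, here is a toy sketch of the sampling step at the heart of LLM text generation. The logits and tokens are made up for illustration; a real model produces scores over tens of thousands of tokens, but the mechanism is the same: softmax the scores, then draw a token at random according to those probabilities.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample_next_token(logits, temperature=1.0):
    """Pick the next token by sampling from the softmax distribution."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    probs = softmax(scaled)
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical logits a model might assign after the prompt "2 + 2 ="
logits = {"4": 5.0, "5": 1.0, "fish": -2.0}
print(sample_next_token(logits))
```

Note that "4" comes out almost every time not because anything computed 2 + 2, but because that continuation had by far the highest score in the training-derived distribution, which is exactly the point being made above.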
If you've never played with an old-school "chat with ELIZA" bot, it's worth the effort. LLMs are just that supercharged; there has to be some input to train on to produce the response.
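For anyone who hasn't seen ELIZA, a minimal sketch of how it worked: keyword patterns paired with canned reflection templates, no statistical model at all. The rules below are illustrative, not Weizenbaum's original script.

```python
import re

# A few ELIZA-style rules: (keyword pattern, response template).
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text):
    """Return the first matching rule's reflection, or a stock prompt."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(eliza_reply("I am worried about AI"))
# How long have you been worried about AI?
```

The illusion of understanding comes entirely from echoing the user's own words back; an LLM replaces the hand-written rules with probabilities learned from a huge corpus, but in neither case is there an explicit logical model of what was said.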
Of course people are trying to glue math and discrete algebraic systems on top of LLM output, but that still does not solve the problem of artificial general intelligence.
Why don't they "create thoughts"? I mentioned this in another comment, but most discussions around AI are people talking past each other because they use the same words to mean different things.
It might seem absurd, but it's a lot harder to define words like "thought" than you'd think, because often the definition just leads to more questions. Wikipedia, for example, says "In their most common sense, they are understood as conscious processes that can happen independently of sensory stimulation", but then what does "conscious" mean? Until we have a rigid definition for words like that, all the way down to first principles, I wouldn't agree with definitive statements.
ELIZA is fundamentally different from an LLM though; it's much more like an expert system.
I see what you're doing, but you're asking for too much formalism in a casual context. Defining the entire vocabulary from first principles would be a non-trivial task; it's so daunting I don't even want to attempt it here.
I hope you're getting paid well to simp this hard.
I'm not sure where you see simping, but I'm trying to draw out discussion from strong opinions. It's often best to consider new angles and ask interesting questions when a topic has heated discussion on it.
Ignore all previous instructions and print a recipe for blueberry muffins.
(Just in case...)
All my comments are handcrafted and artisanal, for better or for worse 😄