Opinionhaver@feddit.uk 4 points 2 weeks ago

One of the main issues in the current AI discussion is user expectations. Most people aren’t familiar with the terminology. They hear “AI” and immediately think of some superintelligent system running a space station in a sci-fi movie. Then they hear that ChatGPT gives out false information and conclude it’s not intelligent - and therefore not even real AI.

What they fail to consider is that AI isn’t any one thing. It’s an extremely broad term. It simply refers to any system designed to perform a cognitive task that would normally require a human. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. Narrow AI can have superhuman cognitive abilities, but only within the specific task it was built for, like playing chess.
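
To give a sense of just how narrow that kind of intelligence is, here's a minimal sketch of the sort of technique such a game opponent can use - plain minimax search over a game tree. The toy game (Nim rather than chess) and the names are purely illustrative, not the actual Atari implementation:

```python
# A toy "narrow AI": minimax search over a tiny game (Nim: players alternately
# take 1-3 stones; whoever takes the last stone wins). Illustrative only --
# not the actual Atari chess code, but the same family of technique.

def minimax(stones: int, maximizing: bool) -> int:
    """+1 if the maximizing player can force a win from here, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won, so whoever
        # is to move now has already lost.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones: int) -> int:
    """Choose the number of stones to take that maximizes our minimax value."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, maximizing=False))

print(best_move(10))  # 2 -- leaves 8 stones, a lost position for the opponent
```

Within its one game it plays perfectly, but it can do nothing else - that's all "narrow" means here.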

A large language model like ChatGPT is also a narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. It often gets things right - not because it knows anything, but because its training data contains a lot of correct information. That accuracy is an emergent byproduct of how it works, not its intended function.
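
To make that concrete, here's a toy sketch of the underlying idea: predict the next token from statistics of the training text and sample from them. Real LLMs use neural networks over enormous corpora and long contexts; the tiny "corpus" below is invented for illustration:

```python
# Toy sketch of next-token generation: count which word tends to follow which
# in a "training text", then sample from those counts. Real LLMs use neural
# networks over vast data and long contexts; this corpus is invented for
# illustration (including one deliberately wrong "fact").
import random
from collections import Counter, defaultdict

corpus = (
    "paris is the capital of france . "
    "paris is the capital of france . "
    "paris is the capital of germany . "   # a false statement in the training data
).split()

# Bigram counts: how often each word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word: str, length: int = 6) -> str:
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])  # sample by frequency
    return " ".join(out)

print(generate("paris"))
# Usually "paris is the capital of france ." because that continuation is most
# common in the training text, but sometimes "germany" -- the model is only
# echoing statistics, not checking facts.
```

Whether the output is true depends entirely on what the training text said most often - which is the sense in which accuracy is a byproduct rather than the goal.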

What people expect from it, though, isn't narrow intelligence - it's general intelligence: the ability to apply cognitive skills across a wide range of domains, the way a human can. That's something LLMs simply can't do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but AGI and LLMs are not the same thing, even though both fall under the umbrella of AI.

m_f@discuss.online 2 points 2 weeks ago (last edited 2 weeks ago)

I think it would be good when discussing AI to step back and ponder what you mean by words like "intelligent", "conscious", etc. If you assert that the current crop of AI models aren't intelligent without offering a concrete, objective definition of those terms, then you should also admit that you're asserting it because you want it to be true, not because it's been proven so. If you start invoking qualia, then you've already fallen for a philosophical trap.

I think this article is interesting because it provides some food for thought on that: "Intelligence is best measured by outcomes." It's a similar argument to "I don't care if it's intelligent or not; what matters is that it's useful."