Opinionhaver@feddit.uk 4 points 2 weeks ago

One of the main issues in the current AI discussion is user expectations. Most people aren’t familiar with the terminology. They hear “AI” and immediately think of some superintelligent system running a space station in a sci-fi movie. Then they hear that ChatGPT gives out false information and conclude it’s not intelligent - and therefore not even real AI.

What they fail to consider is that AI isn’t any one thing. It’s an extremely broad term, covering any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI - an intelligent system, but only narrowly so. Narrow AI can have superhuman cognitive abilities, but only within the specific task it was built for, like playing chess.
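To make that concrete, here’s a minimal sketch of the kind of technique behind those old chess computers: exhaustive minimax search over the game tree. Tic-tac-toe stands in for chess so the example stays short - the game and names here are purely illustrative, not how any actual Atari title was implemented:

```python
# Minimax: pick the move with the best guaranteed outcome, assuming the
# opponent also plays perfectly. "Intelligent" - but only at this one task.
def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        nxt = b[:m] + player + b[m+1:]
        score, _ = minimax(nxt, "O" if player == "X" else "X")
        if best_move is None or (score > best_score if player == "X" else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

print(minimax(" " * 9, "X"))  # (0, 0): perfect play from an empty board is a draw
```

The point isn’t the game - it’s that the “AI” here is nothing more than brute-force search plus a win condition: superhuman at this one task, useless at everything else.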

A large language model like ChatGPT is also a narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. It often gets things right - not because it knows anything, but because its training data contains a lot of correct information. That accuracy is an emergent byproduct of how it works, not its intended function.
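A toy model makes that last point vivid. The bigram “language model” below is a drastically simplified stand-in for an LLM (real models are incomparably more sophisticated), but the failure mode is the same in kind: it only learns which token tends to follow which, so if the training text contains a false statement, it will fluently reproduce it:

```python
import random
from collections import defaultdict

# Tiny "training corpus" - note it contains one false statement.
corpus = ("the capital of france is paris . "
          "the capital of france is lyon . "
          "the capital of spain is madrid .").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # record every continuation seen in training

def generate(start, n=6):
    """Sample text by repeatedly picking a token seen after the current one."""
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # equally fluent whether it lands on "paris" or "lyon"
```

The output is grammatical either way; whether it’s true depends entirely on what the training data happened to contain.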

What people expect from it, though, isn’t narrow intelligence - it’s general intelligence: the ability to apply cognition across a wide range of domains, the way a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but AGI and LLMs are not the same thing, even though both fall under the umbrella of AI.