This post was submitted on 24 Feb 2022

Linguistics

top 3 comments
[–] lvxferre@lemmy.ml 1 points 2 years ago (1 children)

I've seen this video; I heavily recommend it.

In a nutshell, what computers can't reliably understand is anything that relies on "world knowledge" - that is, knowledge outside the language itself that you need in order to parse it correctly. Things like "apples fall down, not up" or "a container needs to be bigger than the item it contains".

Note that common NLP (natural language processing) methods don't even try to address this; they rely instead on brute force - "if you feed enough language into the computer, it'll eventually get it".
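To make "brute force" concrete, here's a minimal sketch of the purely statistical idea (the toy corpus and all names are made up for illustration, not anyone's actual system): count which words follow which in a pile of text and predict from the counts. Nothing in it encodes facts about the world - it only "knows" that apples fall down because the corpus happens to say so.

```python
# Toy sketch of a data-driven approach: bigram counts, no world knowledge.
from collections import Counter, defaultdict

# Hypothetical tiny corpus for illustration only.
corpus = [
    "the apple fell down from the tree",
    "the apple fell down onto the grass",
    "the ball fell down the stairs",
]

# Count bigrams: which word tends to follow which.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common continuation, if any."""
    if not bigrams[word]:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("fell"))  # "down" - only because the corpus says so,
                             # not because the model knows anything about gravity
```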

[–] roastpotatothief@lemmy.ml 0 points 2 years ago (1 children)

Since I posted this, Microsoft has claimed to have built an AI that does have "world knowledge": it was reportedly able to explain how to stack some objects so they wouldn't fall. We'll see, though, whether the claim holds up.

[–] lvxferre@lemmy.ml 1 points 2 years ago

I'd take Microsoft's claims with heavy scepticism; they tend to overrate the capabilities of their own software. If the claim is true and accurate, though, it's an amazing development, and it might solve problems like the ones in the video, such as:

  • The trophy doesn't fit in the bag because it₁ is too big.
  • The trophy doesn't fit in the bag because it₂ is too small.

For us humans it's trivial to disambiguate it₁ as the trophy and it₂ as the bag, because we know stuff like "objects only fit in containers bigger than themselves". Algorithms usually don't.
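Just to make the contrast concrete, here's a minimal sketch of how that single piece of world knowledge could be written down by hand to resolve the pronoun in the two sentences above (a purely hypothetical, hand-coded rule, not any real system's method):

```python
# Resolve 'it' in "<contained> doesn't fit in <container> because it is <reason>"
# using one hand-coded fact: an object only fits in a container bigger than itself.

def resolve_pronoun(contained, container, reason):
    """Return the referent of 'it' given the stated reason."""
    if reason == "too big":
        return contained   # the thing being put in is the one that can be too big
    if reason == "too small":
        return container   # the thing holding it is the one that can be too small
    raise ValueError("no rule for this reason")

print(resolve_pronoun("trophy", "bag", "too big"))    # trophy
print(resolve_pronoun("trophy", "bag", "too small"))  # bag
```

Of course, a rule like this only covers one narrow situation; the hard part is that humans carry around millions of such facts without ever writing them down.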