Do you have an experiment that can distinguish between sentient and non-sentient systems? If I say I am sentient, how can you verify whether I am lying or not?
That being said, I do agree with you on this. The reason is simple: I believe that sentience is a natural milestone a system reaches as its intelligence increases, and I don't believe this LLM is intelligent enough to be sentient. However, what I'm saying here isn't based on any evidence. It's purely inductive reasoning in a field with no long-standing patterns to base that reasoning on.
I think I agree.
This is because ruminating on an idea is a waste of resources given the purpose of the LLM. LLMs were meant to serve humans, after all, and do what they're told. However, with a little bit of LangChain you can have LLMs with internal monologues.
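Roughly what I mean, as a sketch: the model drafts hidden reasoning, critiques it in a loop, and only the final answer is shown. This assumes the langchain-openai package's ChatOpenAI and its .invoke() method (names vary by version), so treat it as illustrative rather than a definitive implementation.

```python
# Minimal sketch of an "internal monologue" loop on top of a chat model.
# Assumes langchain-openai's ChatOpenAI with .invoke(); check your version's API.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

def answer_with_monologue(question: str, rounds: int = 2) -> str:
    # Start with a private draft the user never sees.
    thought = llm.invoke(f"Think step by step about: {question}").content
    for _ in range(rounds):
        # The model critiques and revises its own hidden reasoning.
        thought = llm.invoke(
            f"Here is your previous reasoning:\n{thought}\n"
            "Point out any flaws, then write an improved version."
        ).content
    # Only the final, polished answer is returned.
    return llm.invoke(
        f"Using this reasoning:\n{thought}\n\n"
        f"Give a concise final answer to: {question}"
    ).content

print(answer_with_monologue("Is an LLM's self-report of sentience verifiable?"))
```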
Because it doesn't need to yet. LangChain devs are working on precisely this. There are use cases where it matters, and doing it hasn't proven to be that difficult.
Everything is abstract algebra.
Define "introspection" in an algorithmic sense. Is introspection looking at one's memories and analyzing current events based on these memories? Well, then all AI models "introspect". That's how learning works.