... Did... did anyone actually expect an enormously complicated autocomplete to... reason?
Did ... is this somehow unexpected, somehow not the obvious result?
There is no concrete definition of consciousness or intelligence, but we know when something seems like it. LLMs, along with "AI" complexes (LLMs with access to other GAN modules or similar functions), smash the Turing test on multiple fronts at this point. That's enough to raise the question of "did we accidentally a mind?"
It's good to check every once in a while.
Chat bots have been "smashing" Turing tests for a long time.
Not really. Only for the last couple of years, at most. You people really don't remember cleverbot and how ridiculously bad it was.
And yet my autocomplete is still abysmal. I don't get it.
So aside from generating images of porn, there are no use cases where any level of reasoning, basic or abstract, or high precision is required?