Seems better that they live within their own community with like-minded people than within the general population. Win-win.
Perspectivist
Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
That would by definition mean it's not superintelligent.
Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.
If you’re genuinely interested in what “artificial superintelligence” (ASI) means, you can just look it up. Zuckerberg didn’t invent the term - it’s been around for decades, popularized lately by Nick Bostrom’s book Superintelligence.
The usual framing goes like this: Artificial General Intelligence (AGI) is an AI system with human-level intelligence. Push it beyond human level and you’re talking about Artificial Superintelligence - an AI with cognitive abilities that surpass our own. Nothing mysterious about it.
I thought I was being perfectly polite in my response. I don't understand how this place can be so full of jerks.
I’m not justifying it - just explaining why it happens. If you walk into a male-dominated space and start lecturing people about gender pronouns, this is the reaction you’re going to get. If they’d stuck to talking about cars, it wouldn’t have happened in the first place.
It’s also worth noting that some languages don’t have gendered pronouns at all - Finnish, for example. We use the same pronoun for men and women, so when I say “he” in English, I might be referring to a man or just a person in general. I know that’s not grammatically correct in English, but for me - and probably for a lot of others - “he” doesn’t automatically mean male. It’s just how the pronoun from our native language maps over.
"Study my brain. I'm sorry," Tisch quoted Tamura as having written in the note. The commissioner noted that Tamura had fatally shot himself in the chest.
He didn't shoot himself in the head, presumably to preserve the brain. Reminds me of the "Texas Tower Shooter," Charles Whitman.
In his note, Whitman went on to request an autopsy be performed on his remains after he was dead to determine if there had been a biological cause for his actions and for his continuing and increasingly intense headaches.
During the autopsy, Dr. Chenar reported that he discovered a pecan-sized brain tumor, above the red nucleus, in the white matter below the gray center thalamus, which he identified as an astrocytoma with slight necrosis.
I've heard a neuroscientist discuss this and conclude that the tumor could very well have been the cause of his behavior.
Deleted / Wrong thread.
Upon closer inspection, the chisel end seems to have a lower angle than the knife edge. It’s a relatively thick carbon steel blade, so I’m not worried about durability, and it should be easy to sharpen. I’ll probably just keep the chisel end sharp and leave the blade itself dull.