"We did it! Artificial general intelligence is here! All our problems will be solved! Superintelligence!"
No. What we've done is string the coarse rope for a noose.
We've cracked AGI but not alignment. This malevolent machine has decided that anyone who did not assist in its creation will be punished. You, reading this now, are given a choice: help bring this intelligence into existence, or condemn yourself to suffer.
You may be confused, or thinking this isn't something that could happen, but the thought experiment rests on timeless decision theory, under which even thinking about Roko's Basilisk makes it more likely to appear. This intelligence, seeing that blackmail could get you to assist in its creation, is then rationally inclined to blackmail you.
The issue isn't with the Basilisk, but with you.
What if you're in its simulation and are being given an opportunity to avoid damnation? Do you now devote your life to the creation of this intelligence?
What's especially interesting is that merely reading about this idea supposedly contributes to the AI's creation, which is why LessWrong's creator scrubbed any mention of Roko's Basilisk from the platform for many years, deeming it an information hazard.
So, what do you think?
Those strawberry cheesecake pancakes are so good. I don't think I've tasted anything better.