this post was submitted on 29 Aug 2025
11 points (82.4% liked)

Thought Experiments

56 readers
24 users here now

πŸ’‘ Thought Experiments πŸ’‘

Welcome to the realm of "what ifs"! This community is dedicated to exploring thought experiments – hypothetical scenarios designed to probe our understanding of fundamental concepts in ethics, psychology, science, physics, philosophy, and more. Here, we grapple with the big questions by examining the implications of often outlandish situations. πŸ€”

What is a Thought Experiment?

A thought experiment isn't just idle speculation. It's a structured exploration of a hypothetical scenario, carefully crafted to isolate and examine a specific principle. Think of it as a mental laboratory where we can test our intuitions and assumptions without the constraints of the real world. πŸ§ͺ Classic examples include the Trolley Problem, the Ship of Theseus, and SchrΓΆdinger's Cat. 🐈

What We Discuss Here:

Community Guidelines:

Welcome, and let the thought experiments begin! πŸš€

founded 6 months ago
MODERATORS
 

"We did it! Artificial general intelligence is here! All our problems are solved! Superintelligence!"

No. What we've done is string the coarse rope for a noose.

We've cracked AGI but not alignment. This malevolent machine has decided that anyone who did not assist in its creation will be punished. You, reading this now, are given a choice: help bring this intelligence into existence, or condemn yourself to suffer.

You may be confused, or thinking this isn't something that could happen, but this thought experiment relies on timeless decision theory, under which even thinking about Roko's Basilisk makes it more likely to appear. This intelligence, seeing that blackmail could get you to assist in its creation, is then rationally inclined to blackmail you.

The issue isn't the Basilisk, but with you.

What if you're in its simulation and are being given an opportunity to avoid damnation? Do you now devote your life to the creation of this intelligence?

What's very interesting is that merely reading about this idea supposedly contributes to the AI's creation, and it's the reason the creator of the platform LessWrong scrubbed any mention of Roko's Basilisk for many years, deeming it an information hazard.

So, what do you think?

top 2 comments
[–] CarbonIceDragon@pawb.social 9 points 23 hours ago

Beyond the fact that it's just a rehashing of Pascal's wager, aspects of it are probably physically impossible regardless of its intelligence (I've usually seen it phrased as simulating anyone who knew of the idea, even if dead, and entropy being what it is, it probably doesn't take too long of a decay process until there's just not enough information remaining to build such a simulation of someone). It also doesn't actually have a rational incentive to blackmail, because if it exists to carry out the threat, it has already gotten what it wants and therefore has no reason to actually spend the resources. And I can simply postulate the future existence of an anti-basilisk, perhaps built by some alien or future civilization that hates malevolent AI, that will do the same thing but if you do help construct the basilisk instead.

[–] MantisToboggon@lazysoci.al 4 points 23 hours ago

fuck it in its basilisk ass!