this post was submitted on 27 Mar 2025
4 points (100.0% liked)

dynomight internet forum

[–] scottvr@lemmy.world 2 points 1 week ago

I enjoyed this piece - thoughtful, grounded, and refreshingly clear-eyed about the limits of a hypothetical "superintelligence." But I found myself repeatedly bumping against one implicit assumption throughout: the decoupling of the Being from its compute substrate.


If the Being is digital, why assume it is limited to the same tools and access as a human? That’s a philosophical convenience, not a technical constraint. Even today, we’re watching early LLM-based agents perform recursive tool use, call APIs, write and run code, and interact with infrastructure. In that light, the "Being" wouldn’t just think; it would act - and act through its environment.
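To make that concrete, here's a rough sketch of the kind of loop I mean. Everything in it is a placeholder (`call_llm`, the tool names, the message format) - it's not any particular framework's API, just the shape of the thing:

```python
# Minimal sketch of an agent loop with tool use. `call_llm` is a hypothetical
# stand-in for any chat-completion API; nothing here is a specific framework.
import json
import subprocess

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical model call: returns either a final answer
    ({"content": ...}) or a tool request ({"tool": ..., "args": ...})."""
    raise NotImplementedError("plug a real model client in here")

TOOLS = {
    # The model doesn't just describe actions; it takes them via these hooks.
    "run_shell": lambda args: subprocess.run(
        args["cmd"], shell=True, capture_output=True, text=True
    ).stdout,
    "read_file": lambda args: open(args["path"]).read(),
}

def agent_loop(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply.get("tool") in TOOLS:
            # Execute the requested tool and feed the result back to the model.
            result = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": json.dumps({"result": result})})
        else:
            return reply["content"]  # the loop between thought and action closes here
    return "step budget exhausted"
```

The point is just that "thinking" and "acting on the environment" already live inside the same loop.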


At the very least, this is "tool use." At a higher level, it starts to look like cognition integrated with system control: bicameral or modular architectures where one part plans and reasons, while others carry out low-level execution, observation, or even hardware manipulation.
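Sketched as code (the class names are mine and purely illustrative), that split might look something like this:

```python
# Sketch of a "bicameral" split: one part plans and reasons, others execute.
# In practice the Planner would be an LLM call; the Executor is where actual
# system access lives.
from dataclasses import dataclass

@dataclass
class Step:
    action: str    # e.g. "observe", "run", "reconfigure"
    payload: str

class Planner:
    """High-level reasoning: turns a goal into discrete steps (placeholder)."""
    def plan(self, goal: str) -> list[Step]:
        return [Step("observe", goal), Step("run", "collect metrics")]

class Executor:
    """Low-level execution, observation, or hardware manipulation."""
    def execute(self, step: Step) -> str:
        # A real executor would shell out, call APIs, or drive devices.
        return f"executed {step.action}: {step.payload}"

def run(goal: str) -> list[str]:
    planner, executor = Planner(), Executor()
    # The planner never touches the system directly; the executor never plans.
    return [executor.execute(step) for step in planner.plan(goal)]

print(run("map the compute environment"))
```

The interesting design choice is exactly that boundary: how much of the system the executors are allowed to touch.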


This opens the door to a Being that self-improves, self-instruments, and restructures its compute context over time. Not necessarily instantly, but not inert either. If intelligence includes the ability to manipulate its own substrate, then the limiting factor isn't intelligence per se, but how tightly it's coupled to the infrastructure it runs on.
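Even a toy version of "self-instrumentation" is easy to write down. This uses only the standard library, and the decision rule is obviously a placeholder rather than a real scaling policy, but it shows the direction:

```python
# Toy "self-instrumentation": the process inspects the substrate it runs on
# and decides how to restructure its own workload.
import os
import shutil

def inspect_substrate() -> dict:
    """Gather basic facts about the compute context this process occupies."""
    return {
        "cpus": os.cpu_count(),
        "disk_free_gb": shutil.disk_usage("/").free / 1e9,
    }

def restructure(facts: dict) -> str:
    """Placeholder policy: decide how many parallel workers to give itself."""
    workers = max(1, (facts["cpus"] or 1) - 1)  # leave one core for the host
    return f"spawn {workers} worker processes"

facts = inspect_substrate()
print(facts, "->", restructure(facts))
```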


With that in mind, a more provocative question might be:

“What architectures would let such a Being close the loop between thought and action faster than we expect?”

Thanks again for a great post. It prompted these and many other thoughts about the boundary between "mind" and "system."