the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERF/SWERFs Not Welcome
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments or posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from Hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml
Rule 9: if you post ironic rage bait I'm going to make a personal visit to your house to make sure you never make this mistake again
I don't know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allow sentience to arise.
I could maybe get behind the idea that LLMs can't be sentient, but you generalized to all algorithms, as if human thought were somehow qualitatively different from a sufficiently advanced algorithm.
Even if we find the limit of LLMs and figure out that sentience can't arise from them (I don't know how this would be proven, but let's say it was), you'd still somehow have to prove that algorithms in general can't produce sentience, and that only the magical fairy dust in our souls produces sentience.
That's not something that I've bought into yet.
You're making a lot of assumptions about the human mind there.
What assumptions? I was careful to almost universally take a negative stance, not a positive one. The only exception I see is my stance against the existence of the soul. Otherwise there are no assumptions, let alone ones specific to the mind.
An algorithm does not exist as a physical thing. When applied to computers, it's an abstraction over the physical processes taking place as the computer crunches numbers. To me, it's a massive assumption to decide that just because one type of process (neurons) can produce consciousness, so can another (CPUs and their various kinds of memory), even if they perform the same calculation.