this post was submitted on 23 Nov 2023
193 points (100.0% liked)

the_dunk_tank

15897 readers
1 users here now

It's the dunk tank.

This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.

Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.

Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.

Rule 3: No sectarianism.

Rule 4: TERF/SWERFs Not Welcome

Rule 5: No ableism of any kind (that includes stuff like libt*rd)

Rule 6: Do not post fellow hexbears.

Rule 7: Do not individually target other instances' admins or moderators.

Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml

Rule 9: if you post ironic rage bait im going to make a personal visit to your house to make sure you never make this mistake again

founded 5 years ago
MODERATORS
 

Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

[–] Dirt_Owl@hexbear.net 94 points 2 years ago (7 children)

For fuck's sake, it's just an algorithm. It's not capable of becoming sentient.

Have I lost it or has everyone become an idiot?

[–] UlyssesT@hexbear.net 45 points 2 years ago (1 children)

Crude reductionist beliefs such as humans being nothing more than "meat computers" and/or "stochastic parrots" have certainly contributed to the belief that a sufficiently elaborate LLM treat printer would be at least as valid a person as actual living people.

[–] daisy@hexbear.net 36 points 2 years ago (19 children)

This is verging on a religious debate, but assuming that there's no "spiritual" component to human intelligence and consciousness like a non-localized soul, what else can we be but ultra-complex "meat computers"?

[–] oktherebuddy@hexbear.net 37 points 2 years ago* (last edited 2 years ago) (12 children)

yeah this is knee-jerk anti-technology shite from people here because we live in a society organized along lines where creation of AI would lead to our oppression instead of our liberation. of course making a computer be sentient is possible, to believe otherwise is to engage in magical (chauvinistic?) thinking about what constitutes consciousness.

When I watched Blade Runner 2049, I thought it was a bit weird that the human police captain tells Officer K (a replicant) that she's different from him because she has a soul, since sci-fi settings are pretty secular. Turns out this was prophetic: people are more than willing to get all spiritual if it helps them invent reasons to differentiate themselves from the Other.

[–] Nevoic@lemm.ee 28 points 2 years ago (18 children)

I don't know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allow sentience to arise.

I could maybe get behind the idea that LLMs can't be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Even if we find the limit to LLMs and figure out that sentience can't arise (I don't know how this would be proven, but let's say it was), you'd still somehow have to prove that algorithms can't produce sentience, and that only the magical fairy dust in our souls produce sentience.

That's not something that I've bought into yet.

[–] TraumaDumpling@hexbear.net 42 points 2 years ago* (last edited 2 years ago) (53 children)

so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here's my philosophical examination of the issue.

the thing is, we don't even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.

so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.

here's the task for people who want to prove that the human brain is a meat computer: explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can't. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as 'illusory' - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this 'something' would be the 'consciousness' or 'sentience' or, to put it in your oh-so-smug terms, the 'soul' that non-physicalist philosophy might posit. this move by physicalists is therefore syntactically absurd and merely moves the goalpost from 'what are qualia' to 'what are those illusory, deceitful qualia deceiving'. consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.

Consider information processing, and the kinds of information processing that our brains/minds are capable of.

What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human's normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having such an internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term 'philosophical zombie' comes from) There is no reason to assume that an information processing system that contains information about itself would have to be 'aware' of this information in a conscious sense of having an internal, subjective, mental experience of the information, like how a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of the human operators).

and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.

our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

so the options we are left with in terms of conclusions to draw are:

  1. all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
  2. nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
  3. there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia ('soul'-ism as you might put it, but no 'soul' is required for this conclusion, it could just as easily be termed 'mystery-ism' or 'unknown-ism')

And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.

[–] stigsbandit34z@hexbear.net 22 points 2 years ago (5 children)

I’m no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (i.e. what changes if we determine that life is a simulation?). Like, they’re definitely fun questions, but I just don’t see how they’ll be answered with how much is unknown. We’re talking “how did we get here” type stuff

I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress shrug-outta-hecks

[–] Tommasi@hexbear.net 21 points 2 years ago (4 children)

I don't know where everyone is getting these in depth understandings of how and when sentience arises.

It's exactly the fact that we don't know how sentience forms that makes acting like fucking chatgpt is on the brink of developing it so ludicrous. Neuroscientists don't even know how it works, so why are these AI hypemen so sure they've got it figured out?

The only logical answer is that they don't and it's 100% marketing.

Hoping that computer algorithms built to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.

[–] Wheaties@hexbear.net 19 points 2 years ago

To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allows sentience to arise.

this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn't mean we are close to anything.

Consider the complexity of a sentient, multicellular organism. That's trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that's still more happening in and between those neurons than anything we could realistically model in a programme. Programmers like to reduce that complexity by only looking at the synaptic connections between neurons, ignoring everything else the cells are doing.
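The reduction being described is easy to make concrete. In a typical artificial network, an entire "neuron" (a cell with metabolism, gene expression, dendritic computation, and more) is collapsed into a weighted sum and a squashing function. A minimal sketch (the function name and numbers are illustrative, not from any framework):

```python
# An artificial "neuron": the whole biological cell is reduced to
# a weighted sum of inputs plus a bias, passed through a nonlinearity.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, activation)  # ReLU: the entire "firing" abstraction

# Three "synapses" with fixed weights; everything else a real neuron does is gone.
print(artificial_neuron([1.0, 0.5, -0.2], [0.4, 0.3, 0.9], 0.1))  # ≈ 0.47
```

Everything a model like this captures is in those few lines; the rest of the cell's behavior is simply not represented.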

[–] Dirt_Owl@hexbear.net 18 points 2 years ago (16 children)

You're making a lot of assumptions about the human mind there.

[–] HiddenLayer5@lemmy.ml 58 points 2 years ago* (last edited 2 years ago) (3 children)

They switched from worshiping Elon Musk to worshiping ChatGPT. There are literally people commenting ChatGPT responses to prompt posts asking for real opinions, and then getting super defensive when they get downvoted and people point out that they didn't come here to read shit from AI.

[–] TheCaconym@hexbear.net 43 points 2 years ago (2 children)

I've seen this several times now; they're treating the word-generating parrot like fucking Shalmaneser in Stand on Zanzibar, you literally see redd*tors posting comments that are basically "I asked ChatGPT what it thought about it and here...".

Like it has remotely any value. It's pathetic.

[–] UlyssesT@hexbear.net 29 points 2 years ago (1 children)

They simply have to denigrate living human brains so their treat printers seem more elevated. More special. cringe

[–] VILenin@hexbear.net 30 points 2 years ago (9 children)

One of them also cited fucking Blade Runner.

“You’re mocking people who think AI is sentient, but here’s a made up story where it really is sentient! You’d look really stupid if you continued to deny the sentience of AI in this scenario I just made up. Stories cannot be anything but literal. Blade Runner is a literal prediction of the future.”

Wow, if things were different they would be different!

[–] UlyssesT@hexbear.net 22 points 2 years ago (11 children)

You are all superstitious barbarians, whereas I get my logical rational tech prophecies from my treats smuglord

[–] UlyssesT@hexbear.net 30 points 2 years ago (1 children)

They switched from worshiping Elon Musk to worshiping ChatGPT.

Some worship both now. Look at this euphoric computer toucher:

https://hexbear.net/comment/4293298

Bots already move packages, assemble machines, and update inventory.

ChatGPT could give you a summary of the entire production process. It can replace customer service agents, and support for shopping is coming soon.

Tesla revealed a robot with thumbs. They will absolutely try to replace workers with those bots, including workers at the factory that produces those bots.

Ignoring that because your gut tells you humans are special and always beat the machines in the movies just means you will be blindsided when Tesla fights unionizing workers with these bots. They'll use them to scab the UAW's attempts to get in, and will be working hard to get the humans at the bot factories replaced with the same bots coming out.

[–] WoofWoof91@hexbear.net 37 points 2 years ago (3 children)

ChatGPT could give you a summary of the entire production process

with entirely made up numbers

It can replace customer service agents

that will direct you to a non-existent department because some companies in the training data have one

and support for shopping is coming soon

i look forward to ordering socks and receiving ten AA batteries, three identical cheesegraters, and a leopard

[–] UlyssesT@hexbear.net 21 points 2 years ago

i look forward to ordering socks and receiving ten AA batteries, three identical cheesegraters, and a leopard

If AI Dungeon is any indication, you might get a stack of really gory vampire snuff porn novels too.

[–] UlyssesT@hexbear.net 24 points 2 years ago (1 children)

They're in this thread, too. The very same "look at this hockey stick shaped curve of how awesome the treat printer is. The awesomeness will exponentially rise until the nerd rapture sweeps me away, you superstitious Luddite meat computers" euphoria.

[–] GalaxyBrain@hexbear.net 44 points 2 years ago (28 children)

I'm not really a computer guy but I understand the fundamentals of how they function and sentience just isn't really in the cards here.

[–] boiledfrog@hexbear.net 29 points 2 years ago (10 children)

I feel like only silicon valley techbros think they understand consciousness and do not realize how reductive and stupid they sound

[–] CloutAtlas@hexbear.net 44 points 2 years ago (1 children)

Roko's Basilisk, but it's the snake from the Nokia dumb phone game.

[–] PointAndClique@hexbear.net 19 points 2 years ago (1 children)

... i liked the snake game :(

[–] Abraxiel@hexbear.net 38 points 2 years ago (1 children)

I was gonna say, "Remember when scientists thought testing a nuclear bomb might start a chain reaction enflaming the whole atmosphere and then did it anyway?" But then I looked it up and I guess they actually did calculations and figured out it wouldn't before they did the test.

[–] VILenin@hexbear.net 26 points 2 years ago (1 children)

Might have been better if it did pika-cousin-suffering

No I’m not serious I don’t need the eco-fascism primer thank you very much

[–] mittens@hexbear.net 35 points 2 years ago* (last edited 2 years ago) (3 children)

I think it should be noted that some of the members of the board of OpenAI are literally just techno-priests doing actual techno-evangelism: their jobs literally depend on this new god and the upcoming techno-rapture being perceived as at least a plausible item of faith. It probably works as well as any other marketing strategy, but this is all in the context of Microsoft becoming the single largest company stakeholder in OpenAI, and likely they didn't want their money to go to waste paying a bunch of useless cultists, so they started yanking Sam Altman's chain. The OpenAI board reacted to the possibility of Microsoft making budget calls and ousted Altman; Microsoft swiftly reacted by formally hiring Altman and doubling down. Obviously most employees are going to side with Microsoft, since they're currently paying the bills. You're going to see people strongly encouraged to walk out from the OpenAI board in the upcoming weeks or months, and they'll go down screaming crap about the computer hypergod. You see, these aren't even marketing lines that they're repeating uncritically; it's what some dude desperately latching onto his useless six-figure job is screaming.

[–] ksynwa_from_lemmygrad@hexbear.net 34 points 2 years ago (9 children)

I don't know if Reddit was always like this, but all /r/ subreddits feel extremely astroturfed. /r/liverpoolfc, for example, feels like it is run by the team's PR division. There are a handful of critical posts sprinkled in so redditors can continue to delude themselves into believing they are free-thinking individuals.

Also, this superintelligent thing was doing well on some fifth-grade-level tests according to Reuters' anonymous source, which got OpenAI geniuses worried about an AI apocalypse.

[–] quarrk@hexbear.net 18 points 2 years ago

Reddit has always been astroturfed but it’s clearly increased since 2015 and especially in the last year since their IPO push

[–] CrushKillDestroySwag@hexbear.net 32 points 2 years ago (1 children)

achieved a breakthrough in mathematics

The bot put numbers in a statistically-likely sequence.
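The "statistically-likely sequence" point can be illustrated with a toy bigram model: count which token follows which, then emit the most frequent continuation. This is a deliberate oversimplification (a real LLM conditions on far longer contexts with learned weights), and the corpus and names here are made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny "training corpus" of arithmetic written out as tokens.
corpus = "2 + 2 = 4 . 3 + 3 = 6 . 2 + 2 = 4 .".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    # "Doing math" here is just emitting the most frequent continuation.
    return follows[token].most_common(1)[0][0]

print(most_likely_next("="))  # "4": the most frequent token after "=" in this corpus
```

The model gets "2 + 2 =" right not because it computes anything, but because "4" is the statistically likely next token in what it has seen.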

[–] SkingradGuard@hexbear.net 26 points 2 years ago (2 children)

I swear 99% of reddit libs so-true don't understand anything about how LLMs work.

[–] WholeEnchilada@hexbear.net 25 points 2 years ago (12 children)

The saddest part of all is that it looks like they really are wishing for real life to imitate a futuristic sci-fi movie. They might not come out and say, "I really hope AI in the real world turns out to be just like in a sci-fi/horror movie" but that's what it seems like they're unconsciously wishing for. It's just like a lot of other media phenomena, such as real news reporting on zombie apocalypse preparedness or UFOs. They may phrase it as "expectation" but that's very adjacent to "hopeful."

[–] 2Password2Remember@hexbear.net 24 points 2 years ago (1 children)

372 comments

what

Death to America

[–] MerryChristmas@hexbear.net 22 points 2 years ago* (last edited 2 years ago) (17 children)

He may be a sucker but at least he is engaging with the topic. The sheer lack of curiosity toward so-called "artificial intelligence" here on hexbear is just as frustrating as any of the bazinga takes on reddit-logo. No material analysis, no good faith discussion, no strategy to liberate these tools in service of the proletariat - just the occasional dunk post and an endless stream of the same snide remarks from the usuals.

The hexbear party line toward LLMs and similar technologies is straight up reactionary. If we don't look for ways to utilize, subvert and counter these technologies while they're still in their infancy then these dorks are going to be the only ones who know how to use them. And if we don't interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

[–] Dirt_Owl@hexbear.net 50 points 2 years ago

The sheer lack of curiosity toward so-called "artificial intelligence" here on hexbear is just as frustrating

That's because it's not artificial intelligence. It's marketing.

[–] VILenin@hexbear.net 47 points 2 years ago (1 children)

Oh my god it’s this post again.

No, LLMs are not “AI”. No, mocking these people is not “reactionary”. No, cloaking your personal stance in leftist language doesn’t make it any more correct. No, they are not on the verge of developing superhuman AI.

And if we don't interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

Have you read like, anything at all in this thread? There is no way you can possibly say no one here is “interacting with the underlying philosophical questions” in good faith. There’s plenty of discussion, you just disagree with it.

Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

What the fuck are you talking about? We’re “handing it over to them” because we don’t take their word at face value? Like nobody here has been extremely opposed to the usage of “AI” to undermine working class power? This is bad faith bullshit and you know it.

[–] UlyssesT@hexbear.net 21 points 2 years ago

I see it as low-key crybully shit to come here, dunk on Hexbears and call them names for not being "curious" enough about LLMs, and act like some disadvantaged aggrieved party while also standing closer to the billionaires' current position than anywhere near those they're raging at here.

[–] Tachanka@hexbear.net 32 points 2 years ago (1 children)

The hexbear party line toward LLMs

this is a shitposting reddit clone, not a political party, but I generally agree that people on here sometimes veer into neo-ludditism and forget Marx's words with respect to stuff like this:

The enormous destruction of machinery that occurred in the English manufacturing districts during the first 15 years of this century, chiefly caused by the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin governments of a Sidmouth, a Castlereagh, and the like, a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used.

- Marx, Capital, Volume 1, Chapter 15

However you have to take the context of these reactions into account. Silicon valley hucksters are constantly pushing LLMs etc. as miracle solutions for capitalists to get rid of workers, and the abuse of these technologies to violate people's privacy or fabricate audio/video evidence is only going to get worse. I don't think it's possible to put Pandora back in the box or to do bourgeois reformist legislation to fix this problem. I do think we need to seize the means of production instead of destroy them. But you need to agitate and organize in real life around this. Not come on here and tell people how misguided their dunk tank posts are lol.

[–] UlyssesT@hexbear.net 27 points 2 years ago (14 children)

The sheer lack of curiosity toward so-called "artificial intelligence" here on hexbear

Define what you mean by "curiosity." Is it also a "lack of curiosity" for people to dunk on and heckle NFT peddlers instead of entertaining their proposals?

is just as frustrating

Even at its extremes that I don't agree with myself, I disagree here. No, it is not just as frustrating.

No material analysis

Then bring some. Don't just say Hexbears suck because they're not "curious" enough about the treat printers.

is straight up reactionary

And hating on leftists in favor of your unspecified "curiosity" position is what exactly by comparison?

Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance

What does your "curiosity" propose that is actual resistance and not playing into their hands or even buying into the marketing hype?

[–] Wheaties@hexbear.net 24 points 2 years ago

Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

As it stands, the capitalists already have the old means of information warfare -- this tech represents an acceleration of existing trends, not the creation of something new. What do you want from this, exactly? Large language models that do a predictive text -- but with filters installed by communists, rather than the PR arm of a company? That won't be nearly as convincing as just talking and organizing with people in real life.

Besides, if it turns out there really is a transformational threat, that it represents some weird new means of production, it's still just a programme on a server. Computers are very, very fragile. I'm just not too worried about it.

[–] GalaxyBrain@hexbear.net 19 points 2 years ago (5 children)

It's not a new means of production; it's old as fuck. They just made a bigger one. The fuck is ChatGPT or AI art going to do for communism? Automating creativity and killing the creative part is only interesting as a bad thing from a left perspective. It's dismissed because it deserves dismissal: there's no new technology here, it's a souped-up chatbot that's been marketed as something else.

As far as machines being conscious, we are so far away from that as something to even consider. They aren't, and can't spontaneously gain free will. It's inputs and outputs based on predetermined programming. Computers literally cannot do anything non-deterministic; there is no ghost in the machine, the machine is just really complex and you don't understand it entirely. If we get to the point where a robot could be seen as sentient, we have fucking Star Trek TNG. They did the discussion and solved that shit.
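The determinism point is demonstrable at small scale: even the apparent "randomness" in a program's output is a pseudorandom sequence that replays identically from the same seed. A sketch (the function and vocabulary are made up for illustration, not from any real model):

```python
import random

def sample_tokens(seed, vocab=("yes", "no", "maybe")):
    # A fixed seed makes the "random" choices fully predetermined.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(5)]

# Identical inputs produce identical outputs, every single run.
print(sample_tokens(42) == sample_tokens(42))  # True
```

The same holds for sampling in deployed models: given the same weights, inputs, and seed, the outputs replay exactly.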

[–] aaaaaaadjsf@hexbear.net 22 points 2 years ago

Redditors straight up quote marketing material in their posts to defend stuff, it's maddening. I was questioning a gamer on Reddit about something in a video game, and in response they straight up quoted something from the game's official website. Critical thinking and media literacy are dead online I swear.

[–] Monk3brain3@hexbear.net 20 points 2 years ago (3 children)

Whenever the tech industry needs a boost some new bullshit comes up. Crypto, self driving and now AI, which is literally called AI for marketing purposes, but is basically an advanced algorithm.

load more comments (3 replies)
[–] Hider9k@lemm.ee 19 points 2 years ago

Shit can’t even do my homework right.

[–] CrimsonSage@hexbear.net 18 points 2 years ago* (last edited 2 years ago) (3 children)

Honestly, I don't know that you can have non-embodied consciousness. These people are acting like they are replicating a thing that already exists, but that's not the case. Everything we have ever come across that shows sentience, however you define it, is an organism of a sort, and consciousness is an adaptive aspect of that organism. I feel like asking a computer to be conscious is like asking a lake to talk. This isn't to say that I don't think it's possible for us to build synthetic consciousness in theory, only that what these people are marveling at is orders of magnitude more simple than the simplest of organisms. Our consciousness is FUNCTIONAL; it serves a purpose as part of our totality. To separate that from being embodied (and even a brain in a jar is still embodied) seems like it's missing the point. Comparing a software program that can just be slapped on whatever computer you want to human consciousness is a category error.
