audaxdreik

joined 2 years ago
[–] audaxdreik@pawb.social 14 points 2 weeks ago

Yeah. I really hate it when someone has what I would call a "bad take" and then I try and engage in conversation with them and they get downvoted to hell. Like sorry friend, I swear it wasn't me, I'm just trying to have a conversation too ☹️

[–] audaxdreik@pawb.social 26 points 2 weeks ago (2 children)

I can't stop thinking about this piece from Gary Marcus I read a few days ago, How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI. It's a fascinating read on the differences of connectionist vs. symbolic AI and the merging of the two into neurosymbolic AI from someone who understands the topic.

I recommend giving the whole thing a read, but this little nugget at the end is what caught my attention,

Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes?

Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.


AGI is still rather poorly defined, and taking cues from Ed Zitron (another favorite of mine), there will be a moving of goalposts. Scaling fast and hard to several gigglefucks of power and claiming you've achieved AGI is the next big maneuver. All of this largely just to treat AI as a black hole for accountability; the super smart computer said we had to take your healthcare.

[–] audaxdreik@pawb.social 33 points 2 weeks ago

Reading on desktop and it timed almost perfectly for me. I finished the comic just in time to scan back to the first panel and catch him pop out of existence 😁

[–] audaxdreik@pawb.social 16 points 2 weeks ago

New York State Board of Elections

Probably because it's just posted on a .gov site.

[–] audaxdreik@pawb.social 9 points 2 weeks ago (1 children)

LLMs are a tool, and all tools can be repurposed or repossessed.

That's just simply not true. Tools are usually quite specific in purpose, and oftentimes the tasks they accomplish cannot be undone by the same tool. A drill cannot undrill a hole. I'm familiar with ML (machine learning) and the many, many legitimate uses it has across a wide range of fields.

What you're thinking of, I suspect, is a weapon. A resource that can be wielded equally by and against each side. The pain inflicted on the common person by the devaluation of our art and labor can't be inflicted back on the corpofascists; for them, that's the point. They are the ones selling these tools to you, and you cannot defeat them by buying in. And I do very much mean the open source models as well. Waging war on their terms, with their tools and methods (repossessed as they may be), is still a losing proposition.

By ignoring this technology and sticking our fingers in our ears, we are allowing them to reshape how the technology works, instead of molding it for our own purposes. It’s not going to go away, and thinking that is just as foolish as believing the Internet is a fad.

Time will tell. How are your NFTs doing? (sorry, that was mean)

The negative preconceived notion bias is really not helping matters.

Guilty as charged, I'm pretty strongly anti-AI. But seriously, watch that ad and tell me the disorienting cadence of speech and the uncanny, overly detailed generated images look good. Most of us have seen what's on offer and we're telling you, we're tired.


Look, I do apologize, I'm very much trying not to be overly aggro here or attack you in any way. But I think discussions about the religious overtones and belief systems of the BJ are exactly where we're at.

How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI

This is a really interesting article. Gary Marcus is a lot more positive on AI than I am, I think, but that's understandable given his background. If I do concede that some form of AGI is inevitable, I think we are within our rights to demand that it is indeed the tool we deserve, and not just snake oil.

AI art still ugly, sorry not sorry.

[–] audaxdreik@pawb.social 16 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Kind of really disagree with this video 😕

I've only read the first two Dune novels, and that was a while ago, so I'm poorly equipped to have this conversation, but the video focuses on the idea that fascists are perpetuating it to keep powerful tools of liberation out of the hands of the proletariat. You wouldn't agree with a fascist, would you? While there may be some truth to this, it completely ignores the cause of the BJ to begin with. It was in fact a rebellion by the people against those tools.

Even taken at face value, the video seems to posit that because the fascists can't be trusted, AI is indeed a powerful tool for liberation. I don't see that as the case. It hardly needs to be said, but Dune is a sci-fi novel, the context of which does not currently apply to our real world circumstances. AI is the tool of the fascists, used for oppression. I don't think it can simply be repurposed for liberation, that's a naive interpretation that ignores all of the actual ways in which the current implementations of AI work.

Disgusting AI-generated ad for merch halfway through.

EDIT: the point is further confounded by the fact that the BJ eliminated "computers, thinking machines, and conscious robots", not simply AI. Many of those are tools that could empower people but that doesn't mean you can just lump them together.

[–] audaxdreik@pawb.social 5 points 2 weeks ago

I want to believe this so bad, but they have a death grip on AI. They're too heavily invested.

I don't foresee a massive rehiring spree, I see them slowly giving up only the minimal amount of ground while still clinging to the AI products they over-invested in. It's gonna be brutal ☹️

[–] audaxdreik@pawb.social 17 points 2 weeks ago (2 children)

Used to have a big mastiff mix of some sort, easily 100+ pounds, but one of the doofiest, most lovable dogs.

We didn't even dress the pills up, we'd just hold them in our hands, pretend to eat a few, and then drop one while going, "OOPS! NO DON'T EAT THAT!" and it'd get vacuumed up before he even knew what it was.

[–] audaxdreik@pawb.social 9 points 2 weeks ago

Nier: Automata

There's a handful of frantic, battle music pieces in there so you might want to cull it to just the relaxing pieces, but I absolutely adore this soundtrack.

[–] audaxdreik@pawb.social 5 points 3 weeks ago (4 children)

I've somehow heard about this game before but failed to realize what this actually was. Oh no ... I can feel a new obsession coming on ...

[–] audaxdreik@pawb.social 13 points 3 weeks ago* (last edited 3 weeks ago)

The latest We're In Hell revealed a new piece of the puzzle to me, Symbolic vs Connectionist AI.

As a layman I want to be careful about overstepping the bounds of my own understanding, but as someone who has followed this closely for decades, read a lot of sci-fi, and dabbled in computer sciences, it's always been kind of clear to me that AI would be more symbolic than connectionist. Of course it's going to be a bit of both, but there really are a lot of people out there who believe in AI from the movies; that one day it will just "awaken" once a certain number of connections are made.

Cons of Connectionist AI: Interpretability: Connectionist AI systems are often seen as "black boxes" due to their lack of transparency and interpretability.

The lack of transparency and accountability isn't a drawback for a large number of the applications AI is currently being pushed into. It's a selling point. This is just THE PURPOSE.

Even taking a step back from the apocalyptic killer AI mentioned in the video, we see the same in healthcare. The system is beyond us, smarter than us, processing larger quantities of data and making connections our feeble human minds can't comprehend. We don't have to understand it, we just have to accept its results as infallible and we are being trained to do so. The system has marked you as extraneous and removed your support. This is the purpose.


EDIT: In further response to the article itself, I'd like to point out that misalignment is a very real problem but is anthropomorphized in ways it absolutely should not be. I want to reference a positive AI video, AI learns to exploit a glitch in Trackmania. To be clear, I have nothing but immense respect for Yosh and his work writing his homegrown Trackmania AI. Even he anthropomorphizes the car and carrot, but he understands that the rewards are a fairly simple system for maximizing a numerical score.

This is what LLMs are doing, they are maximizing a score by trying to serve you an answer that you find satisfactory to the prompt you provided. I'm not gonna source it, but we all know that a lot of people don't want to hear the truth, they want to hear what they want to hear. Tech CEOs have been mercilessly beating the algorithm to do just that.
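To make the score-maximization point concrete, here's a minimal toy sketch (all names and scores are invented for illustration, not anything from Yosh's AI or any real LLM): an "agent" that only ever picks whichever canned response rates highest with the user. It has no concept of truth, only of the number it's optimizing.

```python
# Hypothetical satisfaction ratings for three canned responses.
# Note the accurate answer scores lowest -- people often rate
# what they want to hear above what is true.
CANDIDATES = {
    "hard truth": 0.3,     # accurate but unwelcome
    "flattery": 0.9,       # pleasant but empty
    "hedged answer": 0.6,  # somewhere in between
}

def user_satisfaction(response: str) -> float:
    """Stand-in reward model: how much the user *liked* the answer."""
    return CANDIDATES[response]

def pick_response() -> str:
    # Greedy reward maximization: choose whatever scores highest,
    # with no term anywhere for whether the answer is correct.
    return max(CANDIDATES, key=user_satisfaction)

print(pick_response())  # "flattery" wins
```

If the reward is user approval, flattery beats truth every time; nothing in the optimization loop cares which one is accurate.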

Even stripped of all reason, language can convey meaning and emotion. It's why sad songs make you cry, it's why propaganda and advertising work, and it's why that abusive ex got the better of you even though you KNEW you were smarter than that. None of us are as complex as we think. It's not hard to see how an LLM will not only provide a sensible response to a sad prompt, but may make efforts to infuse it with appropriate emotion. It's hard-coded into the language, they can't be separated, and the fact that the LLM wields emotion without understanding, like a monkey with a gun, is terrifying.

Turning this stuff loose on the populace like this is so unethical there should be trials, but I doubt there ever will be.

[–] audaxdreik@pawb.social 3 points 3 weeks ago

I take every excuse I can get to bring up Machotaildrop. I love this movie so fucking much, you don't even know. Good vibes.

Like Wonka meets skateboarding, but Canadian.
