YourNetworkIsHaunted

joined 2 years ago


Can we talk about the tamagotchi feature they were looking to add for April 1? Because apparently it needed a little friend, but also with gacha mechanics, because we live in hell?

The classic 40k catch-22: either it doesn't do what you're claiming it does, in which case you're a heretic lying to the inquisition OR it does and you're summoning the spirits of the dead like a necromancer heretic.

[–] YourNetworkIsHaunted@awful.systems 7 points 1 day ago* (last edited 1 day ago)

Yeah, letting the intrinsically insecure RNG recursively rewrite its own security instructions definitely can't go wrong. I mean, they limited it to only do so when the users asked nicely!

Edit to add:

The more I think about it, the more it speaks to Anthropic having an absolute nonsense threat model, one more concerned with the science fiction doomsday AI "FOOM" than with any of the harms that these systems (or indeed any information system) can and will do in the real world. The current crop of AI technologies, while operating at a terrifying scale, are not unique in their capacity to waste resources, reify bias and inequality, misinform, justify bad and evil decisions, etc. What is unique, in my estimation, is both the massive scale at which these things operate despite the incredible costs of doing so and their seeming immunity to being reality-checked on this. No matter how many times the warning bells ring about these systems' vulnerability to exploitation, the destructive capacity of AI sycophancy and psychosis, or the simple inability of the electrical infrastructure to support their intended power consumption (or at least their declared intent; in a bubble we shouldn't assume they actually expect to build that much), the people behind these systems continue to focus their efforts on "how do we prevent Skynet" over any of it.

Thinking in the context of Charlie Stross' old talk about corporations as "slow AI," I wonder if some of the concern comes, either explicitly or implicitly, from an awareness that "keep growing and consuming more resources until there's nothing left for anything else, including human survival" isn't actually a deviation from how these organizations are building these systems. It's just the natural conclusion of the same structures and decision-making processes that lead them to build these things in the first place and ignore all the incredibly obvious problems. They could try to address these concerns at a foundational or structural level instead of just appending increasingly complex forms of "please don't murder everyone or ignore the instructions to not murder everyone" to the prompt, but doing that would imply that they need to radically change their entire course up to this point, and increasingly that doesn't appear likely to happen unless something forces it to.

[–] YourNetworkIsHaunted@awful.systems 7 points 1 day ago (1 children)

The grand irony is I'm not even sure most people click on or read this sort of stuff. I don't think it's often even created to be read by anyone. I think it's created as a sort of swaddling fan fiction for MBAs, advertisers, event sponsors and sources, so they can tune out ethical quibbles and feel good about how clever they are.

Every time someone hypes up Steve Jobs' "reality distortion field" this is what they're actually talking about whether they realize it or not.

I was interested enough, based on this, that I tracked down a few of his other pieces. This one felt like a good take for an era where these things are being used for more than just slop generation despite the underlying flaws not being resolved.

It felt very much like the devil's bargain of online media. Like, you can have your prestige journalism as a treat, but only if the slop flows fast enough that we need clout more than eyeballs.

Ironically I think it's also been discussed most frequently within Rationalist circles that these types of intelligence aren't often correlated. I'm not going to chase down links right now because doing an SSC archive exploration requires more mental fortitude than I currently possess, but I distinctly remember that a recurring theme was "if nerds are so smart why don't they rule the world?" In my less cynical days I had assumed that his confusion on this point was largely rhetorical, intended to illustrate some part of whatever point was buried in the beigeness. Now it seems like I was falling victim to the ability to project whatever tangentially-related thesis you want onto the essay and find supporting arguments because of how badly it's written.

Fuck it, I'm good to Gonch this out.

I forget, did we ever actually learn who the killer was in Murder at Wizard University? I remember it kept coming up through the first book as a kind of motif for how this new world wasn't necessarily as safe and clean as Tommy expected, but I think that whole business with the Thoughtknot ended up overshadowing it before the actual killer was revealed. Like, I get it thematically or whatever but it just stuck in my head as a loose thread and has bugged me for years.

[–] YourNetworkIsHaunted@awful.systems 10 points 3 days ago (2 children)

On a purely rhetorical point, it seems like the whole counterargument from Gwern is just an argument-by-disorganization or something to that effect. He doesn't actually challenge the factual information presented, but does shift how those facts are framed and what the actual contention is in the background, and then avoids actually engaging with the new contention from the bottom up.

In a lot of discussions with singularity cultists (both pros and antis), they assume that a true superintelligence would render the whole universe deterministically predictable to a sufficient degree to allow it to basically do magic. This is how the specifics of "how and why does the AI kill all humans again?" tend to be elided, for example. This same kind of thinking is also at the heart of their obsession with "superpredictors" who can, it is assumed, use some kind of trick to beat this kind of mathematical limit on certainty (this is the part where I say something about survivorship bias). In the context of that discussion, the fact that a relatively simple arrangement of components following relatively simple, deterministic rules is still not meaningfully predictable past a dozen or so sequential events, due to the magnification of the inevitable error in our understanding of the initial circumstances, is a logical knockout.

Rather than engage with this, however, Gwern and his compatriots in the thread focus on the tangent about how high-level pinball players are able to control for that uncertainty by avoiding the region of the board where those error-magnifying parts are. But this is not the same argument, and it raises the question of whether those high-chaos areas are always as avoidable as they are in a pinball machine. Rather than engage with that question, Gwern doubles down on the pinball analogy, shifting the question even further, from "how well can we predict the deterministic motion of a ball given the inevitable uncertainty of our initial state" to "how many ways can we convince a third party we've gotten a high score on a pinball machine." At this point we're not just moving the goalposts, we've moved the entire stadium into low earth orbit and gotten real cute about whether we're playing 🏈 or ⚽ football.

And given the conversation surrounding the thread and these topics on LW I'm not even going to assume that such a wild shift is the result of bad faith instead of simple disorganization and sloppiness of rhetoric. This is what happens to a community that conflates "it makes me feel smart" with "it actually communicates the point effectively".

[–] YourNetworkIsHaunted@awful.systems 8 points 3 days ago* (last edited 3 days ago) (2 children)

A) At this point I would be more surprised to learn that AI psychosis wasn't infecting the upper tiers of the White House, tbh. Like, at this point we could get a leak that Hegseth had been developing a literal god complex alongside his LLM mistress and I wouldn't bat an eye.

B) It seems like a particularly bad sign that this is coming from the Saudis, given that they've been a consistent ally that the US has spent a lot of material resources and political capital to support. Edit: not actually an official Saudi government source. When you assume you make an ass of yourself, etc.

[–] YourNetworkIsHaunted@awful.systems 4 points 3 days ago (2 children)

I mean it looks kinda swastikesque imo, especially with the ambiguity over whether it's supposed to be one or two "I"s behind it. (In some cases it's FII with the second I split, and sometimes it's FIIInstitute with the top of the second and bottom of the third "I" visible).

I doubt they have the individual or institutional capacity to go after them in a timely and competent fashion, but there's plenty of time before August for someone to remind them about it, especially since this was a way for Anthropic and friends to reclaim some positive space in the news cycle. I can see some bad news for the bubble and/or war hitting in, say, June and causing Amodei to break out the "we stood up to trump" story again, which will in turn remind the dodderer-in-chief that they were gonna try and do something about that guy.


Apparently we get a shout-out? Sharing this brings me no joy, and I am sorry for inflicting it upon you.


I don't have much to add here, but I know when she started writing about the specifics of what Democrats are worried about being targeted for their "political views" my mind immediately jumped to members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.
