Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 30 points 1 week ago* (last edited 1 week ago) (5 children)

It feels more like a toy project that snowballed, fueled by ideology and get-rich-quick schemes.

[–] Architeuthis@awful.systems 12 points 1 week ago* (last edited 1 week ago) (3 children)

How do these people delude themselves into thinking that the dogshit they’re eating is good?

They think it's just that they're early, like they did with bitcoin. Maybe in six months the dogshit will start to taste great, who's to say, and so on and so forth.

Also, software engineers in the USA often make absurdly more than $1K/week.

[–] Architeuthis@awful.systems 15 points 1 week ago (10 children)

The common clay of the new west:

Transcription of a Twitter post from @BenjaminDEKR:

“OpenClaw is interesting, but will also drain your wallet if you aren't careful. Last night around midnight I loaded my Anthropic API account with $20, then went to bed. When I woke up, my Anthropic balance was $0. Opus was checking "is it daytime yet?" every 30 minutes, paying $0.75 each time to conclude "no, it's still night." Doing literally nothing, OpenClaw spent the entire balance. How? The "Heartbeat" cron job, even though literally the only thing I had going was one silly reminder ("remind me tomorrow to get milk")”

Continuation of the Twitter post:

“1. Sent ~120,000 tokens of context to Opus 4.5
  2. Opus read HEARTBEAT.md, thought about reminders
  3. Replied "HEARTBEAT_OK"
  4. Cost: ~$0.75 per heartbeat (cache writes)

The damage:

  • Overnight = ~25+ heartbeats
  • 25 × $0.75 = ~$18.75 just from heartbeats alone
  • Plus regular conversation = ~$20 total

The absurdity: Opus was essentially checking "is it daytime yet?" every 30 minutes, paying $0.75 each time to conclude "no, it's still night."

The problem is:

  1. Heartbeat uses Opus (most expensive model) for a trivial check
  2. Sends the entire conversation context (~120k tokens) each time
  3. Runs every 30 minutes regardless of whether anything needs checking

That's $750 a month if this runs, to occasionally remind me stuff? Yeah, no. Not great.”
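The quoted arithmetic checks out, and if anything understates the problem. A quick back-of-the-envelope sketch (figures are the tweet's own, not Anthropic's actual rate card):

```python
# Back-of-the-envelope check of the heartbeat costs quoted above.
# All figures come from the tweet, not from real Anthropic pricing.
cost_per_heartbeat = 0.75   # dollars: ~120k cached tokens sent to Opus
interval_minutes = 30       # heartbeat cron fires every half hour
hours_overnight = 12.5      # roughly midnight to waking, per the tweet

heartbeats = int(hours_overnight * 60 / interval_minutes)
overnight_cost = heartbeats * cost_per_heartbeat

# If the heartbeat ran around the clock for a 30-day month:
monthly_cost = (24 * 60 / interval_minutes) * 30 * cost_per_heartbeat

print(heartbeats)      # 25
print(overnight_cost)  # 18.75
print(monthly_cost)    # 1080.0
```

Run continuously, 48 heartbeats a day comes to ~$1,080/month, so the tweet's "$750 a month" is actually on the conservative side.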
[–] Architeuthis@awful.systems 13 points 1 week ago (6 children)

Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation

You see, the problem with Epstein was that maybe he made some sparkly elites feel unsafe after the fact. Maybe all those fourteen-year-olds should have just known better, dontcha think?

[–] Architeuthis@awful.systems 11 points 1 week ago* (last edited 1 week ago) (1 children)

Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation

Either deliberately whitewashing the situation or completely missing the point of why people are mad at Epstein, Yud really can't help himself.

edit: Or, given the timeline and the fact that 'prison time for soliciting a 14 year old' was at the top of Epstein's wiki as early as 2016, he's explicitly saying they didn't mind that part with $300k on the line.

[–] Architeuthis@awful.systems 10 points 1 week ago

It's possible it just means the responses aren't vetted by a lawyer, and will be revised as necessary.

[–] Architeuthis@awful.systems 4 points 2 weeks ago* (last edited 2 weeks ago)

I’m planning on using this data to catalog “in the wild” instances of agents resisting shutdown, attempting to acquire resources, and avoiding oversight.

He'll probably do this by running an agent that uses a chatbot with the playwright mcp to occasionally scrape the site, then feed that to a second agent who'll filter the posts for suspect behavior, then to another agent to summarize and create a report, then another agent which decides if the report is worth it for him to read and message him through his socials. Maybe another agent with db access to log the flagged posts at some point.

All this will be worth it to no one except the bot vendors.

[–] Architeuthis@awful.systems 4 points 2 weeks ago (1 children)

Yeah, he's the guy who wrote the Harry Potter fanfic

[–] Architeuthis@awful.systems 5 points 2 weeks ago (3 children)

How did molt become a term of endearment for agents? I read in the pivot thread that clawdbot changed its name to moltbot because anthropic got ornery.

[–] Architeuthis@awful.systems 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Rationalism, among other things, is supposed to cure you of being a piece of shit; in fact it's such a flawless epistemic hack that it's a common belief among them that it's impossible for two sufficiently rationalist individuals to disagree, i.e. to come to different conclusions from the same assumptions. So if you think some rat influencer is full of shit, it could in fact be you who hasn't yet attained the appropriate level of ~~thetans~~ rationalism.

So yeah they'll just have the superbabies read the sequences and Harry Potter fan fiction.

Also, any talk of alignment should be seen in light of them being mostly ok with humanity going poof by this time next week, as long as the stage is set for whatever technological facsimile of consciousness they deem reasonably human-like to inherit the cosmos.

[–] Architeuthis@awful.systems 11 points 2 weeks ago* (last edited 2 weeks ago)

It's so blindingly obvious that it's become obscure again, so it bears pointing out: someone really went ahead and named a tech company after a fantasy torment nexus, and people thought it wouldn't be sketch.

[–] Architeuthis@awful.systems 9 points 2 weeks ago

A religion is just a cult that survived its founder -- someone, at some point.


edited to add tl;dr: Siskind seems ticked off because recent papers on the genetics of schizophrenia are increasingly pointing out that at current minuscule levels of prevalence, even with the commonly accepted 80% heritability, actually developing the disorder is all but impossible unless at least some of the environmental factors are also in play. This is understandably very worrisome, since it indicates that even high-heritability issues might be solvable without immediately employing eugenics.

Also notable because I don't think it's very often that eugenics grievances breach the surface in such an obvious way in a public siskind post, including the claim that the whole thing is just HBD denialists spreading FUD:

People really hate the finding that most diseases are substantially (often primarily) genetic. There’s a whole toolbox that people in denial about this use to sow doubt. Usually it involves misunderstanding polygenicity/omnigenicity, or confusing GWAS’ current inability to detect a gene with the gene not existing. I hope most people are already wise to these tactics.


... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.


Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’


original is here, but you aren't missing any context, that's the twit.

I could go on and on about the failings of Shakespeare... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespeare wrote, almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

edited to add: this seems to be an excerpt from the fawning book the Big Short/Moneyball guy wrote about him that was recently released.
