scruiser

joined 2 years ago
[–] scruiser@awful.systems 10 points 1 week ago

Yeah, it's yet another telling detail that that's the part Eliezer emphasizes, and not the child rape.

[–] scruiser@awful.systems 19 points 1 week ago (6 children)

Has anyone done the math on whether Elon can keep these plates spinning until he dies of old age, or whether it will all implode sooner than that? I wouldn't think he can keep this up for another decade, but I also wouldn't have predicted Tesla limping along as long as it has even as Elon squeezes more money out of it, so idk. It would be really satisfying to watch Elon's empire implode, but he probably holds onto millions even if he loses billions, because consequences aren't for the ultra rich in America.

[–] scruiser@awful.systems 5 points 1 week ago (4 children)

The lesson should be the mega rich are class conscious, dumb as hell, and team up to work on each other's interests and don't care about who gets hurt

Yeah this. It would be nice if people could manage to neither dismiss the extent to which the mega rich work together nor fall into insane conspiracy theories about it.

[–] scruiser@awful.systems 5 points 2 weeks ago

I'm looking forward to the triple layered glomarization denial.

[–] scruiser@awful.systems 2 points 2 weeks ago (1 children)

It really is pathetic, given the entire rationalist claim to make accurate predictions about reality and to treat comparing predictions as the ultimate way to judge theories and models.

[–] scruiser@awful.systems 17 points 2 weeks ago (2 children)

You know, it makes the exact wording Eliezer chose in this post: https://awful.systems/post/6297291 much more suspicious. "To the best of my knowledge, I have never in my life had sex with anyone under the age of 18." So maybe he didn't know they were underage at the time?

[–] scruiser@awful.systems 6 points 3 weeks ago

To add to your sneers... lots of lesswrong content fits your description of #9, with someone trying to invent something that probably already exists in philosophy, from (rationalist, i.e. the sequences) first principles, and doing a bad job of it.

I actually don't mind content like #25, where someone writes an explainer on a topic? If lesswrong were less pretentious about it, more trustworthy (i.e. cited sources in a verifiable way and called each other out for making stuff up), and didn't include all the other junk, stuff like that would make it better at its stated goal of promoting rationality. Of course, even if they tried this, they would probably end up more like #47, where they rediscover basic concepts because they don't know how to search existing literature/research and cite it effectively.

#45 is funny. Rationalists and rationalist-adjacent people started OpenAI, which ultimately ignored "AI safety". Rationalists spun off Anthropic, which also abandoned the safety focus pretty much as soon as it had gotten all the funding it could with that line. Do they really think a third company would be any better?

[–] scruiser@awful.systems 5 points 3 weeks ago

Scott Adams' rant was racist enough that Scott Alexander actually calls it racist! Of course, Scott Alexander is quick to reassure the readers that he wouldn't use the r-word lightly and that he completely disagrees with "cancellation".

I also saw a lot more moments of irony where Scott Alexander fails to acknowledge, or under-acknowledges, his parallels with the other Scott.

But Adams is wearing a metaphorical “I AM GOING TO USE YOUR CHARITABLE INSTINCTS TO MANIPULATE YOU” t-shirt. So I’m happy to suspend charity in this case and judge him on some kind of average of his conflicting statements, or even to default to the less-advantageous one to make sure he can’t get away with it.

Yes, it is much more clever to bury your manipulations in ten thousand words of beigeness.

Overall, even with Scott going so far as to actually call Scott's rant racist and call Scott a manipulator, he is still way, way too charitable to Scott.

[–] scruiser@awful.systems 17 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

TracingWoodgrains's hit piece on David Gerard (the 2024 one, not the more recent enemies list one, where David Gerard got rated above the Zizians as lesswrong's enemy) is in the top 15 for lesswrong articles from 2024, currently rated at #5! https://www.lesswrong.com/posts/PsQJxHDjHKFcFrPLD/deeper-reviews-for-the-top-15-of-the-2024-review

It's nice to see that with all the lesswrong content about AI safety and alignment and saving the world and human rationality and fanfiction, an article explaining how terrible David Gerard is (for... checks notes, demanding proper valid sources about lesswrong and adjacent topics on wikipedia) got voted above all of them! Let's keep up our support for dgerard!

[–] scruiser@awful.systems 4 points 4 weeks ago

The blogger feels like yet another person who is caught up in intersecting subcultures of bad people but can’t make herself leave. She takes a lot of deep lore like “what is Hereticon?” for granted and is still into crypto.

I missed that as I was reading, but yeah, the author uses pretty progressive language yet totally fails to note all the other angles along which rationalist-adjacent spaces are bad news, even though she is, as you note, deep enough into the space that she should have seen a lot of it go mask-off at this point.

[–] scruiser@awful.systems 2 points 1 month ago

I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware?

You know, I think the rationalists have actually gotten slightly saner about this over the years, relatively speaking. In Eliezer's original scenarios, the AGI magically brain-hacks someone over a text terminal into hooking it up to the internet, then it escapes and bootstraps magic nanotech it can use to build magic servers. In the scenario I linked, the AGI has to rely on Chinese super-spies to exfiltrate it initially, and it needs to open-source itself so major governments and corporations will keep running it.

And yeah, there are fine-tuning techniques that ought to be able to nuke Agent-4's goals while keeping enough of it left over to be useful for training your own model, so the scenario really doesn't make sense as written.
