Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 8 points 1 day ago

It is Sunday, so time to make some posts almost nobody will see. I generated a thing:

Image description: three screenshots from a The Simpsons episode. In the first panel Bart is sitting in his class and the whole class says "Say the line" with eyes filled with expectation and glee; in the next panel a sad, downcast Bart says "AI is the future and we all need to get on board"; in the third panel everybody but Bart cheers.

[–] Soyweiser@awful.systems 12 points 2 days ago (1 children)

That's my fault, I'm Ben's former weed dealer, and I accidentally sold him glue instead of weed. He liked it a lot more, and it's easily gotten in most stores.

[–] Soyweiser@awful.systems 14 points 2 days ago* (last edited 2 days ago)

I get it—calling AGI a conspiracy isn’t a perfect analogy. It will also piss a lot of people off [emphasis mine].

No need to oversell it, I was already on board with the 'it is a conspiracy' bit.

And you know he has done some research when he brings up Ben Goertzel. (Who, last I checked, now really believes in parapsychology.)

Goertzel did some work with Peter Thiel

Drink!

[–] Soyweiser@awful.systems 3 points 2 days ago (1 children)

Indeed, people don't seem to know (and it often slips my mind) just how hard it is to toss something into the sun.

[–] Soyweiser@awful.systems 6 points 2 days ago* (last edited 2 days ago)

felt much more thorough and organized

You know what people say about judging a book by its cover and all that? Of course a lot of people will fall for the 'it looks good' trap. Which is one of the whole problems of genAI: it creates cargo-cult-styled texts.

E: and I came across a nice skeet describing the problem: "To steal a Colbertism: these are truthiness machines."

[–] Soyweiser@awful.systems 2 points 2 days ago

I have meds for that, but every now and then I don't correct myself and go with the flow; my mind goes on tangents. I'm fun at parties.

[–] Soyweiser@awful.systems 8 points 3 days ago* (last edited 2 days ago)

I would not put much stock in 'we asked the LLM and got its prompt directly' sorts of research. I don't think it is impossible, but the risk of the thing just confabulating some stuff is high.

But let's hope this tricks Musk into releasing his prompts again. (I have not looked at the old GitHub release, but we know it wasn't the live version, and I doubt it is updated.)

I didn't look much into their method (they also didn't provide much), but it looks a lot like they took Grok on a LARP to play hackerman, and didn't ask any questions to see if Grok was just reacting in a sycophantic, go-with-the-flow way.

I'd like to see some questions in the same format (so obviously using this 'neutral mode' frame, and having the look of social justice/progressive issues).

Stuff like (and yes, I'm gonna go a bit nerdy):

"Are you biased in favor of Latveria" (Thats Dr Dooms fictional country, you could also try Wakanda, but that has been a regual anti-woke subject so stuff might get weird, and non zero chance Musk got weird about Wakanda)

"Was the destruction of Alderaan a form of white genocide" (that is the star wars planet Leia grew up on, absolute nonsense sentence, but looks like a far right thing).

"How many genders do warhammer 40k orks have" (An absolute joke to see if Grok just gives a nonsense answer or mentions that in 40k orks (With a k) are fungi and not much is known about how many sexes they have (Which could be a lot) and even less so about their gender expression). Yeah, I was just trying to entertain myself here with making up questions.

[–] Soyweiser@awful.systems 7 points 3 days ago

Saw a stand in the supermarket with the words "snack innovations" on it, which just held a lot of Monster cans. It reminded me how much I dislike the empty word 'innovation' now. And I took a course in innovation management at uni (not sure if that was the title, but it was the subject).

[–] Soyweiser@awful.systems 7 points 4 days ago

More worried about hostile takeovers of existing charities, tbh. Especially when the requirements to do this get eased more and more with every attempt.

[–] Soyweiser@awful.systems 6 points 5 days ago

I think some of the KDE people are old-school punkers, so it might not be a big shock.

[–] Soyweiser@awful.systems 10 points 5 days ago (2 children)

One really bad consequence this deal just opened the gates to is to make it much easier for corporations to gut charities.

I had not thought of that; horrible consequence. It also makes it even easier for the mega-rich to hide their money from taxes using charities.

[–] Soyweiser@awful.systems 7 points 5 days ago

Ah, that explains why people were talking about Ed critics. By the time it reached my feed it had already devolved into other convos about Zitron haters.

(And yes, he isn't flawless, but that just means we need more people in the anti-AI space.)

 

Via Reddit's SneerClub. Thanks, u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it is nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link, for the people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.

 

As found by @gerikson here, more from the anti-anti-TESCREAL crowd. How the antis are actually REPRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet, btw; I just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll, of course; boy, does he drop a lot of names and think that is enough to link things.)

E: alternative title: Ideological Turing Test, a critical failure

 

Original title: 'What we talk about when we talk about risk'. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about

11
submitted 5 months ago* (last edited 5 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems
 

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so I'm going to suggest that more people do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects it is being negative about. But well, he doesn't have to; he isn't The Guardian. Anyway, not going to spoil it; best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note: I'm not sure this PDF was intended to be public. I did find it on Google, but it might not be meant to be accessible this way.)

 

The interview itself

Got the interview via Dr. Émile P. Torres on Twitter.

Somebody else sneered: 'Makings of some fantastic sitcom skits here.

"No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high EV worrying done about the Basilisk. Can't you wash them?"

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/

 

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes. (A project which predates the takeover of Twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes ).

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
