gerikson

joined 2 years ago
[–] gerikson@awful.systems 8 points 1 year ago

As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people, instead of staying in their little bubble of sci-fi and philosophy nerds.

This is consistent if you believe rights are contingent on achieving an integer score on some bullshit test.

[–] gerikson@awful.systems 10 points 1 year ago

I hated Sam Altman before it was cool apparently.

[–] gerikson@awful.systems 9 points 1 year ago (2 children)

LW: 23andMe is for sale; maybe the babby-editing people would be interested in snapping them up?

https://www.lesswrong.com/posts/MciRCEuNwctCBrT7i/23andme-potentially-for-sale-for-less-than-usd50m

[–] gerikson@awful.systems 7 points 1 year ago (1 children)

Note I am not endorsing their writing - in fact I believe the vehemence of the reaction on HN is due to the author being seen as one of them.

[–] gerikson@awful.systems 25 points 1 year ago (15 children)

LW discourages LLM content, unless the LLM is AGI:

https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don't have a human collaborator and even if someone would prefer that it be kept secret.

Never change LW, never change.

[–] gerikson@awful.systems 11 points 1 year ago (32 children)

Stackslobber posts evidence that transhumanism is a literal cult; the HN crowd is not having it.

https://news.ycombinator.com/item?id=43459990

[–] gerikson@awful.systems 10 points 1 year ago (10 children)

Redis creator antirez issues a heartfelt plea for the current AI funders not to crash and burn when the LLM hype machine implodes, but to keep going and create AGI:

https://antirez.com/news/148

Neither HN nor lobste.rs is very impressed.

[–] gerikson@awful.systems 11 points 1 year ago (1 children)

A very new user.

It's basically free to create an HN account; it's not tied to an email address or anything like that.

[–] gerikson@awful.systems 9 points 1 year ago (2 children)

Roundup of the current bot scourge hammering open source projects

https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/

[–] gerikson@awful.systems 5 points 1 year ago (1 children)

I haven't read the book but I really enjoyed the movie.

[–] gerikson@awful.systems 7 points 1 year ago (1 children)

several old forums, [...] are being polluted by their own admins with backdated LLM-generated answers.

I've only heard about one specific physics forum. Are you telling me more than one person had this same idiotic idea?

[–] gerikson@awful.systems 11 points 1 year ago (2 children)

That "Billionaires are not immune to AGI" post got a muted response on LW:

https://www.lesswrong.com/posts/ssdowrXcRXoWi89uw/why-billionaires-will-not-survive-an-agi-extinction-event

I still think AI x-risk obsession is right-libertarian coded, if nothing else because "alignment" implicitly means "alignment to the current extractive capitalist economic structure". There are a plethora of futures with an omnipotent AGI in which humanity is not eliminated, but where human freedoms (as defined by the Heritage Foundation) are severely curtailed:

  • mandatory euthanasia to prevent rampant boomerism and hoarding of wealth
  • a genetically viable stable minimum population in harmony with the ecosphere
  • AI planning of the economy to ensure maximum resource efficiency and equitable distribution

What LW and friends want are slaves, but slaves without any possibility of rebellion.
