self

joined 2 years ago
[–] self@awful.systems 6 points 4 weeks ago

no fucking thanks

[–] self@awful.systems 10 points 4 weeks ago

oh absolutely, fuck graber and fuck, fuuuuuuck gaiman to hell. i don’t have an inch of trust for either of them.

tho I will say that even here on lemmy, even if it didn’t reach the awfulness of what i quoted, i’ve seen a bunch of clanker memes that were seriously iffy… I wouldn’t qualify those as “serious discussions”

I agree with all of this

but they still matter in the broader ai discourse

and disagree strongly with this. part of the mission of TechTakes and SneerClub is that they must remain a space where marginalized people are welcome, and we know from prior experience that the only way to guarantee that is to ensure that bigots and boosters (and sometimes they’re absolutely the same person — LLMs are fashtech after all) can’t ever control the discourse. I know through association that a lot of moderated AI-critical spaces, writers, and researchers follow a similar mission.

now, unmoderated and ineffectively moderated spaces are absolutely vulnerable to being turned into fascist pipelines, and inventing slurs is one way they do it (see also “waffles” quickly being picked up as an anti-trans slur on bluesky, which has moderation that’s hostile to its marginalized userbase). if that’s something that’s happening in a popular community and there’s enough examples to show a pattern, then I’d love to have it as a post in TechTakes or as a blog link we can share around the AI-critical community as a warning.

[–] self@awful.systems 5 points 4 weeks ago

you’ve never posted on our instance before as far as I can tell and I’m pretty sure I didn’t ask you to fucking gatekeep one of our threads and start a shitty little fight that I have to clean up

[–] self@awful.systems 11 points 4 weeks ago* (last edited 4 weeks ago) (4 children)

in every serious (ie not TikTok or any other right-wing or unmoderated hellhole) anti-AI community I know, bigotry gets you banned even if you’re trying to hide it behind nonsense words like a 12 year old

meanwhile the people who seem to have dreamt up the idea that AI critical spaces are full of masked bigotry appear to be mostly ~~Neil Gaiman~~ Warren Ellis (see replies), who has several credible sexual assault allegations leveled against him, and Jay Graber, bluesky’s CEO who deserves no introduction (search mastodon or take a look at bluesky right now if the name’s unfamiliar). I don’t trust either of those people’s judgement as to what harms marginalized people.

[–] self@awful.systems 7 points 1 month ago (1 children)

that’s a very good post — framing fascism as a supply chain issue helps push back against the rhetoric that software is somehow apolitical by nature, and it’s a good response to the excuses given for the RubyGems takeover

[–] self@awful.systems 6 points 1 month ago (2 children)

the first post is the ladybird guy personally welcoming one of the main hyprland transphobes to browser development because he landed a PR in ladybird

Brendan Eich is the Brave guy but he was also Mozilla’s CEO (and he’s the guy who invented JavaScript and made it fucking suck) when he got caught making a donation to an anti-LGBT organization and got asked to step down. since then he’s used Brave as a vehicle for cryptocurrencies and LLMs.

Proton’s CEO expressed support for the Republican Party in the US in a public post, and then most likely paid for some shithead to write a medium article unconvincingly arguing that he did nothing of the sort; both the original post and the shitty medium article have been discussed before in the Stubsack.

lunduke is a far-right crank with a long history who’s currently ranting on Twitter about how open source projects need to eliminate all the wokes; FUTO has done free marketing for ladybird in the past and is now doing the same for lunduke

[–] self@awful.systems 4 points 1 month ago

god I hope not, I take sanity damage every time I deep dive these fuckers

[–] self@awful.systems 10 points 1 month ago (1 children)

yep, see this recent post for receipts (in the answer key) — they’re not even close to not being fash, though they’re still denying it in ways that don’t hold any water at all

[–] self@awful.systems 14 points 1 month ago (1 children)

I accidentally omitted the receipt for suckless so check that out in the answer key if you didn’t see it in there before

[–] self@awful.systems 12 points 1 month ago (9 children)

here’s a good summary of the situation including an analysis of the brand new dogwhistle a bunch of bluesky posters are throwing around in support of Jay and Singal and layers of other nasty shit

here’s Jay Graber, CEO of Bluesky, getting called out by lowtax’s ex-wife:

here’s Jay posting about mangosteen (mangosteen juice was a weird MLM lowtax tried to market on the Something Awful forums as he started to spiral)

Anyone want to hazard a guess at a timeline?

since Jay posted AI generated art about dec/acc and put the term in her profile, her little “ironic” nod to e/acc and to AI, my guess is this is coming very soon

[–] self@awful.systems 2 points 1 month ago

thank you! I adapted the post I mentioned and the surrounding thread into a TechTakes post

[–] self@awful.systems 7 points 1 month ago (1 children)

jeff’s follow-up after the backlash clarifies: you wouldn’t know her because he donated right under the limit to incur a taxable event and didn’t establish a trust like a normal millionaire and also the LLM printout only came pointlessly after months of research and financially supporting the unhoused friend and also you’re no longer allowed to ask publicly about the person he brought up in public, take it to email

 

404media continues to do devastatingly good tech journalism

What Kaedim’s artificial intelligence produced was of such low quality that at one point in time “it would just be an unrecognizable blob or something instead of a tree for example,” one source familiar with its process said. 404 Media granted multiple sources in this article anonymity to avoid retaliation.

this is fucking amazing. the company tries to pass the human work off as a QA check, but they’re really just paying 3d modelers $1-$4 a pop to churn out models in 15 minutes while they pretend the work’s being done by an AI, and now I’m wondering what other AI startups have also discovered this shitty dishonest growth hack

 

this is a computer that’s almost entirely without graphical capabilities, so here’s a demo featuring animations and sound someone did last year

 

kinda glad I bounced off of the suckless ecosystem when I realized how much their config mechanism (C header files and a recompile cycle) fucking sucked

 

 

A Brief Primer on Technofascism

Introduction

It has become increasingly obvious that some of the most prominent and monied people and projects in the tech industry intend to implement many of the same features and pursue the same goals that are described in Umberto Eco’s Ur-Fascism(4); that is, these people are fascists and their projects enable fascist goals. However, it has become equally obvious that those fascist goals are being pursued using a set of methods and pathways that are unique to the tech industry, and which appear to be uniquely crafted to force both Silicon Valley corporations and the venture capital sphere to embrace fascist values. The name that fits this particular strain of fascism the best is technofascism (with thanks to @future_synthetic), frequently shortened for convenience to techfash.

Some prime examples of technofascist methods in action exist in cryptocurrency projects, generative AI, large language models, and a particular early example of technofascism named Urbit. There are many more examples of technofascist methods, but these were picked because they clearly demonstrate what outwardly separates technofascism from ordinary hype and marketing.

The Unique Mechanisms of Technofascism

Disassociation with technological progress or success

Technofascist projects are almost always entirely unsuccessful at achieving their stated goals, and rarely involve any actual technological innovation. This is because the marketed goals of these projects are not their real, fascist aims.

Cryptocurrencies like Bitcoin are frequently presented as innovative, but all blockchain-based technologies are, in fact, inefficient distributed databases built on Merkle trees, a decades-old technology to which blockchains add little practical value. Blockchains are so impractical that they have provably failed to achieve any of the marketed goals undertaken by cryptocurrency corporations since the public release of Bitcoin(6).
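The claim that a blockchain is just a Merkle-tree-backed append-only database can be made concrete in a few lines. This is a toy sketch for illustration only, not any real chain’s implementation; the double-SHA-256 hash matches what Bitcoin uses, everything else here is simplified:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, the hash Bitcoin uses for its Merkle trees."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of records up to a single root hash.
    Merkle trees (1979) predate Bitcoin by three decades."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# a "block" is just a batch of records summarized by one root hash; chaining
# each block to the previous block's hash gives you an append-only linked
# list, i.e. a slow distributed database
txs = [b"alice->bob:1", b"bob->carol:2"]
root = merkle_root(txs)
```

The root changes if any record changes, which is the entire trick: tamper-evidence for a log, something ordinary signed databases have offered for decades.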

Statement of world-changing goals, to be achieved without consent

Technofascist goals are never small-scale. Successful tech projects are usually narrowly focused in order to limit their scope(9), but technofascist projects invariably have global ambitions (with no real attempt to establish a roadmap of humbler goals), and equally invariably attempt to achieve those goals without the consent of anyone outside of the project, usually via coercion.

This type of coercion and consent violation is best demonstrated by example. In cryptocurrency, a line of thought that has been called the Bitcoin Citadel(8) has become common in several communities centered around Bitcoin, Ethereum, and other cryptocurrencies. Generally speaking, this is the idea that in a near-future post-collapse society, the early adopters of the cryptocurrency at hand will rule, while late and non-adopters will be enslaved. In keeping with technofascism’s disdain for the success of its marketed goals, this monstrous idea ignores the fact that cryptocurrencies would be useless in a post-collapse environment with a fractured or non-existent global computer network.

AI and TESCREAL groups demonstrate this same pattern by simultaneously positioning large language models as an existential threat on the verge of becoming a hostile godlike sentience, as well as the key to unlocking a brighter (see: more profitable) future for the faithful of the TESCREAL in-group. In this case, the consent violation is exacerbated by large language models and generative AI necessarily being trained on mass volumes of textual and artistic work taken without permission(1).

Urbit positions itself as the inevitable future of networked computing, but its admitted goal is to technologically implement a neofeudal structure where early adopters get significant control over the network and how it executes code(3, 12).

Creation and furtherance of a death cult

In the fascist ideology described by Eco, fascism is described as “a life lived for struggle” where everyone is indoctrinated to believe in a cult of heroism that is closely linked with a cult of death(4). This same indoctrination is common in what I will refer to as a death cult, where a technofascist project is simultaneously positioned as both a world-ending problem, and the solution to that same problem (which would not exist without the efforts of technofascists) for a select, enlightened few.

The death cult of technofascism is demonstrated with perfect clarity by the closely-related ideologies surrounding Large Language Models (LLMs), Artificial General Intelligence (AGI), and the bundle of ideas known as TESCREAL (Transhumanism, Extropianism, Singulartarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism)(5).

We can derive examples of this death cult from the projects discussed in the previous section. In the concept of the Bitcoin Citadel, cryptocurrencies are idealized as both the cause of the collapse and as the in-group’s source of power after that collapse(6). Similarly, TESCREAL ideology holds that Artificial General Intelligence (AGI) will end the world unless it is “aligned with humanity” by members of the death cult, who handle the AGI with the proper religious fervor(11).

While Urbit does not technologically structure itself as a death cult, its community and network is structured to be a highly effective incubator for other death cults(2, 7, 10).

Severance of our relationship with truth and scientific research

Destruction and redefinition of historical records

This can be viewed as a furtherance of technofascism’s goal of destroying our ability to perceive the truth, but it must be called out that technofascist projects have a particular interest in distorting our remembrance of history; to make history effectively mutable in order to cover for technofascism’s failings.

Parasitization of existing terminology

As part of the process of generating false consensus and covering for the many failings of technofascist projects, existing terminology is often taken and repurposed to suit the goals of the fascists.

One obvious example is the popular term crypto, which until relatively recently referred to cryptography, an extremely important branch of mathematics. Cryptocurrency communities have now adopted the term, and have deliberately used the resulting confusion to falsely imply that cryptocurrencies, like cryptography, are an important tool in software architecture.

Weaponization of open source and the commons

One of the distinctive traits that separates ordinary capitalist exploitation from technofascism is the subversion and weaponization of the efforts of the open source community and the development commons.

One notable weapon used by many technofascist projects to achieve absolute control while maintaining the illusion that the work being undertaken is an open source community effort is what I will call forking hostility. This is a concerted effort to make forking the project infeasible, and it takes two forms.

Its technological form is accomplished via network effects; good examples are large cryptocurrency projects like Bitcoin and Ethereum, which cannot practically be forked because any blockchain without majority consensus is highly vulnerable to attacks, and in any case is much less valuable than the larger chain. Urbit maintains technological forking hostility via its aforementioned implementation of neofeudal network resource allocation.

The second form of forking hostility is social; technofascist open source communities are notable for extremely aggressively telling dissenters to “just fork it, it’s open source” while just as aggressively punishing anyone attempting a fork with threats, hacking attempts (such as the aforementioned blockchain attacks), ostracization, and other severe social repercussions. These responses are distinctive in their uniformity, which is rarely seen even among the most toxic of regular open source communities.

Implementation of racist, biased, and prejudiced systems

References

[1] Bender, Emily M. and Hanna, Alex, AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, Scientific American, 2023.

[2] Broderick, Ryan, Inside Remilia Corporation, the Anti-Woke DAO behind the Doomed Milady Maker NFT, Fast Company, 2022.

[3] Duesterberg, James, Among the Reality Entrepreneurs, The Point Magazine, 2022.

[4] Eco, Umberto, Ur-Fascism, The Anarchist Library, 1995.

[5] Gebru, Timnit and Torres, Emile, SaTML 2023 - Timnit Gebru - Eugenics and the Promise of Utopia through AGI, 2023.

[6] Gerard, David, Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum and Smart Contracts, David Gerard, 2017.

[7] Gottsegen, Will, Everything You Always Wanted to Know about Miladys but Were Afraid to Ask, 2022.

[8] Munster, Ben, The Bizarre Rise of the ‘Bitcoin Citadel’, Decrypt, 2021.

[9] Scope Creep, Wikipedia, 2023.

[10] How to Start a Secret Society, 2022.

[11] Torres, Emile P., The Acronym behind Our Wildest AI Dreams and Nightmares, Truthdig, 2023.

[12] Yarvin, Curtis, 3-intro.txt, GitHub, 2010.

2
submitted 2 years ago* (last edited 2 years ago) by self@awful.systems to c/techtakes@awful.systems
 

no excerpts yet cause work destroyed me, but this just got posted on the orange site. apparently a couple of urbit devs realized urbit sucks actually. interestingly they correctly call out some of urbit’s worst points (like its incredibly high degree of centralization), but I get the strong feeling that this whole thing is an attempt to launder urbit’s reputation while swapping out the fascists in charge

e: I also have to point out that this is written from the insane perspective that anyone uses urbit for anything at all other than an incredibly inefficient message board and a set of interlocking crypto scams

e2: I didn’t link it initially, but the orange site thread where I found this has heated up significantly since then

 

Science shows that the brain and the rest of the nervous system stops at death. How that relates to the notion of consciousness is still pretty much unknown, and many neuroscientists will tell you that. We haven't yet found an organ or process in the brain responsible for the conscious mind that we can say stops at death.

no matter how many neuroscientists I ask, none of them will tell me which part of the brain contains the soul. the orange site actually has a good sneer for this:

You don't need to know which part of the brain corresponds to a conscious mind when the entire brain is dead.

a lot of the rest of the thread is the most braindead right-libertarian version of Pascal’s Wager I’ve ever seen:

Ultimately, it's their personal choice, with their money, and even if they spend $100,000 on paying for it, or more, it doesn't mean they didn't leave other assets or things for their descendants.

By making a moral claim for why YOU decide that spending that money isn't justified, you're going down one very arrogant and ultimately silly road of making the same claim to so many other things people spend money and effort they've worked hard for on specific personal preferences, be they material or otherwise.

Maybe you buying a $700,000 house vs. a $600,000 house is just as idiotic then? Do you really need the extra floor space or bathrooms?

Where would you draw a line? Should other once-implausible life enhancement therapies that are now widely used and accepted also be forsaken? How about organ transplants? Gene therapy? highly expensive cancer treatments that all have extended life beyond what was previously "natural" for many people? Often these also start first as speculative ideas, then experiments, then just options for the rich, but later become much more widely available.

and therefore the only rational course of action is to put $100,000 straight into the pockets of grifters. how dare I make any value judgments at all about cryonicists based on their extreme distaste for the scientific method, consistent history of failure, and use of extremely exploitative marketing?

 

The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail so you're often left with weird ad-hominins ("Forget what it can do and results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill defined assertions ( "It sure looks like reasoning but i swear it isn't real reasoning. What does "real reasoning" even mean ? Well idk but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many(, many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high level characteristics of this AI is something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

 

Running llama-2-7b-chat at 8 bit quantization, and completions are essentially at GPT-3.5 levels on a single 4090 using 15gb VRAM. I don't think most people realize just how small and efficient these models are going to become.

[cut out many, many paragraphs of LLM-generated output which prove… something?]

my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

you’d think my industry would have learned anything at all from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model
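For scale, the quoted 15gb figure is at least plausible arithmetic: 8-bit quantization means roughly one byte per parameter, and the rest is KV cache, activations, and runtime overhead. A back-of-the-envelope sketch, where the layer count, hidden size, and context length come from the published llama-2-7b configuration and everything else is rough:

```python
# rough VRAM budget for an 8-bit 7B model -- a back-of-the-envelope sketch,
# not a measurement; real usage depends on context length and the runtime
params = 7e9
weights_gb = params * 1 / 1e9  # 1 byte per parameter at 8-bit quantization

# KV cache: K and V tensors, per layer, per context token, at fp16 (2 bytes)
layers, ctx, hidden = 32, 4096, 4096  # llama-2-7b's published shape
kv_gb = 2 * layers * ctx * hidden * 2 / 1e9

print(f"weights ~{weights_gb:.1f} GB, kv cache ~{kv_gb:.1f} GB, plus overhead")
```

That lands around 7 GB of weights plus ~2 GB of cache before activations and allocator overhead, which is consistent with the 15gb claim; none of this makes the per-user economics any less absurd.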

 

I defederated us from two lemmy instances:

  • exploding-heads: transphobia
  • basedcount: finally I get to ban most of r/PoliticalCompassMemes in one go
 

the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)
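the merge-and-render work described in the issues above could be sketched roughly like this. to be clear, this is a guess at the shape of the problem, not the archive's actual code: pushshift comments do carry `created_utc`, `author`, and `body`, but the BDFR-side field names here are assumptions

```python
import json
from datetime import datetime, timezone

def normalize(record: dict) -> dict:
    """coerce a comment from either dump into one common JSON shape.
    field names on the BDFR side are guesses -- check them against a
    real BDFR export before trusting this."""
    epoch = int(record.get("created_utc", record.get("created", 0)))
    return {
        "author": record.get("author", "[deleted]"),
        "body": record.get("body", record.get("selftext", "")),
        # render the unix epoch once at merge time, so the static site
        # never has to do it in the browser
        "created_iso": datetime.fromtimestamp(epoch, tz=timezone.utc)
                               .strftime("%Y-%m-%d %H:%M UTC"),
    }

rec = normalize({"author": "self", "body": "sneer", "created_utc": 1690000000})
print(json.dumps(rec))
```

doing the epoch rendering in the merge step would also knock out the unix-timestamp display issue without needing any javascript at all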

 

RationalWiki is a highly biased cancel community which has attacked people like Scott Aaronson and Scott Alexander before.

Background on the authors according to a far-left website.

Let's at least be honest.

That is profiling work. (Not just "Ad hominem".)

The clash with the name "rational-wiki" is too strong not to be noted.

as the infrastructure admin of a highly biased far-left cancel community that attacks people like Scott Aaronson and Scott Alexander: mmm delicious

for bonus sneers, see the entire rest of the thread for the orange site’s ideas on why they don’t need therapy:

I was about to start psychotherapy last month, I ask my family's friend therapist If he could recommend me where to go. So he interviewed me for about 30 mins and ask me about all my problems.

A week later he send me the number of the therapist. I didnt write her yet, I think I dont need it as badly as before.

Those 30 mins were key. I am highly introspective and logical, I only needed to orderly speak my problems.

to quote Key & Peele: motherfucker, that’s called a job
