BlueMonday1984

joined 2 years ago
[–] BlueMonday1984@awful.systems 14 points 2 weeks ago (8 children)

Two ferrymen and three boats are on the left bank of a river. Each boat holds exactly one man. How can they get both men and all three boats to the right bank?

Officially, you can't. Unofficially, just have one of the ferrymen tow a boat.

[–] BlueMonday1984@awful.systems 14 points 2 weeks ago (4 children)

Caught a particularly spectacular AI fuckup in the wild:

(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you've earned it)

[–] BlueMonday1984@awful.systems 14 points 2 weeks ago

Found a banger in the comments:

[–] BlueMonday1984@awful.systems 6 points 2 weeks ago* (last edited 2 weeks ago)

Hey, remember the thing that you said would happen?

The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn't expect to be vindicated so soon afterwards.

EDIT: One of the replies gives an example for my "death of value-neutral AI" prediction too, openly calling AI "a weapon of mass destruction" and calling for its abolition.

[–] BlueMonday1984@awful.systems 9 points 2 weeks ago (1 children)

Managed to stumble across two separate attempts to protect promptfondlers' feelings from getting hurt like they deserve, titled "Shame in the machine: affective accountability and the ethics of AI" and "AI Could Have Written This: Birth of a Classist Slur in Knowledge Work".

I found both of them whilst trawling Bluesky, and they're being universally mocked like they deserve on there.

[–] BlueMonday1984@awful.systems 7 points 2 weeks ago

I don't keep track, I just put these together when I've got an interesting tangent to go on.

[–] BlueMonday1984@awful.systems 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Discovered some commentary from Baldur Bjarnason about this:

Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing

This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages

(Also IMO Google translate declined substantially when they integrated more LLM-based tech)

On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:

  • First, LLMs' poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid

  • Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.

By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.

[–] BlueMonday1984@awful.systems 14 points 2 weeks ago (2 children)

New Ed Zitron: The Hater's Guide To The AI Bubble

(guy truly is the Kendrick Lamar of tech, huh)

[–] BlueMonday1984@awful.systems 11 points 2 weeks ago (1 children)

Found an archive of vibe-coding disasters recently - recommend checking it out.

[–] BlueMonday1984@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (3 children)
[–] BlueMonday1984@awful.systems 13 points 3 weeks ago* (last edited 3 weeks ago)

Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: "send Gemini a prompt in white-on-white/0px text"):

I've got time, so I'll fire off a sidenote:

In the immediate term, this bubble's gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can't really resist being jailbroken.

In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals' jobs a lot easier for at least a few years.

 

Damn nice sneer from Charlie Warzel in this one, taking a direct shot at Silicon Valley and its AGI rhetoric.

Archive link, to get past the paywall.

 

(Gonna expand on a comment I whipped out yesterday - feel free to read it for more context)


At this point, its already well known AI bros are crawling up everyone's ass and scraping whatever shit they can find - robots.txt, honesty and basic decency be damned.

The good news is that services have started popping up to actively cockblock AI bros' digital smash-and-grabs - Cloudflare made waves when they began offering blocking services for their customers, but Spawning AI's recently put out a beta for an auto-blocking service of their own called Kudurru.

(Sidenote: Pretty clever of them to call it Kudurru.)

I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to try to actively feed complete garbage to scrapers instead - whether by sticking a bunch of garbage on webpages to mislead scrapers or by trying to prompt inject the shit out of the AIs themselves.

The main advantage I can see is subtlety - it'll be obvious to AI corps if their scrapers are given a 403 Forbidden and told to fuck off, but the chance of them noticing that their scrapers are getting fed complete bullshit isn't that high - especially considering AI bros aren't the brightest bulbs in the shed.

Arguably, AI art generators are already getting sabotaged this way to a strong extent - Glaze and Nightshade aside, ChatGPT et al's slop-nami has provided a lot of opportunities for AI-generated garbage (text, music, art, etcetera) to get scraped and poison AI datasets in the process.

How effective this will be against the "summarise this shit for me" chatbots which inspired this high-length shitpost I'm not 100% sure, but between one proven case of prompt injection and AI's dogshit security record, I expect effectiveness will be pretty high.

 

After reading through Baldur's latest piece on how tech and the public view gen-AI, I've had some loose thoughts about how this AI bubble's gonna play out.

I don't have any particular structure to this, this is just a bunch of things I'm getting off my chest:

  1. AI's Dogshit Reputation

Past AI springs had the good fortune to have had no obvious negative externalities to sour the public's reputation (mainly because they weren't public facing, going by David Gerard).

This bubble, by comparison, has been pretty much entirely public facing, giving us, among other things:

All of these have done a lot of damage to AI's public image, to the point where its absence is an explicit selling point - damage which I expect to last for at least a decade.

When the next AI winter comes in, I'm expecting it to be particularly long and harsh - I fully believe a lot of would-be AI researchers have decided to go off and do something else, rather than risk causing or aggravating shit like this. (Missed this incomplete sentence on first draft)

  1. The Copyright Shitshow

Speaking of copyright, basically every AI company has worked under the assumption that copyright basically doesn't exist and they can yoink whatever they want without issue.

With Gen-AI being Gen-AI, getting evidence of their theft isn't particularly hard - as they're straight-up incapable of creativity, they'll puke out replicas of its training data with the right prompt.

Said training data has included, on the audio side, songs held under copyright by major music studios, and, on the visual side, movies and cartoons currently owned by the fucking Mouse..

Unsurprisingly, they're getting sued to kingdom come. If I were in their shoes, I'd probably try to convince the big firms my company's worth more alive than dead and strike some deals with them, a la OpenAI with Newscorp.

Given they seemingly believe they did nothing wrong (or at least Suno and Udio do), I expect they'll try to fight the suits, get pummeled in court, and almost certainly go bankrupt.

There's also the AI-focused COPIED act which would explicitly ban these kinds of copyright-related shenanigans - between getting bipartisan support and support from a lot of major media companies, chances are good it'll pass.

  1. Tech's Tainted Image

I feel the tech industry as a whole is gonna see its image get further tainted by this, as well - the industry's image has already been falling apart for a while, but it feels like AI's sent that decline into high gear.

When the cultural zeitgeist is doing a 180 on the fucking Luddites and is openly clamoring for AI-free shit, whilst Apple produces the tech industry's equivalent to the "face ad", its not hard to see why I feel that way.

I don't really know how things are gonna play out because of this. Taking a shot in the dark, I suspect the "tech asshole" stench Baldur mentioned is gonna be spread to the rest of the industry thanks to the AI bubble, and its gonna turn a fair number of people away from working in the industry as a result.

 

I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.

Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used as to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio short hand for uninventive, clichéd, and formulaic.

babe wake up the butlerian jihad is coming

 

I stopped writing seriously about “AI” a few months ago because I felt that it was more important to promote the critical voices of those doing substantive research in the field.

But also because anybody who hadn’t become a sceptic about LLMs and diffusion models by the end of 2023 was just flat out wilfully ignoring the facts.

The public has for a while now switched to using “AI” as a negative – using the term “artificial” much as you do with “artificial flavouring” or “that smile’s artificial”.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Between this, and the rise of "AI-free" as a marketing strategy, the bursting of the AI bubble seems quite close.

Another solid piece from Bjarnason.

view more: ‹ prev next ›