this post was submitted on 02 Jan 2026

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as violative child sexual abuse materials (CSAM) in the US.

top 27 comments
[–] wewbull@feddit.uk 119 points 1 week ago (5 children)

Journalists who interview LLMs need a slap. It's not an admission of wrongdoing if the LLM is the source.

[–] kadu@scribe.disroot.org 95 points 1 week ago (2 children)

Everybody who tries to normalize interactions with LLMs as if they're humans deserves public shaming.

"Interview with Winka Dinka the newest AI pop star!" should end a journalist's career, and I'm not joking.

[–] cecilkorik@piefed.ca 31 points 1 week ago

It should actually disqualify them from ever being referred to as a "journalist" again, unless it's specifically qualified somehow, like "failed journalist" or "fraudulent journalist". Not that there are a lot of real journalists left, but we can at least start properly labeling the ones who aren't.

[–] a_non_monotonic_function@lemmy.world 2 points 1 week ago (1 children)

Unfortunately I think it is because so many people are very gullible. I think they actually believe it.

[–] kadu@scribe.disroot.org 1 points 1 week ago

Unfortunately yes, you are correct. I have met a surprising number of people who do not understand that ChatGPT is mimicking conversations; they believe it actually thinks and holds a personality of its own.

[–] recentSlinky@lemmy.ca 17 points 1 week ago

I agree with you, but since those CEOs have been all over the media saying how reliable their LLMs are, I think that makes statements by their AIs against them valid, by their own standard.

If anything, it might make them stop or slow down their push if their AIs keep incriminating them, which is a win-win. Although I doubt it, since little crybabies never learn or know how to take responsibility for their negative behaviours.

[–] shrugs@lemmy.world 11 points 1 week ago

I asked the magic 8 ball if it is always right. It said yes.

Your honor, that's the proof!

[–] takeda@lemmy.dbzer0.com 11 points 1 week ago* (last edited 1 week ago)

Holy fuck, you're not kidding. People are so fucking dumb. And an illustration of how journalism in the US is already dead.

[–] roguetrick@lemmy.world 18 points 1 week ago

Seeing people mention dril here makes me think of someone saying "and what does FYAD have to say about this?"

[–] Tigeroovy@lemmy.ca 13 points 1 week ago (1 children)

It’s really the least of the issues in this current case but I despise how these things talk like a human, saying it feels sorry for the CSAM it made. It doesn’t feel anything, it’s not a sentient being. Stop making it speak as though those statements mean anything.

It’s honestly the most pathetic thing that people buy into this yes man buddy bullshit that LLM Chatbots are programmed to use.

If I were ever to use some kind of AI research assistant, I'd want it to deliver me information and information only. I hate this chummy bullshit where it sucks me off with compliments while delivering some shit it made up that sounds like something that could be a response to what I said or asked for.

Like all I want is a voice activated search engine, if that. I already didn’t use voice to text for searches before all this. But they can’t even just do that shit and actually just make their previously strong search functionality worse because of this faux digital friend bullshit.

[–] cheesybuddha@lemmy.world 3 points 1 week ago (1 children)

We needed the Star Trek computer, but we got the sycophantic talking doors from The Hitchhiker's Guide to the Galaxy.

[–] wewbull@feddit.uk 2 points 1 week ago

Roddenberry vs. Adams

One painted a sci-fi utopia. The other showed us ourselves.

[–] Blackmist@feddit.uk 9 points 1 week ago (1 children)

I'm very sorry I generated child porn. The user wasn't even subscribed to Grok Spicy CSAM Edition.

That feature costs an extra 50 bucks a month. I will be more careful next time.

[–] hardcoreufo@lemmy.world 2 points 1 week ago

Careful to collect their money first.

[–] floquant@lemmy.dbzer0.com 9 points 1 week ago

Didn't daddy trump say no AI regulation, ever? Seems like everything is working as intended.

[–] DaTingGoBrrr@lemmy.world 6 points 1 week ago (1 children)

Is no one going to question HOW and WHY Grok knows how to generate CSAM?

This is fucking disgusting and both the user and X should be held accountable.

[–] IchNichtenLichten@lemmy.wtf 8 points 1 week ago (1 children)

I agree that it’s disgusting. To answer your question, it doesn’t know anything. It’s assigning probabilities based on its training data in order to create a response to a user prompt.
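The "assigning probabilities" point in the comment above can be sketched in a few lines of Python. The vocabulary and scores below are invented toys, not anything from a real model; real LLMs do the same thing over tens of thousands of tokens at a time:

```python
# Toy sketch: an LLM scores every token in its vocabulary, converts the
# scores to probabilities, and samples the next token. All names and
# numbers here are hypothetical, for illustration only.
import math
import random

vocab = ["cat", "dog", "sat", "mat"]
logits = [2.0, 1.0, 0.1, -1.0]  # made-up scores the model might emit

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
# Sample the next token in proportion to its probability
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])))
```

There is no "knowing" anywhere in this loop; it is scoring and sampling, repeated one token at a time.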

[–] DaTingGoBrrr@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (3 children)

Yes I know that it doesn't "know" anything, but where did it get that training data and why does it have CSAM training data? Or does it just generate regular porn and add a kid face?

[–] IchNichtenLichten@lemmy.wtf 6 points 1 week ago

These companies tend not to say how they train their models, partially because much of it is stolen, but the data covers pretty much everything. The LLM will generate a response to any prompt, so if it can be used to put a celebrity in lingerie, it can also be used to do the same with a child. Of course there are guardrails, but they're weak, and I hope X gets sued into oblivion.

[–] HK65@sopuli.xyz 3 points 1 week ago

There are two answers to that, both equally valid.

One is that it extrapolates: knowing what a naked adult looks like compared to a clothed adult, and what a child looks like compared to an adult, it can "add those vectors" and figure out what a naked child looks like.

The other is that one of the biggest porn datasets that most of these will have in their training data has recently been taken down because it had a bunch of CSAM in it. Ironically, how it happened was that an independent guy uploaded it to Google Cloud, and Google flagged and banned the guy for it.

The dataset would not have been taken down if it wasn't for the guy doing the rounds afterwards though. Google didn't care beyond banning a user.
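The "add those vectors" intuition from the first answer above can be sketched with toy embeddings, using the classic king - man + woman ≈ queen analogy. The 2-D vectors below are invented for illustration; real model embeddings have hundreds or thousands of dimensions:

```python
# Hedged sketch of vector arithmetic in an embedding space. The vectors
# are hand-picked toys, not real embeddings from any model.
import math

emb = {
    "king":  [0.9, 0.8],
    "man":   [0.9, 0.2],
    "woman": [0.1, 0.2],
    "queen": [0.1, 0.8],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def nearest(v, table):
    """Return the word whose embedding is closest to v."""
    return min(table, key=lambda w: math.dist(table[w], v))

# "king" minus "man" plus "woman" lands near "queen"
target = add(sub(emb["king"], emb["man"]), emb["woman"])
print(nearest(target, emb))  # → queen
```

The same compositionality is what makes the extrapolation described above possible: directions learned from one part of the training data can be combined with directions learned from another.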

[–] Dequei@piefed.social 2 points 1 week ago

I just get a 403 because I am using a VPN...

[–] BoycottTwitter@lemmy.zip 2 points 1 week ago (1 children)

We must not be silent. Speak out against this and urge as many people as possible to boycott Twitter and xAI. Keep spreading the word: Elon is a pedo.

[–] drunkpostdisaster@lemmy.world 1 points 1 week ago (1 children)

That won't do shit. AI is just going to spread to other platforms one way or another, and shit like this will keep happening. Musk is not the only techfascist out there.

[–] Quill7513@slrpnk.net 1 points 1 week ago

Then help onboard people to the Fediverse.