this post was submitted on 20 Jan 2026
814 points (99.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] Lumidaub@feddit.org 204 points 3 weeks ago (4 children)

Seeing as OpenAI struggled to make its AI avoid the em dash and still hasn't entirely managed to do it, I'm not too worried.

[–] FiniteBanjo@feddit.online 96 points 3 weeks ago (1 children)

TBF, OpenAI are a bunch of idiots running the world's largest Ponzi scheme. If DeepMind tried it and failed then...

Well I still wouldn't be surprised, but at least it would be worth citing.

[–] chickenf622@sh.itjust.works 41 points 3 weeks ago (5 children)

I think the inherent issue is that current "AI" is non-deterministic, so it's impossible to fix these issues entirely. You can feed an AI all the data on how not to sound like AI, but you need massive amounts of non-AI writing to reinforce that. With AI being so prevalent nowadays, you can't guarantee a dataset is AI-free, so you get the old "garbage in, garbage out" problem that AI companies cannot solve. I still think generative AI has its place as a tool (I use it for quick and dirty text manipulation), but it's being applied to every problem we have like it's a magic silver bullet. I'm ranting at this point and I'm going to stop here.

[–] FiniteBanjo@feddit.online 26 points 3 weeks ago (22 children)

I honestly disagree that it has any use. Being a statistical model with high variance makes it a liability: no matter which task you use it for, it will produce worse results than a human being and will create new problems that didn't exist before.

[–] Phoenix3875@lemmy.world 80 points 3 weeks ago (1 children)

You do understand this is more akin to white hat testing, right?

Those who want to exploit this will do it anyway, except they won't publish the result. By making the exploit public, the risk will be known if not mitigated.

[–] unepelle@mander.xyz 23 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

I'm admittedly not knowledgeable about white hat hacking, but are you supposed to publicize the vulnerability, release a shortcut to exploit it while telling people to "enjoy", or even call the vulnerability handy?

[–] teft@piefed.social 13 points 3 weeks ago (1 children)

Responsible disclosure is what a white hat does. You report the bug to whoever is responsible for patching it and give them time to fix it.

[–] PlexSheep 11 points 3 weeks ago

That sort of depends on the situation. Responsible disclosure applies when there's a relevant security hole that poses an actual risk to businesses and people, while this is just "haha, look, LLMs can pretend to write good text better if you tell them to". That's not really something that calls for responsible disclosure. It's not even specific to one singular product.

[–] dumbass@piefed.social 64 points 3 weeks ago (4 children)

Wikipedia is one of the last genuine places on the Internet, and these rat bastards are trying to contaminate that, too

Wikipedia just sold the rights to use Wikipedia for AI training to Microsoft and OpenAI...

[–] ATPA9@feddit.org 110 points 3 weeks ago (3 children)

It's getting scraped anyway. So why not get some money from it?

[–] Fedizen@lemmy.world 51 points 3 weeks ago

Imo this. Selling access also implies it's illegal to access without purchasing rights, which imho helps undermine AI's only monetary advantage.

[–] MBM@lemmings.world 25 points 3 weeks ago (2 children)

They lose the right to sue them

[–] Corkyskog@sh.itjust.works 13 points 3 weeks ago

They probably realized that it was a losing battle and they didn't want to pay legal fees.

[–] udon@lemmy.world 15 points 3 weeks ago (2 children)

How exactly does that work? Wikipedia does not "own" the content on the website; it's all CC-BY licensed.

[–] WillowBe@lemmy.blahaj.zone 19 points 3 weeks ago (1 children)

The BY term is not respected by LLMs

[–] Alcoholicorn@mander.xyz 9 points 3 weeks ago (15 children)

Why? Wikipedia has like a decade of operating expenses on hand, so they don't need the money

[–] surewhynotlem@lemmy.world 36 points 3 weeks ago

This number inflates every time I read it. First it was ten years of hosting cost. Then it's operating costs. Soon it will be ten years of the entire US GDP.

I'd believe they have ten years of hosting costs on hand.

My quick googling says they have around 170m in assets and 180m in annual operating costs. Give or take.

[–] Kaz@lemmy.org 62 points 3 weeks ago (1 children)

These fuckin AI "enthusiasts" are just making the rest of the world hate AI more.

Losers who can't achieve anything without AI are just going to keep doing this shit.

[–] DFX4509B@lemmy.wtf 53 points 3 weeks ago

Download an offline copy while you still can.

[–] Jayjader@jlai.lu 51 points 3 weeks ago (3 children)

I really despise how Claude's creators and users are turning the definition of "skill" from "the ability to use [learned] knowledge to enhance execution" into "a blurb of text that [usefully] constrains a next-token-predictor".

I guess, if you squint, it's akin to how biologists will talk about species "evolving to fit a niche" amongst themselves or how physicists will talk about nature "abhorring a vacuum". At least they aren't talking about a fucking product that benefits from hype to get sold.

[–] prole@lemmy.blahaj.zone 39 points 3 weeks ago (3 children)

I can't help but get secondhand embarrassment whenever I see someone unironically call themselves a "prompt engineer". 🤮

[–] moonshadow@slrpnk.net 13 points 3 weeks ago
[–] OctopusNemeses@lemmy.world 23 points 3 weeks ago

Isn't this a thing that authoritarians do? They co-opt language. It's the same thing conservatives do. The Venn diagram of tech bros and the far right is too close to being a circle.

You can put pretty much any word from the dictionary into a search engine, and the first results are some tech company that either took the word as their company name or redefined it into some buzzword.

[–] udon@lemmy.world 50 points 3 weeks ago (3 children)

If these "signs of AI writing" are merely linguistic, good for them. This is as accurate as a lie detector (i.e., not accurate), and nobody should use this for any real-world decision-making.

The real signs of AI writing are not as easy to fix as just instructing an LLM to "read" an article to avoid them.

As a teacher, I now base all of my grading on in-person performance, no tech allowed. Good luck faking that with an LLM. I don't mind if students use an LLM to better prepare for class and exams. But my impression so far is that any other medium (e.g., books, YouTube explanation videos) leads to better results.

[–] snoons@lemmy.ca 38 points 3 weeks ago

Fuck you, Siqi Chen.

[–] markstos@lemmy.world 34 points 3 weeks ago

Congrats on inventing what high school students figured out a year ago to skirt AI homework detectors.

[–] pedz@lemmy.ca 33 points 3 weeks ago (6 children)

In French, one of the ways to spot AI writing is that sentences will often be missing articles or have bad grammar. Can this dude also ask the LLM to include more articles and write complete sentences in the language it's trying to imitate?

I was using the Discover feed on my phone, but Google started to insert stories and headlines rewritten by AI, and they were so annoyingly bad at making simple sentences in French that it made me stop using that thing.

[–] destructdisc@lemmy.world 18 points 3 weeks ago

We'd rather the dude kill the LLM entirely. No one needs that shit

[–] CileTheSane@lemmy.ca 32 points 3 weeks ago* (last edited 3 weeks ago) (6 children)

"just tell your LLM not to do that"

You ever ask an LLM to modify a picture and "don't change anything else"? It's going to change other things.

Case in point: https://youtu.be/XnWOVQ7Gtzw

[–] MML@sh.itjust.works 17 points 3 weeks ago (1 children)

That's why you always add "and no mistakes"

[–] avidamoeba@lemmy.ca 26 points 3 weeks ago (2 children)

From the repo:

Have opinions. Don't just report facts - react to them. "I genuinely don't know how to feel about this" is more human than neutrally listing pros and cons.

[–] JcbAzPx@lemmy.world 12 points 3 weeks ago

That will at least be easy to spot in a Wikipedia entry.

[–] ZDL@lazysoci.al 23 points 3 weeks ago (1 children)

What is wrong in the techbrodude head that makes them think only of ruining things? Like it seems to me that they literally spend their days looking at things that are good and asking "what can I do to fuck this up for a profit?"

Should being a techie go into the DSM-5 as a subheading under narcissistic personality disorder?

[–] JackBinimbul@lemmy.blahaj.zone 22 points 3 weeks ago

I am so goddamned tired of AI being shoved into every collective orifice of our society.

[–] cheesybuddha@lemmy.world 18 points 3 weeks ago (2 children)

So they are using AI to make it so AI can't detect that they are using AI?

What kind of technological ouroboros of nonsense is this?

[–] RoidingOldMan@lemmy.world 17 points 3 weeks ago

It can't avoid doing those things. That's the reason for the article.

[–] phonics@lemmy.world 16 points 3 weeks ago (1 children)

Bro isn't even gonna check its output anyway.

[–] minorkeys@lemmy.world 15 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

It's an arms race: AI identification vs. AI adaptation. I wonder which side the companies that own these LLMs want to win...

[–] HotsauceHurricane@lemmy.world 11 points 3 weeks ago

Jesus Christ what a wretched twit of a man.

[–] gustofwind@lemmy.world 11 points 3 weeks ago (5 children)

And now you know how and why so many programmers are just fucking awful and literally responsible for the hell we're living in.

Kinda surprised they don't get more hate; programmers fucking suck.

[–] Jankatarch@lemmy.world 26 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Wow, such programmer.

Especially with that "investor" in his Twitter bio and all his posts about finance.

Hell, even if he were a programmer, Disney hires artists as well. Is the entire art community transphobic now?

(I'm sorry if the comment was meant to be satirical.)

Edit: He is apparently a CEO too.

[–] MountingSuspicion@reddthat.com 9 points 3 weeks ago

I was about to defend the lack of contributions, and then I kept reading. I have a handful of different accounts I use and some have the same look about them, but yeah, the investor thing is an obvious tell.

[–] green_red_black@slrpnk.net 13 points 3 weeks ago (2 children)

You do know programmers are behind the Fediverse, correct?

[–] felixthecat@fedia.io 9 points 3 weeks ago (2 children)

Stuff like that doesn't always work though, at least on free versions in my experience. I use AI to write flowery emails to people to sound nice when I normally wouldn't bother, and I used it to negotiate buying my car. I would continually tell it not to use em dashes while writing emails, and inevitably, after one answer, it would go back to using them.

Maybe paid versions are different, but on free ones you have to continually correct it.
