this post was submitted on 03 Aug 2025
294 points (86.2% liked)

Fuck AI


Source (Bluesky)

[–] theunknownmuncher@lemmy.world 13 points 21 hours ago* (last edited 21 hours ago) (3 children)

the fact that it is theft

There are LLMs trained using fully open datasets that do not contain proprietary material... (CommonCorpus dataset, OLMo)

the fact that it is environmentally harmful

There are LLMs trained with minimal power (typically the same ones as above, since these projects can't afford as many resources), and local LLMs use significantly less power than a toaster or microwave...

the fact that it cuts back on critical, active thought

This is a use-case problem. LLMs aren't suitable for critical thinking or decision-making tasks, so if it's cutting back on your "critical, active thought" you're just using it wrong anyway...

The OOP genuinely doesn't know what they're talking about and is just reacting to sensationalized rage bait on the internet lmao

[–] csh83669@programming.dev 15 points 20 hours ago (2 children)

Saying it uses less power than a toaster isn't saying much. Yes, it uses less power than a thing that literally turns electricity into pure heat… but that's sort of a requirement for toast. That's still a LOT of electricity. And it's not required. People don't need to burn down a rainforest to summarize a meeting. Just use your earballs.

[–] masterspace@lemmy.ca 1 points 19 hours ago

Yeah man, guess how much energy it would have taken to draw the 4K graphics on your phone screen in 1995?

[–] theunknownmuncher@lemmy.world -1 points 20 hours ago* (last edited 13 hours ago) (2 children)

Saying it uses less power than a toaster isn't saying much

Yeah, but we're talking a fraction of 1%. A toaster uses 800-1500 watts for minutes; a local LLM uses <300 watts for seconds. I toast something almost every day. I'd need to prompt a local LLM literally hundreds of times per day for AI to have a higher environmental impact than my breakfast, considering the toasting alone. I make maybe a dozen prompts per week on average.
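A back-of-the-envelope check of that ratio (all numbers are assumptions taken from the rough figures quoted in this thread, not measurements):

```python
# Illustrative energy comparison; wattages and durations are assumed.
toaster_watts = 1200        # mid-range of the 800-1500 W figure
toast_seconds = 3 * 60      # a few minutes per toasting
llm_watts = 300             # quoted upper bound for a local LLM
prompt_seconds = 10         # assumed generation time per prompt

toaster_joules = toaster_watts * toast_seconds   # energy per toasting
prompt_joules = llm_watts * prompt_seconds       # energy per prompt
prompts_per_toast = toaster_joules / prompt_joules
print(prompts_per_toast)  # 72.0
```

With these assumed numbers, one daily toasting covers roughly 70 prompts, so daily toast versus a dozen prompts a week is a difference of well over an order of magnitude.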

That’s still a LOT of electricity.

That's exactly my point, thanks. All kinds of appliances use loads more power than AI. We run them without thinking twice, and there's no anti-toaster movement on the internet claiming there is no ethical toast, without exception, and that you're an asshole for making it. If a toaster uses a ton of electricity and is acceptable, while a local LLM uses less than 1% of that, then there is no argument to be made against local LLMs on the basis of electricity use.

Your argument just doesn't hold up and could be applied to literally anything that isn't "required". Toast isn't required, you just want it. People could just stop playing video games to save more electricity, video games aren't required. People could stop using social media to save more electricity, TikTok and YouTube's servers aren't required.

People don’t need to burn down a rainforest to summarize a meeting.

Strawman

[–] wizardbeard@lemmy.dbzer0.com 3 points 10 hours ago (1 children)

I won't call your point a strawman, but you're ignoring the actual parts of LLMs that have high resource costs in order to push a narrative that doesn't reflect the full picture. These discussions need to include the initial costs to gather the dataset and most importantly for training the model.

Sure, post-training energy costs aren't worth worrying about, but I don't think people who are aware of how LLMs work were worried about that part.

It's also ignoring the absurd fucking AI datacenters that are being built with more methane turbines than they were approved for, and without any of the legally required pollution capture technology on the stacks. At least one of these datacenters is already measurably causing illness in the surrounding area.

These aren't abstract environmental damages by energy use that could potentially come from green power sources, these aren't "fraction of a toast" energy costs only caused by people running queries either.

[–] theunknownmuncher@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago)

Nope, I'm not ignoring them, but the post is specifically about exceptions. The OOP claims there are no exceptions and there is no ethical generative AI, which is false. Your comment only applies to the majority of massive LLMs hosted by massive corporations.

The CommonCorpus dataset is less than 8TB, so it fits on a single hard drive, not a data center, and contains 2 trillion tokens, which is the same order of magnitude as what small local LLMs are typically trained on (OLMo 2 7B and 13B were trained on 5 trillion tokens).
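A quick sanity check that those two figures are consistent with each other (8 TB and 2 trillion tokens are the numbers quoted above; the bytes-per-token result is just their ratio):

```python
# Do ~8 TB and ~2 trillion tokens plausibly describe the same dataset?
corpus_bytes = 8 * 10**12    # quoted dataset size, decimal terabytes
corpus_tokens = 2 * 10**12   # quoted token count
bytes_per_token = corpus_bytes / corpus_tokens
print(bytes_per_token)  # 4.0
```

Roughly 4 bytes per token is consistent with typical subword tokenizers, which average a few characters of text per token, so the two figures hang together.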

These local LLMs don't require a massive data center for training. Their training cost in energy is nontrivial, but it's nothing like GPT-4's, and it's a one-time cost anyway.

So, the OOP is wrong, there is ethical generative AI, trained only on data available in the public domain, and without a high environmental impact.

[–] PixelatedSaturn@lemmy.world 6 points 19 hours ago (1 children)

That's nothing. People aren't required to eat so much meat, or even eat so much food.

I also don't like this energy argument from the anti-AI crowd, when everything else in our lives already consumes so much.

[–] theunknownmuncher@lemmy.world 0 points 19 hours ago (1 children)

You can easily use less power in other ways, too; it's not one or the other. Let's do both.

[–] hpx9140@fedia.io 10 points 21 hours ago (1 children)

Are you implying the edge cases you presented represent the majority of actual use?

[–] theunknownmuncher@lemmy.world 10 points 20 hours ago* (last edited 13 hours ago) (2 children)

No, and that's irrelevant. Their post is explicitly not about the majority, but about exceptions/edge cases.

I am responding to what they posted (I even quoted them), showing that the position that "there is no ethical use for generative AI" and that there are no exceptions is provably false.

I didn't think it needed to be said because it's not relevant to this discussion, but: the majority of AI sucks on all fronts. It's bad for intellectual property, it's bad for the environment, it's bad for privacy, it's bad for people's brains, and it's bad at what it's used for.

None of these problems are inherent to AI itself. They are problems with the massive short-term-profit-seeking corporations, flush with unimaginable amounts of investor cash (read: unimaginable expectations and promises they can't meet), that control the majority of AI. Once again, capitalism is the real culprit, and fools like the OOP will do these strawman mental gymnastics and spread misinformation that ultimately defends capitalism at all costs.

I used AI to scratch my balls once. I assume this counts as ethical.

[–] hpx9140@fedia.io 6 points 19 hours ago (1 children)

I can get behind this clarification, so thanks for that.

I'm a realist. To that end, relevance is assigned less on the basis of pedantic deconstruction of a single post and more on the practical reality of what is unfolding around us. Are there ethical applications for generative AI? Possibly. Will they become the standard? Unlikely, given the incumbent power structures that are defining and dictating long-term use.

As with most things stitched into the human experience, gaming human psychology and behavioral mechanics is key to trendsetting. What the majority accepts is what reality re-acclimates to. At the moment, that appears to be mass adoption of unethical AI systems.

I don't disagree that these problems aren't inherent to AI. But that sentiment has the same flavour as the 'guns don't kill people' line ammosexuals like to bust out when confronted.

Either way, it's clear you have a good read on what needs to happen to get all this to a better place. Hope you keep fighting to make that happen.

[–] theunknownmuncher@lemmy.world 4 points 19 hours ago (1 children)

Yeah, agreed. But that's not what the OOP is saying in their post, and their attitude and language make me believe they're purposely being wrong and outrageous for attention/trolling.

[–] hpx9140@fedia.io 4 points 19 hours ago

Yeah, don't blame you for cracking the whip on hyperbole. It's good to have someone doing that to keep us sane.

What the OOP is reacting to is the majority sentiment that's saturating the feed they're swimming through. It's a messy response, but the direction they're pointed in is generally correct - and a lot more aligned with your position than you might expect, despite fumbling the details.