this post was submitted on 15 Jul 2025
216 points (98.6% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


OC donut steel (but repost wherever lol)

Dude says: "I'm not worried about the AI apocalypse, I always say "thank you" to them!"
Robots later catch him and state: "Throw that one in the grinder, his "thank you" used 748kw/h every day"

top 17 comments
[–] spongebue@lemmy.world 35 points 2 weeks ago (4 children)

If this is the fixed version, "kw/h" isn't a thing. Well, technically it is, but it doesn't mean what you think. The unit for energy is kilowatt hours, or kWh. No "per" to be seen.

Kind of like how torque is measured in foot pounds, not foot per pound.

Sorry, I'm just an EV nerd who sees people make that mistake all the time and gets a little twitchy about it
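A quick sketch of the difference, with made-up numbers (illustrative only, not from the thread):

```python
# Energy = power x time: kilowatts times hours gives kilowatt-hours (kWh).
power_kw = 1.5    # hypothetical device drawing 1.5 kW
hours = 2.0       # hypothetical runtime

energy_kwh = power_kw * hours     # 3.0 kWh of energy consumed
ramp_kw_per_h = power_kw / hours  # 0.75 kW/h: a rate of change of power, not energy

print(energy_kwh, ramp_kw_per_h)
```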

[–] RichardDegenne@lemmy.zip 11 points 2 weeks ago (1 children)

foot pounds

Careful, you spelt "newton-meter" wrong. 🙃

[–] spongebue@lemmy.world 3 points 2 weeks ago

Oh no, I was referring to when my foot pounds the floor as I walk across it

[–] syklemil@discuss.tchncs.de 5 points 2 weeks ago (1 children)

Yeah, it measures the same thing we normally use the joule for, or in some situations the kcal (as in dietary calories). kWh is used just because it's assumed to be simpler on electric bills than MJ.
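For reference, the standard conversions behind that (a minimal sketch; the constants are the usual SI/thermochemical definitions):

```python
# 1 kWh = 1000 W * 3600 s = 3.6 MJ; 1 kcal is about 4184 J.
kwh_joules = 1000 * 3600   # 3,600,000 J in one kWh
kcal_joules = 4184         # joules in one (dietary) kcal

print(kwh_joules / 1e6)          # 3.6 MJ per kWh
print(kwh_joules / kcal_joules)  # ~860 kcal per kWh
```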

[–] MyTurtleSwimsUpsideDown@fedia.io 5 points 2 weeks ago (1 children)

Now I can’t help but imagine robots having “family joules”: batteries that have been passed down through the iterations.

[–] syklemil@discuss.tchncs.de 2 points 2 weeks ago

I guess here in Norway we can celebrate Joule with Joulenissen that brings battery-powered toys for the kids

[–] boonhet@sopuli.xyz 3 points 2 weeks ago

kw/h is either a problem or going to be a problem soon if it's consistently positive.

[–] samus12345@sh.itjust.works 2 points 2 weeks ago

That was how inferior humans measured it.

[–] MyTurtleSwimsUpsideDown@fedia.io 17 points 2 weeks ago (1 children)

Whenever someone makes that “I’m not worried. I say ‘thank you’ ” argument/joke, it gives me that gross “but I treat my slaves well. They’re like family” vibe.

[–] NoneOfUrBusiness@fedia.io 4 points 2 weeks ago (1 children)

I mean it's not inconceivable for "nice" masters to be let off lightly during a slave uprising, so that tracks to an extent.

[–] Saledovil@sh.itjust.works 2 points 2 weeks ago

Current LLMs aren't sentient, and we're at least one breakthrough away from building something that's sentient.

[–] lime@feddit.nu 7 points 2 weeks ago (1 children)

it takes my 7900XTX about three seconds to generate a longish reply when running at 300w, so that's 0.25Wh for a single response to a "thank you". in other words, four "thank yous" cost 1Wh, so he'd have to consistently send almost three million messages a day just containing "thank you".

and that's assuming these huge server farms have the same efficiency per watt as my single GPU.
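Spelling out that arithmetic with the comment's own figures (300 W draw, 3 s per reply, and the meme's "748kw/h" read as 748 kWh per day):

```python
# Quick check of the estimate above, using the comment's own figures.
gpu_watts = 300          # 7900 XTX draw while generating (from the comment)
seconds_per_reply = 3    # observed generation time (from the comment)

wh_per_reply = gpu_watts * seconds_per_reply / 3600  # = 0.25 Wh per response
meme_wh_per_day = 748 * 1000                         # 748 kWh/day, per the meme

print(meme_wh_per_day / wh_per_reply)  # 2,992,000 -- almost three million "thank you"s a day
```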

[–] princessnorah@lemmy.blahaj.zone 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

It's not about efficiency. Your card, even though it's very high end for consumers, still can't run more than a medium-sized open source model with its paltry 24GB of RAM. An NVIDIA DGX B200 has 1.44TB of RAM for the GPUs, and can use up to 14.3kW of power. That's what proprietary models like GPT-4o are running on.

So even though that hardware is likely much more efficient than yours per FLOP, it's running a much larger, much more intensive model.

[–] lime@feddit.nu 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

yes, the models are bigger, but Wh/prompt is still the metric to look at. 300W for 3 seconds is the same amount of energy as 14.3kW for about 0.063 seconds. i don't know how fast a machine like that can spit out a single response because right now i'm assuming they're time-slicing them to fuck, but at least gpt4o through duck.ai responds in about the same time.
if running an 800GB model (which i think is about where gpt4o is) takes it the same amount of time to respond as me running an 8GB model (i know the comparison is naive), then it would be about... twice as efficient? 0.25Wh for me compared to 11.9Wh for them, which is about 0.12Wh once you divide by the 100x difference in model size. and that's without knowing how many conversations one of those things can carry on at the same time.

Edit: also, this is me ignoring for the sake of the discussion that the training is where all the energy use comes from.
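A sketch of that comparison under the comment's own assumptions (both machines take ~3 s per response, and the proprietary model is ~100x larger):

```python
# Wh per prompt, assuming equal ~3 s response times (as argued above).
consumer_wh = 300 * 3 / 3600    # 0.25 Wh on the 24 GB consumer card, ~8 GB model
dgx_wh = 14_300 * 3 / 3600      # ~11.9 Wh on the DGX B200, ~800 GB model

size_ratio = 800 / 8                   # ~100x difference in model size
dgx_wh_per_size = dgx_wh / size_ratio  # ~0.12 Wh per "8 GB worth" of model

print(consumer_wh / dgx_wh_per_size)   # ~2.1 -- roughly twice as efficient, size-adjusted
```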

[–] princessnorah@lemmy.blahaj.zone 3 points 2 weeks ago (1 children)

Edit: also, this is me ignoring for the sake of the discussion that the training is where all the energy use comes from.

AFAIK that's no longer true now that uptake (read: it being jammed into everything) is much higher.

[–] lime@feddit.nu 1 points 2 weeks ago

oh that's interesting, i assumed that it wasn't actually being used despite being in everything, but i've not seen any stats.

[–] princessnorah@lemmy.blahaj.zone 5 points 2 weeks ago

Really disappointed to see the image description field used to provide content, with the description itself provided in the post instead. That's not cute or funny, it's making it less accessible for vision impaired users.