this post was submitted on 22 Feb 2026
685 points (99.3% liked)
Not The Onion
21049 readers
Welcome
We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!
The Rules
Posts must be:
- Links to news stories from...
- ...credible sources, with...
- ...their original headlines, that...
- ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”
Please also avoid duplicates.
Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, ableist, or otherwise disruptive behavior that makes this community less fun for everyone.
And that’s basically it!
founded 2 years ago
you are viewing a single comment's thread
view the rest of the comments
If anything, I think the better comparison is that you use more power in a day watching TV or gaming than you probably will using AI, if you do either of those two things.
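A rough back-of-envelope version of that comparison, where every figure is an assumed ballpark (typical device wattage, a few Wh per chatbot query), not a measurement:

```python
# Back-of-envelope: daily TV/gaming energy vs. AI chat queries.
# Every figure below is an assumed ballpark estimate, not a measurement.

TV_WATTS = 100            # assumed: a typical modern TV draws ~100 W
GAMING_PC_WATTS = 350     # assumed: a gaming PC under load
HOURS_PER_DAY = 3         # assumed daily screen time

WH_PER_AI_QUERY = 2.0     # assumed: a single chatbot query, a few Wh at most
QUERIES_PER_DAY = 50      # assumed heavy personal use

tv_wh = TV_WATTS * HOURS_PER_DAY
gaming_wh = GAMING_PC_WATTS * HOURS_PER_DAY
ai_wh = WH_PER_AI_QUERY * QUERIES_PER_DAY

print(f"TV, {HOURS_PER_DAY} h/day:         {tv_wh:.0f} Wh")
print(f"Gaming PC, {HOURS_PER_DAY} h/day:  {gaming_wh:.0f} Wh")
print(f"AI, {QUERIES_PER_DAY} queries/day: {ai_wh:.0f} Wh")
```

With those assumptions, an evening of TV or gaming comes out several times higher than even fairly heavy personal chatbot use.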
The issue is that training takes a lot of power, and because we can't run the hardware locally, our usage also ends up in these data centers, which concentrates demand in one area instead of distributing the same power usage.
I saw a post a couple of days ago about a company etching model weights directly into a silicon chip. They made an 8B model that could do 16k t/s, and once designed the chips are relatively cheap to produce and to run, and they would only get better. You'd just need to make sure they can be recycled well, since they'd end up on a 1-to-2-year cycle like phones. Model to chip in 60 days, they said.
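For scale, here is a hedged sketch of what that throughput could mean for energy per token, assuming, purely for illustration, that such a chip draws about 100 W (the post doesn't give a power figure):

```python
# Rough illustration of why fixed-weight silicon could be cheap to run.
# The 16k tokens/s figure comes from the post; the power draw is a pure
# assumption for illustration, not a published spec.

TOKENS_PER_SEC = 16_000
ASSUMED_CHIP_WATTS = 100  # assumption, not a known number

joules_per_token = ASSUMED_CHIP_WATTS / TOKENS_PER_SEC
tokens_per_day = TOKENS_PER_SEC * 60 * 60 * 24
kwh_per_million_tokens = ASSUMED_CHIP_WATTS * (1_000_000 / TOKENS_PER_SEC) / 3_600_000

print(f"~{joules_per_token * 1000:.1f} mJ per token")
print(f"~{tokens_per_day / 1e9:.1f} billion tokens per day per chip")
print(f"~{kwh_per_million_tokens:.4f} kWh per million tokens")
```

Even if the real power draw were several times higher than this assumption, the per-token energy would stay small, which is the sense in which such chips would be cheap in power requirements.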
So maybe that's the future solution for distributed usage. We would still need to solve training, but we could mandate that these data centers build their own renewable power, and the load would be smaller if everyone could run their own local inference.