this post was submitted on 03 Aug 2025
409 points (86.7% liked)
Fuck AI
AI saved my pet's life. You won't convince me it's 100% all bad and there's no "right" way to use it.
The way it is trained isn't intellectual theft, imo.
It only becomes intellectual theft if it is used to generate something that then competes with and takes away profits from the original creators.
Thus the intellectual theft only kicks in at generation time, but the onus is still on the AI owners to prevent it.
However, if I use AI to generate anything that doesn't "compete" with anyone, then "intellectual theft" doesn't apply.
For example, two months ago I used it to help diagnose a serious issue my pet was having that was stumping even our vet, and it got the answer right. Our vet was surprised when we asked them to check a very esoteric possibility, which they dubiously checked, and then they were shocked to find something there.
They asked us how on earth we managed to guess to check that place of all things, how we could have known. As a result we caught the issue very early, when it was easy to treat, and saved our pet's life.
It was a gallbladder infection, and her symptoms had like 20 other more likely causes individually.
But when I punched all her symptoms into GPT, every time, it asserted it was likely the gallbladder. It had found some papers on other mammals describing how gallbladder infections can, in rare cases, cause that specific combination of symptoms, and it encouraged us to check it out.
If you think "intellectual theft" still applies here, despite it being used to save an animal's life, then you are the asshole. No one "lost" profit or business to this, no one's intellectual property was infringed, and consuming the same amount of power it takes to cook one pizza in my oven to save my pet's life is a pretty damn good trade, in my opinion.
So, yes. I think I used AI ethically there. Fight me.
Regular search could have also surfaced that information
Not at a tremendously lower power cost, anyway. My laptop draws 35W.
5 minutes of GPT is genuinely less power consumption than several hours of actively using my laptop to do the searching manually. Laptops burn non-trivial amounts of power when in use; anyone who has held a laptop on their lap can attest that they aren't exactly running cold.
Hell, even a whole day of using your mobile phone is non-trivial in power consumption; they draw 8–10W or so.
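The comparison above works out as back-of-envelope arithmetic. The laptop wattage comes from this thread; the per-query energy for GPT is an assumed round figure (published per-query estimates vary widely, from well under 1 Wh to a few Wh), so treat this as an illustration, not a measurement:

```python
# Back-of-envelope energy comparison (Wh = watts x hours).
# Laptop draw is from the comment above; GPT_QUERY_WH is an assumption --
# real per-query estimates vary widely depending on model and source.
LAPTOP_W = 35        # active laptop draw, watts
GPT_QUERY_WH = 3.0   # assumed energy per query, watt-hours (rough guess)

manual_search_wh = LAPTOP_W * 3       # ~3 hours of manual searching
gpt_session_wh = GPT_QUERY_WH * 10    # ~10 queries in a 5-minute session

print(f"manual search: {manual_search_wh} Wh, GPT session: {gpt_session_wh} Wh")
```

Under these assumptions the manual search costs roughly 105 Wh versus 30 Wh for the GPT session; shrink or grow the assumed per-query figure and the gap moves accordingly.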
Using GPT for dumb shit is arguably unethical, but only in the sense that baking cookies in the oven is. You gonna go and start yelling at people for making cookies? Cooking up one batch of cookies burns WAAAY more energy than fucking around with GPT. And yet I don't see people going around bashing people for using their ovens to cook things as a hobby.
There's no good argument against what I did, by all metrics it genuinely was the ethical choice.
Client side power usage for conventional Internet search is about the same as chatgpt. I'm not sure why you're talking about laptop power usage.
Conventional search is less likely to lie, though.
The power burned server-side for 5 minutes of ChatGPT, versus the power burned browsing the internet to find the info on my own (which would take hours of manual sifting).
That's the comparison.
Even though server-side power consumption to run GPT is very high, it's not so high that it exceeds hours and hours of laptop usage.
Oh, I see the point you're making.
I assumed that the information was there to be found, and a regular search would have returned it. Thus it would not have taken hours.
Personally I don't really trust the LLMs to synthesize disparate sources.
The #1 best use case for LLMs is using them as extremely powerful fuzzy searchers on very large datasets, so stuff like hunting down published papers on topics.
Don't actually use their output as the basis for reasoning; use it to find the original articles.
For example, as a software dev, I use them often to search for the specific documentation for what I need. I then go look at the actual documentation, but the LLM is exceptionally fast at locating the document itself for me.
Basically, using them as a powerful tool to look up and find resources is key, and that's why I was able to find documentation on my pet's symptoms so fast. It would have taken me ages to find those esoteric published papers on my own; there's so much to sift through, especially when many papers cover huge amounts of info and what I'm looking for is one small piece of info in that one paper.
But with an LLM I can trim down the search space instantly to a way way smaller set, and then go through that by hand. Thousands of papers turn into a couple in a matter of seconds.
Querying the LLM is not where the dangerous energy costs have ever been. It's the cost of training the model in the first place.
The training costs effectively enter a "divide by infinity" argument given enough time.
While they continue to train models at this time, eventually you hit a point where a given model can be used in perpetuity.
Costs to train go down, whereas the usability of that model stretches on to effectively infinity.
So you hit a point where you have a one time energy cost to make the model, and an infinite timescale to use it on.
Costs to train are going up exponentially. In a few years corps are going to want a return on the investment and they're going to squeeze consumers.
I can't wait for this to happen. Any day now, surely.
The singular of data is not anecdote.
For every claim people make of how "Gen AI saved my " you can find a dozen stories of people being actively harmed by Gen AI.
Stopped clock and all that jazz.
That's irrelevant to the discussion at hand.
That's like arguing needles were a bad invention because many people use them for heroin.
People using the tool wrong to hurt themselves doesn't mean the tool is bad, it just means better regulations and education needs to be put in place.
Keep talking that way if that's what helps you be happy with yourself when looking in a mirror.
The truth is that degenerative AI has no useful business model, is quite literally burning up the planet while having no viable business model, is killing economies while it has no useful business model, and is in general dumbing down the world, all while having no prospects for ever being a viable business.
I'm glad that your story about your dog is enough for you to burn down the planet with a clear conscience.
Buh-bye.
I hope you don't use any of the other standard quality-of-life features that consume substantially more power per day, then.
There's plenty of stuff you likely take for granted every day that you use, that burn way more fossil fuels than training GPT took.
GPT did cost a lot of power, but if you put it beside other fairly standard day-to-day things people tend to take for granted, it's a drop in the bucket.
The list goes on and on, ESPECIALLY your clothes dryer; that thing uses a massive amount of power.
People seriously underestimate how much power the internet uses overall. GPT's training provides a concrete, discrete, measured amount of power one specific thing used.
Whereas the internet, as a whole, over one day, uses way more power than all of GPT's training took in total. The issue is that "the internet" has its power consumption distributed broadly across the entire globe, in a manner that makes it basically impossible to measure how much "total" power you are burning just browsing the web.
But it's non-trivial. Every switch between you and your destination is easily burning in the range of 150 watts, every router easily 80 watts, etc.
And there are dozens of those between you and one given destination. Routing your packets from your machine across countries, and getting a response back, takes a non-trivial amount of power. There are often 8 to 15 hops between you and the destination, and every single hop tends to have multiple machines involved in that one single packet.
It's easy to handwave that enormous power consumption away because, well, you can't see it. You aren't privy to how much power your ISP burns every day, how much power the nameservers use, etc.
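The per-hop figures in this thread can be turned into a rough sketch. The 150W/80W device draws and the 8–15 hop range are the commenter's estimates, and the one-switch-plus-one-router-per-hop simplification is an assumption of mine; note this is the total draw of shared hardware along the path, of which any single user's traffic accounts for only a tiny fraction:

```python
# Rough sketch of always-on network hardware along one packet's path.
# Device wattages and hop count are estimates from the comment above;
# "one switch + one router per hop" is an assumed simplification.
SWITCH_W = 150   # estimated switch draw, watts
ROUTER_W = 80    # estimated router draw, watts
HOPS = 12        # within the 8-15 hop range mentioned

path_draw_w = HOPS * (SWITCH_W + ROUTER_W)
print(f"~{path_draw_w} W of shared hardware along a {HOPS}-hop path")
```

Under these assumptions that is roughly 2,760 W of always-on gear touched by a single request, though that capacity is shared by thousands of concurrent users at any moment.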
GPT is a non-trivial chunk of power... but it's not THAT much compared to all the other shit going on on the web; it's genuinely just a tiny drop in the bucket.
You are extremely naive if you think using GPT makes any kind of notable shift in your total carbon footprint; it doesn't even move the needle.
If you actually wanna pick something as a real target for reducing your carbon footprint, the two biggest contenders are: