this post was submitted on 03 Aug 2025
409 points (86.7% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


Source (Bluesky)

[–] pixxelkick@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (2 children)

Not at a tremendously lower power cost, anyway. My laptop draws 35 W.

Five minutes of GPT genuinely consumes less energy than several hours of actively using my laptop to do the searching manually. Laptops burn non-trivial amounts of power when in use; anyone who has held one on their lap can attest they aren't exactly running cold.

Hell, even a whole day of using your mobile phone is non-trivial in power consumption; phones draw around 8~10 W too.

Using GPT for dumb shit is arguably unethical, but only in the sense that baking cookies in the oven is. Are you going to start yelling at people for making cookies? Cooking up one batch of cookies burns WAY more energy than fucking around with GPT, and yet I don't see anyone bashing people for using their ovens as a hobby.
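A rough back-of-envelope makes the scale concrete. This is a sketch, not a measurement: the ~3 Wh/query figure is an assumption (commonly cited estimates range roughly 0.3-3 Wh), and the oven and phone numbers are ballpark guesses.

```python
# Back-of-envelope energy comparison; every figure here is an assumption.
WH_PER_GPT_QUERY = 3.0      # per-query estimates range roughly 0.3-3 Wh
LAPTOP_W = 35.0             # laptop draw claimed above
PHONE_W = 9.0               # midpoint of the 8~10 W claim
OVEN_WH_PER_BATCH = 1000.0  # ~2 kW oven running ~30 min

comparisons = [
    ("3 h of laptop use", LAPTOP_W * 3),
    ("4 h of active phone use", PHONE_W * 4),
    ("one batch of cookies", OVEN_WH_PER_BATCH),
]
for name, wh in comparisons:
    print(f"{name}: {wh:.0f} Wh, roughly {wh / WH_PER_GPT_QUERY:.0f} GPT queries")
```

Under those assumptions, one batch of cookies lands in the ballpark of a few hundred GPT queries.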

There's no good argument against what I did; by every metric it genuinely was the ethical choice.

[–] jjjalljs@ttrpg.network 4 points 1 week ago (1 children)

Client-side power usage for a conventional internet search is about the same as for ChatGPT. I'm not sure why you're talking about laptop power usage.

Conventional search is less likely to lie, though.

[–] pixxelkick@lemmy.world 0 points 1 week ago (1 children)

The server-side power for 5 minutes of ChatGPT vs. the power burned browsing the internet to find the info on my own (which would take hours of manual sifting).

That's the comparison.

Even though the server-side power consumption to run GPT is very high, it's not so high that it exceeds hours and hours of laptop usage.
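To make the break-even concrete, a minimal sketch under stated assumptions (~10 queries in a 5-minute session, ~3 Wh per query; both are guesses, not measurements):

```python
# Rough break-even: how much laptop time matches a 5-minute GPT session?
# Both figures are assumptions: ~10 queries per session, ~3 Wh per query.
session_wh = 10 * 3.0  # estimated server-side energy for the session
laptop_w = 35.0        # laptop draw from the comment above

print(f"Break-even: {session_wh / laptop_w:.1f} h of laptop use")  # ~0.9 h
```

On those numbers, anything past about an hour of manual searching on the laptop burns more than the whole GPT session's estimated server-side cost.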

[–] jjjalljs@ttrpg.network 2 points 1 week ago (1 children)

Oh, I see the point you're making.

I assumed that the information was there to be found, and a regular search would have returned it. Thus it would not have taken hours.

Personally I don't really trust the LLMs to synthesize disparate sources.

[–] pixxelkick@lemmy.world 3 points 1 week ago

Personally I don’t really trust the LLMs to synthesize disparate sources.

The #1 use case for LLMs is as extremely powerful fuzzy searchers over very large datasets: things like hunting down published papers on a topic.

Don't actually use their output as the basis for reasoning; use it to find the original articles.

For example, as a software dev I often use them to find the specific documentation I need. I then go read the actual documentation, but the LLM is exceptionally fast at locating the right document for me.

Basically, using them to look up and locate resources is the key, and it's why I was able to find documentation on my pet's symptoms so fast. It would have taken me ages to find those esoteric published papers on my own; there's so much to sift through, especially when many papers cover huge amounts of material and what I'm looking for is one small piece of info in one paper.

But with an LLM I can instantly trim the search space to a far smaller set and then go through it by hand. Thousands of papers turn into a handful in a matter of seconds.
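A minimal sketch of that workflow, assuming the OpenAI Python client; the model name, prompt, and placeholder topic are illustrative, and the returned titles still need to be verified against the real sources by hand:

```python
# Sketch: use an LLM only to shrink the search space, then verify by hand.
# Assumes the OpenAI Python client; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "published papers on <symptom> in <species>"
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"List titles, authors, and journals of {topic}. "
                   "Titles only, no summaries or conclusions.",
    }],
)

# Treat the output as leads, not facts: look up each title in the actual
# journal or index and read the source yourself.
print(resp.choices[0].message.content)
```

The point of the design is that the model's output is never trusted directly; it only narrows what you read yourself.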

[–] wizardbeard@lemmy.dbzer0.com 3 points 1 week ago (1 children)

Querying the LLM is not where the dangerous energy costs have ever been. It's the cost of training the model in the first place.

[–] pixxelkick@lemmy.world 1 points 1 week ago (2 children)

Training costs effectively become a "divide by infinity" argument given enough time.

While models are still being trained today, eventually you hit a point where a given model can be used in perpetuity.

Amortized over its lifetime, the cost to train goes down, whereas the usability of that model stretches on effectively to infinity.

So you hit a point where you have a one-time energy cost to make the model and an effectively infinite timescale over which to use it.
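A toy model of that amortization argument; both constants below are made-up placeholders, not measurements of any real model:

```python
# Toy amortization: per-query energy = training / lifetime queries + inference.
# Both constants are hypothetical placeholders, not measurements.
TRAIN_WH = 1.3e9          # one-time training energy (placeholder)
INFER_WH_PER_QUERY = 3.0  # per-query inference energy (placeholder)

for n in (1e6, 1e9, 1e12):
    per_query = TRAIN_WH / n + INFER_WH_PER_QUERY
    print(f"{n:.0e} lifetime queries -> {per_query:.2f} Wh/query")
# The training term shrinks toward zero as the lifetime query count grows.
```

Whether real lifetime query counts ever get large enough for the training term to vanish is exactly what the reply below disputes.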

[–] Auth@lemmy.world 2 points 1 week ago

Training costs are going up exponentially. In a few years, the corporations are going to want a return on that investment, and they're going to squeeze consumers.

eventually you hit a point where a given model can be used in perpetuity.

I can't wait for this to happen. Any day now, surely.