Posted to Thoughtful Discussion on discuss.online, 08 Aug 2025. You are viewing a single comment's thread.
I use it for some coding tasks. I wouldn't use it for something like "Create an Android app that does whatever", but I do use it sometimes for tasks like "Write a Python snippet that aggregates a Pandas dataframe like so". It's good for code you're too lazy to write yourself because it's slightly tedious or because the API sucks (looking at you, pandas 👀), but where it's easy to verify the result is correct once it's written.
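For example, here's the sort of snippet I mean (the dataframe and column names are made up purely for illustration, not from any real project):

```python
import pandas as pd

# Toy data purely for illustration; the column names are made up.
df = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "product": ["a", "b", "a", "b"],
    "sales": [100, 250, 80, 120],
})

# The slightly tedious kind of aggregation I mean: per-(region, product)
# totals, plus each product's share of its region's sales.
summary = (
    df.groupby(["region", "product"], as_index=False)["sales"].sum()
      .assign(share=lambda d: d["sales"] / d.groupby("region")["sales"].transform("sum"))
)
print(summary)
```

Nothing hard, but exactly the kind of groupby/transform dance that's annoying to get right from memory and takes five seconds to verify by eye.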
It's also good for exploratory work when you're dealing with something new where you don't know what you don't know. It's been helpful while exploring NixOS, because I often don't even know where to begin with something. It doesn't matter if it's wrong, because I'm going to analyze whatever it outputs anyway, correct or not, so that I can learn. Yeah, I could RTFM, but the docs are scattered around and frequently out of date.
I do all of this in a separate session from any IDE I use. As much as I find it useful at times, I don't like the vibe-coding aspect of just passively waiting for autocomplete to pop up with something that's more likely than not to be garbage or, even worse, subtly wrong in ways you don't notice until it blows up later. I've seen coworkers run into that issue before.
It's also good for random other tasks where accuracy isn't important, like "Make me a menu with these ingredients that I have in my cupboard and keep in mind these dietary constraints" or similar queries. I think Google is rightly scared about that use case, because they make so much money by encouraging garbage search results that they can slap ads on top of.
All of that would be well and good if using AI didn't cost an outsized amount of energy, and if our energy grids weren't mostly running on dirty energy. But it does, and they are, so I can't help but feel like you're boiling the oceans because you... don't like using your human brain sometimes?

Like sure, we all pull out the phone calculator for math problems we could solve on paper within 30 seconds, so I'm not saying I can't relate to the desire to save some brainpower. But the energy cost of that calculator is a drop in the bucket compared to the glasses of water you're dumping out every time you run a single ChatGPT prompt, so it all just feels really... idk, wasteful? To say the least?
It's hard to find exact numbers, but a decent ballpark is that a single ChatGPT response uses about 15x the energy of a Google search. I think there are already questions that LLMs can answer more efficiently than Google can, and better models will increase that share. Do you think it's more ethical to use AI if doing so results in less energy usage?
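To put that ballpark in perspective, here's a rough back-of-envelope comparison. The per-search figure is the oft-cited ~0.3 Wh number Google published years ago; everything else is an illustrative assumption, not a measurement:

```python
# Rough back-of-envelope comparison. All figures are illustrative
# assumptions, not measurements.
GOOGLE_SEARCH_WH = 0.3                        # oft-cited per-search figure
CHATGPT_RESPONSE_WH = 15 * GOOGLE_SEARCH_WH   # the ~15x ballpark above

# Suppose a tricky question takes several refined searches to answer
# (assumed number, purely for illustration):
searches_needed = 6
print(f"{searches_needed} searches: {searches_needed * GOOGLE_SEARCH_WH:.1f} Wh")
print(f"one LLM response: {CHATGPT_RESPONSE_WH:.1f} Wh")

# Break-even: one response has to replace this many searches to come out ahead.
print(f"break-even: {CHATGPT_RESPONSE_WH / GOOGLE_SEARCH_WH:.0f} searches")
```

At those numbers one response only comes out ahead once it replaces about 15 searches (and that crude math ignores the cost of loading and reading all the result pages), which is why I said better models should tip more questions over that line.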
If they could create an AI that used dramatically less energy, even during the training phase, then I think we could start having an actual debate about the merits of AI. But even in that case, there are a lot of unresolved problems. Copyright is the big one: AI is essentially a copyright launderer, eating up a bunch of data and media and mixing it together just enough to claim you didn't rip it off. Its outputs are derivative by nature. And stuff like Grok shows how vulnerable these LLMs are to the political whims of their creators.
I am also skeptical about its use cases. Maybe this is a bit Luddite of me, but I'm concerned about the way people are using it to automate all of the interesting challenges out of their lives: cheating on college essays, vibe coding, meal planning, writing emotional personal letters, etc. My general sense is that some of these challenges are actually good for our brains, partly because we define our identities by how we choose to tackle them. My fear is that automating them all away will lead to a new generation that can't do anything without the help of a $50-a-month corpo chatbot they've come to depend on for intellectual tasks and emotional processing.
Your mention of a corpo chatbot brings up something else I've thought about. I think leftists abdicate their social responsibility when they just throw up their hands and say "ai bad" (not aimed at you directly, just a general trend I've noticed). You have capitalists greedily using it to maximize profit, which is no surprise. But where are the people saying "here's how we can do this ethically and minimize the harms"? If there's no opposing force and the only option is "unethically created AI or nothing", then the answer is inevitably going to be "unethically created AI". Open-weight/self-hostable models are good and all, but where are the people pushing for a collective effort to create an LLM that represents the best humanity has to offer, or some grand vision like that?