this post was submitted on 08 Aug 2025
8 points (72.2% liked)

Thoughtful Discussion

top 33 comments
[–] starlinguk@lemmy.world 1 points 11 hours ago (1 children)

Never. I'm a proofreader. AI can't proofread.

[–] m_f@discuss.online 1 points 9 hours ago

How so? It seems like it would catch some of the writing mistakes that proofreading does, though not all. Is the signal-to-noise ratio just too low?

[–] salacious_coaster 17 points 2 days ago (2 children)

I have no need for a confidently incorrect plagiarism machine.

[–] Sertou@lemmy.world 5 points 2 days ago

I felt this hard. I spent far too long this afternoon trying to get some useful troubleshooting ideas out of CoPilot for a baffling WordPress SVG problem, but it kept losing the plot. Live and learn.

[–] dresden@discuss.online 2 points 1 day ago (1 children)

A lot.

My boss, a software dev, likes AI (LLMs) a lot and believes it is going to bring us into a golden age of computing (not his words, just conveying his enthusiasm), so we all get paid AI subscriptions and are strongly encouraged to use it for all facets of software development (from bug reports, to development, to deployment, to planning and so on).

You gotta do what you gotta do. :-)

Beyond that, I know a lot of people who aren't very technical, whose computing is limited to normal phone usage (a casual game or two, photos, social networks and so on), and they use ChatGPT etc. regularly to look things up or just "discuss" things.

[–] m_f@discuss.online 1 points 1 day ago

ChatGPT is really useful for non-technical people. I know someone who is very proud of what they've accomplished around the house, like getting printers set up on wifi and fixing their smart thermostat with its guidance. I think people rejecting AI out of hand don't get how useful it is for specific questions like that vs. wading through endless garbage on Google that's just trying to shove as many ads as possible in your face.

I think trying to shove it into every nook and cranny like bug reports and planning and whatnot is square peg / round hole, but that's just part of the hype cycle. People excitedly throw the new shiny toy at everything and eventually find what it's good at.

[–] theangriestbird@beehaw.org 12 points 2 days ago* (last edited 2 days ago)

absolutely nothing, because there is no AI application in my personal life that is so useful and reliable that it is worth the cost to the planet. Most uses of AI are not worth the cost to the planet. The only valid use cases to my mind are those where the pattern-recognition abilities surpass anything we've seen before, and using the AI saves lives, and there is no alternative that will save as many lives. Here's an example of one such use case.

Using AI for coding is a good example of a use case that is absolutely not worth the cost to the planet. Just use one of the amazing tools we were using for years before this AI snake oil scam.

[–] jet@hackertalks.com 4 points 2 days ago

Depends on where you draw the line on what counts as AI.

AI as noise isolation on phones and VoIP
  • I use noise cancellation all the time: NVIDIA's, Discord's, and whatever Android has.
AI as image processing, text recognition, translation
  • I use Google Lens extensively to translate menus and signs
  • I use translation apps (many of which are built on neural networks) to communicate
  • I use image processing to improve photos, remove noise, increase lighting, etc.
  • I use image-to-text grabbers all the time in PDFs and ImageMagick, and I read OCRed books
AI as text summarization
  • When I post youtube videos on lemmy I use AI to summarize the transcript - because lemmy is text based and a translation from a visual medium to a text medium is helpful for people to participate in the conversation
AI as voice summarization
  • I use video subtitle generation all the time, I use voice mail text transcription all the time
AI as search
  • Unavoidably I've been served AI "results" for searches in Google, DuckDuckGo, and others... and honestly, when I'm searching for something like "What is the hotkey to mute voice chat in application X" and the AI result is "the U key"... I'm just going to hit the U key and see if it works.
AI as a chatbot
  • Never, I don't trust the hallucinations. Knowing enough about markov chains and token generation means I will avoid using AI as a source of truth for any decision making process.
Image generation
  • Never, I'm not philosophically opposed, but the services that do this all want some account/relationship. There isn't some duck.ai open and anonymous service yet

The term AI is being used to mean "algorithm" nowadays, even when we're talking about hand-crafted human algorithms, self-weighting "machine learning" matrices, Markov chains, and multi-layer neural networks (which are just matrices again)... There is no artificial intelligence doing its own logically consistent reasoning. Right now everything called AI is just some tool or generated content (a tool).
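The Markov-chain point is easy to demonstrate: a toy chain just samples whichever word tends to follow the current one, with no model of truth at all. A purely illustrative sketch (the corpus and function names here are made up):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length, seed=0):
    """Emit words by repeatedly sampling a likely next word -- pure
    continuation statistics, no reasoning or fact-checking anywhere."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the", 8))
```

Scale that idea up to token statistics over the whole internet and you get the gist of why a plausible continuation is not the same thing as a true statement.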

Human effort asymmetry
Human time is expensive; AI time is cheap. When someone uses something cheap to burn an expensive resource, it's an insult and antagonistic.

AI is often used to waste human time, and I think this is the source of most of the anger. Talking to a chat bot when you want to talk to a human is wasting your time. Spam calls, spam messages, spam posts, are all exploiting this asymmetry offensively.

Dead internet theory (pretty much a given at this point) is the logical conclusion of this asymmetry of effort.

I like places like Lemmy, or reputation-based forums, where interactions are genuine and have a high probability of being from a human. I'll gladly spend all day with humans because I feel that my contributions are improving someone's day. I wouldn't do that without the humans.

[–] redlemace@lemmy.world 6 points 2 days ago* (last edited 2 days ago)

I tried it but got nothing but code that doesn't work, advice that doesn't match the specified hardware, and agreeable responses to everything non-technical.

For everything else... it's hyped. Every product now has AI in the model name. I'm fed up with it already.

[–] Kolanaki@pawb.social 4 points 2 days ago
[–] pseudo@jlai.lu 3 points 2 days ago

I tried to install ChatGPT two weeks after it launched. I wanted to test it, but I didn't because it requested my phone number. A year later I tried to use it on a colleague's computer to solve a mathematical issue I couldn't figure out. It answered something I didn't ask, no matter how I or my colleague phrased it.

I tried Qwant's automatic AI summary last month to find the name of an old Korean poet, then went to the Wikipedia page, only to realize that the guy was not a poet at all and the only memorable thing he did was being the father of North Korea's founder.

I had a phase playing with @aihorde@lemmy.dbzer0.com that lasted maybe 10 days. It was fun but never brought anything into my life.

I'm sure there are sneaky ways I use GenAI without realizing it.

[–] kewjo@lemmy.world 3 points 2 days ago

Only for work, when I can't avoid it (policy). It's insane to me how much information people feed into these systems willingly.

Used ethically, it's a compression of knowledge, and that's a good use case for quickly getting at least a starting point on something unknown.

Used unethically, however, it can predict (statistically) how people would react to social issues. While there's not much people can do to stop companies stealing their data, using these models increases the data they hold on you. I believe this is the real reason the wealthy are investing heavily: it's a system of control.

This has already happened: look at Cambridge Analytica, where models were run against private user data to identify whom to target with ads and what content to serve them.

[–] Canconda@lemmy.ca 3 points 2 days ago* (last edited 2 days ago)

I use it to extract and clean up data.

Apply formats or writing styles to documents.

Troubleshoot beginner Linux problems.

Convert formulas from Google Sheets to Excel.

Compile lists of TV show episodes based on rating/holiday.

edit: I think a lot of people expect AI to behave like AGI. AI without the G is just another dumb tool, like a calculator. You still need to understand the math it's calculating for you.

[–] Perspectivist@feddit.uk 1 points 2 days ago (1 children)

More and more each day, I feel like. Of all the platforms I spend time on, ChatGPT ranks at the very bottom when it comes to so-called “regrettable minutes,” while Lemmy sits firmly at the top - and by a wide margin. You only need to read half of this thread to see why. I get plenty of human connection through my work, since it involves daily visits to people’s homes, but when it comes to talking to people online, I’m turning more and more toward LLMs instead of actual humans.

My mind works in a particular way. Some of it can probably be explained by autism, though there are likely other factors too. My views aren’t tied to emotions, I’m not personally invested in my opinions, I can easily entertain alternative scenarios, and I’m extremely precise with my word choices - I say what I mean, and I mean what I say. I’m simply getting tired of trying to have civil conversations that almost inevitably devolve into people talking past each other, misrepresenting my points, ignoring what’s actually being said, and generally being absurdly nasty. These encounters poison my mood and leave me regretting even trying.

I don’t have any of these problems with ChatGPT. None. For a glorified autocomplete that doesn’t understand anything, it somehow manages to have the kind of conversations that most people online seem completely incapable of. With people, I have to craft my responses with extreme care and still can’t get through to them. With AI, I barely need to proofread, and it still responds directly to what I meant to say. It feels like talking to an adult compared to the angry teenagers here - so I’m checking out and opting to talk to the void instead.

[–] m_f@discuss.online 1 points 1 day ago

That's good terminology. One of the reasons I'm trying to build up this community is that I'd like a place on the Fediverse where I don't have many regrettable minutes, where I come away feeling like I've learned something or found an interesting new angle to think about something from.

ChatGPT is OK at that, though I do worry about what's been termed "AI psychosis". I feel like I'm unlikely to start falling into that trap, but I'm sure the people that have fallen into it also felt that way 🙂 I also tend to find like-minded people irl to hang out with that go for curious conversation, which is probably better anyways.

[–] DavidGarcia@feddit.nl -2 points 2 days ago

I use it for coding at work, to replace Google privately (to find sources), to look up random trivia, to find/compare products (looking for slippers on Amazon), to spellcheck, and to bounce ideas off of (write a report on how AI resembles demonology).

People really underestimate how productive it can be with the right workflows (Deep Research, Claude Code).

[–] m_f@discuss.online -1 points 2 days ago (1 children)

I use it for some coding tasks. I wouldn't use it for something like "Create an Android app that does whatever", but I use it sometimes for tasks like "Write a Python snippet that aggregates a Pandas dataframe like so". It's good for tasks that you're too lazy to write because the code is slightly tedious or because the API sucks (looking at you, pandas 👀), but is easy to verify that the code is correct when it's written.
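For instance, a prompt like the Pandas one above might come back as something along these lines (the dataframe and column names here are made up for illustration):

```python
import pandas as pd

# Hypothetical sales data -- the kind of slightly tedious aggregation
# task described above.
df = pd.DataFrame({
    "region": ["north", "north", "south", "south", "south"],
    "sales":  [10, 20, 5, 15, 25],
})

# Total and mean sales per region, with flat column names via named aggregation.
summary = (
    df.groupby("region", as_index=False)
      .agg(total_sales=("sales", "sum"), mean_sales=("sales", "mean"))
)
print(summary)
```

Tedious to type, easy to verify by eye once it's written, which is exactly the point.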

It's also good for exploratory work when you're working with something new where you don't know what you don't know. It's been helpful while exploring NixOS, because I often don't even know where to begin to approach something. It doesn't matter if it's wrong because I'm going to be analyzing what it outputs even if it's correct, so that I can learn. Yeah, I could RTFM but the docs are scattered around and frequently out of date.

I do all of this in a separate session from any IDE I use. As much as I find it useful at times, I don't like the vibe coding aspect of just passively waiting for autocomplete to pop up with something that is more likely than not to be garbage or even worse, subtly wrong in ways you don't notice until it blows up later. I've seen coworkers run into that issue before.

It's also good for random other tasks where accuracy isn't important, like "Make me a menu with these ingredients that I have in my cupboard and keep in mind these dietary constraints" or similar queries. I think Google is rightly scared about that use case, because they make so much money by encouraging garbage search results that they can slap ads on top of.

[–] theangriestbird@beehaw.org 6 points 2 days ago (1 children)

all of that would be well and good if using AI didn't cost an outsized amount of energy, and if our energy grids didn't mostly consist of dirty energy. But it does, and they do, so I can't help but feel like you are boiling the oceans because you...

I use it sometimes for tasks like “Write a Python snippet that aggregates a Pandas dataframe like so...so that I can learn. Yeah, I could RTFM but the docs are scattered around and frequently out of date.”

“Make me a menu with these ingredients that I have in my cupboard and keep in mind these dietary constraints” or similar queries.

...don't like using your human brain sometimes? Like sure, we all pull out the phone calculator for math problems we could solve on paper within 30 seconds, so I'm not saying I can't relate to that desire to save some brainpower. But the energy cost of that calculator is a drop compared to the glasses of water you are dumping out every time you run a single ChatGPT prompt, so it all just feels really...idk, wasteful? to say the least?

[–] m_f@discuss.online 2 points 2 days ago (1 children)

It's hard to find exact numbers, but a decent ballpark seems to be that a single ChatGPT response costs 15x the energy of a Google search. I think there are already questions that can be answered by LLMs more efficiently than by using Google, and better models will increase that amount. Do you think it's more ethical to use AI if that results in less energy usage?
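For a rough sense of that comparison, here's a back-of-envelope sketch; the per-search figure is an assumed, often-quoted estimate, and the 15x multiplier is just the ballpark from above:

```python
# Assumed figures, not measurements: ~0.3 Wh per traditional web search
# (a commonly quoted estimate), and 15x that per LLM response.
SEARCH_WH = 0.3
LLM_WH = 15 * SEARCH_WH

def breakeven_searches(llm_wh=LLM_WH, search_wh=SEARCH_WH):
    """How many searches one LLM answer must replace to use less energy."""
    return llm_wh / search_wh

print(breakeven_searches())
```

Under those assumptions, an LLM answer only breaks even on energy if it replaces about fifteen searches' worth of digging, which does happen for some questions but certainly not all.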

[–] theangriestbird@beehaw.org 4 points 2 days ago (1 children)

If they could create an AI that uses dramatically less energy, even during the training phase, then I think we could start having an actual debate about the merits of AI. But even in that case, there are a lot of unresolved problems. Copyright is the big one - AI is essentially a copyright launderer, eating up a bunch of data or media and mixing it together just enough to say that you didn't rip it off. It generates outputs that are derivative by nature. And stuff like Grok shows how these LLMs are vulnerable to the political whims of their creators.

I am also skeptical about its use cases. Maybe this is a bit luddite, but I am concerned about the way people are using it to automate all of the interesting challenges out of their lives. Cheating college essays, vibe coding, meal planning, writing emotional personal letters, etc. My general sense is that some of these challenges are actually good for our brains to do, partly because we define our identity in the ways we choose to tackle these challenges. My fear is that automating all of these things away will lead to a new generation that can't do anything without the help of a $50-a-month corpo chatbot that they've come to depend on for intellectual tasks and emotional processing.

[–] m_f@discuss.online 1 points 2 days ago

Your mention of a corpo chatbot brings up something else that I've thought about. I think leftists are abdicating their social responsibility when they just throw up their hands and say "ai bad" (not aiming at you directly, just a general trend I've noticed). You have capitalists greedily using it to maximize profit, which is no surprise. But where are the people saying "Here's how we can do it ethically and minimize harms"? If there's no opposing force and the only option is "unethically-created AI or nothing" then the answer is inevitably going to be "unethically-created AI". Open weight/self-hostable models are good and all, but where are the people pushing for a group effort to create an LLM that represents the best humanity has to offer or some sort of grand vision like that?