this post was submitted on 15 Sep 2025
Technology
These kinds of questions are strange to me.
A great many people are using them voluntarily; many others are using them because they don't know how to avoid them and feel they have no alternative.
But the implication of the question seems to be that people wouldn't choose to use something that is worse.
To make that assumption, you first have to assume that they know what is qualitatively better and worse, that they have the skills or the opportunity to opt in or opt out, and that they choose their tools based on which one is better.
I don't think you can make any of those assumptions. In fact I think you can assume the opposite.
The average person doesn't know how to evaluate the quality of the information they receive on topics outside their expertise.
The average person does not have the technical skills needed to engage with non-AI-augmented systems, even presuming they want to.
The average person does not choose their tools based on which is most effective at arriving at the truth, but on which is the most usable, user-friendly, convenient, widely accepted, and inexpensive.
Isn't that what you yourself are doing, right now?
Yes, because people have more than one criterion for deciding whether a tool is "better."
If there was a machine that would always give me a thorough well-researched answer to any question I put to it, but it did so by tattooing the answer onto my face with a rusty nail, I think I would not use that machine. I would prefer to use a different machine even if its answers were not as well-researched.
But I wasn't trying to present an argument for which is "better" in the first place, I should note. I'm just pointing out that AI isn't going to "go away." A huge number of people want to use AI. You may not personally want to, and that's fine, but other people do and that's also fine.
A lot of people want a good tool that works.
This is not a good tool and it does not work.
Most of them don't understand that yet.
I am optimistic enough to think that they will have the opportunity to find that out in time to avoid being walked off a cliff.
I'm optimistically predicting that when people find out how much it actually costs and how shit it is, they will redirect their energies to alternatives, if there are still any alternatives left.
A better tool may come along, but it's not this stuff. Sometimes the future of a solution doesn't just look like more of the previous solution.
For you, perhaps. But there are an awful lot of people who seem to be finding it a good tool and are getting it to work for them.