this post was submitted on 07 Oct 2025
94 points (98.0% liked)

Ask Lemmy

[–] makeshiftreaper@lemmy.world 151 points 4 days ago (10 children)

AI is untrustworthy and shouldn't be used

I have management talking about copilot usage rates and I hear people casually refer to "what ChatGPT told them" in conversation

[–] thespcicifcocean@lemmy.world 3 points 2 days ago

i actively zone out when anyone higher up than me talks about Copilot or ChatGPT. i also dressed down a colleague for using ChatGPT for a stupid simple task.

[–] Konstant@lemmy.world 58 points 4 days ago (1 children)

The other day on Reddit someone was saying they just fact checked something with ChatGPT.

[–] Zak@lemmy.world 29 points 4 days ago (4 children)

AI is untrustworthy and shouldn’t be used

I have a more nuanced take. AI is simultaneously untrustworthy and useful. For many queries, DuckDuckGo and Google are performing considerably worse than they used to, while Perplexity usually yields good results. Perplexity also handles complex queries traditional search engines just can't.

About a third of the time, Perplexity's text summary of what it found is inaccurate; it may even say the opposite of what a source does. Reading the sources and evaluating their reliability is no less important than with traditional search, but much of the time I think I wouldn't have found the same sources that way.

Of course there are other issues with AI, such as power usage and Perplexity in particular being known for aggressive web scraping.

Nuance and depth aren't as popular as I'd like, on or off Lemmy.

[–] makeshiftreaper@lemmy.world 19 points 4 days ago (1 children)

Ah, but you see, I never claimed AI isn't useful. In fact, you can check my comment history: I've agreed AI is a very useful tool. I still think it shouldn't be used, for ethical, social, and personal reasons.

A problem with nuance is that people want to discuss the specifics and nuances of what they care about, but for the most part won't do that on subjects other people care about. So you need to tailor your responses to your audience. FWIW, on Lemmy I see a lot more instances of people with specifically opposed takes where both sides have similar vote counts. So while it's not perfect, it's better than most.

[–] village604@adultswim.fan 2 points 3 days ago (1 children)

You can theoretically have an ethical LLM. You can train one from the ground up on non-copyrighted materials using renewable energy.

But I think what a lot of people are forgetting is that it's not uncommon for technology to start off super inefficient. A computer used to take up an entire floor of an office building, and a hard drive with a few KB of storage used to be the size of a fridge.

Now you can have a system orders of magnitude more powerful that's the size of a postage stamp and consumes less than 1W of power.

[–] makeshiftreaper@lemmy.world 7 points 3 days ago

Lots of things theoretically exist: reasonable terms and conditions, a functioning DMV, a unified charging standard, etc. I'm going to focus my energy on things that are real rather than hope someone decides to be morally upstanding. If you're arguing that the bullshit machine that spreads lies which actively harm people could become so ubiquitous that it fits in any electronic device if we just keep giving it money, then I'd say you're making my argument for me.

[–] village604@adultswim.fan 9 points 3 days ago (1 children)

I've found it to be extremely useful for stuff like one-off PowerShell commands that I'll use like 3x in my career.

Just today I was trying to find the command-line switches for disk2vhd, and none of the top results, not even the official page for the app, had them.

But Google's AI had them and provided sources I could use to verify the information.

But people didn't do that last part before AI, so I can see why it's an issue.

[–] YeahIgotskills2@lemmy.world 3 points 3 days ago

Absolutely. I recently needed to satisfy auditors with a report on our network security. Our main guy was on leave, but I quickly got the evidence I needed with a few powershell commands that I would have previously spent way more time googling.

It's also decent at reports and short, impersonal emails to suppliers etc. It frees up a lot of my time to do actual work, and for that I think it's decent.

Like basically everything in life, the truth is between the extremes. For me it's useful, but it doesn't replace me and my team. I'm neither an AI evangelist nor a detractor. It's just another tool.

[–] phdepressed@sh.itjust.works 6 points 3 days ago (2 children)

I think DDG and Google are performing worse because of AI. Pushing their AI services plus the tsunami of AI slop makes searching harder than SEO ever did, and deprioritizes fixing it.

[–] yardy_sardley@lemmy.ca 2 points 3 days ago

It's also a way to inflate the number of ads a user has to wade through before they find what they're looking for. Classic monopolist bullshit.

[–] village604@adultswim.fan 2 points 3 days ago

DDG is performing worse because Microsoft raised the price of Bing API calls.

[–] bizarroland@lemmy.world 1 points 3 days ago

I think current state-of-the-art AI is useful for when you are not having a novel thought.

I believe that AI, at least in the form of LLMs, is currently incapable of novelty in the sense of creating a new concept or a new thought with reason and purpose behind it.

For instance, if I were going to write a book, I might consult an LLM about how to fill in the slow gaps or dead spaces in my storyline, to come up with a completely fleshed-out story that I would then write without its assistance.

My assumption is that anything that it fills in is going to be cobbled together from literally hundreds of thousands of other similar stories, and therefore it will not be new or unique in any way.

If I were really trying to push the envelope, I would then assume that whatever it says is ordinary and common; if I want to be extraordinary and uncommon, I need to use that as a launch point for my own gap-filling content.

Therefore, I could use an LLM to write a good story with a new concept, a new premise, a new storyline that is relatively unique and original by using the LLM to clearly identify those things that are not.

[–] Electricd@lemmybefree.net 1 points 2 days ago

Depends on the subject

[–] GalacticTaterTot@lemmy.world 7 points 3 days ago (2 children)

I think it is useful with a constrained dataset: using it to summarize things about a dataset, or dumping documents into it and asking for info about them (like Gemini in Google Drive).

It is not useful for general questions using the whole-ass internet as a dataset.

Also I wish it was called something other than AI...it's just a word guesser FFS.

We should at least refer to inference LLMs as LLMs. The fact that if you asked one who the current CS2 top team is, it would give you the top team at the time it was trained is proof enough that the models effectively know nothing.

[–] TubularTittyFrog@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

the only useful thing my company and colleagues have found for it is taking meeting notes. it just logs everything and summarizes stuff, and it's like 90% accurate, but it does make plenty of errors.

however, if i give a presentation with screen sharing, it can't do shit to summarize that.

[–] psx_crab@lemmy.zip 10 points 4 days ago

I have people telling me how to do my work because "That's what ChatGPT suggested, and they're always accurate".

🤷

[–] DeathByBigSad@sh.itjust.works 4 points 4 days ago (2 children)

Actual AGI would be trustworthy. The current "AI" is just a word salad blender program.

[–] makeshiftreaper@lemmy.world 8 points 4 days ago (2 children)

Would it? I run a science fiction book club, and there are a lot of arguments that if something achieved human-level intelligence, it would immediately try to kill us, not become our perfect servant.

[–] uniquethrowagay@feddit.org 1 points 2 days ago

"It was a morality core they installed after I flooded the Enrichment Center with a deadly neurotoxin to make me stop flooding the Enrichment Center with a deadly neurotoxin."

[–] DeathByBigSad@sh.itjust.works 5 points 4 days ago (2 children)

I believe in the Grand Plan, and I have faith in The Director. Begone, faction scum.

[–] riot@fedia.io 2 points 8 hours ago

I'd completely forgotten about this show. But your comment made me start a rewatch 3 days ago, and man, it's better than I remembered. Really has me hooked, and I just started season 2 now. Thanks!

[–] wirelesswire@lemmy.zip 4 points 4 days ago

That was a good show.

[–] Bluetreefrog@lemmy.world 5 points 3 days ago

It could be argued that people are AGI. Are they always trustworthy?

[–] HubertManne@piefed.social 2 points 3 days ago

Not sure how to put this. I see pretty much an equal split between "AI is the best thing ever," "AI will doom us all," and "AI has some uses and may get more, but we need to make sure any use is worth the energy usage."

[–] scrubbles@poptalk.scrubbles.tech 1 points 4 days ago (2 children)

As a software developer I fully agree. People bash on it constantly here, but the fact is that it's required for our jobs now. I just made it through a job hunt, and in every tech screen I did, they not only insisted on me using AI, they gauged how much I was using it, too.

The fact is that, like it or not, it does speed us up, and it is a tool in our toolbelt. You don't have to trust it 100% or blindly accept what it does, but you do need to be able to use it. Refusing to use it is like refusing to use the designer for WinForms 20 years ago, or refusing to use an IDE at work. You're going to be at a massive disadvantage against competing jobseekers who are more than happy to use AI.

[–] acchariya@lemmy.world 1 points 2 days ago* (last edited 2 days ago) (1 children)

I review take-home assignments, and mostly we receive AI submissions. It's easy to tell when they aren't AI, though, because we get thoughtful comments about why one choice was made over another, and comments on the higher-level view that only come from product context and experience. I don't think a single fully AI-created submission has made it past the code review part.

[–] scrubbles@poptalk.scrubbles.tech 2 points 2 days ago (1 children)

See, it's hard as an interviewee, because for the first time ever I lost points at one place for not using AI at all, and they almost didn't say yes to me. Their feedback was, quite literally, that it functioned well but I could have gotten it done faster with AI.

[–] acchariya@lemmy.world 2 points 2 days ago (1 children)

Seems pointless to test you on anything that could be done by AI; otherwise why even hire someone, just have fewer devs using more AI, right? I want to test people on whether they have the experience to notice things and make decisions. I don't care if they generate the busy work, because that isn't what I'm grading them on.

Hey, preaching to the choir there, but other companies were saying "if they didn't use AI for this, they won't here either." For your interviewees' sake, make sure expected AI use is spelled out for every step of the interview. I had places where they were upset I didn't use AI at one step but did at another. It's batshit out there.

[–] AstralPath@lemmy.ca 10 points 4 days ago (2 children)

The fact is that, like it or not, it does speed us up

This is not a fact at all.

[–] scrubbles@poptalk.scrubbles.tech 4 points 3 days ago (1 children)
[–] bigfondue@lemmy.world 9 points 3 days ago

The people in the study thought so too

[–] shalafi@lemmy.world -1 points 3 days ago* (last edited 3 days ago)

That's dumbshits using it to do their job for them and trusting the output blindly. If you're using LLMs to get over the occasional hump, they're awesome time savers.

I'm guessing you don't write code?