this post was submitted on 14 Mar 2026
91 points (86.4% liked)


You can take "justifiable" to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.

(page 2) 50 comments
[–] cyberpunk007@lemmy.ca 5 points 2 weeks ago* (last edited 2 weeks ago)

Yes. I suck at the conversational part of emails in certain scenarios, and having a sounding board to bounce off of helps. I still know when it spews out things I'm not quite a fan of, but it does do the heavy lifting for me.

Even so, still not a fan overall. It's like launching a nuke at a country to kill a rat. It's so bad for the environment, our brains, and our independence (in terms of hardware ownership, because... well, y'all know).

I guess my tl;dr is it's not truly worth it.

[–] fizzle@quokk.au 4 points 2 weeks ago

Anyone who gets paid according to their productivity, like self employed people, is going to "justify" the use of Gen AI if it genuinely makes them more productive.

No one is going to voluntarily reduce their income by even 10% just so they can say they don't use Gen AI.

I'm in this category, but honestly there are few situations where I've found it to be sufficiently helpful.

However, I think it's possible that more mature implementations of the tech we already have could change that.

For example, I don't think voice assistants have reached their potential yet. If they weren't always listening trying to figure out what to sell me, and they had a better range of actions they could perform, I might find myself using one more regularly.

[–] goat@sh.itjust.works 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

It's as useful as a rubber duck. Decent at bouncing ideas off it when no one is available, or you can't be bothered to bother people about dumb ideas.

But at the moment, no, it's not justifiable as it directly fuels oligarchies, fascism in the US, and tech bros. Perhaps when the bubble pops.

[–] epicshepich@programming.dev 2 points 2 weeks ago (4 children)

What about a self-hosted instance?

[–] AA5B@lemmy.world 2 points 2 weeks ago (1 children)

To do what? I’m fairly optimistic about narrower LLMs embedded into tools. They don’t need to be as comprehensive, so they're more easily self-hosted. For more complex tools, they can tie together search, database queries, and reporting, and make it easier to find a setting when you don’t know the tool's terminology for it.

I’ve had some luck self-hosting a small AI to interpret natural-language voice commands for home automation.

[–] epicshepich@programming.dev 2 points 2 weeks ago

Yeah, all of your use-cases are what I see as positive use cases for LLMs. I've got an Ollama instance hooked up to Home Assistant, but it does not work very well haha. Haven't had the time to troubleshoot it.
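(A rough sketch of the kind of setup being described here - a small self-hosted model turning a voice command into a structured intent. The endpoint and port below are Ollama's defaults; the model name, prompt, and JSON keys are illustrative assumptions, not this commenter's actual configuration.)

```python
# Hypothetical sketch: ask a locally hosted Ollama model to turn a spoken
# command into a structured home-automation intent. The endpoint and port
# are Ollama's defaults; the model name and prompt are assumptions.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def parse_command(utterance: str) -> dict:
    prompt = (
        "Convert this voice command into JSON with the keys "
        "'action', 'device', and 'location'. Reply with JSON only.\n"
        f"Command: {utterance}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3.2",   # any small local model would do here
            "prompt": prompt,
            "format": "json",      # ask Ollama to constrain output to JSON
            "stream": False,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # The generated text comes back in the "response" field.
    return json.loads(resp.json()["response"])

if __name__ == "__main__":
    print(parse_command("turn off the kitchen lights"))
```

(A setup like this would still need to be wired into Home Assistant, e.g. through its conversation integration, to actually control anything; the sketch only covers the intent-parsing step.)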

[–] flamingos@feddit.uk 4 points 2 weeks ago (2 children)

Modern Vocaloid is generative AI, and I think making a song with Hatsune Miku is justifiable.

[–] kreynen@kbin.melroy.org 4 points 2 weeks ago

@58008@lemmy.world I recently read a developer compare AI to lead pipes or asbestos... something that seemed cost-effective at the time but ended up being a REALLY bad idea. Communities are already realizing that the power and water required for this are not compatible with human life in the same place, and the market is reflecting the cost of increasing electricity production.

Being "off grid" was something only peppers did, but as connection fees increase and battery technology improves, it makes less financial sense to keep residential homes connected to subsidize data center consumption.

Elon's workaround for the lack of cheap electricity for his data centers has been methane. While the US is a top methane producer, the next three countries are Russia, Iran, and China. The cost of methane is impacted by global conflict the same way gasoline prices are.

While the efficiency of data centers will increase, so will awareness of the impact these facilities have on the places they are built and of the toxic e-waste they generate, driving up their costs.

[–] Th4tGuyII@fedia.io 4 points 2 weeks ago

In its current state, no... Saying it's terrible for the environment and the wider economy is an understatement, and the tech industry is so desperate to recoup its money on AI that it has allowed it to work its way into everything - often enshittifying things in the process.

When this bubble eventually reaches its limit and bursts, I imagine these AI tools will be forced to recede into their actually useful and profitable niches - and that's when they'll start being justifiable to me at least.

[–] MerryJaneDoe@lemmy.world 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

It's not ready for commercial use by the general public.

We see this ALL the time in America - a new disruptive technology emerges. We jump all over the benefits and the profits without regard to consequences or expense. We suffer.

New cheap pesticide? Hell yeah, spray that DDT everywhere, it's super effective! (Insert other endless examples here, from microplastics to asbestos.)

AI (and information technology in general) has shown itself to be a danger to human beings. Its effects are not felt so much in the short term (5 or 10 years) but generationally. We've seen that information technology has already impacted quality of life. It's used as spyware, as a tool to collect and correlate massive amounts of data. It's used to shape our media experience, our purchasing, our social circles. There are great things, like online banking. But they seem more and more to be outweighed by a loss of humanity. So much misinformation that I question my own reality some days.

What we call "AI" is the evolution of these obtrusive, coercive practices. It exists purely to replace human thinking skills. I've spent a bit of time in r/teachers over the last 15 years, and the stories keep getting worse. The rise of AI means that detecting plagiarism/cheating is exponentially more difficult. But, more importantly, the kids don't have any stress when it comes to cheating. They don't have to find a friend or know the bare minimum. They can just...cheat. And they never learn to problem solve or overcome adversity.

None of this matters, though. Ready or not, here we are. A new kind of slavery for a new world order.

[–] ImmersiveMatthew@sh.itjust.works 3 points 2 weeks ago (1 children)

You raise many good points, but social media also has benefits and is not all just negative. Same with AI and all tech. We are better off overall with tech despite the downsides which we should be doing a better job of mitigating.

[–] irelephant@lemmy.dbzer0.com 3 points 2 weeks ago (3 children)

I find LLMs useful for some things.

Sometimes the DuckDuckGo AI summary of results can include a source I was looking for that was buried in the results (I don't trust the AI much since it once literally cited Ireland's Nazi party uncritically, but it does link to sources). Formatting text is also really useful, or turning something from CSV to a Markdown table and vice versa.
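(The CSV-to-Markdown case is the same "paste it in and ask" pattern. A minimal sketch against a hypothetical local model follows; the Ollama endpoint, model name, and sample data are assumptions for illustration, not what this commenter actually uses.)

```python
# Hypothetical sketch: hand a CSV snippet to a local model and ask for a
# Markdown table back. The model name and sample data are made up.
import requests

csv_text = "name,score\nalpha,1\nbeta,2\n"

prompt = (
    "Convert the following CSV into a Markdown table. "
    "Reply with the table only.\n\n" + csv_text
)

resp = requests.post(
    "http://localhost:11434/api/generate",  # default Ollama endpoint
    json={"model": "llama3.2", "prompt": prompt, "stream": False},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated Markdown table
```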

It's also able to extract text from images a lot better than "dumb" OCRs (which I still use for basic images), and it can format it in a certain way (e.g., take this screenshot from an ebook and format it as a quizlet).

I try seeing what they're like for programming every once in a while, and my verdict is still: shit. They can do very basic stuff that's basically regurgitating functions that were written before, but not original stuff (they're very poor at making regexes). They're similarly bad at debugging, though they can sometimes point you in the right direction.

The response from AI bros on the programming thing is usually something along the lines of how you should try Claude Opus, but I am never paying a cent for any AI thing. At that point it's easier to just use my brain.

[–] Korhaka@sopuli.xyz 2 points 2 weeks ago

I'll use it at work to do stupid tasks from HR that are not worth my time. I won't verify the outputs because it isn't important.

For important things, no, I don't really use it. I've got a few locally hosted models but don't really use them.

[–] Corporal_Punishment@feddit.uk 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I work in the UK public sector and often have to respond to complaints from people who have written a 4 page rant without any punctuation.

Copilot is amazing for taking that 4 page long rant and reducing it down to something I can actually respond to.

I don't use Copilot for drafting the reply; I do that myself. But I'll use Copilot as a proofreading tool.

As far as I'm concerned, I'm responsible for creating the content and the AI helps tweak it.

[–] starlinguk@lemmy.world 4 points 2 weeks ago (2 children)

Don't use AI as a proofreading tool. Proofreading should be done by humans.

[–] entropiclyclaude@lemmy.wtf 2 points 2 weeks ago* (last edited 2 weeks ago)

I think we should be building localized, smaller, more finely-tuned LLMs.

  1. They wouldn’t require data centers.
  2. They would be forced to become more energy-efficient or resource-aware, because their costs come straight out of an organization's profit margins - forcing innovation and creativity instead of throwing data centers and billionaires at the problem.

I used AI to help with debugging and coding, as well as with exploring a theory I came up with a long time ago - and with my framework, notes, research papers, and everything else I’ve collected to support that theory, I was able to put it into practice in the AI cybersecurity tool I’ve developed.

We’ve created 26,000 new cyber-threat datasets because I had access to an LLM that could help me take the frameworks, notes, and research I’d gathered in my attempts to build this out, and within a couple of months I had something that blew my prototype out of the water.

  • There is a lot of value in these LLMs. What I’ve been exploring is on-hardware AI. Not a friend. Not a chatbot. A program that does what it’s supposed to do, and that’s it.

My startup is in cybersecurity - we use less than 1 GB of RAM, at peak maybe 30% of a single CPU core, and it was built with ethics and safeguards in mind. Not an LLM, but real machine learning plus reinforcement learning.
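(No implementation details are given here, so purely as a loose illustration of the "small, resource-light, non-LLM" idea: a classical anomaly detector along these general lines can be sketched in a few lines of scikit-learn. The features, numbers, and thresholds below are entirely hypothetical and are not this commenter's actual system.)

```python
# Loose illustration of a lightweight, non-LLM threat detector: classical
# ML over a few numeric features. All features and figures are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: bytes sent, bytes received,
# duration (s), distinct ports touched.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 800, 2.0, 1.0],
                    scale=[100, 150, 0.5, 0.5], size=(5000, 4))
suspicious = rng.normal(loc=[50000, 100, 0.1, 40],
                        scale=[5000, 50, 0.05, 5], size=(20, 4))
traffic = np.vstack([normal, suspicious])

# Small model: trains in well under a second on a single CPU core.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for flagged outliers; flagged
# events would still go to a human analyst for the final decision.
flags = detector.predict(traffic)
print(f"flagged {int((flags == -1).sum())} of {len(traffic)} events for review")
```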

To me ethics also meant resource awareness. If I’m poisoning the planet and the people then it’s not a good product.

Building smaller, more specialized local models is not only better from a cybersecurity perspective, but smaller local LLMs mean new startups to build them, a race to innovate and improve resource usage, more data privacy, smaller attack surface, no obscenely expensive API calls and overage fees…

What we should have is a symbiotic approach to AI - a partnership sort of understanding.

LLMs helped me with debugging and putting this research and theory together. And in a fraction of the time it took me to build the framework.

I pushed autonomous operation because I felt it was about giving people their time back. Providing freedom. If my cybersecurity tool can take care of 94.1% of all threats before they reach an analyst, that analyst doesn’t have to wake up at 2 AM to sift through 10,000 false positives. We do it.

Now that analyst can do what they got a degree to do - actually defend a network. Build and explore threat research and databases. Find their purpose again.

We require that a human is always in the loop, and we help protect cybersecurity jobs by ensuring that human input is always the final decision. Let our AI do the heavy lifting so you can take care of the shit that matters and what you really want to do.

Sorry, I think my ADHD took control of this conversation.

[–] Atomic@sh.itjust.works 2 points 2 weeks ago* (last edited 2 weeks ago)

I've always said I think it's fine for filler content; it can allow small teams to quickly populate their world with background stuff that you never notice. Except when it's not there.

But with great power comes great responsibility. And I don't necessarily think most can handle that.
