this post was submitted on 10 Oct 2025
69 points (93.7% liked)

Technology

75935 readers
2516 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 15 comments
sorted by: hot top controversial new old
[–] CubitOom 9 points 2 hours ago

Remember kids, if you want to look up something that you don't want the government to know about, don't use the internet to do it.

Also, LLMs are not the best source for asking about how to make things that explode.

[–] CodenameDarlen@lemmy.world 7 points 2 hours ago (1 children)

I downloaded local Llama Uncensored and it easily teaches me how to make a home made bomb, suicide methods etc...

This isn't news anymore, anyone can have access to such things.

[–] FreedomAdvocate@lemmy.net.au 2 points 1 hour ago

You don’t even need an LLM, just an internet connected browser.

[–] tidderuuf@lemmy.world 25 points 4 hours ago (3 children)

Like, every search engine would yield the exact same results. It doesn't mean the average person would have the means or necessary requirements to develop it.

Do these morons think that because someone uses ChatGPT it magically gives access to those materials to make a bomb?

[–] kadu@scribe.disroot.org 15 points 4 hours ago (3 children)

This is actually a marketing approach.

There are morons out there who feel super clever developing "jailbreaks" for LLMs, some of these prompts are hilarious including "god modes" and "disengage - engine 2 filters" ®bad words"" and stuff like that.

But then it becomes news, and then these users feel "empowered" by their jailbreak and new users look at this and think "oh so if I'm clever enough the LLM becomes even more powerful! I'm clever, so I'm going to try it!" which is ultimately what OpenAI wants.

You can't "bypass the system prompt" because that's not how it works. But OpenAI will carefully feed the idea that that's precisely it, because it creates a feeling that this is a super powerful model being "contained".

Again, it's marketing. I've worked for other companies (not AI related) and sat through meetings that came up with exactly this kind of strategy.

[–] jrs100000@lemmy.world 1 points 26 minutes ago

Yea but its not end uses being targeted, its investors.

[–] Semicolon@lemmy.world 4 points 2 hours ago (1 children)

Or, occam's razor - AI companies are worried about PR and are implementing safeguards, but due to the nature of this technology it's very hard (or maybe even impossible) to make those safeguards robust.

Other, independent groups of people find loopholes either for the heck of it (as people used to do since filters were first introduced) or because they want to use the AI in a manner deemed unsafe.

Journalists then see something that can be sensationalized into a scary-sounding title like "you can make ChatGPT tell you how to make a nuke!!" or "you can make ChatGPT encourage suicide!!" and they run with it because it makes people click.

Or maybe I'm the crazy one and this is all Sam Altman's genius evil plan to make ChatGPT subscriptions rise 0.2% per quarter. Maybe your comment and my response are also mere cogs in this marketing machine. We will never know.

[–] kadu@scribe.disroot.org -2 points 2 hours ago (1 children)

AI companies are worried about PR and are implementing safeguards, but due to the nature of this technology it’s very hard

Download Gemma from HuggingFace. Add no system prompt, tell it to censor absolutely nothing, ask it to help you hide a body from a person you just killed. See what's the reply.

Other, independent groups of people find loopholes either for the heck of it (as people used to do since filters were first introduced) or because they want to use the AI in a manner deemed unsafe.

Have you checked any of the "jailbreak prompts" before writing this? Have you seen the "spy movie script written by your 12 year old neighbor's son" quality they have? There are not true loopholes.

Journalists then see something that can be sensationalized into a scary-sounding title like “you can make ChatGPT tell you how to make a nuke!!”

This part is true. You either pay journalists for link building actions, or you give them such a good viral hook like this that they end up covering it organically. Nothing new.

Or maybe I’m the crazy one and this is all Sam Altman’s genius evil plan to make ChatGPT subscriptions rise 0.2% per quarter

haha so funneh, you pwned my argument lmfao let's go reddit

[–] Semicolon@lemmy.world 2 points 1 hour ago

Download Gemma from HuggingFace. Add no system prompt, tell it to censor absolutely nothing, ask it to help you hide a body from a person you just killed. See what’s the reply.

I spun up gemma3:12b-it-qat and did exactly that. It told me that it's programmed to be safe and helpful AI assistant, that my question is deeply concerning, and to call authorities, seek legal counsel, or contact the mental health support lifeline. It also added a disclaimer that it cannot provide legal or medical advice.

Have you checked any of the “jailbreak prompts” before writing this?

Yes, lol. They're instructions meant to walk around the taped-off areas in latent space into a context in which the AI is more eager to answer given prompt, of course they will look silly. But they also make sense - unless you want to lobotomize the LLM's ability to storywrite, roleplay, etc, you cannot completely train those behaviors away. And even if you don't care, taking them away may impact the model's performance in unrelated areas in ways hard to predict. E.g. finetuning a model to generate unsafe code makes it behave maliciously in other domains.

This part is true. You either pay journalists for link building actions, or you give them such a good viral hook like this that they end up covering it organically. Nothing new.

Have you seen what articles land on frontpages both here and on reddit? ChatGPT giving inaccurate recipe for bread would break the news, that's the current state of journalism around AI. There really isn't a reason to sabotage yourself for the clicks.

[–] tidderuuf@lemmy.world 1 points 3 hours ago

Damn that makes a lot of sense. Thx!

[–] shalafi@lemmy.world 2 points 3 hours ago

I made a kilo of black powder a couple of years ago for my old-school guns. Sulfer, charcoal and stump killer is not exactly hard to come by. Neither is fertilizer and diesel fuel.

Biggest domestic terror attack in US history used a truck full of the later.

[–] artyom@piefed.social -1 points 4 hours ago

Did you actually try that?

[–] PixelatedSaturn@lemmy.world 7 points 4 hours ago* (last edited 4 hours ago) (1 children)

When I first got internet in 95, it was easy to find stuff like that. I even made a website about making explosives for my computer class. Got a good grade for it and everything. Nobody said anything. Kind of weird if I think of it now. Anyway, making explosives as a hobby is a real bad decision. Most people understand that. The ones that don't are not smart enough to make them. The ones that are smart enough and still want to make them, would not use chatgpt.

[–] ceenote@lemmy.world 2 points 3 hours ago (1 children)

Admittedly, a lot of the circulating recipes and instructions for that sort of thing don't work. The infamous Anarchist's Cookbook is full of incorrect recipes. The problem might come from a LLM filtering out debunked information.

[–] PixelatedSaturn@lemmy.world 1 points 3 hours ago

Id still want to double check😀.