[–] forrgott@lemmy.sdf.org 10 points 1 week ago (6 children)

Well, in practice, no.

Do you think any corporation is going to bother making a separate model for government contracts versus any other use? I mean, why would they? So unless you can pony up enough cash to compete with a lucrative government contract (and the fact that none of us can is, in fact, the whole point), the end result will be these requirements being adopted by the overwhelming majority of generative AI available on the market.

So in reality, no, this absolutely will not be limited to models purchased by the feds. Frankly, I think believing otherwise is dangerously naive.

[–] Jozav@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (4 children)

No. You would use a base model (GPT-4o) as a reliable language model, and on top of it you would add a set of rules that the chatbot follows. Every company has its own rules; it is already common practice to add data such as company-specific manuals and support documents. Not rocket science at all.
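To make that concrete, here is a minimal sketch of what "base model plus rules" can look like in practice. It assumes the OpenAI Python SDK with an API key in the environment; the company name and the rules text are invented for illustration, not taken from any real deployment.

```python
# Rough sketch of the approach described above: keep the base model as-is and
# constrain it with a company-specific system prompt (the "set of rules").
# Assumes the OpenAI Python SDK; ExampleCorp and its rules are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COMPANY_RULES = """
You are the support assistant for ExampleCorp (hypothetical).
- Only answer questions about ExampleCorp products.
- Cite the relevant manual section when possible.
- Refuse requests that violate company policy.
"""

def ask(question: str) -> str:
    # The base model (here gpt-4o) is unchanged; the rules ride along as a
    # system message with every request.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": COMPANY_RULES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I reset my ExampleCorp router?"))
```

Company manuals and support documents are typically wired in the same way, either pasted into the prompt or retrieved and attached per request.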

[–] forrgott@lemmy.sdf.org -1 points 1 week ago (3 children)

There are so many examples of this method failing that I don't even know where to start. Most visible, of course, was how that approach failed to stop Grok from "being woke" for, like, a year or more.

Frankly, you sound like you're talking straight out of your ass.

[–] Jozav@lemmy.world 2 points 1 week ago (1 children)

Sure, it can go wrong; it is not foolproof. Just like building a new model can cause unwanted surprises.

BTW, there are many theories about Grok's unethical behavior, but this one is new to me. The reasons I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.

[–] jumping_redditor@sh.itjust.works -2 points 1 week ago (1 children)

why should any llm care about "ethics"?

[–] MouldyCat@feddit.uk 2 points 1 week ago

well obviously it won't, that's why you need ethical output restrictions
