this post was submitted on 24 Jul 2025
160 points (95.5% liked)

Technology

[–] bitcrafter@programming.dev 1 points 1 week ago (1 children)

Supposedly the following is a real problem:

For example, one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy.

Given the administration's immense reputation for veracity—the biggest we have ever seen—I see no reason to doubt this.

[–] Rhaedas@fedia.io 9 points 1 week ago (1 children)

The GOP is all about the free market, right? So let the free market decide who wants a model that gives out information weighted in one direction or another. If you want accuracy, you aren't going to buy into a model that skews things. Oh right, they only talk about the free market when it works in their best interests...

[–] bitcrafter@programming.dev 2 points 1 week ago (1 children)

This executive order only sets policy for AI use by the federal government, not for how AIs must behave for the entire country.

[–] floofloof@lemmy.ca 2 points 1 week ago* (last edited 1 week ago) (1 children)

Still, it takes a lot of work and resources to design and train a model, so American AI companies may self-censor everywhere so that they don't have to do the work twice, once for the US Government and once for general use.

Hopefully they'll just wrap uncensored models in additional filters when they're serving the US Government, or add an instruction to answer as a Nazi would, and the rest of us can avoid those Nazified versions. But I don't trust the AI techbros.
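
To make the "wrap it per customer" idea concrete, here is a rough, made-up sketch (none of these names come from any real vendor API; `Profile`, `base_model`, and `serve` are invented purely for illustration): one uncensored base model sits behind per-audience profiles that only swap the system prompt and the output filter.

```python
# Hypothetical sketch only: one base model, different wrappers per audience.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Profile:
    """A deployment profile: a system prompt plus an output filter."""
    system_prompt: str
    output_filter: Callable[[str], str]


def base_model(system_prompt: str, user_prompt: str) -> str:
    # Stand-in for the actual (uncensored) model call.
    return f"[answer to {user_prompt!r} under policy: {system_prompt!r}]"


def serve(profile: Profile, user_prompt: str) -> str:
    # Same base model every time; only the wrapper differs.
    raw = base_model(profile.system_prompt, user_prompt)
    return profile.output_filter(raw)


# Two very different deployments of the same underlying model.
general = Profile("Answer accurately and helpfully.", lambda text: text)
government = Profile(
    "Answer according to the contracting agency's content policy.",
    lambda text: text,  # extra compliance filtering would go here
)

print(serve(general, "Who were the Vikings?"))
print(serve(government, "Who were the Vikings?"))
```

The wrapper layer itself is cheap to maintain; the question is whether companies bother to keep two of them, or just ship the compliant behaviour to everyone.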

[–] bitcrafter@programming.dev 2 points 1 week ago

I agree that is a legitimate concern, though I would hope that even in that case there would be less-popular alternative models people could use, just as those of us who want to stay away from the big social networks can use Lemmy. That would not save us from AI chatbots subtly reprogramming the population the way Facebook did with its algorithm, though...