[–] StellarTabi@hexbear.net 32 points 2 years ago (3 children)

Unrelated, but I predict there will be more false accusations of AI generated news images than actual misinformation in the near future.

[–] BodyBySisyphus@hexbear.net 19 points 2 years ago

Love to live in the era of epistemic breakdown

[–] invalidusernamelol@hexbear.net 12 points 2 years ago (1 children)

They'll claim it, but it's actually still easy to determine if an image is AI generated with minimal effort.

Legitimate images will have a source, and knowing the source lets you validate things like the metadata and the location/time the image was taken. A rough first-pass check is sketched below.
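
Something like this minimal Python sketch (assuming the Pillow library is available; the file name is just a placeholder) covers the metadata part of that check. Metadata can be stripped or forged, so it's a sanity check, not proof:

```python
# Minimal sketch: dump the EXIF metadata a legitimate photo usually carries,
# e.g. camera make/model and capture time. Requires Pillow (pip install Pillow).
from PIL import Image, ExifTags

def dump_exif(path):
    img = Image.open(path)
    exif = img.getexif()  # base IFD tags: Make, Model, DateTime, Software, ...
    if not exif:
        print("No EXIF metadata found (stripped, screenshotted, or synthetic).")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag IDs to names
        print(f"{tag}: {value}")

dump_exif("photo.jpg")  # hypothetical file name
```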

AI is really only useful for entirely synthetic images.

[–] Sphks@lemmy.dbzer0.com 7 points 2 years ago (1 children)

Average people don't care. Otherwise, Fox News would not exist.

[–] invalidusernamelol@hexbear.net 4 points 2 years ago

That's a bit of a misanthropic viewpoint. Sure, people will believe what they want, but AI images aren't going to convince anyone who wasn't already convinced, and they'll never serve as anything more than very temporary smokescreens that instantly undermine the credibility of whoever uses them.

[–] drhead@hexbear.net 7 points 2 years ago* (last edited 2 years ago)

It's already that way, from what I can tell.

AI classifier models are garbage. Most of them are only really good at identifying images that went through a specific model's autoencoder; as long as you don't deliberately mask that (which is possible), they have a fairly high recall rate. But they have MASSIVE false positive rates, with a variety of known and unknown triggers. In particular, I've seen a lot of flagged images that on closer inspection looked plausibly real once you consider how fucking awful the postprocessing on some cameras can be.
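
For reference, here's a toy sketch of the distinction at play (made-up labels and predictions, not measurements from any real detector): a classifier can catch most fakes (high recall) while still flagging an unacceptable share of real photos (high false positive rate).

```python
# Illustrative only: recall vs. false positive rate for a hypothetical
# "AI-generated" detector, using made-up labels and predictions.

def rates(y_true, y_pred):
    # y_true / y_pred: 1 = AI-generated, 0 = real photo
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    recall = tp / (tp + fn)                # how many fakes get caught
    false_positive_rate = fp / (fp + tn)   # how many real photos get flagged
    return recall, false_positive_rate

# 4 AI images, 6 real photos; the detector catches most fakes but also
# flags several real photos (e.g. ones with heavy in-camera postprocessing).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1, 1, 0, 0, 0]
print(rates(y_true, y_pred))  # -> (0.75, 0.5): decent recall, awful FPR
```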

And it's not even images that would make sense to AI generate that people are pulling this on. I would think you would AI generate propaganda images of something that is incredibly damning yet also hard to disprove. But most of the claims of "AI-generated" propaganda images I see are over things that don't really prove the claim the propagandist is trying to make, or that don't even show anything particularly abnormal. That's more than just falsely assuming something; that's outright failing to understand how propaganda works in the first place, which is a much more serious problem.