[–] Geok@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Wow, okay. So I guess my conclusion in that case would be that the filter is not very strong. So then it's pretty easy for people to generate illegal stuff consistently if they try, and probably also possible for people to occasionally generate something illegal by accident?

[–] Geok@lemmy.world 1 points 1 month ago (2 children)

Got it. What about questionable words, i.e. words that could be valid descriptors of legal images and also of illegal ones? If someone typed:

teen (some sexual stuff, eye color, other irrelevant stuff, etc.), then they might be referring to 18-19 (cf. the "teen" category on legitimate porn sites), or they might be looking for 13-17 stuff.

So would the generator

a) block this prompt because of the keyword "teen"?

b) allow the prompt but edit "teen" to "18-19"?

c) allow the prompt as is, but the AI would make sure to generate something 18+, essentially because it knows that anytime it makes something sexual it is supposed to be 18+?

d) randomly generate an image that could be anywhere from 13-19?? (The distribution would probably be skewed toward the higher end just because there is more training data matching that collection of words.) But still!
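For the sake of discussion, options a) and b) could look something like this minimal sketch in Python. Everything here (the word lists, the rewrite rule, and the function name) is a made-up assumption about how such a filter *might* work, not anything Perchance actually does:

```python
# Hypothetical prompt filter illustrating option a) (hard block on
# unambiguous terms) and option b) (defensive rewrite of ambiguous terms).
# The word lists below are invented placeholders for illustration only.

BLOCKLIST = {"child", "minor"}       # unambiguous terms: reject the whole prompt
REWRITES = {"teen": "adult (18+)"}   # ambiguous terms: rewrite to the legal reading

def filter_prompt(prompt: str):
    words = prompt.lower().split()
    if any(w in BLOCKLIST for w in words):
        return None                  # option a): block, generate nothing
    # option b): allow, but silently edit ambiguous descriptors
    return " ".join(REWRITES.get(w, w) for w in words)
```

Options c) and d) are different in kind: they happen inside the model rather than in a pre-filter like this, which is exactly why it would be interesting to know which approach is actually used.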

[–] Geok@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Thanks for the idea but I don't want to type in a questionable prompt to figure out if the filter works because I don't know if the filter works... I also don't want to type in a bona fide prompt and create 100 images if I don't yet know whether it randomly creates illegal images (this is more of a hypothetical--I'm not really that afraid of that happening.)

I know that it allows NSFW content after age verification. I am curious how well it works (or is intended to work) at preventing both intentional and accidental creation of illegal content. Florida law, for instance, criminalizes images of “sexual activity” involving fake individuals who appear under 18 to a “reasonable” person. So ideally the platform would prevent such images from being created intentionally, or at least make it sufficiently difficult that criminal intent is required to produce them.

[–] Geok@lemmy.world 1 points 1 month ago

I think federal law says about the same thing in different words. And most state laws are some other variation of it.

[–] Geok@lemmy.world 1 points 1 month ago

@perchance@lemmy.world

[–] Geok@lemmy.world 1 points 1 month ago (1 children)

It would be great to hear from a mod or developer on this as well, thanks! Again I am not trying to make any accusations, just want to know how things are done.

[–] Geok@lemmy.world 1 points 1 month ago (4 children)

That's good. I would hope there is at least some filter. I'm interested in exactly how it works and how stringent it is. Obviously it is designed to allow NSFW content but not illegal content, which I think is a problem that takes at least a decent amount of effort to solve well.

[–] Geok@lemmy.world 0 points 1 month ago* (last edited 1 month ago) (2 children)

Thanks for the insight. Are you saying that a potentially illegal prompt would be flagged and not result in an image at all, or that the algorithm would actually modify the prompt so that it conforms to requirements and then generate the image using some Stable Diffusion product (which probably should have its own filters)??

I don't see how this solves the age-deviation problem, though. That problem comes into play because I'm not sure how well what the AI thinks looks 18 aligns with what a "reasonable person" thinks looks 18. They may align on average, but there will be outliers where the AI tries to create something that looks 18 that people might think looks 17. Obviously this is a pretty hard problem, and the vagueness of the law makes it even more difficult.