cross-posted from: https://midwest.social/post/14150726

But just as Glaze's userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks disabling Glaze's protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze's protections could be "easily bypassed, leaving artists vulnerable to style mimicry."

KobaCumTribute@hexbear.net 16 points 1 year ago (last edited 1 year ago)

The big issue with all these data-poisoning attempts is that they work by adding adversarial noise, effectively a visible watermark, to images, hoping to feed that noise back into what are essentially extremely aggressive de-noising algorithms so that training keywords get associated with destructive noise. In practice the result has been one of two things: models trained on a dataset containing some poisoned images actually get better, because for some reason adding more noise to the inscrutable anti-noise black box machine makes it work better, or the poisoning is wiped out completely by a single low de-noise pass over the poisoned images.
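For a concrete sense of what "a single low de-noise pass" means here, below is a minimal sketch using the diffusers img2img pipeline. The checkpoint name, strength value, and filenames are illustrative assumptions, not details from any particular test the comment describes:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Any Stable Diffusion checkpoint works for this; the model name is
# just a common example, not one tied to any specific experiment.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

glazed = Image.open("glazed_input.png").convert("RGB")  # hypothetical file

# A low strength value re-noises and then re-denoises only the tail end
# of the diffusion schedule: enough to wash out a small adversarial
# perturbation while leaving the visible image essentially unchanged.
cleaned = pipe(
    prompt="",            # no text guidance needed for this kind of cleanup
    image=glazed,
    strength=0.1,         # illustrative "low de-noise" setting
    guidance_scale=1.0,   # effectively disables classifier-free guidance
).images[0]
cleaned.save("cleaned_output.png")
```

The design point is that the same machinery the poisoning targets can scrub it: the perturbation lives in high-frequency detail, which is exactly what the first re-noising steps destroy.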

Like literally within hours of the poisoning tools being made public, preliminary hobbyist testing found that they didn't really do what they claimed: they leave highly visible, distracting watermarks all over the image, they disrupt training far less than advertised (possibly not at all), and they can be trivially countered on top of that.
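"Trivially countered" doesn't even require a diffusion model. Here's a sketch of the kind of cheap transform hobbyists reportedly used, assuming standard Pillow; the filter radius and JPEG quality are illustrative values, not ones from any published test:

```python
from PIL import Image, ImageFilter

img = Image.open("glazed_input.png").convert("RGB")  # hypothetical file

# A slight Gaussian blur smears the high-frequency adversarial
# perturbation these watermarking schemes depend on...
img = img.filter(ImageFilter.GaussianBlur(radius=1))

# ...and lossy JPEG re-encoding discards most of what's left.
img.save("cleaned_input.jpg", quality=75)
```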