
[–] laziestflagellant@hexbear.net 6 points 2 years ago (3 children)

It can at least protect individual artists from having their future work made into a LoRA, which is happening to basically every NSFW artist in existence at the moment.

[–] Ho_Chi_Chungus@hexbear.net 4 points 2 years ago (1 children)

work made into a LoRA,

a what

[–] laziestflagellant@hexbear.net 10 points 2 years ago* (last edited 2 years ago) (1 children)

A LoRA ('low-rank adaptation') is a small add-on to a diffusion model, generally created for the Stable Diffusion family of image generators. Unlike the main image-generation models, which are trained on datasets of hundreds of millions of images, a LoRA can be trained on a dataset of a few hundred images, or even a single-digit number of them.

They're typically used on top of the main Stable Diffusion models to get results featuring a specific subject (e.g. a LoRA trained on a specific anime character) or in a specific art style (e.g. a LoRA trained on hypothetical porn artist Biggs McTiddies' body of work).
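
(Concretely, that "on top of" layering looks something like the sketch below, using the Hugging Face diffusers library. The LoRA file path and prompt are placeholders, and the exact loading and scaling calls can differ a bit between diffusers versions.)

```python
# Minimal sketch (not a definitive recipe): layering a LoRA on top of a
# base Stable Diffusion checkpoint with Hugging Face diffusers.
# The LoRA file path below is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

# The big base model, trained on hundreds of millions of images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The small LoRA, trained on anywhere from a handful to a few hundred images.
pipe.load_lora_weights("loras/biggs_mctiddies_style.safetensors")

# cross_attention_kwargs={"scale": ...} roughly controls how strongly the
# LoRA's style or subject overrides the base model.
image = pipe(
    "a portrait in the style of Biggs McTiddies",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("out.png")
```

Swapping in a different .safetensors file swaps the subject or style without retraining, or even touching, the multi-gigabyte base model.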

[–] Ho_Chi_Chungus@hexbear.net 7 points 2 years ago (1 children)

porn artist Biggs McTiddies' body of work

guess my hamster has a new last name now

[–] laziestflagellant@hexbear.net 7 points 2 years ago

I didn't even realize you were Biggs' owner oh my god :data-laughing:

[–] drhead@hexbear.net 2 points 2 years ago

I wouldn't be confident about that. Usually people training a LoRA train the text encoder as well as the UNet that does the actual diffusion process. If you feed the model images that visually look like cats, are labeled "a picture of a cat", but are perturbed so that they line up with the text encoder's idea of "a picture of a dog" (the part that Nightshade does), you would in theory be reinforcing what pictures of cats look like to the text encoder, and it would end up moving the vectors for "picture of a cat" and "picture of a dog" until they are well clear of each other. Nightshade essentially relies on being able to line the UNet up with the wrong spots in the text encoder, which shouldn't happen if the text encoder is allowed to move as well.
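
(For reference, "training the text encoder as well as the UNet" typically looks something like the sketch below, modeled loosely on the diffusers/peft LoRA fine-tuning examples. The base checkpoint, rank, and target module names are illustrative assumptions, and the training loop itself is omitted.)

```python
# Rough sketch: attaching LoRA adapters to BOTH the text encoder and the
# UNet, as typical LoRA fine-tuning scripts do (requires peft installed).
# Base checkpoint, rank, and target module names are illustrative.
import torch
from transformers import CLIPTextModel
from diffusers import UNet2DConditionModel
from peft import LoraConfig

base = "runwayml/stable-diffusion-v1-5"  # placeholder base checkpoint
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")

# Freeze the base weights; only the low-rank adapters will train.
text_encoder.requires_grad_(False)
unet.requires_grad_(False)

# Low-rank adapters on the UNet's attention projections...
unet.add_adapter(LoraConfig(
    r=8, lora_alpha=8,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
))
# ...and on the text encoder's attention projections, so the text
# embeddings themselves can move during training.
text_encoder.add_adapter(LoraConfig(
    r=8, lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
))

# Optimize only the adapter parameters of both components.
trainable = [p for p in unet.parameters() if p.requires_grad] + \
            [p for p in text_encoder.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
# (diffusion training loop over the captioned image set goes here)
```

Because the text-encoder adapters are trainable in a setup like this, the text embeddings can shift during training rather than staying pinned where the poisoning expects them, which is the loophole drhead is pointing at.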