this post was submitted on 27 Nov 2025

Stable Diffusion: Discuss matters related to our favourite AI Art generation technology

I'm interested in creating art for a book, like art that goes alongside the text.

One of the difficult parts of this is that I want characters in the book to have a stable look and not change from image to image.

Is there a way to do this? I have experimented with different local models, but there were often artifacts and I couldn't get consistent results. I'm reasonably comfortable running local models, but I'm neither an expert nor a computer genius. I could prompt something like "character is pretty and tall with black hair", but each time anything was generated, the character would look different.

It's been about a year since I last tried anything, and the technology has progressed since then. If I can't get the characters to look consistent from picture to picture, I'd rather have no images at all, since I can't afford an illustrator.

nicgentile@lemmy.world 2 points 3 weeks ago

Eyeballing here. I'm a learner and have hardly used this.

Would a checkpoint model (is that the right term?) built from a specific set of pictures achieve consistency?

tal@lemmy.today 3 points 3 weeks ago

If you have, or can create, a LoRA trained on images of the character you're depicting, that may be helpful. The same goes for a checkpoint model fine-tuned on that character. Either way, it would be like having a character the base model was already trained on.
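For example, here is a minimal sketch of how a character LoRA can be applied at generation time with the Hugging Face diffusers library. The base model ID, the LoRA file path, and the trigger word "mychar" are all placeholders you would swap for your own:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a base model (SDXL here; use whatever model the LoRA was trained against).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Apply the character LoRA (hypothetical file trained on images of your character).
pipe.load_lora_weights("loras/my_character.safetensors")

# Include the LoRA's trigger word ("mychar" here) in every prompt so the
# character keeps the same face, hair, and build across illustrations.
image = pipe(
    "mychar, a tall woman with long black hair, reading by candlelight, book illustration",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("chapter_01.png")
```

Every scene prompt that includes the trigger word reuses the same learned identity, which is what keeps the character stable from picture to picture.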

Even_Adder@lemmy.dbzer0.com 2 points 3 weeks ago

You might be thinking of a LoRA. LoRAs are small adapters you load alongside a larger model to help it generate a specific concept, such as a particular character.
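In case it helps to see the idea: a LoRA leaves the big model's weights frozen and adds a small trainable low-rank correction on top. This is just a conceptual PyTorch sketch of that mechanism, not how any particular trainer implements it:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update (the adapter)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the big model stays frozen
        # Two small matrices: down-project to rank, then up-project back.
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # start as a no-op; training fills it in
        self.scale = alpha / rank

    def forward(self, x):
        # Original output plus the small learned correction.
        return self.base(x) + self.scale * self.up(self.down(x))
```

Because only the two small matrices are trained, the resulting file is tiny compared to a full checkpoint and can be swapped in and out of the base model at will.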

nicgentile@lemmy.world 2 points 3 weeks ago

Right. That. Thanks.