My limited experience is that keeping a character stable across a number of images is a weakness of today's models, and I wouldn't be confident that genAI is a great way to go about it. If you want to try it anyway, here's what I'd go with:
If you can get images with consistent outlines via some other route, you can try using ControlNet to generate the rest of the image.
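A rough sketch of that, assuming the Python diffusers library and a Canny-edge ControlNet (the model IDs are just common examples; swap in whatever checkpoint you actually use):

```python
import cv2
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract an edge map from a reference image so every generation shares its outlines.
gray = np.array(Image.open("character_reference.png").convert("L"))
edges = cv2.Canny(gray, 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))  # pipeline expects RGB

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any SD 1.5 checkpoint
    controlnet=controlnet,
)
# The edge map pins down the layout; the prompt fills in style and detail.
image = pipe("portrait of a knight, oil painting", image=edge_image).images[0]
image.save("controlnet_out.png")
```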
If you just need slight variations on a particular image, you can use inpainting to regenerate the relevant portions (e.g. to get a series of different expressions from one base image).
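For that, the diffusers inpainting pipeline is one option: you supply a mask covering the region you want changed and leave the rest alone (the model ID and file names here are placeholders):

```python
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting"
)
base = Image.open("character.png")  # the image to keep mostly intact
mask = Image.open("face_mask.png")  # white where regeneration is allowed
# Only the masked region (here, the face) is regenerated; everything else is preserved.
image = pipe(
    prompt="the same character, smiling",
    image=base,
    mask_image=mask,
).images[0]
image.save("variation_smiling.png")
```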
If you want to work from a prompt, try picking a real-life person or an established character as a starting point; that may help, since models have been trained on many images of them. Best is if you can pin them to one point in time (e.g. "actor in popularmovie"). If you have a character description that you're slapping into each prompt, only describe elements that are actually visible in a given image.
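Putting that together, a minimal sketch (the prompt is my own illustration of the "actor in movie" trick, and a fixed seed helps repeatability):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5"  # any checkpoint you like
)
# Anchor the character to a reference the model has seen many images of,
# and only describe elements visible in this particular shot.
prompt = "harrison ford as indiana jones in raiders of the lost ark, close-up portrait"
generator = torch.Generator().manual_seed(42)  # fixed seed for repeatable output
image = pipe(prompt, generator=generator).images[0]
image.save("anchored_character.png")
```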
I've found that a consistent theme is much more achievable: you can add "by <artist name>" to your prompt terms for any artist the model has been trained on a number of images from. If you're using a model that supports prompt term weighting (e.g. Stable Diffusion), you can increase the weight here to increase the strength of the effect. Flux doesn't support prompt term weighting (though it's really aimed at photographic images anyway). It's also possible to blend multiple artists or genres as prompt terms.
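With the diffusers library, the compel add-on is one way to get weighted prompt terms; UIs like AUTOMATIC1111 have their own `(term:1.4)` syntax instead. A minimal sketch (the checkpoint and weight are just examples):

```python
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5"
)
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# "(phrase)1.4" upweights the artist term to strengthen the style;
# values around 1.1 to 1.5 are a sensible starting range.
conditioning = compel("a castle on a hill, (by vincent van gogh)1.4")
image = pipe(prompt_embeds=conditioning).images[0]
image.save("styled.png")
```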
Eyeballing here. I'm a learner and have hardly used this.
Would a checkpoint model (is that the right term?) built from a specific set of pictures achieve consistency?
If you have, or can create, a LoRA trained on images of the character you're depicting, that may help. A full checkpoint fine-tuned on that character would work too. Either way, it would be like having a character that the base model already knows.
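With diffusers, applying an existing character LoRA on top of a base checkpoint looks roughly like this (the file name and trigger word are hypothetical):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5"  # any base checkpoint
)
# Load a LoRA trained on your character (path and file name are made up here).
pipe.load_lora_weights("./loras", weight_name="my_character.safetensors")
# The LoRA's trigger word ("mychar" here) cues the model to draw that character.
image = pipe("mychar standing in a forest, full body").images[0]
image.save("lora_character.png")
```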
You might be thinking of a LoRA. LoRAs are adapters you use with a larger model to help it generate whatever concept you want.
Right. That. Thanks.