Jack_Fosset

joined 1 week ago
[–] Jack_Fosset@lemmy.world 1 points 4 days ago

By now I have looked into many tutorials, Reddit posts, and Civitai pictures for inspiration, but they mostly generate a single character, or two at best, usually without caring which prompt each character gets. And even when someone posts a picture with multiple characters as an example, when I read the prompt there is no proper description of them, so it was just a lucky random seed they got. In general I am starting to think that differentiating prompts for different characters is something the AI is simply not good at, and that it is not a problem with how I write the prompt but with how the model works and how it was trained. Things like interactions between characters and their positional relationships are something the AI almost never understands from the prompt; it mixes them up even worse than the descriptions.

[–] Jack_Fosset@lemmy.world 1 points 4 days ago (1 children)

In my experience, inpainting looks like a part of a different image was pasted in; it doesn't blend well with the original in most cases.

[–] Jack_Fosset@lemmy.world 1 points 6 days ago

Interesting is that this works quite well in Perchance, but if I try to put what you wrote into the prompt in ComfyUI with a standard text-to-image workflow, then regardless of the base model used (SD 1.5, SD 3, SDXL 1, Pony, Illustrious) the result is an absolute mess: it generates anywhere from 2 to 5 characters, and the dresses, prompt details, and races are utterly random. I wonder what magic Perchance uses that makes its results so much more accurate to the prompt.

[–] Jack_Fosset@lemmy.world 1 points 1 week ago

Thanks, I will try it

[–] Jack_Fosset@lemmy.world 1 points 1 week ago

Thanks, I had never heard of Flux.1, I will have a look, but that's for when I have learned more about ComfyUI in general. So far I have liked the simplicity of Perchance, but I think I am starting to hit its limitations.

[–] Jack_Fosset@lemmy.world 1 points 1 week ago (2 children)

Thanks, I will try playing with it. Are these BREAK, parentheses (), colon :, semicolon ;, and dot . universal keywords and characters, or is that just something that happens to work with Perchance?
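My rough mental model of what BREAK does, just as an assumption based on how other Stable Diffusion front-ends (like the AUTOMATIC1111 WebUI) describe it, and I have no idea whether Perchance actually works this way: the prompt is split at each BREAK, each piece is encoded in its own text-encoder context, so character descriptions can't bleed across the boundary as easily. A toy sketch of that splitting step:

```python
def split_on_break(prompt: str) -> list[str]:
    """Split a prompt into chunks at each BREAK keyword.

    Each chunk would then (in my understanding) be encoded
    separately by the text encoder, keeping the per-character
    descriptions isolated from one another.
    """
    return [chunk.strip() for chunk in prompt.split("BREAK") if chunk.strip()]


chunks = split_on_break(
    "first woman, blonde hair, glasses BREAK second woman, brunette, pink hat"
)
# chunks[0] describes only the first woman, chunks[1] only the second
```

The parentheses-and-colon form I have seen elsewhere, e.g. (glasses:1.2), is attention weighting in the A1111 syntax; whether Perchance honors it is exactly what I'm asking.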

[–] Jack_Fosset@lemmy.world 1 points 1 week ago (2 children)

Can you be more specific? I tried inpainting and image-to-image, but all of that generates completely new images or parts, totally ruining and changing the input image.

 

When I am describing a scene with several elements in the prompt of https://perchance.org/ai-text-to-image-generator, let's say three people, I want to describe their characteristics and give each of them some features. For example, three women: the first woman has blonde hair, glasses, and a neck tattoo; the second woman is a brunette wearing a pink hat and smoking; and the third woman is a redhead with long hair, an eyebrow piercing, and her tongue sticking out. What the AI does is mix these descriptions across all the characters. I tried adding things like "does not smoke", "does not wear glasses", etc. to the description of each character, but that didn't have much effect. Any idea how to "border" the description for each element?
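Concretely, the kind of prompt I am writing looks roughly like this (paraphrased, the exact wording varies between my attempts):

```
three women,
first woman: blonde hair, glasses, neck tattoo,
second woman: brunette, pink hat, smoking,
third woman: redhead, long hair, eyebrow piercing, tongue out
```

And the result is, for example, all three smoking, or the glasses ending up on the redhead.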

[–] Jack_Fosset@lemmy.world 1 points 1 week ago (2 children)

These flaws I referred to have nothing to do with any mythology; they are simply errors the AI makes, likely because it does not understand anatomy and so in some cases generates what it perceives as hands. But anyway, thanks for the ComfyUI tip, I will have a look.

[–] Jack_Fosset@lemmy.world 1 points 1 week ago

perchance.org/ai-text-to-image-generator

 

Hi, my first post, sorry if this is not the right part of the forum or something. Anyway, do you know if it is possible to iterate over the same picture the AI already generated? Let's say I really like the picture, but there is some flaw in it, for example the AI generated some weird hand dysmorphia as it often does, and I would like to tell it to try to fix that, or to add some specific detail. That is much easier to describe on an already generated picture than to explain for a picture that has not been generated yet. So, is there any way to iterate over an already generated picture?