Imagine you have a big box of crayons that can draw almost anything you can think of. This box is like Stable Diffusion, a super-smart AI that creates pictures from your words. But sometimes, you want to draw something very specific, like a unicorn wearing a spacesuit, and your big box of crayons doesn’t have the exact colors or tools to make it look just right.
This is where LoRAs come in! LoRAs are like small packs of special crayons or stickers that you can add to your big box. They don’t replace your crayons—they just give you extra tools to draw specific things better. For example, if you want to draw that unicorn in a spacesuit, you can use a LoRA that knows all about spacesuits or unicorns, and it helps Stable Diffusion make the picture look exactly how you want.
In technical terms, LoRAs (Low-Rank Adaptations) are small, lightweight files that tweak how Stable Diffusion works. They help the AI focus on specific styles, objects, or details without changing the whole system. So, instead of needing a whole new box of crayons, you just add a little extra magic to the one you already have!
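For the curious, the "little extra magic" can be sketched in a few lines of code. This is a minimal toy illustration of the low-rank idea, not real Stable Diffusion internals; the dimensions, rank, and scaling value are made-up example numbers:

```python
import numpy as np

# Toy LoRA sketch (hypothetical shapes, not actual SD weights):
# instead of retraining a big weight matrix W, we learn two small
# matrices A and B whose product is a low-rank update applied on
# top of the frozen original.

d_out, d_in, rank = 320, 768, 4        # example sizes; rank << d_in

W = np.random.randn(d_out, d_in)       # frozen base weight
A = np.random.randn(rank, d_in) * 0.01 # trainable "down" projection
B = np.zeros((d_out, rank))            # trainable "up" projection (starts at zero)
alpha = 8.0                            # example scaling hyperparameter

def lora_forward(x):
    # Base output plus the low-rank correction, scaled by alpha / rank.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = np.random.randn(d_in)
y = lora_forward(x)

full_params = W.size            # what full fine-tuning would have to touch
lora_params = A.size + B.size   # roughly what a LoRA file has to store
print(full_params, lora_params)
```

Because only `A` and `B` get trained and saved, the LoRA file stays tiny compared to the base model, which is why you can mix and match them like sticker packs.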
Note: I tweaked a few of the pics via inpaint sketch to remove errors (e.g. wrong number of fingers).
EDIT: Never mind. Upon reviewing your post history and modlog, I've noticed a lot of "pick up a pencil"-esque statements that at best, don't really contribute to the conversation and at worst, are inflammatory. While I initially posted assuming good faith, I've concluded that the conversation is unlikely to remain productive, and thus I've blocked you. Take care.
Original comment
Alright, I'll make my point seriously. No, I do not generally agree that AI-generated images are "theft". GiovanH's blog post explains it better than I could - please go read it when you find the time. But a tl;dr is that models aren't simple collage machines - they actually pick up concepts from the images they're trained on, and demonstrate an ability to combine them into novel outputs - not exactly, but pretty similarly to how concepts are combined in manually-made art. It's also mathematically impossible for image diffusion models to directly contain their training imagery, due to their small size (SDXL models, for instance, are around 6.5 GB while being trained on billions of images). Of course, there is a small chance of overfitting happening, which the post goes into in more detail.
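The size argument is easy to check with back-of-envelope arithmetic. The figures below are just the rough numbers from above (a ~6.5 GB model, and "billions" taken conservatively as one billion images; the exact dataset size isn't public):

```python
# Rough storage-per-image bound for a diffusion model.
# Assumed figures: ~6.5 GB of weights, >= 1 billion training images.
model_bytes = 6.5e9
num_images = 1e9  # conservative lower bound on "billions"

bytes_per_image = model_bytes / num_images
print(bytes_per_image)  # at most ~6.5 bytes per image
```

A handful of bytes per image is orders of magnitude too small to store any actual picture, which is why the model can only retain generalized concepts rather than copies (overfitting on heavily duplicated images being the edge case).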
Also, I don't believe it's meaningful to distinguish whether something has "soul" based on its medium. People can't agree on what a "soul" is, or whether it even exists. What can actually be observed is whether AI art evokes reactions in people - which it most definitely does, whether you find the comics funny, are repulsed by their medium, or simply shrug at them and move on. Besides, it was a human who wrote the prompt that generated the image in the first place.
AI is a new technology, and it's totally OK to be worried about its impact on society. However, I'd say the best way to go about it is for both sides to state their opinions clearly and skip the buzzwords and assumptions. If you're willing to reply, feel free, even if you find yourself disagreeing most of the time! As long as we can keep this civil.