Yeah, it's not stopping me from commenting. I'm only noting the downvotes in this case because I was making a point elsewhere in the thread about the extremely anti-AI sentiment around here. In this case I'm not even saying something positive about it, merely speculating about the reason why Microsoft is doing this, and I guess that's still being interpreted as "justifying" AI and therefore something worthy of attack.
3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or it would have been caught earlier. I doubt it had any significant impact on the model's capabilities.
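For what it's worth, the arithmetic checks out; a quick sketch using the figures quoted in the comment above (not independently verified):

```python
# Figures from the comment: 3,226 suspected images out of 5.8 billion total.
suspected = 3226
total = 5_800_000_000

percentage = suspected / total * 100

print(f"{percentage:.7f}%")  # prints 0.0000556% — about 0.00006%
```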
It understands young and old. That means it knows a kid is not just a 60% reduction by volume of an adult.
We know it understands these sorts of things because of the very things this whole kerfuffle is about - it's able to generate images of things that weren't explicitly in its training set.
I was asked what the reason for this function was, so I speculated on that reason in an attempt to answer the question, and I got downvoted for it.
I wasn't addressing the privacy concerns at all. That wasn't part of the question.
Do a Google Image search for "child" or "teenager" or other such innocent terms and you'll find plenty of such images.
I think you're underestimating just how well AI is able to learn basic concepts from images. A lot of people imagine these AIs as being some sort of collage machine that pastes together little chunks of existing images, but that's not what's going on under the hood of modern generative art AIs. They learn the underlying concepts and characteristics of what things are, and are able to remix them conceptually.
This thread isn't about websites, it's about functions built into operating systems. Those are generally much more configurable. Microsoft wants corporations to run Windows, after all, and corporations tend to be very touchy about this sort of thing.