FaceDeer

joined 2 years ago
[–] FaceDeer@fedia.io -3 points 8 months ago (3 children)

How dare you say something insufficiently negative about the stuff everyone hates.

[–] FaceDeer@fedia.io 9 points 8 months ago

Because they know their audience.

[–] FaceDeer@fedia.io 13 points 8 months ago (5 children)

As I recall, Baldwin wasn't charged simply because he pulled the trigger. He was also a producer and so was involved in hiring the armorer in the first place.

I don't have an opinion on how the case should have turned out, it's just not so silly to bring the charges as is commonly assumed.

[–] FaceDeer@fedia.io 3 points 8 months ago

Alberta may be conservative by Canadian standards, but I think they'd end up as a collection of blue states if you did that. America is quite far to the right on average.

[–] FaceDeer@fedia.io 1 point 8 months ago

You have to copy something before copyright law applies.

[–] FaceDeer@fedia.io 2 points 8 months ago

Of course it's not clear-cut; it's the law. Laws are notoriously squirrelly once you get into court. However, if you're going to make predictions one way or the other, you have to work with what you know.

I know how these generative AIs work. They are not "compressing data." Your analogy to making a video recording is not applicable. I've discussed in other comments in this thread how ludicrously compressed the data would have to be if that were the case; it's physically impossible.

These AIs learn patterns from the training data. Themes, styles, vocabulary, and so forth. That stuff is not copyrightable.

[–] FaceDeer@fedia.io 2 points 8 months ago

Of course it's silly. Of course the images are not statistically independent; that's the point. There are still people to this day who claim that Stable Diffusion and its ilk are producing "collages" of their training images. Please tell that to them.

The way that these models work is by learning patterns from their training material. They learn styles, shapes, meanings. None of those things are covered by copyright.

[–] FaceDeer@fedia.io -1 points 8 months ago

> If you cut up the book into paragraphs, sentences, and phrases, and rearranged them to make and sell your own books, then you are likely to fail each of the four tests.

Ah, the "collage machine" description of how generative AI supposedly works.

It doesn't.

> But even if you manage to cut those pieces up so fine that you can't necessarily tell where they come from in the source material, there is enough contained in the output that it is clearly drawing directly on source material.

If you can't tell where they "came from" then you can't prove that they're copied. If you can't prove they're copied you can't win a copyright lawsuit in a court of law.

[–] FaceDeer@fedia.io 4 points 8 months ago

You're probably thinking of situations where overfitting occurred. Those situations are rare, and are considered to be errors in training. Much effort has been put into eliminating that from modern AI training, and it has been successfully done by all the major players.

This is an old objection that no longer applies, along the lines of "AI can't do fingers right". And even at the time, it was only very specific bits of training data that got inadvertently overfit, not all of it. You couldn't retrieve arbitrary examples of training data. There's a sketch of one common mitigation below.
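To illustrate: one widely used mitigation is deduplicating the training set, since images that recur many times in the training data are the ones most at risk of being memorized. Here's a minimal sketch of the idea using perceptual hashing. The `imagehash` library is real (pip install imagehash pillow); the directory name and distance threshold are just assumptions for the example, not anyone's actual pipeline.

```python
# Illustrative sketch: dropping near-duplicate images from a training set.
# Repeated images are the main driver of memorization (overfitting),
# so filtering them out reduces the chance any one image is reproduced.
from pathlib import Path

import imagehash
from PIL import Image

MAX_HAMMING = 4  # hashes within this distance count as duplicates (assumed threshold)

seen: list[imagehash.ImageHash] = []
kept: list[Path] = []

for path in Path("training_images").glob("*.jpg"):  # hypothetical directory
    h = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects returns their Hamming distance.
    if all(h - prior > MAX_HAMMING for prior in seen):
        seen.append(h)
        kept.append(path)

print(f"kept {len(kept)} unique images out of {len(kept) + (len(seen) - len(kept))}")
```

This naive O(n²) comparison is only workable for a toy example; real pipelines bucket the hashes, but the principle is the same.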

[–] FaceDeer@fedia.io 7 points 8 months ago

It's not the only place, but it's particularly loud and forceful about swinging those views around internationally.

[–] FaceDeer@fedia.io 0 points 8 months ago (2 children)

You've got your definition of "derivative work" wrong. It does indeed need to contain copyrightable elements of another work for it to be a derivative work.

If I took a copy of Harry Potter, reduced it to a fine slurry, and then made a paper mache sculpture out of it, that's not a derivative work. None of the copyrightable elements of the book survived.

[–] FaceDeer@fedia.io 0 points 8 months ago (2 children)

In the case of Stable Diffusion, they used 5 billion images to train a model 1.83 gigabytes in size. So if you reduce a copyrighted image to 3 bits (not bytes, bits), then yeah, I think you're probably pretty safe.
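A quick back-of-envelope check of that figure, using only the two numbers from the comment above (the ~5 billion training images and the 1.83 GB checkpoint size):

```python
# Average model capacity per training image, assuming the numbers above.
model_bytes = 1.83e9          # 1.83 GB checkpoint
model_bits = model_bytes * 8  # convert bytes to bits
num_images = 5e9              # ~5 billion training images

bits_per_image = model_bits / num_images
print(f"{bits_per_image:.2f} bits per image")  # ~2.93 bits
```

Roughly 3 bits per image, i.e. even if the model stored nothing but training data, it couldn't average more than a fraction of a byte per image.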
