cloudskater

joined 1 month ago
[–] cloudskater@piefed.blahaj.zone 3 points 3 weeks ago (1 children)

I like your thought process, but I struggle to agree with you because I don't believe there is a way for this kind of technology to be ethical without being rebooted (haha) from the ground up. Even so, about the only thing I think it should be used for is double-checking what humans have already done, or perhaps transcribing human speech into quick-and-dirty subtitles. However, all of that assumes the data it works from was not stolen, and that it is not able to "generate" anything "new," because that's just theft and exploitation.

[–] cloudskater@piefed.blahaj.zone 6 points 3 weeks ago (3 children)

Right. Sorry, I'll try to make the post clearer because I'm not trying to mislead anyone. I'm just so upset that this is even being entertained by Linux devs, it makes my blood boil.

[–] cloudskater@piefed.blahaj.zone 4 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

That's not the whole issue.

[–] cloudskater@piefed.blahaj.zone 5 points 3 weeks ago (6 children)

Linux, of all projects, should be opposed to these kinds of things as a whole. Sure, we could argue it's not as bad, but I'm not comforted by that. The fact that those in charge don't see or care about the obvious problems is shocking.

Fuck yeah that's awesome! Wishing you the best of luck and no off-by-one errors haha

[–] cloudskater@piefed.blahaj.zone 2 points 3 weeks ago (1 children)

Yeah I... Okay, let's take automatic subtitle creation, for instance. That existed well before the LLM BS and was fine. Plus, the stuff they're calling AI isn't pretending to "create" anything; it's just automating some repetitive tasks or using pattern recognition to move an effect across the frame. If we were to continue criticizing them (which would be fair), I would say they shouldn't even entertain software made by companies like OpenAI, even if those companies were fully transparent. However, I don't think it's on the same level as including or supporting anything that claims to be generative.

I won't be installing the AI stuff either, just to be safe, but the issues I raised in the replies are the sources of the code they're using and its potential to be exploitative, not the actual uses of the software. Does that make sense?

Could be soup for all we know :p

[–] cloudskater@piefed.blahaj.zone 6 points 3 weeks ago (1 children)

That is one strong ear

Followed by a sneering, rude-as-hell comment that's actually praising them.
