Probably how people felt who were against the development of the printing press or the internet. It's a good tool. Often used wrong, but a good tool if used right and with humans actually checking and fixing the results. It shouldn't replace art too much, though, since that is something people actually enjoy.
I hate LLMs for everything except summing up those endless Genshin quest dialogues. I want to know roughly what the quest is about, but I'm not going to read/listen to hours of extremely dry and stilted thesaurus-wanking. But I'd never use it for anything where I actually care about the accuracy of the output.
I gotta be honest, I'm neither pro nor anti AI myself. I don't use it as much as I used to these days, but when I do use it, it can be pretty fun and helpful. And I can't help but admire the AI images and videos, even if it is AI slop. (Maybe I'm an idiot for being very easily impressed/entertained by almost anything.)
Yes, I know there are a bunch of problems with it (including environmental ones), but at the same time, I don't feel like I'm contributing to those problems, since I'm just one person, and there are so many other people using it anyway.
It's a tool being used by humans.
It's not making anyone dumber or smarter.
I'm so tired of this anti-AI bullshit.
AI was used in the development of the COVID vaccine. It was crucial to its creation.
But just for a second, let's use guns as an example instead of AI. Guns kill people. Lemmy is anti-gun, mostly. Yet Lemmy is pro-Ukraine, mostly, and y'all support the Ukrainians using guns to defend themselves.
Or cars: generally, cars suck, yet we use them for transport.
These are just tools; they're as good and as bad as the people using them.
So yes, it is just you and a select few smooth brains that can't see past their own bias.
I can't take anyone seriously that says it's "trained on stolen images."
Stolen, you say? Well, I guess we're going to have to force those AI companies to put those images back! Otherwise, nobody will be able to see them!
...because that's what "stolen" means. And no, I'm not being pedantic. It's a really fucking important distinction.
The correct term is "copied," but that doesn't sound quite as severe. Also, if we want to get really specific, the images are presently on the Internet. Right now. Because that's what ImageNet (and similar datasets) is: a database of URLs that point to images that people are offering up for free to anyone who wants them.
Did you ever upload an image anywhere publicly, for anyone to see? Chances are someone could've annotated it and included it in some AI training database. If it's on the Internet, it will be copied and used without your consent or knowledge. That's the lesson we learned back in the 90s, and if you think that's not OK, then go try to get hired by the MPAA/RIAA, and you can try to bring the world back to a time when you had to pay $10 for a ringtone and pay again if you got a new phone (because, to the big media companies, copying is stealing!).
Now that that's clear, let's talk about the ethics of training an AI on such data: there are none. It's an N/A situation! Why? Because until the AI models are actually used for any given purpose, they're just data on a computer somewhere.
What about legally? Judges in multiple countries have already ruled that training AI in this way is considered fair use. There's no copyright violation going on, because copyright only covers distribution of copyrighted works, not what you actually do with them internally (like training an AI model).
So let's talk about the real problems with AI generators so people can take you seriously:
- Humans using AI models to generate fake nudes of people without their consent.
- Humans using AI models to copy works that are still under copyright.
- Humans using AI models to generate shit-quality stuff for the most minimal effort possible, saying it's good enough, then not hiring an artist to do the same thing.
The first one seems impossible to solve (to me). If someone generates a fake nude and never distributes it... do we really care? It's like a tree falling in the forest with no one around. If they (or someone else) distribute it, though, that's a form of abuse. The act of generating the image was a decision made by a human, not AI. The AI model is just doing what it was told to do.
The second is, again, something a human has to willingly do. If you try hard enough, you can make an AI image model get pretty close to a copyrighted image... but it's not something that is likely to occur by accident. Meaning, the human writing the prompt is the one actively seeking to violate someone's copyright. Then again, it's not really a copyright violation unless they distribute the image.
The third one seems likely to solve itself over time as more and more idiots are exposed for making the very poor decision to just "throw it at the AI" and then publish the result without checking/fixing it. Like Coca-Cola's idiotic mistake last Christmas.
Why not write this with pen and paper?
It trains your brain even more than typing, it's impossible for it to be used to train any AI, it uses no electricity compared to the massive amounts a computer uses, and I don't have to read your dumb takes.
Seriously, I know corporations are using AI to do dumb things. But it is a fascinating technology that can do a lot of neat things if applied correctly.
Stop acting like AI shat in your shoes and fucked your grandma. It isn't going to just burst and go away.