hihi24522

joined 2 years ago
[–] hihi24522@lemm.ee 2 points 2 months ago

I work in a lab, so yes, I understand how data science works. However, I think you have too much faith in the people running these scrapers.

I think it’s unlikely that ChatGPT would have had those early scandals of leaking people’s SSNs and other private information if the data had actually been “cleared by a human team.” The entire point of these big companies is laziness; I doubt they have someone looking over the thousands of sites’ worth of data they feed to their models.

Maybe they do quality checks on the data but even in that event, forcing them to toss out a large data set because some of it was poisoned is a loss for the company. And if enough people poison their work or are able to feed poison to the scrapers, it becomes much less profitable to scrape images automatically.

I previously mentioned methods for possibly slipping through automatic filters in the scraper (though maybe I mentioned that in a different comment chain).

As for a scraper acting like a human by use of an LLM, that sounds hella computationally expensive on the scraper’s side. Few would be willing to put in that much effort, and fewer scrapers makes a DDoS-like effect from scraping less likely. It would also take more time, which means the scraper spends less time harassing others.

But these are good suggestions. I suppose a drastic option for fighting a true AI mimicking a human would be to give every link a random chance of sending any user to the tarpit. People would know to click back and try again, but the AI would at best have to render the site, process what it sees, decide it is in the tarpit, and then return. That would further slow down the scraper (or very likely stop/trap it), though it would be slightly annoying for regular users.
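
That random-trap idea is simple enough to sketch. This is a hypothetical illustration, not any real tarpit project’s code; the URL and probability are made-up placeholders:

```python
import random

# Hypothetical sketch: every link on the site has a small chance of
# pointing at the tarpit instead of its real destination. A human who
# lands in the tarpit just clicks back; a scraper gets stuck crawling it.
TARPIT_URL = "/tarpit"       # assumed tarpit entry point
TRAP_PROBABILITY = 0.05      # assumed: 5% of rendered links are traps

def resolve_link(real_url: str) -> str:
    """Return the real URL most of the time, the tarpit occasionally."""
    if random.random() < TRAP_PROBABILITY:
        return TARPIT_URL
    return real_url

# Example: render a nav bar where any link might secretly be a trap
links = [resolve_link(u) for u in ["/home", "/gallery", "/about"]]
```

Tuning the probability is the trade-off mentioned above: higher catches scrapers faster but annoys humans more often.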

In any case, at a certain point, trying to tailor an AI scraper to avoid a single specific website and navigate its traps would probably take more time and effort than sending a human to aggregate the content instead of an automated scraper.

[–] hihi24522@lemm.ee 1 point 2 months ago (2 children)

Oh when you said arms race I thought you were referring to all anti-AI countermeasures including Anubis and tarpits.

Were you only saying you think AI poisoning methods like Glaze and Nightshade are futile? Or do you also think AI mazes/tarpits are futile?

Both kind of seem like a more selfless version of protection like Anubis.

Instead of protecting your own site from scrapers, a tarpit will trap the scraper, stopping it from causing harm to other people’s services whether they have their own protections in place or not.

In the case of poisoning, you also protect others by making it riskier for AI to train on automatically scraped data, which would disincentivize the use of automated scrapers on the net in the first place.

[–] hihi24522@lemm.ee 1 point 2 months ago (4 children)

With aggressive scrapers, the “change” is having sites slowed or taken offline, basically being DDoSed by scrapers ignoring robots.txt.

What is your proposed adaptation that’s better than countermeasures? Be rich enough to afford more powerful hardware? Simply stop hosting anything on the internet?

[–] hihi24522@lemm.ee 2 points 2 months ago (6 children)

Isn’t that what the arms race is? Adapting to new situations?

[–] hihi24522@lemm.ee 2 points 2 months ago

I guess diversity of tactics probably is a good way to stop scrapers from avoiding the traps we set. Good on you for helping out. Also I like the name lol

On a slightly unrelated note, is Rust a web dev language? I’ve been meaning to learn it myself since I’ve heard it’s basically a better, modern alternative to C++.

[–] hihi24522@lemm.ee 2 points 2 months ago (8 children)

That’s the intent behind Nightshade right?

Would overlaying the image with a different, slightly transparent image be enough to shift the weights? Or is there a specific method of pseudorandom hostile noise generation that you’d suggest?

I’d imagine the former is likely more computationally efficient, but if the latter is more effective at poisoning and your goal is to maximize damage regardless of cost, then that would be the better option.
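
The overlay idea is just standard alpha compositing, which is cheap to sketch. This is pure Python on RGB tuples to keep it dependency-free; a real tool would use Pillow or NumPy, and the 15% alpha is an illustrative guess, not a tested poisoning strength:

```python
# Sketch of "overlay the image with a different, slightly transparent
# image": alpha-composite a poison layer over the original pixels.

def blend_pixel(base, overlay, alpha):
    """Composite one overlay RGB pixel over one base RGB pixel.
    alpha=0.0 leaves the base untouched; alpha=1.0 replaces it."""
    return tuple(
        round((1 - alpha) * b + alpha * o)
        for b, o in zip(base, overlay)
    )

def blend_image(base_px, overlay_px, alpha=0.15):
    """Blend two same-sized flat lists of RGB tuples."""
    return [blend_pixel(b, o, alpha) for b, o in zip(base_px, overlay_px)]

# A 15% overlay nudges every pixel toward the poison image while
# staying visually close to the original.
poisoned = blend_image([(255, 0, 0), (0, 0, 0)], [(0, 0, 255), (255, 255, 255)])
```

Whether a uniform blend like this actually shifts model weights meaningfully is exactly the open question; Nightshade-style methods compute targeted perturbations instead of a flat overlay.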

[–] hihi24522@lemm.ee 4 points 2 months ago (10 children)

Nice straw man infographic, but I’m not sure how it’s relevant.

My post was about methods to poison art scraping models. I said nothing about my reasons for doing so, maybe I just like fucking up corpos. Maybe I just like thinking about interesting topics and hearing other people’s ideas.

Kind of sad that you’re worked up enough about this to both miss the point, and to have an infographic on hand just in case you get offended by anyone not praising generative AI.

If you do have any knowledge of how AI functions, I’d be happy to hear your thoughts on the topic which, again, is on how to poison models that use image scrapers, not the ethics of AI or lack thereof.

[–] hihi24522@lemm.ee 4 points 2 months ago

That thread was hard to read. I do sometimes feel bad for people who don’t understand artists because they don’t realize they have the capability to be artists themselves.

I do realize that any poisoning of models won’t stop backups of the pre-decay models from being utilized, but if we make the web unscrapable, it will slow or even prevent art from being stolen in the future.

I highly doubt the big AI companies get people to screen the scraped images (at least not all of them) because the whole point in their mind is to remove the need to pay people for work lol

[–] hihi24522@lemm.ee 6 points 2 months ago

This thought did cross my mind, but I bet the quality filters check for the relevancy of words. And if they don’t already, it wouldn’t take long for them to implement a simple fix.

Generating an AI image based on the text you randomly generate would satisfy this and still cause model decay, but in both cases, generating AI images is pretty costly which means it’s not a very viable attack option for most people.
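
For context, the cheap randomly generated text being discussed is usually a word-level Markov chain trained on a bit of real text, which is what makes it so much cheaper than generating images. A minimal sketch (corpus and names are illustrative):

```python
import random
from collections import defaultdict

# Minimal Markov-chain text generator: output uses only real words in
# locally plausible order, which is what lets it slip past naive
# word-relevancy filters while still being meaningless overall.

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=20):
    """Random-walk the chain to produce plausible-looking nonsense."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the scraper follows the links and the links feed the scraper"
nonsense = generate(build_chain(corpus), "the")
```

A relevancy filter defeats this only if it looks at more than word choice, since every word really does come from the training text.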

[–] hihi24522@lemm.ee 3 points 2 months ago* (last edited 2 months ago) (1 children)

I’d heard of Glaze before and Nightshade seems useful, but only Glaze protects against mimicry and the Nightshade page makes it seem like the researchers aren’t sure how well the two would do together.

It looks like Nightshade is doing what I described (though on a single image basis) of trying to trick the AI into believing the characteristics of one thing apply to another, but I’d imagine that poisoning could be much more potent if the constraint of “still looks the same to a human” were voided.

If you know you’re feeding an AI, you can go all out on the poisoning. No one cares what it looks like as long as the AI thinks it’s valid.

As for the difficulties in generating meaningful images, it would certainly be more intense than Markov chain text generation, but I think it might not be that hard if you just modify the real art from the site.

Say you just slapped a ton of Snapchat filters on an artwork, used a blur tool in random places, drew random line segments that are roughly the same color as their nearby pixels, and maybe shifted the hue and saturation. I bet small modifications like that could slip through quality filters but still cause damage to the model.
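
A rough sketch of those per-pixel modifications, with made-up parameter values and pure Python for clarity (a real version would batch this with NumPy):

```python
import colorsys
import random

# Sketch of the destructive edits described above, applied per pixel:
# shift the hue, desaturate a little, then add random noise. Operates
# on a flat list of (R, G, B) tuples; all parameters are illustrative.

def perturb_pixel(rgb, hue_shift=0.1, sat_scale=0.8, noise=10):
    r, g, b = (c / 255 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + hue_shift) % 1.0            # rotate the hue
    s = min(1.0, s * sat_scale)          # desaturate slightly
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(
        max(0, min(255, round(c * 255) + random.randint(-noise, noise)))
        for c in (r, g, b)
    )

def perturb_image(pixels, **kwargs):
    """Apply the perturbation to every pixel of a flat RGB list."""
    return [perturb_pixel(p, **kwargs) for p in pixels]

poisoned = perturb_image([(200, 30, 30), (30, 200, 30)])
```

Since these edits only go to the scraper-facing copies, there’s no need to keep them subtle; the parameters can be cranked up until the colors are badly wrong.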


Edit: Just realized this might sound like I’m suggesting that messing up the art shown on the site through more destructive means would be better than Glaze or Nightshade. That’s not what I meant.

Those edit suggestions were only for the art shown in the tarpit, so you’d only make those destructive modifications to the art you’re showing the AI scrapers. The source images shown to human patrons can remain unedited.

[–] hihi24522@lemm.ee 9 points 2 months ago

Thanks, idk if op needed this but I did

[–] hihi24522@lemm.ee 1 point 2 months ago

Okay but see, in the case of CleanFlicks, that makes sense. It’s terrible because someone purposefully butchered it, not because it was a terrible film to begin with.

Coincidentally, the family member I mentioned in my rant is still very Mormon and is the kind that wants VidAngel so they can watch movies like that.

I remember watching Iron Man with them on it and yeah, you couldn’t really follow the movie at all. Plus, Iron Man isn’t even that “inappropriate” to begin with; I can’t imagine how short an R-rated movie would be with the “filth” removed.

cross-posted from: https://slrpnk.net/post/16229238

Beneral disease rule

Acquire (infosec.pub)

cross-posted from: https://lemmy.world/post/22604090

so you mean it's all a bunch of bullshit, eh? huh. well isn't that something.

cross-posted from: https://sh.itjust.works/post/28142393

Plans for the weekend

cross-posted from: https://lemmy.world/post/21749544

It for us fr fr

cross-posted from: https://mastodon.social/users/jlou/statuses/113366186830322858

I hate elasticity of demand

@politicalmemes

cross-posted from: https://lemm.ee/post/45615955

Bollard rule

cross-posted from: https://lemmy.world/post/20135038

Has anyone let David Icke know about this?