this post was submitted on 01 Nov 2023
341 points (97.8% liked)

Technology

all 22 comments
[–] smegger@aussie.zone 80 points 2 years ago (2 children)

I'm not saying they're in the right, but once you put stuff on the internet it's near impossible to stop people doing what they want with it

[–] realharo@lemm.ee 34 points 2 years ago* (last edited 2 years ago)

That's only true for people who don't care about operating lawfully. A big company cannot practically afford to do the same things as some random fly-under-the-radar niche community.

That being said, this is a US company, so that may be a problem.

[–] Granixo@feddit.cl 31 points 2 years ago

Maybe they shouldn't have left the EU.

[–] autotldr@lemmings.world 5 points 2 years ago

This is the best summary I could come up with:


LONDON — Britain’s top privacy regulator has no power to sanction an American-based AI firm which harvested vast numbers of personal photos for its facial recognition software without users' consent, a judge has ruled.

The New York Times reported in 2020 that Clearview AI had harvested billions of social media images without users’ consent.

The Information Commissioner’s Office (ICO) took action against Clearview last year, alleging it had unlawfully collected the data of British subjects for behavior-monitoring purposes.

Lawyers have pointed out that the company was under no obligation to purge Brits’ pictures from its database until the appeal was determined — and yesterday’s ruling applied not only to the fine, but the deletion order too.

The identity-matching technology, trained on photos scraped without permission from social media platforms and other internet sites, was initially made available to a range of business users as well as law enforcement bodies.

Following a 2020 lawsuit from the American Civil Liberties Union, the company now only offers its services to federal agencies and law enforcement in the U.S. Yesterday’s judgment revealed it also has clients in Panama, Brazil, Mexico, and the Dominican Republic.


The original article contains 627 words, the summary contains 190 words. Saved 70%. I'm a bot and I'm open source!

[–] xenomor@lemmy.world -2 points 2 years ago (1 children)

What? Someone downloaded photos that people willingly uploaded to a public network? You don’t say.

[–] Tosti@feddit.nl 15 points 2 years ago* (last edited 2 years ago) (3 children)

I think this argument is silly. It's as if you went out in public and paparazzi started hounding everyone on the street, all the time, even though you're nobody famous.

There is such a thing as privacy, and the fact that I uploaded a picture does not give some other random company the right to process my images wholesale.

We should resist giving companies these rights.

[–] aidan@lemmy.world 4 points 2 years ago (2 children)

Except it's not, because these are photos people are choosing to post.

[–] foenkyfjutschah@programming.dev 2 points 2 years ago (1 children)

As someone who for some years crossed a tourist hotspot to get to the canteen, my experience can't confirm your statement.

[–] aidan@lemmy.world 1 points 2 years ago

The problem there is someone taking photos of you without your consent, not AI analyzing the photos

[–] Tosti@feddit.nl 1 points 2 years ago

But not to be harvested by third parties and used for God knows what. Sure, the occasional meme is one thing (and can have an impact), but this impacts everyone, everywhere, all the time. It's not just that one photo; it's all photos of everyone, plus other info like metadata, text posts, etc. Since we don't even know how, what, and where the data is collected, we don't even know what they have combined into profiles.

And this does not even speak to the potential harm that could come from incorrectly associated info in your profile.

[–] Apollo2323@lemmy.dbzer0.com -1 points 2 years ago (1 children)

I am a privacy advocate, but I have to disagree with you. There is no such thing as privacy in public places, or on the public internet. If you upload a picture to the internet publicly, then it is publicly available to everyone.

[–] Tosti@feddit.nl 4 points 2 years ago

We can disagree here. But if I upload a picture with a specific intent (sharing it in my Insta feed, for example), why do other companies then have the right to wholesale take those images and use them for other purposes? I think they don't.

And there is a serious constraint on privacy violations like someone taking my picture when I'm out and about, since the photographer can only be in one place at a time.

What we see here are privacy violations by automated systems on a scale never seen before, just by taking the photos and processing them.