Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI is, I'm not that frightened by it anymore. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting articles about AI hype, because they're quite funny, and they give me a sense of ease: blatant lies may be easy to tell, but actual evidence is far harder to fake.

I also want to make room for people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, Midjourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or the articles on the diminishing returns of deep learning. Maybe you'll even become a mod!

Boosters, or people who heavily use AI and see it as a force for good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists on Reddit and Twitter, and they constantly cheer on artists losing their jobs. They go against the very purpose of this community. If I see a comment here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract it, they will be permanently banned. FA&FO.

Alright, I just want to clarify that I've never modded a Lemmy community before; I just live by the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unnamed AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to push back against AI development, and if you have evidence of AI Bros being cruel and remorseless, save it for the people still "on the fence". Remember, we don't know that AI is unstoppable. It takes enormous amounts of energy and circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.

I just need to rant somewhere. I've been looking for work for a long time, so LinkedIn is one of the sites I regularly check for jobs.

Today I got a message from some asshat saying they've launched an AI marketplace and, behold, they need my help to improve it! I could be making thousands a week detecting edge cases that AI isn't yet good at solving! And I could be making friends with like-minded assholes, I mean, creative experts helping AI!

Doesn't it sound like a dream???

The worst part is that I don't think it was a scam. Sure, nobody would actually make thousands per week, that part is misleading, but the rest of the platform sounded believable enough. Which means there are going to be plenty of "experts" flocking to it.

Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.

Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats "visible to millions." While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.

OpenAI's chief information security officer, Dane Stuckey, explained on X that all users whose chats were exposed opted in to indexing their chats by clicking a box after choosing to share a chat.

Fast Company noted that users often share chats on WhatsApp or select the option to save a link to visit the chat later. But as Fast Company explained, users may have been misled into sharing chats due to how the text was formatted:

"When users clicked 'Share,' they were presented with an option to tick a box labeled 'Make this chat discoverable.' Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results."

At first, OpenAI defended the labeling as "sufficiently clear," Fast Company reported Thursday. But Stuckey confirmed that "ultimately," the AI company decided that the feature "introduced too many opportunities for folks to accidentally share things they didn't intend to." According to Fast Company, that included chats about their drug use, sex lives, mental health, and traumatic experiences.

Carissa Véliz, an AI ethicist at the University of Oxford, told Fast Company she was "shocked" that Google was logging "these extremely sensitive conversations."

OpenAI promises to remove Google search results

Stuckey called the feature a "short-lived experiment" that OpenAI launched "to help people discover useful conversations." He confirmed that the decision to remove the feature also included an effort to "remove indexed content from the relevant search engine" through Friday morning.

Google did not respond to Fast Company's reporting, which left it unclear what role it played in how chats were displayed in search results. But a spokesperson told Ars that OpenAI was fully responsible for the indexing, clarifying that "neither Google nor any other search engine controls what pages are made public on the web. Publishers of these pages have full control over whether they are indexed by search engines."

OpenAI is seemingly also solely responsible for removing the chats, perhaps most quickly by using a tool that Google provides to block pages from appearing in search results. But that tool does not stop pages from being indexed by other search engines, so chats may disappear from Google's results sooner than from other search engines'.
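For readers wondering what Google's "publishers have full control" claim means in practice: the standard, engine-agnostic mechanism is a noindex directive, delivered either as an X-Robots-Tag response header or as a robots meta tag in the page itself. Here's a minimal sketch of checking a page for that directive; the shared-link URL is hypothetical, and it assumes the third-party requests library is installed.

```python
# Minimal sketch: does a page ask search engines not to index it?
# Assumes the `requests` library; the shared-chat URL is hypothetical.
import requests

def has_noindex(url: str) -> bool:
    """Return True if the page carries a noindex directive, either in the
    X-Robots-Tag response header or in a robots <meta> tag in the HTML."""
    resp = requests.get(url, timeout=10)
    # Engine-agnostic directive delivered as an HTTP response header.
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        return True
    # Same directive embedded in the markup (crude string check, not a parser).
    html = resp.text.lower()
    return 'name="robots"' in html and "noindex" in html

print(has_noindex("https://chatgpt.com/share/example-id"))  # hypothetical link
```

Pages served without any such directive are fair game for every crawler, which is why removing the indexed chats takes extra work after the fact.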

Véliz told Fast Company that even a "short-lived" experiment like this is "troubling," noting that "tech companies use the general population as guinea pigs," attracting swarms of users with new AI products and waiting to see what consequences they may face for invasive design choices.

"They do something, they try it out on the population, and see if somebody complains," Véliz said.

To check whether private chats are still being indexed, Fast Company suggests that users who still have access to their shared links can search for the "part of the link created when someone proactively clicks 'Share' on ChatGPT [to] uncover conversations" that may still be discoverable on Google.
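In practice, that check amounts to a site-restricted Google search over the shared-link path. A small sketch of building such a query follows; the chatgpt.com/share path is an assumption about where shared chats live, not something the article confirms.

```python
# Sketch of Fast Company's check as a site-restricted Google query.
# The chatgpt.com/share path is an assumption, not confirmed by the article.
from urllib.parse import quote_plus

def google_index_check_url(shared_path: str = "chatgpt.com/share") -> str:
    """Build a Google search URL that lists indexed pages under the
    shared-link path, i.e. any publicly indexed shared chats."""
    return "https://www.google.com/search?q=" + quote_plus(f"site:{shared_path}")

print(google_index_check_url())
# Paste the printed URL into a browser; any hits are chats still in the index.
```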

OpenAI declined Ars' request to comment, but Stuckey's statement suggested that the company knows it has to earn back trust after the misstep.

"Security and privacy are paramount for us, and we'll keep working to maximally reflect that in our products and features," Stuckey said.

The scandal notably comes after OpenAI vowed to fight a court order requiring it to preserve all deleted chats "indefinitely," which worries ChatGPT users who previously felt assured that their temporary and deleted chats were not being saved. OpenAI has so far lost that fight, and those chats will likely be searchable in that lawsuit soon. And while OpenAI CEO Sam Altman called the possibility that users' most private chats could be searched "screwed up," Fast Company noted that he did not seem as openly critical about the potential for OpenAI's own practices to expose private user chats on Google and other search engines.

By Ashley Belanger - Senior Policy Reporter

The award, covering the next decade, is one of the largest DoD contracts ever, cementing the tech firm's role in warfighting for years to come.

Archived version: https://archive.is/20250801051530/https://www.washingtonpost.com/technology/2025/07/31/palantir-army-contract-10bn/

cross-posted from: https://programming.dev/post/34926893

Experimenting with unproven technology to determine whether a child should be granted protections they desperately need and are legally entitled to is cruel and unconscionable.
