this post was submitted on 07 Aug 2025
405 points (88.6% liked)

Technology

[–] NocturnalMorning@lemmy.world -2 points 4 days ago (12 children)

Human being checking in here: I am appalled by the current usage of AI.

This study is bullshit.

[–] floofloof@lemmy.ca 35 points 4 days ago* (last edited 4 days ago) (10 children)

Why is a statistical survey bullshit because of your personal view on the matter? Where does the survey imply that transgender, nonbinary and disabled people are the only ones who dislike AI?

The graphic shows that every group has attitudes that are somewhere between completely negative and completely positive. The groups mentioned are just a bit more negative than the others.

[–] UnderpantsWeevil@lemmy.world -4 points 4 days ago (3 children)

It does feel a bit like the magazine is gunning for the "Don't like AI? What are you, queer?" angle.

[–] NoneOfUrBusiness@fedia.io 8 points 4 days ago (1 children)

The article contains nothing of the sort and I have no idea why you came to that conclusion.

[–] UnderpantsWeevil@lemmy.world 0 points 4 days ago (1 children)

I believe that a future built on AI should account for the people the technology puts at risk.

I've seen various iterations of this column a thousand times before. The underlying message is always "AI is going to get shoved down your throat one way or another, so let's talk about how to make it more palatable."

The author (and I'm assuming there's a human writing this, but it's hardly a given) operates from the assumption that

identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories

but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to get a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.
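To make the "rigid categories" point concrete, here's a minimal, hypothetical sketch (my own illustration, not anything from the article) of how a typical data pipeline enumerates its categories up front and then either errors out on, or silently miscategorizes, anyone who doesn't fit them:

```python
from enum import Enum
from dataclasses import dataclass

# A typical "rigid" schema: the system can only represent what it enumerated up front.
class Gender(Enum):
    MALE = "male"
    FEMALE = "female"

@dataclass
class UserProfile:
    name: str
    gender: Gender  # anyone outside the enum simply cannot be represented

def ingest(record: dict) -> UserProfile:
    # The square-peg-through-a-round-hole moment: input that doesn't match the
    # predefined categories either raises an error or gets coerced to a default.
    try:
        gender = Gender(record["gender"])
    except ValueError:
        gender = Gender.FEMALE  # silent, arbitrary coercion "to keep the pipeline running"
    return UserProfile(name=record["name"], gender=gender)

# A nonbinary user is "handled", but only by erasing how they actually identified.
print(ingest({"name": "Alex", "gender": "nonbinary"}))
```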

[–] NoneOfUrBusiness@fedia.io 1 points 4 days ago

This is a reasonable point, but it's also not what you said previously.
