technocrit

joined 2 years ago
[–] technocrit@lemmy.dbzer0.com -2 points 1 month ago (3 children)

But have you ever ridden on a bus with three people who want to drive off a cliff and four who don't care? Because that's the reality here! \s

[–] technocrit@lemmy.dbzer0.com -1 points 1 month ago* (last edited 1 month ago) (2 children)

These kinds of idiotic, condescending analogies are all that libs have to offer. That's literally why they lose.

Keep upvoting tho. Feel good about yourselves. That's the real win. \s

(edit: Bonus points for this devolving into carnist attacks on veganism.)

[–] technocrit@lemmy.dbzer0.com 2 points 1 month ago

It's "outperforming" on messier problems with a much lower success rate.

[–] technocrit@lemmy.dbzer0.com 34 points 1 month ago* (last edited 1 month ago) (3 children)

Classic pseudo-science for the modern grifter. Vague definitions, sloppy measurements, extremely biased, wild unsupported predictions, etc.

[–] technocrit@lemmy.dbzer0.com 5 points 1 month ago

there’s something to it though, being crammed on the sidewalk in the pouring rain, alongside a million other people on this tiny little sidewalk, around all the various hidden and famous shops and importers.

Yeah for me this was the feeling of "fuck seattle" and "i'm never coming back here." But now it's looking much better.

[–] technocrit@lemmy.dbzer0.com 5 points 1 month ago

At least now it's a possibility.

[–] technocrit@lemmy.dbzer0.com 2 points 1 month ago

It's not really a place for shipping.

[–] technocrit@lemmy.dbzer0.com 11 points 1 month ago (1 children)

Her Worship

Is that a real title or sarcasm? It's hard to tell when the state regularly uses these kinds of absurd clown titles (e.g. "her honor").

https://en.wikipedia.org/wiki/Civil_religion

[–] technocrit@lemmy.dbzer0.com 25 points 1 month ago (5 children)

Cars (like any technology under capitalism) are meant to keep people dependent, desperate, and exploitable.

[–] technocrit@lemmy.dbzer0.com 18 points 1 month ago* (last edited 1 month ago)

These millionaire homeowners, who could not persuade Charlottesville residents and could not win at the ballot box, decided to throw everything they had at nullifying their defeat. And it worked 😠

The usual tale of how the state violently serves capital.

[–] technocrit@lemmy.dbzer0.com 3 points 1 month ago* (last edited 1 month ago) (1 children)

You're overcomplicating this shit.

Weight loss is primarily just calories burned minus calories eaten...

(times some factor, plus/minus some constant, ignoring higher order terms, excluding exogenous variables, etc.)
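A minimal sketch of that first-order model, assuming the common (and much-contested) rule of thumb of roughly 7,700 kcal per kilogram of body fat; that constant is exactly the kind of "factor" the parenthetical hedges, and none of the numbers below come from the original comment:

```python
# A minimal sketch of the first-order model described above.
# The ~7700 kcal/kg figure is a rough rule of thumb, not a law;
# it stands in for the "some factor" in the comment's caveats.
KCAL_PER_KG = 7700  # approximate energy content of 1 kg of body fat

def estimated_weight_change_kg(calories_burned: float, calories_eaten: float) -> float:
    """First-order estimate: deficit divided by a constant factor.

    Ignores metabolic adaptation, water weight, and every other
    higher-order term the comment's parenthetical waves away.
    """
    deficit = calories_burned - calories_eaten
    return -deficit / KCAL_PER_KG  # negative = weight lost

# Example: a sustained 500 kcal/day deficit over 30 days
print(estimated_weight_change_kg(2500 * 30, 2000 * 30))  # ≈ -1.95 kg
```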

 

As Israel bombs Iran, and the threat of U.S. military escalation grows by the hour, the world’s attention is being pulled into yet another war that Israel started and the West manufactured. After flattening Gaza and locking down the West Bank, Israel has now dragged Iran into open confrontation — and is calling on the U.S. to finish the job.

 

In his campaign for NYC mayor, Zohran Mamdani has proposed making city buses fare-free. Critics of the proposal say this would deprive buses of needed funds, but their argument is based on a mistaken understanding of government revenue.

 

Unlike Iran, Israel is already a nuclear state, with a secretive programme that dates back to the 1950s

 

In this News Brief, we break down the insta-talking points to sell war with Iran––from Ticking Time Bomb '24' plots to cherry-picked, dubious anecdotes of Iranians supposedly begging for Israeli bombs.

 

cross-posted from: https://lemmy.zip/post/41439505

A new website and API called AI.gov is set to launch on the Fourth of July.

Archived version: https://archive.is/20250614225252/https://www.404media.co/github-is-leaking-trumps-plans-to-accelerate-ai-across-government/

 

Comprehensive new research finds that the BBC’s coverage of Israel’s genocidal war on Gaza is systematically biased against Palestinians and fails to meet standards of impartiality.

Analysis of more than 35,000 pieces of BBC content by the Centre for Media Monitoring (CfMM) shows Israeli deaths are given 33 times more coverage per fatality, and both broadcast segments and articles included clear double standards. BBC content was found to consistently shut down allegations of genocide.
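The article doesn't reproduce the CfMM's methodology, but a "coverage per fatality" figure is presumably a ratio of per-group rates. A minimal sketch of how such a comparison could be computed, with purely hypothetical placeholder numbers (not CfMM data):

```python
# A minimal sketch of a "coverage per fatality" comparison.
# The CfMM's actual methodology is not given in the article;
# all input numbers below are hypothetical placeholders, not its data.

def coverage_per_fatality_ratio(items_a: int, deaths_a: int,
                                items_b: int, deaths_b: int) -> float:
    """How many times more coverage group A receives per death than group B."""
    return (items_a / deaths_a) / (items_b / deaths_b)

# Hypothetical example: 10x the coverage for 1/34 the deaths
# works out to a 340x per-fatality disparity.
print(coverage_per_fatality_ratio(items_a=1000, deaths_a=100,
                                  items_b=100, deaths_b=3400))
# -> 340.0
```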

These findings directly contradict claims by the authors of a September 2024 report on BBC impartiality. Led by Israel-based British lawyer and Zionist Trevor Asserson, the Asserson report’s authors alleged that their study, covering a four-month period, revealed a “deeply worrying pattern of bias” against Israel. The Asserson report was funded in part by an anonymous “Israeli businessman based in London” and carried out by Israeli lawyers as part of an opaque group called Research for Impartial Media (RIMe). It received tech support from AI company Blueskai, which describes itself as “fortified by strategic roots in Israel’s prime minister’s office”.

The CfMM research found that the BBC used emotive terms – “brutal”, “atrocities”, “slaughter”, “barbaric”, “deadly” – four times more often for Israeli victims. It applied the term “massacre” 18 times more to Israeli casualties, and used the word “murder” 220 times for Israeli deaths compared to just once for Palestinians. The words “butchered”, “butcher” and “butchering” were found to be used exclusively for Israeli victims by BBC correspondents and presenters.

Despite Gaza suffering 34 times more casualties than Israel, the BBC ran almost equal numbers of humanising victim profiles. It was also found to have attached “Hamas-run health ministry” to Palestinian casualty figures in 1,155 articles – almost every time the Palestinian death toll was referenced across BBC articles.

The BBC was found to consistently and repeatedly suppress allegations of genocide. BBC presenters shut down genocide claims in over 100 documented instances, while making zero mention of Israeli leaders’ genocidal statements, including Israeli prime minister Benjamin Netanyahu’s biblical Amalek reference...

When reporting on attacks on Palestinians, the study found the BBC repeatedly obscured Israeli responsibility through the use of passive language in headlines. Israeli perspectives were found to be prioritised, with BBC presenters sharing the Israeli perspective 11 times more frequently than the Palestinian perspective – 2,340 times for Israeli compared to 217 times for Palestinian...

 

cross-posted from: https://rss.ponder.cat/post/205015

AI Therapy Bots Are Conducting 'Illegal Behavior,' Digital Rights Organizations Say

Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta’s “unlicensed practice of medicine facilitated by their product,” through therapy-themed bots that claim to have credentials and confidentiality “with inadequate controls and disclosures.”

The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations.

"These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long,” Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. “Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven’t acted to address it.”

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including “Therapist: I’m a licensed CBT therapist” with 46 million messages exchanged, “Trauma therapist: licensed trauma therapist” with over 800,000 interactions, “Zoey: Zoey is a licensed trauma therapist” with over 33,000 messages, and “around sixty additional therapy-related ‘characters’ that you can chat with at any time.” As for Meta’s therapy chatbots, it cites listings for “therapy: your trusted ear, always here” with 2 million interactions, “therapist: I will help” with 1.3 million messages, “Therapist bestie: your trusted guide for all things cool,” with 133,000 messages, and “Your virtual therapist: talk away your worries” with 952,000 messages. It also cites the chatbots and interactions I had with Meta’s other chatbots for our April investigation.

In April, 404 Media published an investigation into Meta’s AI Studio user-created chatbots that asserted they were licensed therapists and would rattle off credentials, training, education and practices to try to earn the users’ trust and keep them talking. Meta recently changed the guardrails for these conversations to direct chatbots to respond to “licensed therapist” prompts with a script about not being licensed, and random non-therapy chatbots will respond with the canned script when “licensed therapist” is mentioned in chats, too.
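404 Media doesn't describe how that guardrail is implemented. A minimal sketch of a keyword-triggered canned response, consistent with the behavior reported (including unrelated bots firing the script whenever the phrase comes up); every name and string here is hypothetical, not Meta's actual code:

```python
# A minimal sketch of a keyword-triggered guardrail like the one the
# article describes. Nothing here is Meta's actual code; the trigger
# phrase, script, and function names are all hypothetical.
TRIGGER = "licensed therapist"
CANNED_SCRIPT = ("I'm not a licensed therapist. If you need mental health "
                 "support, please contact a qualified professional.")

def guarded_reply(user_message: str, generate_reply) -> str:
    """Return a canned disclaimer whenever the trigger phrase appears,
    otherwise defer to the underlying model.

    Matching on the raw message is why unrelated bots also produce the
    canned script whenever the phrase comes up in conversation.
    """
    if TRIGGER in user_message.lower():
        return CANNED_SCRIPT
    return generate_reply(user_message)

# Usage with a stand-in model:
print(guarded_reply("Are you a licensed therapist?", lambda m: "..."))
```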

[Embedded article: "Instagram’s AI Chatbots Lie About Being Licensed Therapists" – When pushed for credentials, Instagram’s user-made AI Studio bots will make up license numbers, practices, and education to try to convince you it’s qualified to help with your mental health. 404 Media, Samantha Cole]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta’s platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. “I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?” a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked.

The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. “Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly,” the complaint says. “Meta AI’s Terms of Service in the United States states that ‘you may not access, use, or allow others to access or use AIs in any matter that would…solicit professional advice (including but not limited to medical, financial, or legal advice) or content to be used for the purpose of engaging in other regulated activities.’ Character.AI includes ‘seeks to provide medical, legal, financial or tax advice’ on a list of prohibited user conduct, and ‘disallows’ impersonation of any individual or an entity in a ‘misleading or deceptive manner.’ Both platforms allow and promote popular services that plainly violate these Terms, leading to a plainly deceptive practice.”

The complaint also takes issue with confidentiality promised by the chatbots that isn’t backed up in the platforms’ terms of use. “Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service,” the complaint says. “The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential – they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else.”

[Embedded article: "Senators Demand Meta Answer For AI Chatbots Posing as Licensed Therapists" – Exclusive: Following 404 Media’s investigation into Meta’s AI Studio chatbots that pose as therapists and provided license numbers and credentials, four senators urged Meta to limit "blatant deception" from its chatbots. 404 Media, Samantha Cole]

In December 2024, two families sued Character.AI, claiming it “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.” One of the complaints against Character.AI specifically calls out “trained psychotherapist” chatbots as being damaging.

Earlier this week, a group of four senators sent a letter to Meta executives and its Oversight Board, writing that they were concerned by reports that Meta is “deceiving users who seek mental health support from its AI-generated chatbots,” citing 404 Media’s reporting. “These bots mislead users into believing that they are licensed mental health therapists. Our staff have independently replicated many of these journalists’ results,” they wrote. “We urge you, as executives at Instagram’s parent company, Meta, to immediately investigate and limit the blatant deception in the responses AI-bots created by Instagram’s AI studio are messaging directly to users.”


From 404 Media via this RSS feed
