technocrit

joined 2 years ago
[–] technocrit@lemmy.dbzer0.com 7 points 2 months ago

USA would much rather support genocide in palestine than resist genocide in ukraine.

[–] technocrit@lemmy.dbzer0.com -2 points 2 months ago* (last edited 2 months ago)

This is just another drop in the bucket. Not even worth mentioning. It serves no use to the imperial narrative.

USA is a racist genocidal empire. It actively supports fascism like this attack. But the far greater fascism is inflicted upon palestinians, iranians, lebanese, yemenis, syrians, et al. We don't ever hear an honest narrative about this terrorism. Forget about some fash pastor.

(Also crying about "american citizens" is just supporting attacks on refugees. This lib shit is really gross.)

[–] technocrit@lemmy.dbzer0.com 2 points 2 months ago

Nobody is ignoring these imperial invasions, genocides, etc.

USA is actively supporting them.

[–] technocrit@lemmy.dbzer0.com 4 points 2 months ago* (last edited 2 months ago)

Both hacktivists and Iranian government-affiliated actors

This false dichotomy is pure propaganda. There is probably no higher form of hacktivism than defending a population from an imperial genocide.

[–] technocrit@lemmy.dbzer0.com 4 points 2 months ago

The actual purpose of congress is to violently enforce capitalism by rubber stamping imperialism, policing, prisons, etc.

[–] technocrit@lemmy.dbzer0.com 9 points 2 months ago* (last edited 2 months ago)

YDI. Fake imperial bullshit about a "nuclear program" in Iran. Crying about "jihad". Imaginary nonsense about "exporting hate" when Iran is literally being terrorized. Every accusation is a confession. Get bent.

OP is crying about religion when they're completely indoctrinated into an imperial cult:

https://en.wikipedia.org/wiki/Civil_religion

[–] technocrit@lemmy.dbzer0.com -1 points 2 months ago (2 children)

Positively american.

[–] technocrit@lemmy.dbzer0.com 2 points 2 months ago* (last edited 2 months ago)

Yes, most people think that lies in service to violent control are bad.

[–] technocrit@lemmy.dbzer0.com -1 points 2 months ago

It only looks that way from inside the imperial bubble.

 

The European Union has reimposed tight limits on states’ budget deficits — but with exemptions for military spending. After years of claims that austerity was over, we’re now seeing it used selectively to put limits on democratic choice.

 

cross-posted from: https://lemm.ee/post/65376751

Screenshot without paywall: https://archive.is/rwThF

Under the Trump administration, multiple US government agencies are using AI and other tools to broadly track the social media of tourists and immigrants – and potentially to watch US citizens as well

It appears that the US federal government is investing more public funds in such tech tools, says Paromita Shah at Just Futures Law, an immigration advocacy non-profit in Washington DC. “We’re witnessing a real-time expansion of the use of social media monitoring technologies,” says Shah. “When you use social media monitoring to intimidate, harass, alienate, deport, incarcerate, arrest – when that becomes your standard to do those things – it’s antithetical to a lot of what democracy stands for.”

 

cross-posted from: https://lemmy.dbzer0.com/post/45477118

It's like RoboCop, except the robots are villains helping human villains villain more comprehensively.

 


 

cross-posted from: https://lemmy.dbzer0.com/post/45430254

Germany has been one of the worst Western countries for whitewashing Israel’s genocide in Palestine. Now it wants to do it with AI.

 

cross-posted from: https://hexbear.net/post/5077595

cross-posted from: https://rss.ponder.cat/post/193608

No One Knows How to Deal With 'Student-on-Student' AI CSAM

Schools, parents, police, and existing laws are not prepared to deal with the growing problem of students and minors using generative AI tools to create child sexual abuse material of their peers, according to a new report from researchers at Stanford Cyber Policy Center.

The report, which is based on public records and interviews with NGOs, internet platform staff, law enforcement, government employees, legislators, victims, parents, and groups that offer online training to schools, found that despite the harm that nonconsensual AI-generated imagery causes, the practice has been normalized by mainstream online platforms and certain online communities.

“Respondents told us there is a sense of normalization or legitimacy among those who create and share AI CSAM,” the report said. “This perception is fueled by open discussions in clear web forums, a sense of community through the sharing of tips, the accessibility of nudify apps, and the presence of community members in countries where AI CSAM is legal.”

The report says that while children may recognize that AI-generating nonconsensual content is wrong they can assume “it’s legal, believing that if it were truly illegal, there wouldn’t be an app for it.” The report, which cites several 404 Media stories about this issue, notes that this normalization is in part a result of many “nudify” apps being available on the Google and Apple app stores, and that their ability to AI-generate nonconsensual nudity is openly advertised to students on Google and social media platforms like Instagram and TikTok. One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM, and that even as an expert in the field he regularly encounters AI tools he’s never heard of, but that on certain social media platforms “everyone is talking about them.”

The report notes that while 38 U.S. states now have laws about AI CSAM and the newly signed federal Take It Down Act will further penalize AI CSAM, states “failed to anticipate that student-on-student cases would be a common fact pattern. As a result, that wave of legislation did not account for child offenders. Only now are legislators beginning to respond, with measures such as bills defining student-on-student use of nudify apps as a form of cyberbullying.”

One law enforcement officer told the researchers how accessible these apps are. “You can download an app in one minute, take a picture in 30 seconds, and that child will be impacted for the rest of their life,” they said.

One student victim interviewed for the report said that she struggled to believe that someone actually AI-generated nude images of her when she first learned about them. She knew other students used AI for writing papers, but was not aware people could use AI to create nude images. “People will start rumors about anything for no reason,” she said. “It took a few days to believe that this actually happened.”

Another victim and her mother interviewed for the report described the shock of seeing the images for the first time. “Remember Photoshop?” the mother asked, “I thought it would be like that. But it’s not. It looks just like her. You could see that someone might believe that was really her naked.”

One victim, whose original photo was taken from a non-social media site, said that someone took it and “ruined it by making it creepy [...] he turned it into a curvy boob monster, you feel so out of control.”

In an email to school staff, one victim said “I was unable to concentrate or feel safe at school. I felt very vulnerable and deeply troubled. The investigation, media coverage, meetings with administrators, no-contact order [against the perpetrator], and the gossip swirl distracted me from school and class work. This is a terrible way to start high school.”

One mother of a victim the researchers interviewed for the report feared that the images could crop up in the future, potentially affecting her daughter’s college applications, job opportunities, or relationships. “She also expressed a loss of trust in teachers, worrying that they might be unwilling to write a positive college recommendation letter for her daughter due to how events unfolded after the images were revealed,” the report said.

💡 Has AI-generated content been a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at emanuel@404media.co.

In 2024, Jason and I wrote a story about how one school in Washington state struggled to deal with its students using a nudify app on other students. The story showed how teachers and school administration weren’t familiar with the technology, and initially failed to report the incident to the police even though it legally qualified as “sexual abuse” and school administrators are “mandatory reporters.”

According to the Stanford report, many teachers lack training on how to respond to a nudify incident at their school. A Center for Democracy and Technology report found that 62 percent of teachers say their school has not provided guidance on policies for handling incidents involving authentic or AI nonconsensual intimate imagery. A 2024 survey of teachers and principals found that 56 percent did not get any training on “AI deepfakes.” One provider told the authors of the report that while many schools have crisis management plans for “active shooter situations,” they had never heard of a school having a crisis management plan for a nudify incident, or even for a real nude image of a student being circulated.

The report makes several recommendations to schools, like providing victims with third-party counseling services and academic accommodations, drafting language to communicate with the school community when an incident occurs, ensuring that students are not discouraged or punished for reporting incidents, and contacting the school’s legal counsel to assess the school’s legal obligations, including its responsibility as a “mandatory reporter.”

The authors also emphasized the importance of anonymous tip lines that allow students to report incidents safely. The report cites two incidents that were initially discovered this way: one in Pennsylvania, where a student used the state’s Safe2Say Something tipline to report that students were AI-generating nude images of their peers, and another at a school in Washington that first learned about a nudify incident through a submission to the school’s harassment, intimidation, and bullying online tipline.

One provider of training to schools emphasized the importance of such reporting tools, saying, “Anonymous reporting tools are one of the most important things we can have in our school systems,” because many students lack a trusted adult they can turn to.

Notably, the report does not take a position on whether schools should educate students about nudify apps because “there are legitimate concerns that this instruction could inadvertently educate students about the existence of these apps.”


From 404 Media via this RSS feed

 

Silicon Valley has started treating AI like a religion. Literally. This week, Adam sits down with Karen Hao, author of EMPIRE OF AI: Dreams and Nightmares in Sam Altman’s OpenAI to talk about what it means for all of us when tech bros with infinite money think they’re inventing god. Find Karen's book at factuallypod.com/books

 

cross-posted from: https://fedia.io/m/technology@lemmy.world/t/2229944

I have no confidence that Tesla will fix this before the planned Robo-Taxi rollout in Austin in 2 weeks.

After all, they haven't fixed it in the last 9 years that self-driving Teslas have been on the road.

 

Internal instructions accessed by Canada’s National Observer show that the chatbot produces tailored scripts, petitions, reports and even speeches for council chambers. The messaging is all framed to resonate with municipal officials’ duty to represent local interests. The chatbot drops the cost of misinformation to "close to zero."

 

cross-posted from: https://lemm.ee/post/65135983

cross-posted from: https://lemm.ee/post/65135875

The Technological Republic, his new “treatise” urging executives and engineers to abandon their pursuit of “trivial consumer products” and recommit their capital and talent to a “national project,” feels less like a timely cultural intervention and more like what you get when the boss’s pontifications go unchallenged for too long.

This carefully maintained mystique provides the perfect backdrop for Karp to play the eccentric intellectual, someone who drops incendiary statements with academic detachment. On a recent earnings call, he gloated, “Palantir is here to disrupt and make the institutions we partner with the very best in the world and, when it’s necessary, to scare enemies and on occasion kill them”; at the World Economic Forum, he dismissed the United Nations as “basically a discriminatory institution against anything good”; during an AI conference on Capitol Hill, he castigated pro-Palestinian protesters on college campuses as the adherents of a “pagan religion” and asserted that they should be sent to live in North Korea. These statements all come wrapped in eager demonstrations of his knowledge of the history of philosophy, art, and science—a performative erudition very different from the safe, generic corporate-speak of most of his peers. Though Karp identifies as a liberal, his general strategy seems to be to position himself as a guy who can “talk sense” to the left, which of course appeals to the right—criticizing progressives for their ostensible lack of patriotism, naïve cosmopolitanism, and unwillingness to embrace US military power. It’s a disposition designed for dual purposes: to make the embrace of a hawkish foreign policy and digital dragnet technology appear as the thoughtful centrist stance, and to provide Palantir with a politically palatable counterweight to the more odious ideological baggage of Thiel.

Karp idealizes the Manhattan Project as a Platonic unity of the state and business. He romanticizes the World War II–era entanglement of science and government as a heroic partnership.

Oh... great... especially considering this:

Trump signs executive orders to boost nuclear power, speed up approvals

To speed up the development of nuclear power, the orders grant the U.S. energy secretary authority to approve some advanced reactor designs and projects, taking authority away from the independent safety agency that has regulated the U.S. nuclear industry for five decades.

 

Flock's automatic license plate reader (ALPR) cameras are in more than 5,000 communities around the U.S. Local police are doing lookups in the nationwide system for ICE.
