technocrit

joined 2 years ago
[–] technocrit@lemmy.dbzer0.com 2 points 1 month ago

There's no excuse because it's completely normal.

[–] technocrit@lemmy.dbzer0.com 3 points 1 month ago* (last edited 1 month ago)

Usually they don't even have to try.

[–] technocrit@lemmy.dbzer0.com 4 points 1 month ago* (last edited 1 month ago)

They'll go to some red state where fascist stormtroopers keep their boots on the necks of the poor... (ok yeah florida)

[–] technocrit@lemmy.dbzer0.com 10 points 1 month ago* (last edited 1 month ago)

Yeah ofc, he's a billionaire.

Nobody who massively exploits people for their own disgusting privilege is ever a good person.

[–] technocrit@lemmy.dbzer0.com 24 points 1 month ago* (last edited 1 month ago)

These are the ultra privileged who always get their candidates without any debate whatsoever.

This is the extremely rare exception where they show their faces.

[–] technocrit@lemmy.dbzer0.com 1 point 1 month ago* (last edited 1 month ago)

Money represents social hierarchy/control. If money were distributed from the rich to the poor, the astounding thing wouldn't be the numbers in bank accounts but rather the complete inversion of capitalist values. And if these values/systems are inverted, then money is meaningless.

I don't think these comparisons are supposed to be literal. It's more about the massive inequality and anti-human values of the current system. Realistically capitalism is violently enforced by states, etc. so there will continue to be no justice in the foreseeable future.

[–] technocrit@lemmy.dbzer0.com 2 points 1 month ago

If you actually read the article, they cite specific publications: NYTimes, Atlantic, New Yorker, NYPost, WSJ, etc.

If you want to write a counter article about your google search, go for it.

[–] technocrit@lemmy.dbzer0.com 3 points 1 month ago

Yeah when you hear dude babbling about the "cruelty of nature" or whatnot... He's just talking about himself.

[–] technocrit@lemmy.dbzer0.com -5 points 1 month ago* (last edited 1 month ago)

> My sis had to loosen her vegetarian standards for the sake of her daughter's health.

Cool anecdote.

> We are omnivores, accept that fact.

Cool pseudo-science derived from an anecdote. Even if I believed your story, in reality people don't actually need meat.

> it requires personal effort and research to find substitutes

Yes, it takes effort to do something different from the rest of society. Apparently you've never done this.

> And most just prefer some meat or fish now and then.

Cool story.

> Even sheep eat chicks, there, I said it.

Nobody gives a shit. Nobody is saying go veg because sheep are veg. This is just more wacky carnist babbling.

[–] technocrit@lemmy.dbzer0.com -1 points 1 month ago* (last edited 1 month ago)

Yes, that's the cognitive dissonance of carnism. "How could torturing one animal possibly be the same as torturing another? We have different categories for torturing!!!!" smh.

 

... New York City’s Administration for Children’s Services (ACS) has been quietly deploying an algorithmic tool to categorize families as “high risk.” Using a grab-bag of factors like neighborhood and mother’s age, this AI tool can put families under intensified scrutiny without proper justification or oversight.

ACS knocking on your door is a nightmare for any parent, with the risk that any mistakes can break up your family and have your children sent to the foster care system. Putting a family under such scrutiny shouldn’t be taken lightly and shouldn’t be a testing ground for automated decision-making by the government.

This “AI” tool, developed internally by ACS’s Office of Research Analytics, scores families for “risk” using 279 variables and subjects those deemed highest-risk to intensified scrutiny. The lack of transparency, accountability, or due process protections demonstrates that ACS has learned nothing from the failures of similar products in the realm of child services.

The algorithm operates in complete secrecy and the harms from this opaque “AI theater” are not theoretical. The 279 variables are derived only from cases back in 2013 and 2014 where children were seriously harmed. However, it is unclear how many cases were analyzed; what kind of auditing and testing, if any, was conducted; and whether including data from other years would have altered the scoring.

What we do know is disturbing: Black families in NYC face ACS investigations at seven times the rate of white families and ACS staff has admitted that the agency is more punitive towards Black families, with parents and advocates calling its practices “predatory.” It is likely that the algorithm effectively automates and amplifies this discrimination...

 

cross-posted from: https://lemmy.world/post/31121462

OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and something of a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can't think - only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards: traps that psychics have exploited for centuries, and to which even very intelligent people can fall prey.

Finally, what should cause alarm is that, on top of the fact that LLMs can't think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given the multi-billion-dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.

 

[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

 

After I post an article, I can see where the article was cross-posted. I would like to see this before posting, so I don't repost the same article to the same comm.

Is it possible to look up if/where an article is posted (without posting it)?

Sorry if this is the wrong comm for this. Thanks.
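One way to check, without posting anything, is to query the instance's public search API for the article's URL: Lemmy's `/api/v3/search` endpoint accepts a `type_=Url` search type that matches posts by their linked URL. The sketch below only builds the request address; the function name is mine, and the exact response shape may vary between Lemmy versions, so treat it as an assumption to verify against your instance.

```python
from urllib.parse import urlencode

def build_url_search(instance: str, article_url: str, page: int = 1) -> str:
    """Build a Lemmy search-API address that looks up posts by article URL.

    Lemmy's public search endpoint (/api/v3/search) accepts type_=Url,
    which matches posts whose link is the given URL. Fetching the returned
    address (e.g. with requests) lists the communities the article was
    already posted to -- limited to posts the instance has federated.
    """
    query = urlencode({"q": article_url, "type_": "Url", "page": page})
    return f"https://{instance}/api/v3/search?{query}"

# Example: check lemmy.world for an article before cross-posting it.
# Each entry in the JSON response's "posts" list carries a "community"
# object naming the comm it was posted to.
address = build_url_search("lemmy.world", "https://example.com/story")
```

Note that a given instance only knows about posts it has federated, so a search on one instance can miss posts made on communities it doesn't subscribe to.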

 

cross-posted from: https://lemmy.world/post/31009163

London's Met Police are trialling Israeli-developed SandCat armoured vehicles, previously "battle-tested" in Gaza, for high-risk operations involving "serious public disorder".

SandCats are manufactured by Israeli defence company Plasan, which for over three decades has been a key supplier of deadly equipment to the Israeli military.

According to Israel Defence, Plasan supplied up to 700 SandCats to the Israeli military following the start of Israel’s ongoing genocide on Gaza.

 

The alleged conspiracy spanned nearly a decade and involved the domestic transport of thousands of noncitizens from Mexico and Central America, including some children, in exchange for thousands of dollars, according to the indictment.

Abrego Garcia is alleged to have participated in more than 100 such trips, according to the indictment. Among those allegedly transported were members of the Salvadoran gang MS-13, sources familiar with the investigation said.

Abrego Garcia is the only member of the alleged conspiracy charged in the indictment.

 

cross-posted from: https://lemmy.ml/post/31288698

In The Political Economy of Human Rights, Noam Chomsky and Edward S. Herman argued that the American ruling class and corporate media regard bloodbaths as being constructive, nefarious or benign. A constructive bloodbath is typically carried out by the US or one of its proxies, and is endorsed in establishment media. The most obvious contemporary example is the genocidal US/Israeli campaign in Gaza, approved by media commentators in the New York Times, Wall Street Journal and Washington Post.

The two other approaches that Chomsky and Herman outline illuminate the corporate media’s approach to Syria. When Bashar al-Assad was in power in Syria and the US was seeking his overthrow, corporate media treated killings that his government and its allies carried out as nefarious bloodbaths: Their violence was denounced in corporate press with unambiguous language, and prompted demands that the US intervene against them.

In the months since Syrian President Ahmed al-Sharaa came to power, with substantial assistance from the US and its partners, his government has opened Syria’s economy to international capital, arrested Palestinian resistance fighters, indicated that it’s open to the prospect of normalizing relations with Israel, and opted not to defend Syria against Israel’s frequent bombings and ever-expanding occupation of Syrian land. In that context, Washington has embraced Damascus, with Trump praising al-Sharaa personally, and finally lifting the brutal sanctions regime on Syria.

As these developments have unfolded, US media have switched from treating bloodbaths in Syria as nefarious to treating them as benign. A benign bloodbath is one to which corporate media are largely indifferent. They may not openly cheer such killings, but the atrocities get minimal attention, and don’t elicit high-volume denunciations. There are few if any calls for perpetrators to be brought to justice or ousted from government.

 

India has more than 300 million unorganised sector workers. Many suffer from withheld wages, endless toil and coercion – telltale signs, according to the ILO, of forced labour.

 

Sen. Ted Cruz (R-Texas) wants to enforce a 10-year moratorium on AI regulation by making states ineligible for broadband funding if they try to impose any limits on development of artificial intelligence.
