technocrit

joined 2 years ago
[–] technocrit@lemmy.dbzer0.com 6 points 3 months ago* (last edited 3 months ago) (1 child)

Who said anything about moving to Alaska? People just want a reasonable, sustainable society. Not murdering most of the planet for the sake of concrete, gas, and self-destruction.

[–] technocrit@lemmy.dbzer0.com 3 points 3 months ago* (last edited 3 months ago) (1 child)

That's a big turd of a false dichotomy.

The alternative is a sustainable future for the planet.

[–] technocrit@lemmy.dbzer0.com 2 points 3 months ago* (last edited 3 months ago)

"AI" is a pseudo-scientific grift.

Perhaps more importantly, the underlying technologies (like any technology) are already co-opted by the state, capitalism, imperialism, etc. for the purposes of violence, surveillance, control, etc.

Sure, it's cool for a chatbot to summarize stackexchange but it's much less cool to track and murder people while committing genocide. In either case there is no "intelligence" apart from the humans involved. "AI" is primarily a tool for terrible people to do terrible things while putting the responsibility on some ethereal, unaccountable "intelligence" (aka a computer).

[–] technocrit@lemmy.dbzer0.com 1 point 3 months ago

"""""""""""""""""""""""""""""""paradise"""""""""""""""""""""""""""""""

[–] technocrit@lemmy.dbzer0.com 0 points 3 months ago* (last edited 3 months ago) (11 children)

I don't think y'all are disagreeing, but maybe this sentence is somewhat confusing:

If you think LLMs doesnt think (I won’t argue that they arent extremely dumb), please define what is thinking,

Maybe the "doesnt" shouldn't be there.

[–] technocrit@lemmy.dbzer0.com 9 points 3 months ago* (last edited 3 months ago) (1 child)

LLMs can’t think - only generate statistically plausible patterns

Ah still rolling out the old “stochastic parrot” nonsense I see.

Ah still rolling out the old "computers think" pseudo-science.

I have used LLMs to get some work done and… guess what, it did the work!

Ah yes the old pointless vague anecdote.

What psychological hazard have I fallen for exactly?

Promoting pseudo-science.

Overall D. Neither interesting nor new nor useful.

[–] technocrit@lemmy.dbzer0.com 2 points 3 months ago* (last edited 3 months ago)

Wait until he finds out his holy document was concocted by genocidal enslavers... Functioning as intended.

https://en.wikipedia.org/wiki/Civil_religion

[–] technocrit@lemmy.dbzer0.com 1 points 3 months ago* (last edited 3 months ago)

The difference between reasoning models and normal models is reasoning models are two steps,

That's a garbage definition of "reasoning". Someone who is not a grifter would simply call them two-step models (or similar), instead of promoting misleading anthropomorphic terminology.

[–] technocrit@lemmy.dbzer0.com 0 points 3 months ago

The funny thing about this "AI" griftosphere is how grifters will make some outlandish claim and then different grifters will "disprove" it. Plenty of grant/VC money for everybody.

 

Considering all the lethal obstacles Palestinian journalists must contend with to do their jobs—not to mention the psychological toll of having to report genocide day in and day out while essentially serving as moving targets for the Israelis—it seems the least their international media colleagues might do is acknowledge them in death. Alas, mum’s the word.

And on that note, it’s worth recalling some of Shabat’s own words: “All we need is for you not to leave us alone, screaming until our voices go hoarse, with no one to hear us.”

 

Recent coverage of Gaza and the West Bank illustrates that, while corporate media occasionally outright call for expelling Palestinians from their land, more often the way these outlets support ethnic cleansing is by declining to call it ethnic cleansing.

 

FAIR has been among the many groups that have warned that a second Trump administration could see a severe attack against the free press and free speech generally. Ozturk’s arrest is a warning that the Trump administration takes all levels of speech and journalism seriously, and will do whatever it can to terrorize the public into keeping quiet.

 

“Israel built an ‘AI factory’ for war. It unleashed it in Gaza,” laments the Washington Post. “Hospitals Are Reporting More Insurance Denials. Is AI Driving Them?,” reports Newsweek. “AI Raising the Rent? San Francisco Could Be the First City to Ban the Practice,” announces San Francisco’s KQED.

Within the last few years, and particularly the last few months, we’ve heard this refrain: AI is the reason for an abuse committed by a corporation, military, or other powerful entity. All of a sudden, the argument goes, the adoption of “faulty” or “overly simplified” AI caused a breakdown of normal operations: spikes in health insurance claims denials, the skyrocketing of consumer prices, the deaths of tens of thousands of civilians. If not for AI, it follows, these industries and militaries, in all likelihood, would implement fairer policies and better killing protocols.

We’ll admit: the narrative seems compelling at first glance. There are major dangers in incorporating AI into corporate and military procedures. But in these cases, the AI isn’t the culprit; the people making the decisions are. UnitedHealthcare would deny claims regardless of the tools at its disposal. Landlords would raise rents with or without automated software. The IDF would kill civilians no matter what technology was, or wasn’t, available to do so. So why do we keep hearing that AI is the problem? What purpose does this responsibility-avoidance frame serve, and why has it become so common?

On today’s episode, we’ll dissect the genre of “investigative” reporting on the dangers of AI, examining how it serves as a limited hangout, offering controlled criticism while ultimately shifting responsibility toward faceless technologies and away from powerful people.

Later on the show, we’ll be speaking with Steven Renderos, Executive Director of MediaJustice, a national racial justice organization that advances the media and technology rights of people of color. He is the creator and co-host, with the great Brandi Collins-Dexter, of Bring Receipts, a politics and pop culture podcast, and is executive producer of Revolutionary Spirits, a 4-part audio series on the life and martyrdom of Mexican revolutionary leader Francisco Madero.
