klu9

joined 5 months ago
[–] klu9@lemmy.ca 3 points 2 months ago (1 children)

TV in the UK has a substantially different regulatory framework and environment to the US and some other countries.

That's why even when Sky News (UK) was Murdoch-owned, it was a respected news source; it was the prestige loss leader he used to help get the rest of Sky regulator-approval. Compare to Sky News Australia; same name but full-on swivel-eyed Fox News-style bollocks.

At least for now, I believe it's easier to keep this sort of thing in check (on TV; the papers, oof!) in the UK. But people have to stay vigilant and make sure regulators know folks are keeping an eye on both them and the likes of GB News.

[–] klu9@lemmy.ca 2 points 2 months ago

I know so much about owls thanks to the Infosphere.

[–] klu9@lemmy.ca 4 points 2 months ago

You made it happen!!! Many thanks!

[–] klu9@lemmy.ca 3 points 2 months ago (2 children)

My 1st #Monsterdon and 1st watch party. Man, the posts were coming thick and fast (1-6 per second!). I don't think I could handle it with a movie I hadn't already seen.

Great fun, at least the posts I was able to read ;)

[–] klu9@lemmy.ca 5 points 2 months ago

Conky users?

[–] klu9@lemmy.ca 1 points 2 months ago

Man, I love this track! But the audio on this video seems off, really muddy?

Here's the extended version with much clearer sound.

[–] klu9@lemmy.ca 2 points 2 months ago

“Next time I’ve got a hot date I’m gonna hit that pounded yam early. I’m a skinny guy – I need that power in my back.” Tim Westwood, yam-mad and horny.

[–] klu9@lemmy.ca 3 points 2 months ago

> Make sure to unsubscribe from any streaming services including and especially Netflix.

I don't disagree, but afaik Netflix's Reed Hastings is one of the few tech CEOs who has not been licking Trump's boots, and openly supported Harris. Unlike the bosses of Prime (Bezos) and YouTube (Sundar), who both obsequiously clapped along at the Dear Leader's inauguration that they donated to.

[–] klu9@lemmy.ca 1 points 2 months ago

This was a nice discovery, thanks!

[–] klu9@lemmy.ca 11 points 2 months ago

No, it counts as date night.

 

More on the xAI crimes in Memphis:

White-supremacy-born broligarch installs 35 unpermitted gas turbines and immediately becomes the city's single biggest NOx polluter and possibly biggest emitter of the carcinogen formaldehyde... in a black-majority neighbourhood that already has a life expectancy 12 years below the average of its very own county and cancer rates 4x the national average.

But fret not! Surely the environmental justice unit of the EPA will put a stop to this outrage. What? It was just disbanded? By who? The very same white-supremacy-born broligarch who installed... (continue ad ~~nauseam~~ revolutionem)

 

"There are three factors," he told The Register. "The first is really the unreliability, because we see what Trump is doing and the danger is that things will be just switched off from one day to another for negotiation purposes. Then we see the whole question around pricing with the tariffs.

"And then the other thing is really the espionage factor. This is relatively new and surprising to me ... but now you see what Musk is doing, that you can access really confidential databases ... I think this is a realistic fear nowadays."

9 points · submitted 3 months ago* (last edited 3 months ago) by klu9@lemmy.ca to c/pixelfed@lemmy.ca
 

Still new to the fediverse.

I see some posts on Pixelfed (I have a pixelfed.ca account) that come from people on Mastodon.

I can:

  • view those posts
  • like those posts
  • comment on those posts
  • see comments from other Pixelfed users
  • see occasional comments from Mastodon users
  • reply to comments from other Pixelfed users
  • follow the Mastodon user

All on Pixelfed (web).

But when, on pixelfed.ca, I click the three-dot menu on a Mastodon user's post and then "View Post", it takes me to a Mastodon page and:

  • I see lots more comments that don't appear on Pixelfed
  • I don't see my own comments
  • I don't see comments from anyone whose fediverse instance seems to be Pixelfed-related

On Pixelfed.ca, I don't think I have ever seen:

  • a reply to me from a non-pixelfed user (but hey, I'm new, so maybe that's just me)
  • (not sure about replies to others from non-pixelfed users)

So my questions are:

  1. Can Mastodon users see/reply to comments from Pixelfed users?
  2. Can Pixelfed users see/reply to Mastodon users' comments on Pixelfed?
  3. Or do I need to get a Mastodon account to see / interact with them?
  4. Can I link to a specific post by a Mastodon user as it's shown on Pixelfed? For this post I wanted to link to an example post as it's shown on both Pixelfed and Mastodon, but the links I can get from Pixelfed ("View Post" or clicking on the post time & date) go only to Mastodon (see the lookup sketch at the end of this post). E.g. https://universeodon.com/@georgetakei/114371745785506928
  5. How is it decided which Mastodon posts get seen by Pixelfed users?
  6. Any other handy info on Pixelfed-Mastodon interaction?

PS I may not have covered all the possible interactions, or got them all right. Another one I just thought of: do we see each other's likes? I don't think I see all of the Mastodon users' likes on Pixelfed.
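
For question 4, here's a minimal sketch of the general fediverse mechanism as I understand it (not Pixelfed-specific, and the script below is just an illustration): most servers hand back the machine-readable version of a post if you ask for it with an `Accept: application/activity+json` header, and pasting the resulting canonical id (or the original post URL) into your own instance's search box usually makes your instance fetch the post and show you a local copy you can link to. Whether Pixelfed then gives you a stable local permalink is something I'd still like someone to confirm.

```python
# Minimal sketch: fetch the ActivityPub representation of a remote post via
# content negotiation and print its canonical object id. Assumes the remote
# server honours standard ActivityPub content negotiation (Mastodon does).
import requests

POST_URL = "https://universeodon.com/@georgetakei/114371745785506928"

resp = requests.get(
    POST_URL,
    headers={"Accept": "application/activity+json"},
    timeout=10,
)
resp.raise_for_status()

obj = resp.json()
# "id" is the canonical ActivityPub id of the post; "attributedTo" is the author.
print("canonical id:", obj.get("id"))
print("author:      ", obj.get("attributedTo"))
```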

 

New data reveals the hidden network of African workers powering AI, as they push for transparency from the global companies that employ them indirectly.

The broligarchy use subcontractors and sub-subcontractors to exploit workers in Africa with a veneer of deniability and to enrich... themselves, naturally. Violating workers' rights and data privacy along the way.

 

Following on from https://alex.lemmy.ca/lemmy.ca/comment/16014727

In almost every scifi-action movie of the 90s, there appears an exercise machine (?) with three concentric rings, and you strap into the middle and spin around in all directions. IIRC the outside ring is immobile and upright, while the inner two move on different axes, so the user can spin any which way.

Universal Soldier, Fortress, Drive and a bunch more had it, but I don't have screenshots.

So I've posted my terrible attempt at drawing what I mean. (Either that or the logo for my new political movement.)

Anyone know what it's called?

And whatever happened to it? It's the future now, why aren't we all spinning around in every direction, in between sessions on our hoverboards and flying cars?

EDIT:

Thanks, everyone. Turns out it's called an aerotrim.

So far, seen in the following 90s scifi movies:

  • Contact
  • Drive
  • Fortress
  • Gattaca
  • The Lawnmower Man
  • Universal Soldier
 
 

Some classic quotes from future snacks:

> Mr Lopez is no big fan of Musk and is critical of some of his management practices and politics, but admires the technology his companies have built and is happy to live nearby as long as the companies are good neighbours.
>
> "As long as they don't ruin my water or dig a tunnel beneath my house and create a sinkhole, this isn't bad," he says, gesturing around the metal shed housing the bodega, coffee shop and bar.
>
> ...
>
> Bastrop, {city manager Sylvia Carrillo} says, is a conservative, traditionally Republican place.
>
> "His national stuff doesn't really register," she says. "His companies have been good corporate citizens, and we hope it can stay that way."

His companies have been good corporate citizens...? Apparently they haven't heard what xAI has been up to in Memphis (1)(2). Or what SpaceX has done in the far away land of... Texas (3). Or even that their town's own residents had to fine and protest to stop the Boring Company dumping sludge into the local river.

But hey, at least you got a few jobs and...{checks article} an empty coffee shop out of it. Before Musk pivots to yet another jurisdiction that better meets his latest whims and sticks you with the long-term cleanup and consequences.

 

I also learned that Desi Arnaz chooses Rubber-Top!

https://en.wikipedia.org/wiki/Fatti_di_Rovereta

 

Cross-posted from "Seth Rogen’s Trump Jokes Are Edited Out of Awards Broadcast | Mr. Rogen said President Trump had “single-handedly destroyed all of American science.”" by @silence7@slrpnk.net in !nyt_gift_articles@sopuli.xyz


Tech oligarchs (Sergey Brin, Priscilla Chan and Mark Zuckerberg, Yuri and Julia Milner, and Anne Wojcicki) create a science prize, fund the inauguration of an anti-science president who appoints a fellow tech bro to gut govt funding of science, and then cut the part where someone points out the irony... "for time reasons"... from a YouTube video.

Jeff Bezos, Mark Zuckerberg and Sergey Brin all in attendance.

https://en.wikipedia.org/wiki/Breakthrough_Prize

 

Cross-posted from "What 𝘚𝘪𝘭𝘪𝘤𝘰𝘯 𝘝𝘢𝘭𝘭𝘦𝘺 Knew About Tech-Bro Paternalism" by @paywall@rss.ponder.cat in !theatlantic@rss.ponder.cat


Last fall, the consumer-electronics company LG announced new branding for the artificial intelligence powering many of its home appliances. Out: the “smart home.” In: “Affectionate Intelligence.” This “empathetic and caring” AI, as LG describes it, is here to serve. It might switch off your appliances and dim your lights at bedtime. It might, like its sisters Alexa and Siri, select a soundtrack to soothe you to sleep. The technology awaits your summons and then, unquestioningly, answers. It will make subservience environmental. It will surround you with care—and ask for nothing in return.

Affectionate AI, trading the paternalism of typical techspeak for a softer—or, to put it bluntly, more feminine—framing, is pretty transparent as a branding play: It is an act of anxiety management. It aims to assure the consumer that “the coming Humanity-Plus-AI future,” as a recent report from Elon University called it, will be one not of threat but of promise. Yes, AI overall has the potential to become, as Elon Musk said in 2023, the “most disruptive force in history.” It could be, as he put it in 2014, “potentially more dangerous than nukes.” It is a force like “an immortal dictator from which we can never escape,” he suggested in 2018. And yet, AI is coming. It is inevitable. We have, as consumers with human-level intelligence, very little choice in the matter. The people building the future are not asking for our permission; they are expecting our gratitude.

It takes a very specific strain of paternalism to believe that you can create something that both eclipses humanity and serves it at the same time. The belief is ripe for satire. That might be why I’ve lately been thinking back to a comment posted last year to a Subreddit about HBO’s satire Silicon Valley: “It’s a shame this show didn’t last into the AI craze phase.” It really is! Silicon Valley premiered in 2014, a year before Musk, Sam Altman, and a group of fellow engineers founded OpenAI to ensure that, as their mission statement put it, “artificial general intelligence benefits all of humanity.” The show ended its run in 2019, before AI’s wide adoption. It would have had a field day with some of the events that have transpired since, among them Musk’s rebrand as a T-shirt-clad oligarch and Altman’s bot-based mimicry of the 2013 movie Her.

Silicon Valley reads, at times, more as parody than as satire: Sharp as it is in its specific observations about tech culture, the show sometimes seems like a series of jokes in search of a punch line. It shines, though, when it casts its gaze on the gendered dynamics of tech—when it considers the consequential absurdities of tech’s arrogance.

The show doesn’t spend much time directly tackling artificial intelligence as a moral problem—not until its final few episodes. But it still offers a shrewd parody of AI, as a consumer technology and as a future being foisted on us. That is because Silicon Valley is highly attuned to the way power is exchanged and distributed in the industry, and to tech bros’ hubristic inclination to cast the public in a stereotypically feminine role.

Corporations act; the rest of humanity reacts. They decide; we comply. They are the creators, driven by competition, conquest, and a conviction that the future is theirs to shape. We are the ones who will live with their decisions. Silicon Valley does not explicitly predict a world of AI made “affectionate.” In a certain way, though, it does. It studies the men who make AI. It parodies their paternalism. The feminist philosopher Kate Manne argues that masculinity, at its extreme, is a self-ratifying form of entitlement. Silicon Valley knows that there’s no greater claim to entitlement than an attempt to build the future.

[Read: The rise of techno-authoritarianism]

The series focuses on the evolving fortunes of the fictional start-up Pied Piper, a company with an aggressively boring product—a data-compression algorithm—and an aggressively ambitious mission. The algorithm could lead, eventually, to the realization of a long-standing dream: a decentralized internet, its data stored not on corporately owned servers but on the individual devices of the network. Richard Hendricks, Pied Piper’s founder and the primary author of that algorithm, is a coder by profession but an idealist by nature. Over the seasons, he battles with billionaires who are driven by ego, pettiness, and greed. But he is not Manichean; he does not hew to Manne’s sense of masculine entitlement. He merely wants to build his tech.

He is surrounded, however, by characters who do fit Manne’s definition, to different degrees. There’s Erlich Bachman, the funder who sold an app he built for a modest profit and who regularly confuses luck with merit; Bertram Gilfoyle, the coder who has turned irony poisoning into a personality; Dinesh Chugtai, the coder who craves women’s company as much as he fears it; Jared Dunn, the business manager whose competence is belied by his meekness. Even as the show pokes fun at the guys’ personal failings, it elevates their efforts. Silicon Valley, throughout, is a David and Goliath story. Pied Piper is a tiny company trying to hold its own against the Googles of the world.

The show, co-created by Mike Judge, can be giddily adolescent about its own bro-ness (many of its jokes refer to penises). But it is also, often, insightful about the absurdities that can arise when men are treated like gods. The show mocks the tech executive who brandishes his Buddhist prayer beads and engages in animal cruelty. It skewers Valley denizens’ conspicuous consumption. (Several B plots revolve around the introduction of the early Tesla roadsters.) Most of all, the show pokes fun at the myopia displayed by men who are, in the Valley and beyond, revered as “visionaries.” All they can see and care about are their own interests. In that sense, the titans of tech are unabashedly masculine. They are callous. They are impetuous. They are reckless.

[Read: Elon Musk can’t stop talking about penises]

Their failings cause chaos, and Silicon Valley spends its seasons writing whiplash into its story line. The show swings, with melodramatic ease, between success and failure. Richard and his growing team—fellow engineers, investors, business managers—seem to move forward, getting a big new round of funding or good publicity. Then, as if on cue, they are brought low again: Defeats are snatched from the jaws of victory. The whiplash can make the show hard to watch. You get invested in the fate of this scrappy start-up. You hope. You feel a bit of preemptive catharsis until the next disappointment comes.

That, in itself, is resonant. AI can hurtle its users along similar swings. It is a product to be marketed and a future to be accepted. It is something to be controlled (OpenAI’s Altman appeared before Congress in 2023 asking for government regulation) and something that must not be contained (OpenAI this year, along with other tech giants, asked the federal government to prevent state-level regulation). Altman’s public comments paint a picture of AI that evokes both Skynet (“I think if this technology goes wrong, it can go quite wrong,” he said at the 2023 congressional hearing) and—as he said in a 2023 interview—a “magic intelligence in the sky.”

[Read: OpenAI goes MAGA]

The dissonance is part of the broader experience of tech—a field that, for the consumer, can feel less affectionate than addling. People adapted to Twitter, coming to rely on it for news and conversation; then Musk bought it, turned it into X, tweaked the algorithms, and, in the process, ruined the platform. People who have made investments in TikTok operate under the assumption that, as has happened before, it could go dark with the push of a button. To depend on technology, to trust it at all, in many instances means to be betrayed by it. And AI makes that vulnerability ever more consequential. Humans are at risk, always, of the machines’ swaggering entitlements. Siri and Alexa and their fellow feminized bots are flourishes of marketing. They perform meekness and cheer—and they are roughly as capable of becoming an “immortal dictator” as their male-coded counterparts.

By the end of Silicon Valley’s run, Pied Piper seems poised for an epic victory. The company has a deal with AT&T to run its algorithm over the larger company’s massive network. It is about to launch on millions of people’s phones. It is about to become a household name. And then: the twist. Pied Piper’s algorithm uses AI to maximize its own efficiency; through a fluke, Richard realizes that the algorithm works too well. It will keep maximizing. It will make its own definitions of efficiency. Pied Piper has created a decentralized network in the name of “freedom”; it has created a machine, you might say, meant to benefit all of humanity. Now that network might mean humanity’s destruction. It could come for the power grid. It could come for the apps installed in self-driving cars. It could come for bank accounts and refrigerators and satellites. It could come for the nuclear codes.

Suddenly, we’re watching not just comedy but also an action-adventure drama. The guys will have to make hard choices on behalf of everyone else. This is an accidental kind of paternalism, a power they neither asked for nor, really, deserve. And the show asks whether they will be wise enough to abandon their ambitions—to sacrifice the trappings of tech-bro success—in favor of more stereotypically feminine goals: protection, self-sacrifice, compassion, care.

I won’t spoil things by saying how the show answers the question. I’ll simply say that, if you haven’t seen the finale, in which all of this plays out, it’s worth watching. Silicon Valley presents a version of the conundrum that real-world coders are navigating as they build machines that have the potential to double as monsters. The stakes are melodramatic. That is the point. Concerns about humanity—even the word humanity—have become so common in discussions of AI that they risk becoming clichés. But humanity is at stake, the show suggests, when human intelligence becomes an option rather than a given. At some point, the twists will have to end. In “the coming Humanity-Plus-AI future,” we will have to find new ways of considering what it means to be human—and what we want to preserve and defend. Coders will have to come to grips with what they’ve created. Is AI a tool or a weapon? Is it a choice, or is it inevitable? Do we want our machines to be affectionate? Or can we settle for ones that leave the work of trying to be good humans to the humans?



From The Atlantic via this RSS feed

 

Elon University is a private university in Elon, North Carolina, United States. Founded in 1889 as Elon College, the university is organized into six schools, most of which offer bachelor's degrees and several of which offer master's degrees or professional doctorate degrees.

https://en.wikipedia.org/wiki/Elon_University
