Pro

joined 3 months ago

As the global population ages, older adults face growing psychological challenges such as loneliness, cognitive decline, and loss of social roles. Meanwhile, artificial intelligence (AI) technologies, including chatbots and voice-based systems, offer new pathways to emotional support and mental stimulation. However, older adults often encounter significant barriers in accessing and effectively using AI tools. This review examines the current landscape of AI applications aimed at enhancing psychological well-being among older adults, identifies key challenges such as digital literacy and usability, and highlights design and training strategies to bridge the digital divide. Using socioemotional selectivity theory and technology acceptance models as guiding frameworks, we argue that AI—especially in the form of conversational agents—holds transformative potential in reducing isolation and promoting emotional resilience in aging populations. We conclude with recommendations for inclusive design, participatory development, and future interdisciplinary research.

 

Although QUIC handshake packets are encrypted, the Great Firewall of China (GFW) has been blocking QUIC connections to specific domains since April 7, 2024. In this work, we measure and characterize the GFW’s censorship of QUIC to understand how and what it blocks. Our measurements reveal that the GFW decrypts QUIC Initial packets at scale, applies heuristic filtering rules, and uses a blocklist distinct from its other censorship mechanisms. We expose a critical flaw in this new system: the computational overhead of decryption reduces its effectiveness under moderate traffic loads. We also demonstrate that this censorship mechanism can be weaponized to block UDP traffic between arbitrary hosts in China and the rest of the world. We collaborate with various open-source communities to integrate circumvention strategies into Mozilla Firefox, the quic-go library, and all major QUIC-based circumvention tools.
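For context on why an on-path censor can read these "encrypted" packets at all: per RFC 9001, the Initial packet-protection keys are derived entirely from public values, a fixed version-specific salt plus the client's Destination Connection ID sent in cleartext, so any observer can recompute them. Below is a minimal sketch of that derivation in Python using only the standard library; the example DCID at the end is arbitrary, since a real observer would read it straight from the captured Initial packet's long header.

```python
import hashlib
import hmac
import struct

# RFC 9001, Section 5.2: Initial packets are protected with keys derived from
# a published, version-specific salt and the client's Destination Connection
# ID, which travels in cleartext. Any on-path observer can recompute the keys.
INITIAL_SALT_V1 = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hkdf_expand_label(secret: bytes, label: str, length: int) -> bytes:
    # TLS 1.3 HkdfLabel with an empty context (RFC 8446, Section 7.1).
    # Every output needed here is <= 32 bytes, so one HMAC block suffices.
    full_label = b"tls13 " + label.encode("ascii")
    info = struct.pack("!H", length) + bytes([len(full_label)]) + full_label + b"\x00"
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]


def client_initial_keys(client_dcid: bytes) -> dict:
    """Derive the client's Initial packet-protection keys from the DCID alone."""
    initial_secret = hkdf_extract(INITIAL_SALT_V1, client_dcid)
    client_secret = hkdf_expand_label(initial_secret, "client in", 32)
    return {
        "key": hkdf_expand_label(client_secret, "quic key", 16),  # AES-128-GCM key
        "iv": hkdf_expand_label(client_secret, "quic iv", 12),    # AEAD IV
        "hp": hkdf_expand_label(client_secret, "quic hp", 16),    # header protection
    }


# Example DCID (hypothetical; an observer reads it from the packet header).
for name, value in client_initial_keys(bytes.fromhex("8394c8f03e515708")).items():
    print(name, value.hex())
```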

 

Despite the extensive, highly detailed and good-faith engagements by rightsholder communities throughout this process, the final outcomes fail to address the core concerns which our sectors – and the millions of creators and companies active in Europe which we represent – have consistently raised. The result is not a balanced compromise; it is a missed opportunity to provide meaningful protection of intellectual property rights in the context of GenAI and does not deliver on the promise of the EU AI Act itself.

The feedback of the primary beneficiaries these provisions were meant to protect has been largely ignored, in contravention of the objectives of the EU AI Act as determined by the co-legislators, and to the sole benefit of the GenAI model providers that continuously infringe copyright and related rights to build their models. In 2024, the cultural and creative sectors across Europe welcomed the principles of responsible and trustworthy AI enshrined in the EU AI Act, intended to ensure the mutually beneficial growth of innovation and creativity in Europe.

Today, with the EU AI Act implementing package as it stands, Europe's thriving cultural and creative sectors and copyright-intensive industries, which contribute nearly 7% of EU GDP, employ nearly 17 million professionals, and make an economic contribution larger than the European pharmaceutical, automobile or high-tech industries, are being sold out in favour of those GenAI model providers. The deployment of GenAI models, which also make extensive use of scraping, is already underway. The damage to, and unfair competition with, the cultural and creative sectors can be seen each day. The cultural and creative sectors must be safeguarded, as they are the foundations of our cultures and of the Single Market. We wish to make clear that the outcome of these processes does not provide a meaningful implementation of the GPAI obligations under the AI Act.

We strongly reject any claim that the Code of Practice strikes a fair and workable balance or that the Template will deliver “sufficient” transparency about the majority of copyright works or other subject matter used to train GenAI models. This is simply untrue and is a betrayal of the EU AI Act’s objectives.

 
  • Artificial intelligence is driving the promise of autonomy in everything from self-driving cars to digital health and smart cities.
  • True autonomy emerges from the convergence of sensing, connectivity, computing and control – not isolated intelligence.
  • Accordingly, biological intelligence must serve as the foundational design principle for building next-generation autonomous systems.
 
  • AI represents a transformative phase akin to the Renaissance or Industrial Revolution, with opportunities and risks across various industries.
  • Ultimately, humans must decide how AI unfolds, in a way that benefits humans first.
  • A collective effort across society is essential to harness the best use cases of human-first AI, enabling it to reach its full potential.
 

Uber Canada says it has updated its safety protocols for emergency situations after an incident in March where company representatives refused to contact a driver after he drove off with a child.

Julia Viscomi said Uber customer support refused to help her or Toronto police contact the driver after he left with her 5-year-old daughter asleep in the backseat in North York, CBC Toronto reported in April.

Police ended up finding the child about an hour and a half after the driver left with her, without any help from Uber, Viscomi said.

 

Sada Social Center expresses its deep concern and strong condemnation regarding TikTok’s appointment of Erica Mindel—a former instructor in the Israeli army’s Armored Corps—as the platform’s new Manager of Hate Speech Policy.

According to reports reviewed by Sada Social, Mindel previously worked with the U.S. State Department under Ambassador Deborah Lipstadt, the Biden Administration’s Special Envoy to Monitor and Combat Antisemitism. Prior to that, she served as an instructor in the Israeli army’s Spokesperson’s Unit. In her new role, Mindel will be tasked with formulating TikTok’s hate speech policies, shaping relevant legislative and regulatory frameworks, and monitoring trends—particularly those related to antisemitic content.

Sada Social views this appointment as a highly concerning indicator for the future of digital freedoms for Palestinians. The center warns of the serious implications that Mindel’s military background may have for TikTok’s moderation practices, especially regarding Palestinian reports of incitement, bias, and the silencing of their narrative. Assigning someone affiliated with an army currently under international investigation for genocide in Gaza to lead hate speech policy only entrenches existing biases and undermines the principles of fairness and digital justice.

Sada Social’s 2024 Digital Index revealed that 27% of all digital violations targeting Palestinian content occurred on TikTok. According to TikTok’s own transparency report for the second half of 2024, the platform complied with 94% of the Israeli government’s content removal requests, all while imposing strict censorship on Palestinian content. This included the deletion of videos with clear journalistic value, and the targeting of accounts belonging to journalists, media outlets, activists, and supporters of the Palestinian cause.

Sada Social also underscores that TikTok has failed to undertake any meaningful internal review of its policies, even after the South African government submitted video evidence to the International Court of Justice (ICJ)—footage that was published on TikTok and depicted Israeli soldiers celebrating the destruction of Palestinian homes, mocking victims, and writing messages on bombs before they were dropped on Gaza. Instead of responding to these disturbing violations, TikTok has continued its partnerships with a political and military regime currently under international investigation.

[–] Pro@programming.dev 4 points 1 month ago (1 children)

Not using it even with a gun pointed at my head.

Why? You can use it to send messages to random numbers till the gun is pointed elsewhere.

[–] Pro@programming.dev 1 points 1 month ago

This post has been removed. Paywalls are not allowed. Please check the rules.

[–] Pro@programming.dev 1 points 1 month ago

Hi, this post has been removed because TechCrunch is blacklisted. Please check out the rules.

[–] Pro@programming.dev -3 points 1 month ago (2 children)

they're just going to lower the quality of their products.

Great, then you as a user of the service/product can choose not to deal with companies that use AI.

The rest of the users can enjoy more choices, as they might simply prefer AI.

AI in the strict sense doesn't exist yet.

WTF?

[–] Pro@programming.dev -2 points 1 month ago (5 children)

My dude, don't put words in my mouth.

I said "protesting AI is dumb", not that all protesting is dumb.

Also, when has protesting AI actually worked to achieve real improvement?

[–] Pro@programming.dev -4 points 1 month ago (4 children)

Two things: AI is broader than LLMs, and if you were even remotely correct, then artists and writers would not protest it at all.

[–] Pro@programming.dev 2 points 1 month ago* (last edited 1 month ago) (1 children)

One problem I see in your line of thinking here: adblock use among social media users will never reach 100%.

Furthermore, adblockers are getting weaker with Google Chrome's Manifest V3. All of this leads me to the conclusion that you can only change the sources that power users rely on, which will eventually lead to better privacy for everyone involved. You will never be able to control people's setups to be super private.

[–] Pro@programming.dev 6 points 1 month ago* (last edited 1 month ago) (5 children)

Said that on Social Media

[–] Pro@programming.dev 4 points 1 month ago

Olvid, for people close to me. Signal for strangers.
