Technology


Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles have to be recent, no older than 2 weeks.
  3. No videos.
  4. Post only direct links.

To encourage more original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

More sites will be added to the blacklist as needed.

Encouraged:

founded 2 months ago
Original article published by CIPESA under Creative Commons Attribution 4.0 license.

In a July 18, 2025 decision, Uganda’s Personal Data Protection Office (PDPO) found Google LLC in breach of the country’s data protection law and ordered the global tech giant to register with the local data protection office within 30 days.

The decision would place the popular search engine under the ambit of Uganda’s Data Protection and Privacy Act, whose provisions it would have to comply with. In particular, the PDPO has ordered Google to provide – within 30 days – documentary evidence of how it is complying with requirements for transferring the personal data of Ugandan citizens outside of the country’s borders. Google also has to explain the legal basis for making those cross-border data transfers and the accountability measures in place to ensure that such transfers respect Uganda’s laws.

The orders followed a November 2024 complaint by four Ugandans, who argued that as a data collector, controller, and processor, Google had failed to register with the PDPO as required by local laws. They also contended that Google unlawfully transferred their personal data outside Uganda without meeting the legal conditions enshrined in the law, and claimed these actions infringed their data protection and privacy rights and caused them distress.

The PDPO ruled that Google was indeed collecting and processing personal data of the complainants without being registered with the local data regulator, which contravened section 29 of the Data Protection and Privacy Act. Google was also found liable for transferring the complainants’ data across Uganda’s borders without taking the necessary safeguards, in breach of section 19 of the Act.

This section provides that, where a data processor or data controller based in Uganda processes or stores personal data outside Uganda, they must ensure that the country in which the data is processed or stored has adequate measures for protecting the data. Those measures should at least be equivalent to the protection provided for under the Ugandan law. The consent of the data subject should also be obtained for their data to be stored outside Uganda.

In its defence, Google argued that since it was not based in Uganda and had no physical presence in the country, it was not obliged to register with the PDPO, and the rules on cross-border transfers of personal data did not apply to it. However, the regulator rejected this argument, determining that Google is a local data controller since it collects data from users in Uganda and decides how that data is processed.

The regulator further determined that the local data protection law has extra-territorial application, as it states in section 1 that it applies to a person, institution or public body outside Uganda who collects, processes, holds or uses personal data relating to Ugandan citizens. Accordingly, the regulator stated, the law places obligations “not only to entities physically present in Uganda but to any entity handling personal data of Ugandan citizens, including those established abroad, provided they collect or process such data.”

The implication of this decision is that all entities that collect Ugandans’ data, including tech giants such as Meta, TikTok, and X, must register with the Ugandan data regulator. The decision echoes global calls to hold Big Tech more accountable and for African countries to enact strong laws in line with the African Union (AU) Convention on Cyber Security and Personal Data Protection (the Malabo Convention) and the AU Data Policy Framework.

However, enforcement of these orders remains a challenge. For instance, Uganda’s PDPO does not make binding decisions and only makes declaratory orders. Additionally, the regulator does not have powers to make orders of compensation to aggrieved parties, and indeed did not do so under the current decision. It can only recommend that the complainants engage a court of competent jurisdiction, in accordance with section 33(1) of the Act.

Conversely, the Office of the Data Protection Commissioner of Kenya, established by section 5 of the Data Protection Act, 2019, and the Personal Data Protection Commission of Tanzania, established by section 6 of the Protection of Personal Information Act, 2022, are vested with powers to issue administrative fines under section 9(1)(f) and section 47 of those Acts, respectively.

The dilemma surrounding the Uganda PDPO presents major concerns about its capacity to remedy wrongs of global data collectors, controllers and processors. Among its declarations in the July 2025 decision was that it would not issue an order for data localisation “at this stage” but “Google LLC is reminded that all cross-border transfers of personal data must comply fully with Ugandan law”. This leaves unanswered questions over data sovereignty and respect for individuals’ data rights given the handicaps faced by data regulators in countries such as Uganda and the practicalities presented by the global digital economy.

In these circumstances, Uganda’s Data Protection and Privacy Act should be amended to expand the powers of PDPO to impose administrative fines so as to add weight and enforceability to its decisions.

Using widely available technology, well-known ethical hackers Chris Kubecka and Paula Popovici quickly accessed numerous pornography sites without ever verifying their ages.

Tech firms must introduce age checks to prevent children from accessing porn, self-harm, suicide and eating disorder content.

Bluesky, Discord, Grindr, Reddit and X are among the latest firms to commit to age-gating, while Ofcom lines up targets for enforcement.

Sites and apps where children spend most time must make their feeds safer.

Sites and apps which allow harmful content must protect children from accessing it from the end of this week, Ofcom has warned, as the deadline approaches for tech firms to comply with new rules.


Microsoft says it will no longer use China-based engineers to support the Pentagon. But ProPublica found that the tech giant has relied on its global workforce for years to support other federal clients, including the Justice Department.


A newly established Hungarian company is spending hundreds of thousands of euros on advertisements attacking Hungary’s opposition leader Péter Magyar and Ukrainian President Volodymyr Zelensky—far exceeding its reported income and without revealing the source of its funding. Meta eventually removed a wave of similar ads targeting Viktor Orbán’s opponents for violating its terms of service—but only after profiting from displaying them to millions of users.


Data centers, cornerstones of the EU’s digital ambitions, are expanding rapidly across the continent. Public responses to this digital prioritization vary dramatically. Some communities embrace them as engines of development; others push back, citing threats to the environment and their quality of life.


During yesterday's "Winning the AI Race" summit, President Trump weighed in on the debate surrounding AI and copyright, noting that it is "not doable" for AI companies to pay for all copyrighted content used in model training. This stance, shared amidst ongoing AI copyright lawsuits, aims to keep the U.S. competitive in the global AI landscape, especially against countries like China.


To hear health officials in the Trump administration talk, artificial intelligence has arrived in Washington to fast-track new life-saving drugs to market, streamline work at the vast, multibillion-dollar health agencies, and be a key assistant in the quest to slash wasteful government spending without jeopardizing their work.

“The AI revolution has arrived,” Health and Human Services Secretary Robert F. Kennedy Jr. has declared at congressional hearings in the past few months.

“We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals,” he told the House Energy and Commerce Committee in June. The enthusiasm — among some, at least — was palpable.

Weeks earlier, the US Food and Drug Administration, the division of HHS that oversees vast portions of the American pharmaceutical and food system, had unveiled Elsa, an artificial intelligence tool intended to dramatically speed up drug and medical device approvals.

Yet behind the scenes, the agency’s slick AI project has been greeted with a shrug — or outright alarm.

Six current and former FDA officials who spoke on the condition of anonymity to discuss sensitive internal work told CNN that Elsa can be useful for generating meeting notes and summaries, or email and communique templates.

But it has also made up nonexistent studies, known as AI “hallucinating,” or misrepresented research, according to three current FDA employees and documents seen by CNN. This makes it unreliable for their most critical work, the employees said.

“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” said one employee — a far cry from what has been publicly promised.

“AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have” to check for fake or misrepresented studies, a second FDA employee said.

Currently, Elsa cannot help with review work, the lengthy assessment agency scientists undertake to determine whether drugs and devices are safe and effective, two FDA staffers said. That’s because it cannot access many relevant documents, like industry submissions, to answer basic questions such as how many times a company may have filed for FDA approval, their related products on the market or other company-specific information.

All this raises serious questions about the integrity of a tool that FDA Commissioner Dr. Marty Makary has boasted will transform the system for approving drugs and medical devices in the US, at a time when there is almost no federal oversight for assessing the use of AI in medicine.

“The agency is already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets,” the FDA said in a statement on its launch in June.

But speaking to CNN at the FDA’s White Oak headquarters this week, Makary said that right now, most of the agency’s scientists are using Elsa for its “organization abilities,” like finding studies and summarizing meetings.

The FDA’s head of AI, Jeremy Walsh, admitted that Elsa can hallucinate nonexistent studies.

“Elsa is no different from lots of [large language models] and generative AI,” he told CNN. “They could potentially hallucinate.”

Walsh also said Elsa’s shortcomings with responding to questions about industry information should change soon, as the FDA updates the program in the coming weeks to let users upload documents to their own libraries.

Asked about mistakes Elsa is making, Makary noted that staff are not required to use the AI.

“I have not heard those specific concerns, but it’s optional,” he said. “They don’t have to use Elsa if they don’t find it to have value.”

Challenged on how this makes the efficiency gains he has publicly touted when staff inside FDA have told CNN they must double-check its work, he said: “You have to determine what is reliable information that [you] can make major decisions based on, and I think we do a great job of that.”
