ForgottenFlux

joined 2 years ago
 

Encryption can’t protect you from adding the wrong person to a group chat. But there is a setting that can help you make sure you don’t.

You can add your own nickname to a Signal contact by clicking on the person’s profile picture in a chat with them, then clicking “Nickname.” Signal says “Nicknames & notes are stored with Signal and end-to-end encrypted. They are only visible to you.” So, you can add a nickname to a Jason saying “co-founder,” or maybe “national security adviser,” and no one else is going to see it. Just you. When you’re trying to make a group chat, perhaps.

Signal could improve its user interface around groups and people with duplicate display names.

 

A contractor for Immigration and Customs Enforcement (ICE) and many other U.S. government agencies has developed a tool that lets analysts more easily pull a target individual’s publicly available data from a wide array of sites, social networks, apps, and services across the web at once, including Bluesky, OnlyFans, and various Meta platforms, according to a leaked list of the sites obtained by 404 Media. In all, the list names more than 200 sites that the contractor, called ShadowDragon, pulls data from and makes available to its government clients, allowing them to map out a person’s activity, movements, and relationships.

ShadowDragon says in marketing material its tools can be used to monitor protests, and claims it found protests around Union Station in Washington DC during a 2023 visit by Benjamin Netanyahu. Daniel Clemens, ShadowDragon’s CEO, previously said on a podcast that protesters should not “be surprised when people are going to investigate you because you made their life difficult.”

“The long list of sites and services that ShadowDragon’s SocialNet tool accesses is a reminder of just how much data is accessible and collected from and about us to provide surveillance services to the government and others,” Jeramie Scott, senior counsel and director of the Electronic Privacy Information Center’s (EPIC) Project on Surveillance Oversight, told 404 Media in an email. “SocialNet is just one example of the unchecked surveillance ecosystem that lacks any meaningful transparency, oversight, or accountability that allows the government to circumvent Constitutional and statutory protections to access sensitive personal data,” he added.

The leaked list of targeted sites and services includes ones from major tech companies such as Apple, Amazon, Meta, Microsoft, and TikTok. It also includes communication tools like Discord and WhatsApp; activity- or hobby-focused sites like AllTrails, BookCrossing, Chess.com, and cigar review site Cigar Dojo; payment services like Cash App, BuyMeACoffee, and PayPal; sex worker sites OnlyFans and JustForFans; and social networks Bluesky and Telegram. Even relatively obscure social networks are included in the list, such as BeReal.

 

Apple reportedly filed an appeal in hopes of overturning a secret UK order requiring it to create a backdoor for government security officials to access encrypted data.

"The iPhone maker has made its appeal to the Investigatory Powers Tribunal, an independent judicial body that examines complaints against the UK security services, according to people familiar with the matter," the Financial Times reported today. The case "is believed to be the first time that provisions in the 2016 Investigatory Powers Act allowing UK authorities to break encryption have been tested before the court," the article said.

Although it wasn't previously reported, Apple's appeal was filed last month, at about the time it withdrew Advanced Data Protection (ADP) from the UK, the Financial Times wrote today.

"The case could be heard as soon as this month, although it is unclear whether there will be any public disclosure of the hearing," the FT wrote. "The government is likely to argue the case should be restricted on national security grounds."

 

At launch, access to Mullvad Leta was restricted to users with a paid Mullvad VPN account, but it is now free and open to all.

Mullvad Leta has been audited by Assured.

Just a heads-up: some of the details in the FAQ and Terms of Service seem a bit outdated and might not be accurate anymore.

Some relevant information from their FAQ section is as follows:

What can I do with Leta?

Leta is a search engine. You can use it to return search results from many locations. We provide text search results; currently we do not offer image, news, or any other types of search results. Leta acts as a proxy to Google and Brave search results. You can select which backend search engine you wish to use from the homepage of Leta.

Can I use Leta as my default search engine?

Yes, so long as your browser supports changing default search engines.

Navigate to https://leta.mullvad.net/ in your browser and right-click on the URL bar.

From there you should see Add “Mullvad Leta” with the Mullvad VPN logo to the left.

If you do not see this, you can attempt to add a custom search engine to your browser manually.

You can select which backend engine (Google or Brave) to use from the Leta homepage.

Did you make your own search engine from scratch?

We did not; we made a front end to the Google and Brave Search APIs.

Our search engine performs the searches on behalf of our users. This means that rather than using Google or Brave Search directly, our Leta server makes the requests.

Searching by proxy in other words.

What is the point of Leta?

Leta aims to present a reliable and trustworthy way of searching privately on the internet.

However, Leta is useless as a service if you use the perfect non-logging VPN, a privacy-focused DNS service, and a web browser that resists fingerprinting and correlation attacks from global actors. Leta is also useless if your browser blocks all cookies, tracking pixels and other tracking technologies.

For most people Leta can be useful, as the above conditions cannot ever truly be met by systems that are available today.

What is a cached search?

We store every search in a RAM-based cache (Redis); entries are removed once they are more than 30 days old.

Cached searches are fetched from this storage, which means we return a result that can be from 0 to 30 days old. It may be the case that no other user has searched for something during the time that you search, which means you would be shown a stale result.

What happens to everything I search for?

Your searches are performed by proxy: it is the Leta server that makes calls to the Google or Brave Search API.

Each search that has not already been cached is saved in RAM for 30 days. The idea is that the more searches performed, the larger and more substantial the cached results become, therefore aiding with privacy.

All searches are stored in the cache hashed with a secret. When you perform a search, the cache is checked first before determining whether a direct call to Google or Brave Search should be made. Each time the Leta application is restarted server side (due to an upgrade or new version), a new secret is generated, meaning that all previous search queries are no longer visible to Leta.

What could potentially be a unique search would become something that many other users would also search for.

What is running on the server side?

We run the Leta servers on STBooted RAM only servers, the same as our VPN servers. These servers run the latest Ubuntu LTS, with our own stripped down custom Mullvad VPN kernel which we tune in-house to remove anything unnecessary for the running system.

The cached search results are stored in an in-memory Redis key / value store.

The Leta service is a NodeJS based application that proxies requests to Google or Brave Search, or returns them from cache.

We gather metrics on the number of cached searches versus direct searches, solely to understand the value of our service.

Additionally we gather information about CPU usage, RAM usage and other such information to keep the service running smoothly.

 

Firefox maker Mozilla deleted a promise to never sell its users' personal data and is trying to assure worried users that its approach to privacy hasn't fundamentally changed. Until recently, a Firefox FAQ promised that the browser maker never has and never will sell its users' personal data. An archived version from January 30 says:

Does Firefox sell your personal data?

Nope. Never have, never will. And we protect you from many of the advertisers who do. Firefox products are designed to protect your privacy. That's a promise.

That promise is removed from the current version. There's also a notable change in a data privacy FAQ that used to say, "Mozilla doesn't sell data about you, and we don't buy data about you."

The data privacy FAQ now explains that Mozilla is no longer making blanket promises about not selling data because some legal jurisdictions define "sale" in a very broad way:

Mozilla doesn't sell data about you (in the way that most people think about "selling data"), and we don't buy data about you. Since we strive for transparency, and the LEGAL definition of "sale of data" is extremely broad in some places, we've had to step back from making the definitive statements you know and love. We still put a lot of work into making sure that the data that we share with our partners (which we need to do to make Firefox commercially viable) is stripped of any identifying information, or shared only in the aggregate, or is put through our privacy preserving technologies (like OHTTP).

Mozilla didn't say which legal jurisdictions have these broad definitions.

 

Hot off the back of its recent leadership rejig, Mozilla has announced users of Firefox will soon be subject to a ‘Terms of Use’ policy — a first for the iconic open source web browser.

This official Terms of Use will, Mozilla argues, offer users ‘more transparency’ over their ‘rights and permissions’ as they use Firefox to browse the information superhighway — as well as Mozilla’s “rights” to help them do it, as this excerpt makes clear:

You give Mozilla all rights necessary to operate Firefox, including processing data as we describe in the Firefox Privacy Notice, as well as acting on your behalf to help you navigate the internet.

When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

Also about to go into effect is an updated privacy notice (aka privacy policy). This adds a crop of cushy caveats to cover the company’s planned AI chatbot integrations, cloud-based service features, and more ads and sponsored content on Firefox’s New Tab page.

 

Signal CEO Meredith Whittaker says her company will withdraw from countries that force messaging providers to allow law enforcement officials to access encrypted user data, as Sweden continues to mull such plans.

She made the claims in an interview with Swedish broadcaster SVT Nyheter, which reported that the government could legislate for a so-called E2EE backdoor as soon as March 2026. The law could bring all E2EE messenger apps, such as Signal, WhatsApp, and iMessage, into scope.

Whittaker said there is no such thing as a backdoor for E2EE "that only the good guys can access," however.

"Either it's a vulnerability that lets everyone in, or we continue to uphold strong, robust encryption and ensure the right to privacy for everyone. It either works for everyone or it's broken for everyone, and our response is the same: We would leave the market before we would comply with something that would catastrophically undermine our ability to provide private communications."

Sweden launched an investigation into its data retention and access laws in 2021; led by Minister of Justice Gunnar Strömmer, it was finalized and published in May 2023.

Strömmer said it was vital that law enforcement and intelligence agencies were able to access encrypted messaging content to scupper serious crime – the main argument made by the UK in pursuing its long-term ambition to break E2EE.

The inquiry made several proposals to amend existing legislation, including a recommendation that encrypted messaging providers store chat data for up to two years and make it available to law enforcement officials upon request.

It would essentially mirror the existing obligation for telecoms companies to provide call and SMS data to law enforcement, as is standard across many parts of the developed world, but extend it to encrypted communications providers.

 

Proton: “We’re consolidating our social media presence due to limited resources and no longer posting on Mastodon. Follow us on Reddit for the latest updates”

 

Bitwarden isn't going proprietary after all. The company has changed its license terms once again – but this time, it has switched the license of its software development kit from its own homegrown one to version three of the GPL instead.

The move comes just weeks after we reported that it wasn't strictly FOSS any more. At the time, the company claimed that this was just a mistake in how it packaged up its software, saying on Twitter:

It seems like a packaging bug was misunderstood as something more, and the team plans to resolve it. Bitwarden remains committed to the open source licensing model in place for years, along with retaining a fully featured free version for individual users.

Now it's followed through on this. A GitHub commit entitled "Improve licensing language" changes the licensing on the company's SDK from its own license to the unmodified GPL3.

Previously, if you removed the internal SDK, it was no longer possible to build the publicly available source code without errors. Now the publicly available SDK is GPL3 and you can get and build the whole thing.

 
  • PayPal to Share Shopping Details
  • LinkedIn Opts You In for AI Data Sharing
  • 23andMe May Sell Your DNA Data
 

A new Federal Trade Commission (FTC) report confirms what EFF has been warning about for years: tech giants are widely harvesting and sharing your personal information to fuel their online behavioral advertising businesses. This four-year investigation into the data practices of nine social media and video platforms, including Facebook, YouTube, and X (formerly Twitter), demonstrates how commercial surveillance leaves consumers with little control over their privacy. While not every investigated company committed the same privacy violations, the conclusion is clear: companies prioritized profits over privacy.

While EFF has long warned about these practices, the FTC’s investigation offers detailed evidence of how widespread and invasive commercial surveillance has become. Here are key takeaways from the report.

 

LinkedIn users in the U.S. — but not the EU, EEA, or Switzerland, likely due to those regions’ data privacy rules — have an opt-out toggle in their settings screen disclosing that LinkedIn scrapes personal data to train “content creation AI models.” The toggle isn’t new. But, as first reported by 404 Media, LinkedIn initially didn’t refresh its privacy policy to reflect the data use.

The terms of service have now been updated, but such an update ordinarily occurs well before a big change like using user data for a new purpose. The idea is that it gives users the option to make account changes or leave the platform if they don’t like the changes. Not this time, it seems.

To opt out of LinkedIn’s data scraping, head to the “Data Privacy” section of the LinkedIn settings menu on desktop, click “Data for Generative AI improvement,” then toggle off the “Use my data for training content creation AI models” option. You can also attempt to opt out more comprehensively via this form, but LinkedIn notes that any opt-out won’t affect training that’s already taken place.

The nonprofit Open Rights Group (ORG) has called on the Information Commissioner’s Office (ICO), the U.K.’s independent regulator for data protection rights, to investigate LinkedIn and other social networks that train on user data by default.

“LinkedIn is the latest social media company found to be processing our data without asking for consent,” Mariano delli Santi, ORG’s legal and policy officer, said in a statement. “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.”
