this post was submitted on 29 Jan 2026
272 points (100.0% liked)

Cybersecurity

all 48 comments
[–] ptz@dubvee.org 128 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I'm not a tin-foil hatter by any stretch of the imagination, but this has long been my assumption about why "AI" is being shoved down our throats so hard and from so many angles.

It's almost the perfect spyware, really.

[–] UnspecificGravity@piefed.social 61 points 2 weeks ago (2 children)

There is a reason the FIRST Google implementation of AI was to just read all your emails and give you shitty, inaccurate summaries of the content.

[–] minorkeys@lemmy.world 15 points 2 weeks ago

It's like they're barely even trying to come up with a product justification for invasively spying on you while you use your own computer.

[–] Orygin@sh.itjust.works 0 points 2 weeks ago

You mean Gemini reading your emails? That's way after Bard was a thing.
Plus, Apple AI is basically at the same level still.

[–] jet@hackertalks.com 9 points 2 weeks ago

If I control your agent, I control what you see, what you say, where you go... everything about your life that touches a computer.

[–] redsand 78 points 2 weeks ago (1 children)

Embedding AI in the operating system instead of as a normal program is something that should be punished.

Repeated irresponsible disclosure won't get you paid the way a bug bounty would, but it will fix the architectural problem faster.

[–] Pika@sh.itjust.works 4 points 2 weeks ago (1 children)

I expect that eventually Windows will face antitrust action from established nations again. We haven't seen that since Internet Explorer, but eventually it will happen again.

[–] ColeSloth@discuss.tchncs.de 7 points 2 weeks ago (1 children)

It's already been happening. It's finally, actually, for reals, the year for Linux.

Meme aside, countries have started to get off the Microsoft tit.

[–] Pika@sh.itjust.works 4 points 2 weeks ago

It will happen guys! I swear! 🗞️

[–] noxypaws@pawb.social 66 points 2 weeks ago (1 children)

She's right. End-to-end encryption doesn't mean a lot when your own device can't be trusted not to capture screenshots or store the contents of push notifications.

[–] PieMePlenty@lemmy.world 2 points 2 weeks ago (3 children)

We just need biologically accelerated decryption mechanisms in our brains so we can read encrypted data directly. Keys are safely stored in a new organ which gets implanted at birth.

[–] emeralddawn45@lemmy.dbzer0.com 4 points 2 weeks ago

"Sorry, your neural architecture is incompatible with 2028 society unless you opt in to this neuralink cerebral TPM which will allow for communication and decryption of all new media. Without this upgrade you will be limited to only communicate with legacy users and consume only vintage advertising content."

[–] noxypaws@pawb.social 2 points 2 weeks ago

Cool concept <3

[–] homesweethomeMrL@lemmy.world 37 points 2 weeks ago

Whittaker, a veteran researcher who spent more than a decade at Google, pointed to a fundamental shift in the threat model where AI agents integrated into core operating systems are being granted expansive access to user data, undermining the assumptions that secure messaging platforms like Signal are built on. To function as advertised, these agents must be able to read messages, access credentials, and interact across applications, collapsing the isolation that E2EE relies on.

This concern is not theoretical. A recent investigation by cybersecurity researcher Jamieson O’Reilly uncovered exposed deployments of Clawdbot, an open-source AI agent framework, that were directly linked to encrypted messaging platforms such as Signal. In one particularly serious case, an operator had configured Signal device-linking credentials inside a publicly accessible control panel. As a result, anyone who discovered the interface could pair a new device to the account and read private messages in plaintext, effectively nullifying Signal’s encryption.

[–] kbal@fedia.io 32 points 2 weeks ago (2 children)

I suppose her attention is naturally focused on encryption, but the result of an untrustworthy operating system is not specific to it: Security in general becomes impossible.

[–] UnspecificGravity@piefed.social 22 points 2 weeks ago

Her business is secure communication and communication isn't secure (and can't be secured) if you have someone reading everything over your shoulder.

[–] KSPAtlas@sopuli.xyz 1 points 2 weeks ago

I'm curious: is there any operating system that a program can somehow inherently trust, via some form of verification?

[–] reksas@sopuli.xyz 25 points 2 weeks ago

If your operating system is compromised, you can't make it secure no matter what. Just like if a thief has the keys to your home, no amount of security will make your home safe. At best you might know that you have been burgled.

So it's either using a non-compromised operating system or just accepting the fact that you have no safety.

[–] pinball_wizard@lemmy.zip 18 points 2 weeks ago* (last edited 2 weeks ago)

The headline sounds all spy tech: "Advances in AI break your best encryption!"

But then the article reminds us we are in the stupidest of all possible timelines:

"Embedding AI into the operating system is such a monumentally idiotic thing to do, that no amount of other security controls can save us."

[–] NickwithaC@lemmy.world 12 points 2 weeks ago (1 children)
[–] Kaz@lemmy.org 24 points 2 weeks ago (3 children)

This might be what you're looking for; next phone wipe I'm putting this on:

https://grapheneos.org/

[–] ryannathans@aussie.zone 7 points 2 weeks ago

They are working with an OEM to make an entire phone so stay tuned in that space

[–] Typotyper@sh.itjust.works 3 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Just don't. The only thing I've missed is a debit/credit tap wallet and an app that won't process my credit card purchases for in-account credits. I haven't looked too hard for a techy solution to that one.

Edit: I meant to type "just do it" but... typo

[–] white_nrdy@programming.dev 5 points 2 weeks ago (1 children)

Did you mean "just do it" and autocorrect got you? Based on the rest of the text, that is what I figure.

[–] Typotyper@sh.itjust.works 2 points 2 weeks ago (1 children)

Lol yes. Just do it. I'd like to blame the early hour for my typos but they are a chronic thing.

I'm on a Pixel 9 after leaving an iPhone 15 Pro. iOS 26 drove me away. I spent ten years on iOS. A few habits to break, quirks that are different.

[–] white_nrdy@programming.dev 3 points 2 weeks ago

> they are a chronic thing

I assumed so, given your username 🤣

I myself am rocking a Pixel 7. Put GOS on over the summer. Haven't looked back at all!

[–] nodiratime@lemmy.world 2 points 2 weeks ago (1 children)

Just use Curve. Also, why do you tell others not to use it if only one thing isn't working for you?

[–] _Nico198X_@europe.pub 6 points 2 weeks ago

His username.

[–] village604@adultswim.fan 1 points 2 weeks ago

Don't spend too much time on it. It's based on Android and Google is fucking up their access.

[–] TribblesBestFriend@startrek.website 11 points 2 weeks ago (5 children)
[–] Voroxpete@sh.itjust.works 64 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

~~He's~~ She's talking specifically about the idea of embedding AI agents in operating systems, and allowing them to interact with the OS on the user's behalf.

So if you think about something like Signal, the point is that the message is encrypted as it leaves your device and only gets decrypted when it arrives on the device of the intended recipient. This should shut down most "man in the middle" types of attacks. It's like writing your letters in code so that if the FBI opens them, they can't read any of it.

But when you add an AI agent in the OS, that's like dictating your letter to an FBI agent, and then encrypting it. Kind of makes the encryption part pointless.
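
To make the letter analogy concrete, here's a toy sketch of the E2EE trust model using the PyNaCl library. This is just the model in miniature, not Signal's actual protocol (Signal layers X3DH and the Double Ratchet on top of primitives like these), and the names are made up for illustration:

```python
# Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
# Illustrates the E2EE trust model only -- NOT Signal's actual protocol.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device;
# private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The server (or any man in the middle) only ever sees `ciphertext`.
# Only Bob's device, which holds bob_key, can open it.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

The catch the thread is pointing at: the plaintext necessarily exists in memory on both endpoints, and an agent embedded in the OS sits exactly there, before encryption and after decryption.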

[–] eleijeep@piefed.social 7 points 2 weeks ago (1 children)

> He’s talking specifically

She*

[–] Voroxpete@sh.itjust.works 4 points 2 weeks ago

My bad. Thanks for the correction.

[–] MeThisGuy@feddit.nl 6 points 2 weeks ago

like using Gboard?

[–] French75@slrpnk.net 22 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Encrypted apps like Signal encrypt messages in a way that only you and the recipient can decrypt and read them. Not even Signal can decrypt them. However, it has always been the case that another person could look over your shoulder and read the messages you send, see who you're sending them to, and so on. Pretty obvious, right?

What the author and Signal are calling out here is that all major commercial OSes are now building in features that "look over your shoulder." But it's worse than that, because they also record data from every other device sensor.

Windows Recall is the easiest to understand. It is a tool built into Windows (and enabled by default) that takes a screenshot every few seconds. This effectively captures a stream of everything you do while using Windows: what you browse, who you chat with, the porn you watch, the games you play, where you travel, and who you travel with or near. If you use "private" messaging tools like Signal, they'll be able to see who you are messaging and read the conversations, just as if they were looking over your shoulder, permanently.
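
To get a feel for how little machinery this kind of shoulder-surfing takes, here's a rough sketch of a screenshot-and-OCR loop using the mss and pytesseract libraries. Illustrative only: Recall's real implementation uses an on-device vision model and a searchable local index, but the effect on an "encrypted" chat on screen is the same:

```python
# Rough sketch of a Recall-style capture loop. Illustrative only -- not
# Microsoft's implementation. Needs: pip install mss pytesseract pillow,
# plus the Tesseract OCR binary installed on the system.
import time

import mss
import pytesseract
from PIL import Image

def capture_and_ocr(interval_seconds: float = 5.0) -> None:
    with mss.mss() as screen:
        while True:
            # Grab the primary monitor as raw pixels.
            shot = screen.grab(screen.monitors[1])
            image = Image.frombytes("RGB", shot.size, shot.bgra, "raw", "BGRX")
            # Everything on screen -- including decrypted Signal messages --
            # comes out as plain, indexable text.
            text = pytesseract.image_to_string(image)
            print(text)
            time.sleep(interval_seconds)

capture_and_ocr()
```

Encryption never enters the picture: the capture happens after the app has already decrypted everything for display.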

They claim that for an AI agent to serve you well, it needs to know everything it can about you. They also make dubious claims that they'll never use any of this against you, but they also acknowledge that they comply with court orders and government requests (to varying degrees). So... if you trust all of these companies and every government in the world, there's nothing to worry about.

[–] trevor@lemmy.blahaj.zone 21 points 2 weeks ago (1 children)

"Agentic" LLMs are turning garbage operating systems, like Microslop Winblows, into hostile and untrusted environments where applications need to run. A primary example given is how Recall constantly captures your screen and turns the image data into text that can be processed by Microslop, thus making the fact that Signal is end-to-end encrypted largely irrelevant, since your OS is literally shoulder-surfing you at all times. This is made worse by the fact that the only workaround that application developers can use to defend against this surveillance is to implement OS DRM APIs, which are also controlled by the hostile entity.

[–] kingofras@lemmy.world 14 points 2 weeks ago (1 children)

During the interview, she described how AI agents are marketed as helpful assistants but require sweeping permissions to work. As Whittaker explained, these systems are pitched as tools that can coordinate events or communicate on a user’s behalf, but to do so they must access calendars, browsers, payment methods, and private messaging apps like Signal, placing decrypted messages directly within reach of the operating system.

[–] UnspecificGravity@piefed.social 9 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Your operating system and half the software you use have integrated spyware that can read anything you see on your computer or phone as plain text, and can use that information to notify state actors, or just whoever the fuck they want, of the contents. It doesn't matter that the message was encrypted between you and the other person when they can spy directly on your device.

It's like passing a coded note to a friend in class, and then they open it and just read it out loud to everyone sitting there. Didn't really matter that you encoded it.

[–] rizzothesmall@sh.itjust.works 0 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Is that because code will be so fucking unintentionally obfuscated that even admins will never be able to recover secrets?

[–] new_guy@lemmy.world 7 points 2 weeks ago* (last edited 2 weeks ago)

No. It's because in order for AI agents to work, they need access to the content being transmitted at each end of the communication.