On the HN thread one of the Matrix contributors responded. I'll copy his response here in case anyone wants to see what their response to these points is:
Another day, another matrix hit piece on the front page of HN. Unsure whether it's really my job as matrix lead to respond, but hey, let's go again.
TL;DR: the only valid points here really are complaints about state resets (being addressed in https://matrix.org/blog/2025/07/security-predisclosure/) and canonical json edge cases (which are on the radar). We should probably also remove device_display_names entirely. Stuff about "you have to trust other people's servers when you ask them to delete data!" is not exactly earth-shattering, and the encryption & authenticated media issues mentioned got fixed in 2024.
Point by point:
- the graph is append-only by design
Nope, Matrix rooms are designed to let servers prune old data if they want - https://element-hq.github.io/synapse/latest/message_retentio... is how you configure Synapse for it, for instance. The DAG can also have gaps in it (see point 6 below).
- if you do want to delete something, you can send a redaction event which asks other servers very nicely to delete the content of the event, but redactions are advisory
If you ask a server to delete data, you have to trust it actually deletes it. That goes for any protocol; it's nothing to do with Matrix.
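To make "redactions are advisory" concrete: a redaction in Matrix doesn't delete the event, it strips it down to the keys the protocol still needs to validate the room. The following is a simplified sketch of that redaction algorithm; the exact key lists vary by room version, and the ones here are an illustrative subset, not the full spec.

```python
# Simplified sketch of Matrix's redaction algorithm. A redacted event keeps
# only the top-level keys the protocol needs, plus a per-type whitelist of
# content keys. Key lists below are illustrative, not the full spec.

PRESERVED_TOP_LEVEL = {
    "event_id", "type", "room_id", "sender", "state_key",
    "hashes", "signatures", "depth", "prev_events",
    "auth_events", "origin_server_ts",
}

# Per-event-type content keys that survive redaction (subset, illustrative).
PRESERVED_CONTENT = {
    "m.room.member": {"membership"},
    "m.room.power_levels": {"users", "users_default", "ban", "kick", "redact"},
    "m.room.history_visibility": {"history_visibility"},
}

def redact(event: dict) -> dict:
    """Return the redacted form of an event (the original is not modified)."""
    kept = {k: v for k, v in event.items() if k in PRESERVED_TOP_LEVEL}
    allowed = PRESERVED_CONTENT.get(event.get("type"), set())
    kept["content"] = {k: v for k, v in event.get("content", {}).items()
                      if k in allowed}
    return kept

msg = {
    "event_id": "$abc", "type": "m.room.message", "room_id": "!r:example.org",
    "sender": "@alice:example.org", "origin_server_ts": 1700000000000,
    "content": {"msgtype": "m.text", "body": "secret"},
}
assert redact(msg)["content"] == {}  # message bodies do not survive redaction
```

Note that membership events keep their `membership` value even when redacted, which is exactly the "woven into the auth chain" behaviour discussed below.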
- however, servers that choose to ignore redactions, or fail to process them for some other reason, can leak supposedly-deleted data to other servers later on.
see above.
- certain events, like membership changes, bans or pretty much any event that exercises some control over another user can't be deleted ever as they become woven into the "auth chain" of future events
This one's almost true. The fact that "events which exercise control over another user" (i.e. access control) can't be deleted should not be surprising, given access control that doesn't disappear from under you is generally considered a good thing. However, if you really do want to delete it, you could 'upgrade' the room by pointing it to a new room ID, and vape the previous one (although admittedly there's no 'vape room' API yet).
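For context on why those events persist: every event a server accepts carries `auth_events`, pointers to the state events that authorised it (typically the room's creation event, the current power levels, and the sender's own membership event). Deleting one of those would leave every later event unverifiable. A sketch of the shape, with made-up event IDs:

```python
# Illustrative event (IDs are made up, not real). The auth_events field is
# what "weaves" access-control events into the history: later events cite
# them as proof they were allowed, so they can't simply be removed.
message_event = {
    "type": "m.room.message",
    "sender": "@alice:example.org",
    "room_id": "!room:example.org",
    "auth_events": [
        "$create_event_id",        # the room's m.room.create
        "$power_levels_event_id",  # current m.room.power_levels
        "$alice_member_event_id",  # @alice's m.room.member join
    ],
    "content": {"msgtype": "m.text", "body": "hi"},
}
```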
- the only way to discard all of this spam complexity is to recreate the room.
...or upgrade it, which is increasingly a transparent operation (we've been doing a bunch of work on it in preparation for https://matrix.org/blog/2025/07/security-predisclosure/). Meanwhile, mitigating state spam is part of the scope of the ongoing security work mentioned there.
- it's exceptionally hard to linearize history if you only partially know the history of the room.
Yup, this is a feature. We don't want servers to have to sync full room history; they're allowed to do it in chunks. The tradeoff is that ordering the chunks is a heuristic, although we're currently in the process of improving that.
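One plausible version of the "heuristic" being described: topologically sort the chunk of events you do have on their `prev_events` edges, and break ties between concurrent branches by timestamp. This is a generic Kahn's sort, not Matrix's actual ordering algorithm; only the field names (`event_id`, `prev_events`, `origin_server_ts`) are taken from Matrix events.

```python
import heapq

def linearise(events: list[dict]) -> list[dict]:
    """Order a partial DAG of events: topological on prev_events,
    ties broken by origin_server_ts (a heuristic, as noted above)."""
    by_id = {e["event_id"]: e for e in events}
    indegree = {eid: 0 for eid in by_id}
    children: dict[str, list[str]] = {eid: [] for eid in by_id}
    for e in events:
        for parent in e.get("prev_events", []):
            if parent in by_id:  # parents outside the chunk are the "gaps"
                indegree[e["event_id"]] += 1
                children[parent].append(e["event_id"])
    # Events with no un-ordered parents, ordered by timestamp.
    ready = [(by_id[eid]["origin_server_ts"], eid)
             for eid, deg in indegree.items() if deg == 0]
    heapq.heapify(ready)
    out = []
    while ready:
        _, eid = heapq.heappop(ready)
        out.append(by_id[eid])
        for child in children[eid]:
            indegree[child] -= 1
            if indegree[child] == 0:
                heapq.heappush(ready, (by_id[child]["origin_server_ts"], child))
    return out
```

Because ties are broken by server-supplied timestamps, two servers holding different chunks can disagree on the order of concurrent branches, which is the tradeoff acknowledged above.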
- it is also somewhat possible to insert messages into history by crafting events in the graph that refer to older ancestor events
Decentralisation means that servers are allowed to branch from old commits, in git parlance. This is desirable if you're handling delayed traffic from a network partition or outage; we're working on avoiding it in other scenarios.
- another thing that is worth noting is that end-to-end encryption in matrix is completely optional.
Sometimes E2EE makes no sense (e.g. massive public rooms, or clients which don't implement E2EE). Any client that speaks E2EE makes it abundantly clear when a room is encrypted and when it isn't; much like https v. http in a browser.
- the end-to-end encryption is also annoyingly fragile
Not any more; we fixed it over the course of 2024 - see https://2024.matrix.org/documents/talk_slides/LAB4%202024-09... or the recording at https://www.youtube.com/watch?v=FHzh2Y7BABQ. If anyone sees Unable To Decrypt messages these days (at least on Element Web or Element X + Synapse) we need to know about it.
- sometimes these device list updates also leak information about your device
Clients send a default device name (e.g. "Element X on iPhone 12 Pro Max") to the server, to help the user tell their own sessions apart, and to give users on the same server some way of debugging encryption problems. Admittedly this is no longer needed (clients typically hide this data anyway), so the API should be cleaned up.
- the spec doesn’t actually define what the canonical json form is strictly
This one is accurate; we need to tighten/replace canonical json, although in practice this only impacts events which deliberately exploit the ambiguities.
- matrix homeservers written in different languages have json interoperability issues
See above.
- [server] signing key expiry is completely arbitrary
Server signing keys are definitely a wart, and we're working on getting rid of them.
- split-brained rooms are actually a common occurrence
Once https://matrix.org/blog/2025/07/security-predisclosure/ lands, things should be significantly improved.
- state resets happen quite a bit more often when servers written in different languages interoperate
See above.
- room admins and moderators have lost their powers over public rooms many times due to state resets
See above.
- you can’t actually force a room to be shut down across the federation
Same as points 2 and 3: you can't force other people's servers to do anything on the Internet (unless we end up in some kind of DRM or remote-attestation dystopia).
- moderation relies entirely on the functioning of the event auth system
See above for upcoming state reset fixes.
- media downloads are unauthenticated by default
Not since https://matrix.org/blog/2024/06/26/sunsetting-unauthenticate...
- you can ask someone else’s homeserver to replicate media
Only if you're authenticated on it, as of https://matrix.org/blog/2024/06/26/sunsetting-unauthenticate...
- media uploads are unverified by default
Yes: being an end-to-end encrypted comms system, the server can't decrypt your uploads by default, so it can't scan them. Clients can scan if they want, although in practice few do.
- you could become liable for hosting copies of illegal media
This is true of any federated system. If you run a mail server, and one of your users subscribes to a malicious mailing list, your mail server will fill up with bad content. Similarly if you run a usenet server. Or a git forge, and someone starts mirroring malicious content.