kevincox

joined 4 years ago
[–] kevincox@lemmy.ml 6 points 1 year ago (3 children)

I do it the simple way. I just stick nginx in front of everything. If I don't want it to be publicly accessible I stick nginx basic auth in front of it.

The advantage is that I can easily access the services from anywhere on any device with just the password. I only need to trust nginx's basic auth to keep me protected, not each service's own authentication.
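A rough sketch of that layout (the hostname, port and htpasswd path are placeholders, and the credential file has to be created separately with the `htpasswd` tool):

```nginx
server {
    listen 443 ssl;
    server_name service.example.com;  # placeholder hostname

    # One shared credential gate in front of the backend service.
    auth_basic "Private";
    auth_basic_user_file /etc/nginx/htpasswd;  # e.g. created with: htpasswd -c

    location / {
        proxy_pass http://127.0.0.1:8080;  # placeholder internal service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```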

The downside is that some services don't work well with basic auth in front of them. This is often due to things like public links or APIs that need to be accessed with their own auth.

I just use nginx because I've always used it. I've heard that there are newer reverse proxies that are a bit easier to configure.

[–] kevincox@lemmy.ml 3 points 1 year ago

I would also love to see this but I don't think it is currently supported.

[–] kevincox@lemmy.ml 1 points 1 year ago

I mostly just use Firefox Sync. For critical passwords or non-web passwords and other small keys I store them in pass.

[–] kevincox@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

You seem to be talking about binary search, but this is a search with an unbounded end.

I think the actual optimal thing would be just to take the first commit and bisect from there. But in practice this has downsides:

  1. Bugs that you are tracking down are typically biased towards recent commits.
  2. Older commits may not even be relevant to the bug (ex: before the feature was introduced)
  3. You may have trouble building older commits for various reasons.

So I think "optimal" is sort of fuzzy in this context, but I'm sure you could build models for these variables and derive a strategy that is optimal for a given model. I haven't gotten around to doing that yet though.
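One simple strategy that accounts for downside 1 is exponential (galloping) search: walk back from HEAD in doubling steps until you find a good commit, then bisect the remaining window. A toy sketch, assuming a monotone `is_bad` predicate over commit offsets (0 = newest, known bad), which real bug hunts rarely guarantee:

```python
def oldest_bad_offset(is_bad, n):
    """Find the offset (0 = newest commit) of the oldest bad commit.

    Assumes is_bad(0) is True and that badness is monotone: some prefix
    of offsets is bad and everything older is good.
    """
    # Gallop backwards, doubling the step: this lands near the boundary
    # quickly when the bug was introduced recently.
    step = 1
    while step < n and is_bad(step):
        step *= 2
    lo = step // 2         # known-bad offset
    hi = min(step, n - 1)  # candidate good offset
    if is_bad(hi):
        return hi          # the entire history is bad
    # Ordinary bisection between a known-bad and a known-good offset.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

The gallop phase costs O(log d) probes where d is the distance to the boundary, so a recently introduced bug is found much faster than bisecting the whole history from the first commit.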

[–] kevincox@lemmy.ml 7 points 1 year ago* (last edited 1 year ago) (1 children)

My Synapse install is using 94MiB of RAM and 500MiB of database disk space. CPU usage is effectively zero. I only have 3 active users but decades of conversation history for myself (imported from other services). An uncompressed pg_dump of the data is about 250MiB which is within an order of magnitude of the raw text that I have in it. Nearly all of the conversations are encrypted so it wouldn't compress much.

Given that just running Python takes 13MB of RAM it probably isn't using many resources past loading the code. At least at small scale, running a Matrix server is not a notable resource burden for most people. A Matrix server written in a more efficient language (like Conduit) would likely be fairly similar to an XMPP server written in the same language. Either way, unless you are hosting thousands of users this doesn't seem like a major problem for either protocol.

[–] kevincox@lemmy.ml 29 points 1 year ago (4 children)

I don't think you can pick out any one reason. XMPP is very old and has extensions for a huge variety of features. Many people have experience with older versions that were missing major features (such as robust multi-device support with offline delivery and server-side history), and a lot of the "hype" died out long ago.

Matrix is new and made a lot of decisions that really helped its popularity.

  1. Having an HTTP-based client-to-server protocol makes web clients very easy to make.
  2. It is based on sync and merging rather than individual messages, which moves some difficult problems (like multi-device support and server-side history) into the core protocol, meaning that they work well out of the box.
  3. HTTP-based protocols also make hosting familiar for many people.
  4. The "default" Element clients have lots of features out of the box, features that for a long time were not always present on XMPP servers or clients. This gives a more consistent experience.
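Point 2 shows up directly in the shape of a client-server `/sync` response: the server hands back per-room timeline chunks plus a `next_batch` token to resume from, so every device converges on the same history by replaying the same stream. A rough sketch of folding one such response into local state (the payload below is a hand-written stand-in, not real server output):

```python
import json

def apply_sync(state, sync_response):
    """Fold one /sync response into a {room_id: [events]} dict and
    return the next_batch token to resume from."""
    joined = sync_response.get("rooms", {}).get("join", {})
    for room_id, room in joined.items():
        events = room.get("timeline", {}).get("events", [])
        state.setdefault(room_id, []).extend(events)
    return sync_response["next_batch"]

# Hand-written stand-in for a server response.
sample = json.loads("""{
  "next_batch": "s72595_4483_1934",
  "rooms": {"join": {"!abc:example.org": {"timeline": {"events": [
    {"type": "m.room.message", "sender": "@alice:example.org",
     "content": {"msgtype": "m.text", "body": "hello"}}
  ]}}}}
}""")

state = {}
since = apply_sync(state, sample)
```

A client just loops: pass the returned token back as the `since` parameter of the next `/sync` call, and the server sends only what happened after it.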

We will see what the future holds. Matrix is still very new and maybe the hype will die out and we end up moving back to XMPP. Or maybe something new. Overall I don't think there are major fundamental differences. I think Matrix making graph sync the core primitive to build off of was a good idea, but in practice I don't think it matters much.

You say that XMPP is much lighter, but I think that is mostly due to Synapse not being very efficient. Other implementations are fairly light, and even Synapse is using fairly small amounts of resources for me. You should also check that you are making an apples-to-apples comparison, with the large rooms, media and message history you would typically see on a common Matrix server.

[–] kevincox@lemmy.ml 8 points 1 year ago

A blog platform.

[–] kevincox@lemmy.ml 4 points 1 year ago (2 children)

Like actually deletes them from the working copy? Or just removes them in the code sent to the compiler but they still appear in the editor?

[–] kevincox@lemmy.ml 1 points 1 year ago

That's a good point, I worded it poorly. The backing server is provided by you (via your browser). In theory you could run your own or whatever you want. But all traffic is encrypted so it doesn't matter much who runs it.

[–] kevincox@lemmy.ml 2 points 1 year ago (1 children)

Why would spoofing the user-agent help if it is an IP-based block?

[–] kevincox@lemmy.ml 16 points 1 year ago (2 children)

I created my own similar tool: https://filepush.kevincox.ca/

It is optimized for the case where you commonly send files to the same devices. For example I have set up all of my devices as well as my partner's phone and Steam Deck. Then I can just tap them and send the file with end-to-end encryption.

It is sort of cool that there is no backing server, just static files. All of the signalling goes over WebPush.

[–] kevincox@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

You can't really do this on the web as devices can't directly connect to one another. You need some signalling server to bootstrap the transfer. However, almost all of these WebRTC services will actually do the transfer locally if both devices are connected to the same network and can talk to one another directly.
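The bootstrap step is small: each side just needs a shared mailbox through which to swap session descriptions before attempting a direct connection. A toy in-memory stand-in for that rendezvous role (real WebRTC exchanges SDP offers/answers and ICE candidates over the signalling channel; the class and room code here are hypothetical):

```python
class Rendezvous:
    """Toy stand-in for a WebRTC signalling server: a mailbox keyed by
    a room code that two peers use to swap session descriptions."""

    def __init__(self):
        self.rooms = {}

    def post(self, code, role, description):
        # Store one peer's session description under the shared code.
        self.rooms.setdefault(code, {})[role] = description

    def fetch(self, code, role):
        # The other peer retrieves it using the same code.
        return self.rooms.get(code, {}).get(role)

# Peer A publishes an "offer"; peer B reads it and posts an "answer".
# Once both sides hold the other's description they can attempt a
# direct connection, which is why the relay only matters at handshake.
relay = Rendezvous()
relay.post("crab-42", "offer", {"sdp": "A's session description"})
offer = relay.fetch("crab-42", "offer")
relay.post("crab-42", "answer", {"sdp": "B's session description"})
answer = relay.fetch("crab-42", "answer")
```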

So you would need a native application.
