kevincox

joined 4 years ago
[–] kevincox@lemmy.ml 6 points 1 year ago (9 children)

Instead of configuring push-to-talk per-app, you may consider software that mutes your mic for all apps as the PTT mechanism, then just leave the mic "active" in each app.

I don't know of a tool that will do this, but on my mouse I have configured a mic mute toggle, so I push to start and push again to stop. Technically, though, I don't think there is anything preventing you from setting up real PTT via this mechanism.
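
This is roughly what that looks like as a small script; a sketch rather than a polished tool, assuming PulseAudio or PipeWire (so `pactl` is available), the `pynput` Python package, and an X11 session for the global key listener. The key choice is arbitrary:

```python
# Push-to-talk sketch: unmute the default mic while a chosen key is held,
# re-mute it on release. Assumes pactl (PulseAudio/PipeWire) and pynput.
import subprocess
from pynput import keyboard

PTT_KEY = keyboard.Key.scroll_lock  # arbitrary choice; pick any key you don't use

def set_mute(muted: bool) -> None:
    # Mute/unmute the system default input source for every app at once.
    subprocess.run(
        ["pactl", "set-source-mute", "@DEFAULT_SOURCE@", "1" if muted else "0"],
        check=True,
    )

def on_press(key):
    if key == PTT_KEY:
        set_mute(False)  # talk while held

def on_release(key):
    if key == PTT_KEY:
        set_mute(True)   # back to muted

set_mute(True)  # start muted
with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
```

The apps only ever see the default source going muted/unmuted, so nothing needs to be configured per-app.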

[–] kevincox@lemmy.ml 23 points 1 year ago (1 children)

This is my dream. However, I think my target market is smaller and less willing to pay (personal rather than business). On the other hand, maintenance is low effort and I want the product for myself, so even if it doesn't make much, or anything, I think I will be happy to run it forever.

The ultimate dream would be to make enough to be able to employ someone else part time, so that there could be business continuity if I wasn't able to run it anymore.

[–] kevincox@lemmy.ml 1 points 1 year ago

There is definitely isolation. In theory (if containers worked perfectly as intended) a container can't see any processes from the host, sees a different filesystem, possibly a different network interface, and so on for basically everything else. Some things are shared, like CPU, memory and disk space, but these can also be limited by the host.
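
For illustration, a rough sketch of how the host can cap those shared resources, here using the Docker SDK for Python; the image, command and limit values are arbitrary placeholders:

```python
# Rough example of host-imposed limits on a container's shared resources.
import docker

client = docker.from_env()
container = client.containers.run(
    "alpine",
    command=["sleep", "300"],
    detach=True,
    mem_limit="512m",     # cap memory usage
    cpu_period=100_000,   # CFS scheduler period (microseconds)
    cpu_quota=50_000,     # allow at most half of one CPU per period
    pids_limit=100,       # cap the number of processes
    read_only=True,       # read-only root filesystem
)
print(container.short_id, container.status)
```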

But yes, in practice the Linux kernel is wildly complex and these interfaces don't work quite as well as intended. You get bugs in permission checks and even memory corruption and code execution vulnerabilities. This results in unintended ways for code to break out of containers.

So in theory the isolation is quite strong, but in practice you shouldn't rely on it for security critical isolation.

[–] kevincox@lemmy.ml 1 points 1 year ago

where you have decent trust in the software you’re running.

I generally say that containers and traditional UNIX users are good enough isolation for "mostly trusted" software. Basically, I know the software isn't going to actively try to escalate its privileges, but it may contain bugs that would cause problems without any isolation.
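
As a concrete (hypothetical) example of the UNIX-user version of this: a launcher that drops a service to its own dedicated account, so its bugs are contained by ordinary file permissions. The service name and paths are made up; it assumes a pre-created `myservice` system user, Python 3.9+ (for the `user=`/`group=` arguments), and that the launcher itself runs as root.

```python
# Run a "mostly trusted" service as its own unprivileged user so that a bug
# in it is contained by normal UNIX permissions. Paths and the "myservice"
# account are hypothetical; requires Python 3.9+ and root to switch users.
import subprocess

subprocess.run(
    ["/usr/local/bin/myservice", "--config", "/etc/myservice.conf"],
    user="myservice",
    group="myservice",
    check=True,
)
```

In practice most people would let something like systemd's `User=` directive do this for them, but the effect is the same.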

Of course, it always depends on your risk. If you are handling sensitive user data and run lots of different services on the same host, you may start to worry about remote code execution vulnerabilities and will be interested in stronger isolation, so that an RCE in any one service doesn't allow escalation to access all the data being processed by other services on the host.

[–] kevincox@lemmy.ml 2 points 1 year ago

IMHO it doesn't majorly change the equation. Plus, in general, a single-word comment doesn't add much to the discussion. I like Podman and use it over Docker, but in terms of the original question, I think my answer would be the same if OP were using Podman.

[–] kevincox@lemmy.ml 1 points 1 year ago

hypervisors get escape vulnerabilities every now and then

Yes, they do. That is why separate hardware is the best solution, but much like going from containers to VMs, the extra isolation has costs. Still, most modern hypervisors are relatively simple and well tested, and the security of huge cloud platforms like AWS and GCP is dependent on them. So if I were running a nuclear power plant I absolutely would not trust a VM boundary, but if I am running some shitty home server, there are millions of more valuable VMs running at public cloud providers that will likely be attacked first.

is a good security boundary.

"Good" will always depend on your use case. In many cases isolation against bugs and simple malicious behaviour like uploading /etc/shadow somewhere are good enough. In most organizations containers are good enough for running separate applications on the same machine as they are "mostly trusted". In fact for my home server I run lots of applications as different users and I am fine with that level of security.

If I was letting untrusted people upload and run arbitrary code I would definitely not be ok with that level of isolation.

The original question was "if Gossa were to be a virus, would I have been infected?" Good security practice is to assume the worst. If I knew that one container or user on my machine had been running malicious code, I would absolutely assume the worst by default: I would wipe and re-install that machine unless I had strong reason to believe that the malware didn't attempt any privilege escalation.

[–] kevincox@lemmy.ml 16 points 1 year ago (1 children)

To be fair this doesn't sound much different than your average human using the internet.

[–] kevincox@lemmy.ml 4 points 1 year ago

The Linux kernel is less secure for running untrusted software than a VM because most hypervisors have a far smaller attack surface.

how many serious organization destroying vulnerabilities have there been? It is pretty solid.

The CVE record begs to differ. The reason most organizations don't get destroyed is that they don't run untrusted software on the same kernels that process their sensitive information.

whatever proprietary software thing you think is best

This is a ridiculous attack. I never suggested anything about proprietary software. Linux's KVM is pretty great.

[–] kevincox@lemmy.ml 5 points 1 year ago (2 children)

I think assuming that you are safe because you aren't aware of any vulnerabilities is bad security practice.

Minimizing your attack surface is critical. Defense in depth is just one way to minimize your attack surface (but a very effective one). Putting your container inside a VM is excellent defense in depth. Running your container as a non-root user barely is, because you still have a Linux-kernel-sized hole in your swiss-cheese defence model.

[–] kevincox@lemmy.ml 6 points 1 year ago (9 children)

I never said it was trivial to escape; I just said it wasn't a strong security boundary. Nothing is black and white. Docker isn't going to stop a resourceful attacker, but you may not need to worry about attackers who are going to spend >$100k on a 0-day vulnerability.

The Linux kernel isn’t easy to exploit as if it was it wouldn’t be used so heavily in security sensitive environments

If any "security sensitive" environment is relying on Linux kernel isolation I don't think they are taking their sensitivity very seriously. The most security sensitive environments I am aware of doing this are shared hosting providers. Personally I wouldn't rely on them to host anything particularly sensitive. But everyone's risk tolerance is different.

use podman with a dedicated user for sandboxing

This is only ever so slightly better. Users have existed in the kernel for a very long time, so that interface may be harder to find bugs in, but at the end of the day the Linux kernel is just too complex to provide strong isolation.

There isn’t any way to break out of a properly configured docker container right now but if there were it would mean that an attacker has root

I would bet $1k that within 5 years we find out that this is false. Obviously all of the publicly known vulnerabilities have been patched. But more are found all of the time. For hobbyist use this is probably fine, but you should acknowledge the risk. There are almost certainly full kernel-privilege code execution vulnerabilities in the current Linux kernel, and it is very likely that at least one of these is privately known.

[–] kevincox@lemmy.ml 8 points 1 year ago

It is. Privilege escalation vulnerabilities are common. There is basically a 100% chance of unpatched container escapes in the Linux kernel. Some of these are very likely privately known and available for sale. So even if you are fully patched a resourceful attacker will escape the container.

That being said, if you are a low-value regular joe who patches regularly, the risk is relatively low.

[–] kevincox@lemmy.ml 14 points 1 year ago (11 children)

Docker (and Linux containers in general) are not a strong security boundary.

The reason is simply that the Linux kernel is far too large and complex an interface to be vulnerability-free. Privilege escalations and container escapes are found regularly, and there are also frequent Docker-specific container escape vulnerabilities.

If you want strong security boundaries you should use a VM, or, even better, separate hardware. This is why cloud container services run containers from different clients in different VMs; containers alone are not good enough to isolate untrusted workloads.
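
To make that concrete, moving a workload behind a VM boundary doesn't take much ceremony. A rough sketch of launching a throwaway QEMU/KVM guest from Python and treating everything inside it as disposable; the disk image name is a placeholder and the memory/CPU values are arbitrary:

```python
# Rough sketch: run an untrusted workload inside a QEMU/KVM guest instead of
# a container on the host kernel. "guest.qcow2" is a placeholder disk image
# that you would build with the workload inside it.
import subprocess

subprocess.run(
    [
        "qemu-system-x86_64",
        "-enable-kvm",              # use hardware virtualization
        "-m", "1024",               # 1 GiB of RAM for the guest
        "-smp", "2",                # 2 virtual CPUs
        "-drive", "file=guest.qcow2,format=qcow2",
        "-nographic",               # serial console only
    ],
    check=True,
)
```

In practice you would probably reach for libvirt or a hypervisor-backed container runtime rather than raw QEMU, but the security boundary is the same.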

if Gossa were to be a virus, would I have been infected?

I would assume yes. This would require the virus to know an unpatched exploit for Linux or Docker, but these frequently appear. There are likely many for sale right now. If you aren't a high value target and your OS is fully patched then someone probably won't burn an exploit on you, but it is entirely possible.
