Edge WebView2
I'm like 90% sure this requires Edge to be installed, even though the EU mandated that they make Edge uninstallable. So that might be their game here.
The person telling you to "learn what AD is" is kinda a douche, but they aren't wrong.
AD is mainly 3 components in one: configuration management, shared logins, and shared user data.
All of these are doable on Linux. In many ways. Many, many ways. That you have to set up yourself.
For configuration management, do you want ansible, puppet, chef, nix, etc?
For shared logins, do you want openldap, lldap, Red Hat's ldap, etc?
For shared user data, do you want nfs, systemd-homed, or something else?
And for all of those, you have to evaluate, maybe test, and then select a solution, and then set it up yourself in a resilient manner.
NixOS, as a server distro, can host the relevant services needed for this. As a desktop distro, it can also do configuration management. But that's missing the point of AD, in my opinion.
The point of AD, and how it managed to become so popular, is that it is all of those, in an all-in-one solution that is simple to use (joining Windows machines to a domain is trivial), and it also comes with paid support.
Even if you were to build your own alternative on NixOS (which would be a lot of tinkering and twiddling), you would end up with some of the same core features, but you would have to maintain, secure, and support it yourself, and not having to do that is a big part of why people buy Active Directory. There would be no alternative to things like Group Policy; instead you would be writing your own nix code.
So yeah. Unless someone comes along and builds an all-in-one solution on top of NixOS, NixOS isn't really an alternative to Active Directory. You can replicate the core features, but it's not an alternative.
I remember this being brought up with an acquaintance; basically, there's a bug where the newest Fedora kernel isn't compatible with VMware.
So yeah. Either wait for a kernel patch, or wait for VMware to fix their stuff. But they might not; other users have mentioned that they've gone downhill after being bought by Broadcom.
If you want 3d acceleration on virtualized Linux guests, other than VMware, you have two options: GPU passthrough, where a physical GPU is assigned to the VM, or paravirtualized graphics via virtio-gpu (virgl/Venus).
The latter is basically only going to work on a Linux host, virtualizing Linux guests (although it is possible on Windows, with caveats).
The other downside is that no matter which option you pick, it's all going to end up being a bit more tinkering (either a little, assign a VM a GPU, or a lot, install unsigned Windows drivers), compared to VMware's "just works", one-click 3d acceleration setup.
Docker's manipulation of nftables is pretty well defined in their documentation.
Documentation people don't read. People expect that, like most other services, docker binds to ports/addresses behind the firewall. Literally no other container runtime/engine does this, including, notably, podman.
As to the usage of the docker socket, that is widely advised against unless you really know what you're doing.
Too bad people don't read that advice. They just deploy the webtop docker compose without understanding what any of it is. I like (hate?) linuxserver's webtop, because it's an example of two of the worst footguns in docker in one.
To include the rest of my comment that I linked to:
Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?
No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.
On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.
You originally stated:
I think from the dev’s point of view (not that it is right or wrong), this is intended behavior simply because if docker didn’t do this, they would get 1,000 issues opened per day of people saying containers don’t work when they forgot to add a firewall rules for a new container.
And I'm trying to say that even if that was true, it would still be better than a footgun where people expose stuff that's not supposed to be exposed.
But that isn't the case for podman. A quick look through the GitHub issues for podman, and I don't see it inundated with newbies asking "how do I expose services?", probably because they assume a firewall port needs to be opened. Instead, there are bug reports in the opposite direction, like this one, where services are being exposed despite the firewall being up.
(I don't have anything against you, I just really hate the way docker does things.)
Probably not an issue, but you should check. If the port opened is something like 127.0.0.1:portnumber, then it's only bound to localhost, and only that local machine can access it. If no address is specified, then anyone with access to the server can access that service.
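As a rough illustration of that rule (this is not docker's actual parsing code, and the helper name is made up), a port-mapping entry with no host address amounts to a wildcard bind, reachable from outside:

```python
def is_publicly_bound(port_spec: str) -> bool:
    """Rough check of a docker-style port mapping (hypothetical helper):
    '8080:80'           -> wildcard bind, reachable from other machines
    '127.0.0.1:8080:80' -> loopback only, reachable just from the host itself
    (Real docker syntax has more forms, e.g. IPv6 and port ranges.)
    """
    parts = port_spec.split(":")
    if len(parts) == 2:
        # No host address given: docker binds to 0.0.0.0 by default
        return True
    host = parts[0]
    return host not in ("127.0.0.1", "::1", "localhost")
```

So `3000:3000` in a compose file is effectively `0.0.0.0:3000:3000`, which is exactly the footgun described above.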
An easy way to see running containers is docker ps, where you can look at forwarded ports.
Alternatively, you can use the nmap tool to scan your own server for exposed ports. nmap -A serverip does the slowest, but most in-depth, scan.
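If nmap isn't installed, the same basic TCP connect check can be sketched in a few lines of Python. This is a toy stand-in, nowhere near what nmap -A actually does (no service/version detection, no UDP, no OS fingerprinting):

```python
import socket


def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`.

    connect_ex returns 0 on a successful connect, an errno otherwise,
    so it never raises for ordinary refused/timed-out connections.
    """
    open_ports = []
    for port in ports:
        s = socket.socket()
        s.settimeout(timeout)
        if s.connect_ex((host, port)) == 0:
            open_ports.append(port)
        s.close()
    return open_ports
```

For example, `scan_ports("yourserverip", [22, 80, 443, 3000, 3001])` would tell you which of those ports are reachable from wherever you run it.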
Yes it is a security risk, but if you don’t have all ports forwarded, someone would still have to breach your internal network IIRC, so you would have many many more problems than docker.
I think from the dev’s point of view (not that it is right or wrong), this is intended behavior simply because if docker didn’t do this, they would get 1,000 issues opened per day of people saying containers don’t work when they forgot to add a firewall rules for a new container.
My problem with this is that when running a public-facing server, this ends up with people exposing containers that really, really shouldn't be exposed.
Excerpt from another comment of mine:
It’s only docker where you have to deal with something like this:
```yaml
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped
```
Originally from here, edited for brevity.
Resulting in exposed services. Feel free to look at Shodan or ZoomEye, search engines for internet-connected devices, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket.
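For comparison, here is a sketch of the same compose file with both footguns removed: the docker socket is not mounted, and the host side of each port mapping is pinned to loopback so the service is only reachable from the machine itself (e.g. through a reverse proxy or an SSH tunnel). Port numbers are kept from the original; treat this as an illustration, not a tested config:

```yaml
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    # no /var/run/docker.sock volume: the container gets no control
    # over the host's docker daemon
    ports:
      # host address first: bound to loopback, not 0.0.0.0
      - "127.0.0.1:3000:3000"
      - "127.0.0.1:3001:3001"
    restart: unless-stopped
```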
If you need public access:
https://github.com/anderspitman/awesome-tunneling
From this list, I use rathole. One rathole container runs on my VPS, and another runs on my home server, and it exposes my reverse proxy (caddy) to the public.
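Rathole itself speaks its own protocol (the home server dials out to the VPS, so no inbound ports are needed at home), but the byte-shuttling core of any such tunnel is just a TCP relay. A minimal toy version, with invented function names, purely to show the idea:

```python
import socket
import threading


def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until EOF, then close dst's write side."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass


def relay_listener(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Open the 'public' listening socket (port 0 = let the OS pick one)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv


def serve_one(srv: socket.socket, target: tuple[str, int]) -> None:
    """Accept one client and shuttle bytes between it and the target service."""
    client, _ = srv.accept()
    upstream = socket.create_connection(target)
    t1 = threading.Thread(target=pipe, args=(client, upstream), daemon=True)
    t2 = threading.Thread(target=pipe, args=(upstream, client), daemon=True)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    client.close()
    upstream.close()
```

A real tunnel adds authentication, reconnection, and multiplexing on top of this, which is exactly what you pay for (in setup effort) with tools like rathole or frp.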
No. Windows will only replace the bootloader at the removable media path, \EFI\boot\bootx64.efi. If grub is stored somewhere else, Windows won't replace it.
https://wiki.debian.org/UEFI#Force_grub-efi_installation_to_the_removable_media_path
However, not every motherboard is compliant with the UEFI spec and supports booting EFI binaries other than \EFI\boot\bootx64.efi. My motherboard was one such board, where I had to force grub to install to the removable media path (which isn't the default on debian, although it is the default on a lot of other distros).
@Quills@sh.itjust.works , you should test if your motherboard properly implements the UEFI specification, by going into the UEFI menu, and selecting a different file to boot from, or changing defaults. If you look and there is no such option, or the option is ignored, then you know your motherboard isn't properly implementing the UEFI spec.
You can test if your motherboard supports booting from a different file by downloading an arbitrary efi file (like memtest) and placing it in the EFI system partition, somewhere other than the removable media path. If you can get the UEFI to boot it from there, then the UEFI spec is properly implemented, and Windows updates won't overwrite grub.
Of course, a simpler way to test is to simply install debian and see if it boots. If it does, then windows won't overwrite grub. If not then it will. You can then install a different distro from there.
https://nixlang.wiki/en/tricks/distrobox
Not the nix way, but when you really need something to work, you can create containers of other distros.
From what I've heard, true multiseat is very difficult to configure. You probably also want to investigate using GPU-accelerated containers, because it's legitimately easier to share the same GPU across multiple containers as opposed to multiple seats.
Wezterm. I started out on konsole, and was happy with it, but then I started using zellij as my terminal multiplexer. Although zellij allows you to configure what command copies and pastes text, copy/paste on Wayland and Windows only works by default with wezterm. It gives me consistency across multiple DEs/OSes with minimal configuration, which is good because I was setting up development environments for many people, with many configurations.
Also switched here. OBS on Wayland has some new features that I'm excited to take advantage of, but I still cannot find a way to share some windows without sharing an entire monitor.
OBS has another feature: "virtual monitor". It does what it sounds like, and creates a virtual monitor, which you can then treat like a real monitor, like extending to, or unifying outputs, etc.
It also has a feature to share the entire workspace, but it doesn't work like I expect; instead it uses all monitors (not workspaces) as a single input source. I suspect that's a bug, tbh, because this behavior is useless considering you can just add monitors as a source side by side.