thecoffeehobbit

joined 5 months ago
[–] thecoffeehobbit@sopuli.xyz 21 points 1 week ago (5 children)

This meme is about old enough to drink at this point.

[–] thecoffeehobbit@sopuli.xyz 2 points 2 weeks ago (1 children)

Thanks!

For the record, which DE do you run on it? KDE?

[–] thecoffeehobbit@sopuli.xyz 1 points 2 weeks ago

Thanks!

Regarding gaming, I had a lot of trouble trying to play some Steam games on GNOME. After switching to KDE, stuff just works.

Also you can't (feasibly) run Hyprland on Debian stable, nor can you (easily) run GNOME on MX Linux, etc. So there are a few points where distro choice does have an effect. But I think I got the point across enough with the question.

 

Hi, long-time Mac user here, recently switching my personal devices to Linux. My work unfortunately does not support this, mandating that work be done on the provisioned device, which has to be a Mac or Windows machine. So I'm finding it a bit hard to get up to speed when coding on Linux. I've tried GNOME, KDE, and Hyprland and find no obvious heaven in any of them. I have two external 27" monitors fwiw. My personal PC has Arch and KDE for gaming reasons, but I'm also looking to code more with open source tools to avoid personal vendor lock-in.

In other companies I've visited I've seen varied policies: one runs stock Ubuntu, one mandates Fedora with user choice of DE/WM, many use Macs but allow Linux if desired. So I'd like to run a small survey. Keeping in mind all the aspects of using a device for varied software work (coding, email, chat, managing servers, online meetings, screen sharing, making presentations): if you use Linux for work,

What DE or WM (and distro if relevant) do you use for your actual, professional work?

Was this a choice by you or pre-selected by the employer? Do they allow you to work on your own device if desired? (Excluding freelancers obv.)

Do you need to balance stability vs. customisability? Or is that a no-brainer for you? (i.e. "Have you ever had to cancel a meeting because an Arch update broke your screen sharing?")

How much time do you find reasonable to put into maintaining/developing your setup?

Did distro choice (or lack thereof) impact your choices for DE/WM?

Do you feel like your code editor, language stack, or job profile has an impact on the choices? For example, is your profile very specific ("I go to dailies and turn tickets into code / I work alone for weeks at a time researching stuff"), allowing you to optimise the setup further?

Anything else you'd want to highlight about this?

Edit: Takeaways so far

  • Immutable setups ftw
  • Arch is stable enough though
  • Type of work affects distro choice more than DE choice (I do backend webdev; my deliverables are very platform-independent, so I didn't think about this much)
  • Plenty of XFCE users out there!
  • Zero mentions for Hyprland!
[–] thecoffeehobbit@sopuli.xyz 3 points 1 month ago

Can confirm, Arch runs fine on my 2014 MacBook Pro too. It definitely requires some adjusting to get there, but if you wanna use Arch that's a given anyway. The GNOME desktop has decent multi-touch support for the trackpad out of the box IIRC.

[–] thecoffeehobbit@sopuli.xyz 2 points 1 month ago

Obviously supporting the important work here, just couldn't resist.

[–] thecoffeehobbit@sopuli.xyz 13 points 1 month ago (5 children)

Gastrointestinal rights hotline?

(Yes, I can infer what it's about, but as a non-American I have zero idea what it concretely stands for...)

[–] thecoffeehobbit@sopuli.xyz 4 points 1 month ago

I'd be very interested in these polls if you have some to link!

[–] thecoffeehobbit@sopuli.xyz 2 points 2 months ago

I have an external storage unit a couple of kilometers away and two 8TB hard drives with LUKS+btrfs. One of them is always in the box, and after taking backups, when I feel like it, I detach the drive and bike to the box to switch them. I'm currently researching btrbk for updating the backup drive on my PC automatically; it's pretty manual atm. For most scenarios the automatic btrfs snapshots on my main disks are going to be enough anyway.
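
In case it helps anyone with a similar setup: the btrbk config for "snapshot locally, send/receive to the backup drive whenever it happens to be attached" would look roughly like this (paths, subvolume names and retention are made up, not my actual setup):

```
# /etc/btrbk/btrbk.conf -- sketch, adjust retention/paths to taste
timestamp_format        long
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve_min     no
target_preserve         8w 6m

# the live btrfs filesystem
volume /mnt/data
  snapshot_dir  btrbk_snapshots
  # the currently attached 8TB backup drive (LUKS-opened and mounted)
  target send-receive /mnt/backup8tb/btrbk
  subvolume photos
  subvolume documents
```

`btrbk dryrun` shows what it would do; `btrbk run` does it once the backup drive is unlocked and mounted.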

[–] thecoffeehobbit@sopuli.xyz 1 points 3 months ago* (last edited 3 months ago)

As for smart home control, HA (Home Assistant) is the standard. No-brainer. For a mobile OS, you can buy Fairphones with /e/OS pre-installed, a fork of LineageOS. There are some tradeoffs, but it's generally usable, though not as secure as stock Android since it gets security patches on a delayed schedule.

[–] thecoffeehobbit@sopuli.xyz 1 points 3 months ago* (last edited 3 months ago)

Oh yeah, and I did enable the Proxmox VM firewall for the TrueNAS VM; the NFS traffic goes via an internal interface. I wasn't entirely convinced by NFS's security posture when reading about it... at least restrict it to the physical machine 0_0 So now I need to intentionally pass a new NIC to any VM that will access the data, which is neat.
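
If anyone wants to go further, you can also pin the export itself to the internal subnet (in TrueNAS that's the Networks/Hosts fields on the NFS share, if I remember right, rather than hand-editing exports). In classic /etc/exports terms it boils down to something like this, with a placeholder subnet and path:

```
# roughly what the share restriction amounts to (example values)
/mnt/tank/appdata  10.10.10.0/24(rw,sync,no_subtree_check,root_squash)
```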

[–] thecoffeehobbit@sopuli.xyz 1 points 3 months ago (1 children)

A wrap-up of what I ended up doing:

  • Replaced the bare-metal Ubuntu with Proxmox. Cool cool. It can do the same stuff, but more easily, and it comes with a lot of hints for best practices. Guess I'm a datacenter admin now
  • Wiped the 2x960GB SSD pool and re-created it with ZFS native encryption
  • Made a TrueNAS Scale VM, passed through the SSD pool disks, shared the datasets over NFS, and set up snapshot policies
  • Mounted the NFS share on the Ubuntu VM running my data-related services and moved the Docker bind mounts to that folder (rough fstab sketch below)
  • Bought a 1Gbps Intel network card to use instead of the onboard Realtek and maxed out the host memory to 16GB for good measure
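
For reference, an fstab line for that mount looks roughly like this (placeholder address and paths; a systemd mount unit works too). The nofail plus automount options keep the VM from hanging at boot when TrueNAS isn't unlocked yet:

```
# /etc/fstab on the services VM (placeholder IP/paths)
10.10.10.2:/mnt/tank/appdata  /srv/appdata  nfs4  _netdev,nofail,x-systemd.automount,noatime  0  0
```

The Docker bind mounts then just point somewhere under /srv/appdata.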

I have achieved:

  • 15min RPO for my data (as it sits on the NFS mount, which is auto-snapshotted in TrueNAS)
  • Encryption at rest (ZFS native)

I have not achieved (yet..):

  • Key fetch on boot. Right now, when the host machine boots, I have to log in to TrueNAS and type in the ZFS passphrase. Key fetching on boot is a paid feature in TrueNAS, so I'll have to make some custom script for this anyway; TrueNAS still makes managing the storage a bit easier, which is why I want to keep using it. I've disabled auto-start on boot for the services VM that depends on the NFS share, so I'll just go kick it up manually after unlocking the pool in TrueNAS.
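
The script will probably end up being something small run from the Proxmox host, along these lines. Host address, pool name and VM id are placeholders, and since this talks to zfs directly over SSH the TrueNAS UI may not reflect the unlocked state, so treat it strictly as a sketch:

```
#!/bin/sh
# Sketch: unlock the data pool on the TrueNAS VM, then start the services VM.
# Placeholder names throughout; run on the Proxmox host after boot.
set -eu

TRUENAS=root@10.10.10.2   # TrueNAS VM, reachable over the internal bridge
ENCROOT=tank              # encryption root of the data pool
SERVICES_VMID=101         # Ubuntu VM with the Docker services

# Feed the passphrase from a root-only file on the host (a key server
# could replace this later) and mount the datasets behind the NFS share.
ssh "$TRUENAS" "zfs load-key $ENCROOT && zfs mount -a" < /root/zfs-passphrase

# Storage is up, so the VM that depends on the NFS mount can start.
qm start "$SERVICES_VMID"
```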

Quite happy with the setup so far. Looking to automate actual backups next, but this is starting to take shape. Building the confidence to use this for my actual phone backups, among other things.

 

Hi Lemmy! First post, apologies if it's not coherent :)

I have a physical home server for hosting some essential personal cloud services like smart home, phone backups, file sharing, kanban, and so on. I'm looking to reinstall the platform as there are some shortcomings in the first build. I loosely followed the FUTO wiki, so you may recognise some of the patterns from there.

For running this thing I have a mini-PC with three disks: a 240GB SSD and 2x 960GB SSDs. This is at capacity, though the chassis and motherboard would in theory fit a fourth disk with some creativity, which I'm interested in making happen at some point. I also have a Raspberry Pi in the house and a separate OPNsense box for firewall/DNS blocking/VPN etc. that works fine as-is.

In the current setup, I have Ubuntu Server on the 240GB disk with ext4. It hosts the services in a few VMs with QEMU and takes daily snapshots of the qcow2 images onto the 960GB SSDs, which are set up as a mirrored zfs pool with frequent automatic snapshots. I copy the zpool contents periodically to an external disk for offsite backup. There's also a simple Samba share set up on the pool, which I thought to use for Syncthing and file sharing somehow. This is basically where I'm now stopping to think about whether what I'm doing makes sense.

Problems I have with this:

  • When the 240GB disk eventually breaks (and I got it second-hand, so its condition is anyone's guess), I might lose up to one day of data within the services such as Vikunja, since their data lives on the VMs, which are qcow2 files on the server's boot drive and only backed up daily during the night because that requires VM shutdown. This is not okay; I want an RPO of max 1 hour for the data.
  • The data is currently not encrypted at rest. The threat model here is data privacy in case of theft.

Some additional design pointers:

  • Should be able to reboot remotely in good weather.
  • I want to avoid any unreliable or “stupid” configurations and not have insane wear on my SSDs.
  • But I do want the shiny snapshotting and data integrity features of modern filesystems, especially for my phone’s photo feed.
  • I wish to avoid btrfs as I have already committed to zfs elsewhere in the ecosystem.
  • I may want to extend the storage capacity later with mirrored HDD bulk storage.
  • I don’t want to use QEMU snapshots for reaching the RPO, as they seem to require guest shutdown/hibernation to be reliable and just generally aren’t made for that. I’m really trying to make use of zfs snapshots like I already do on my desktop.
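
(For the snapshot cadence itself I'm assuming something like sanoid or zfs-auto-snapshot on whichever dataset ends up holding the data; that part covers the 1-hour RPO regardless of the layout. Roughly, with a placeholder dataset name:)

```
# /etc/sanoid/sanoid.conf -- sketch, placeholder dataset name
[tank/appdata]
        use_template = production
        recursive = yes

[template_production]
        frequently = 4     # keep 4 of the 15-minute snapshots
        hourly = 48
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```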

My current thoughts revolve around the following - comments most welcome.

  • Ditch the 240GB SSD from the system to make space for a pair of HDDs later. So, the 960GB pair would have both boot and data, somehow. (I'm open to having a separate NAS later if this is just not a good idea)
  • ZFS mirror w/ zfs-auto-snapshot + ZVOLs + ext4 guests? Does this hurt the SSDs?
  • Or: ext4 mdadm raid1 + qcow2 guests running zfs w/ zfs-auto-snapshot? Does this make any sense at all?
  • ZFS mirror + qcow2 + ext4 guests? This destroys the SSDs, no?
  • In any case, native encryption or LUKS?
  • Possibly no FDE, but dataset level encryption instead if that makes it easier?
  • I plan to set up unattended reboots with the Pi as a key server running something like Mandos. A passphrase would be required to boot the server only if the Pi goes down as well. So, any solution must support using a key server at boot. (One concrete option sketched after this list.)
  • What FS should the external backup drives have? I'm currently leaning towards ZFS single-disk pools. Ideally they should be readable on a Mac or Windows machine.
  • Does Proxmox make things any easier compared to Ubuntu? How?
  • I do need at least one VM for Home Assistant in any case. The rest could pretty much all run in containers, though. Should I look into this more or keep the VM layer?
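
(For what "native encryption plus a key server" could look like concretely: newer OpenZFS releases can fetch the key from an http(s) URL at load-key time, which would let the Pi serve the key over the LAN without Mandos at all; Mandos seems more geared toward LUKS/initramfs unlocking anyway. Rough sketch with placeholder pool/dataset names and URL, and the http(s) keylocation support would need checking against the actual OpenZFS version:)

```
# Dataset-level native encryption, key fetched from the Pi at load-key time
# (placeholder names/URL; sketch only)
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o keylocation=https://keyserver.lan/homeserver.key \
           tank/appdata

# At boot (or from a script): fetch the key and mount
zfs load-key -a && zfs mount -a

# If the Pi is down, fall back to typing the passphrase in
zfs load-key -L prompt tank/appdata
```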

I'm not afraid of some initially complex setting up. I'm a full-stack web developer, though not a professional sysadmin, so advice is welcome. I don’t want to buy tons of new shit, but I’m not severely budget-limited either. I’m the only admin for this system but not the only user (family setting).

What’s the 2025 way of doing this? I’m most of all looking for inspiration as to the “why”; I can figure out ways to get it done if I see the benefits.

tldr: how to best have reliable super-frequent snapshots of a home server’s data with encryption, preferably making use of zfs.
