I tried to virtualize / containerize everything, but there are still services I had to run on bare metal. Thoughts?

So in the last iteration of my home/community server (NAS + some common self-hosted services), I tried to virtualize/dockerize everything, and I pretty much succeeded… except for everything to do with the NAS.

I’ve got a server running Debian with a couple of HDDs in a ZFS pool. That server also runs a KVM/QEMU hypervisor and Docker, so the vast majority of services live in there, but everything that needs to touch the ZFS pool I had to set up on the host itself, since neither Docker nor the VMs can access (or more precisely, share) the drives directly. That means rotating (GFS) backups, SMB shares, SMART reporting, overall monitoring - it’s actually a nontrivial amount of stuff that isn’t stored anywhere except in the state of the system itself.
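For concreteness, the snapshot-rotation part of that host-only state looks something like this sanoid config (the dataset name and retention numbers here are made up; the keys are sanoid's):

```ini
# /etc/sanoid/sanoid.conf -- lives only on the bare-metal host
[tank/data]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```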

It all works fine, but I don’t like how scattered this is. My ideal is that I have a small, fixed set of places to worry about, e.g. 1) my VMs, 2) Docker Compose files, 3) Docker volumes, and those three “are” my server. Right now, I’ve got those three plus a bunch of hand-written systemd services, some sanoid config, smartd config, smbd config (users/passwords, permissions, etc.)…
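One low-effort stopgap (it doesn't fix the scatter, just makes it visible and versionable) is to copy that bare-metal state into one directory that can live in git. A rough sketch, assuming typical Debian paths and a made-up `my-*.service` naming pattern for the hand-written units:

```shell
#!/bin/sh
# Sketch only: gather the scattered host-side config into one place.
# Paths are common Debian defaults; adjust to your setup.
set -eu
DEST="${1:-./host-state}"
mkdir -p "$DEST"
for f in /etc/sanoid/sanoid.conf /etc/samba/smb.conf /etc/smartd.conf; do
  if [ -f "$f" ]; then
    cp --parents "$f" "$DEST"   # keeps the /etc/... layout under $DEST
  else
    echo "skip: $f not present"
  fi
done
# the hand-written units too (glob may match nothing, hence || true):
cp /etc/systemd/system/my-*.service "$DEST" 2>/dev/null || true
echo "captured into $DEST"
```

(etckeeper does essentially this for all of /etc, if you'd rather not maintain a script.)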

I don’t think it makes sense to have a VM that actually does the NASing, since then I’d have to… network-mount the share from the guest back onto the host so that Docker can access it? I imagine there’d be some performance loss, too.
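Concretely, that circular arrangement would boil down to something like this in the host's /etc/fstab - an NFS export from the NAS guest mounted back onto the host so containers can bind-mount it (the guest address and paths here are invented):

```
# /etc/fstab on the host; 192.168.122.10 is the hypothetical NAS VM
192.168.122.10:/export/tank  /mnt/tank  nfs  defaults,_netdev  0  0
```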

I dunno, I didn’t come up with any solution that wouldn’t end up all twisted and circular in the end, but I don’t think what I’ve got is the best possible solution either. Maybe I’m thinking about this wrong, and I should just set up Ansible so that my main server config is reproducible? Or have two physical machines to begin with?
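The Ansible route would look roughly like this - a playbook that reinstalls the bare-metal bits and copies their configs out of a repo (the host group name and file layout are made up):

```yaml
# site.yml -- sketch of making the bare-metal layer reproducible
- hosts: nas
  become: true
  tasks:
    - name: Install the host-only services
      apt:
        name: [sanoid, samba, smartmontools]
        state: present
    - name: Deploy their configs from the repo
      copy:
        src: "files/{{ item }}"
        dest: "/etc/{{ item }}"
      loop:
        - sanoid/sanoid.conf
        - samba/smb.conf
        - smartd.conf
```

That would turn the "fourth place" into a git repo, at least, even if the services themselves still run on bare metal.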

I’m interested to hear what you think :)