This is the only time I hate having decided to self host. - eviltoast

I figured most of you could relate to this.

I was updating my Proxmox servers from 7.4 to 8. First one went without problems. That second one though… Yea, not so much… I THINK it’s GRUB but not sure yet.

Now my Nextcloud, NAS, main reverse proxy and half my DNS went down. And no time to fix it before work. Lovely 🤕 Well I now know what I’ll be doing when I get home.

Out of morbid curiosity, what are some of y'all's self-hosting horror stories?

  • chiisana@lemmy.chiisana.net
    1 year ago

    I’ve been carrying an OMV VM since Proxmox 5. During one of the major version updates, usrmerge made a mess and forced me to reinstall the boot disk and re-hook everything up; not ideal, but it works. Updated again recently, and my disks started to fall into read-only mode. Tried the usual: rebooting into single-user mode, fsck’ing the volume, remounting, etc., and “hey look, it came back online!” only for it to go back into read-only mode again. Since it was a virtual disk on a RAID6 array, and nothing else was breaking, it really boggled my mind. It kept doing that despite still having a couple TB of free space available… or at least so I thought.

    Turns out:

    I had allocated a 19TB virtual disk out of my 24TB of available space. The qcow file is lazily allocated, so although `ls` showed a 19TB file, it only consumed as much space on disk as the VM had actually written. Usage grew to 16TB, the qcow file tried to write more data, but 16TB is the ext4 maximum file size on my system. Oops.
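    Not my setup, just a generic sketch of sparse (“thin”) allocation, which is the same trap the qcow file fell into: the apparent size and the space actually consumed are two different numbers, and only `du` tells you the second one.

```shell
# Create a sparse file: huge apparent size, (almost) no blocks allocated.
truncate -s 1G sparse.img

ls -lh sparse.img   # apparent size: the full 1 GiB
du -h sparse.img    # actual blocks allocated: ~0

rm sparse.img
```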

    I ended up ordering 3 more drives, expanding to 8x8TB on RAID6 with ~48TB of usable space, copied the data out into separate volumes with none of them exceeding 15TB in size, then finally deleted the old “19TB” volume. Now I have over 25TB of space to grow into, and a newfound appreciation for the 16TB limit :)
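    For reference, that 16TB wall isn’t arbitrary: ext4 addresses a file’s blocks with 32-bit logical block numbers, so with the default 4 KiB block size a single file tops out at 2^32 × 4 KiB = 16 TiB. A quick arithmetic check (my own, not from the thread):

```python
# ext4 with the default 4 KiB block size uses 32-bit logical block
# numbers per file, capping any single file at 2**32 blocks.
block_size = 4096          # bytes per block
max_blocks = 2 ** 32       # 32-bit logical block addressing
max_file_bytes = block_size * max_blocks

print(max_file_bytes // 2 ** 40, "TiB")  # 16 TiB
```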