Self-hosted home server project - call for competent advisory opinions - eviltoast

Dear readers,

If (TLDR) { ISO: tech-stack advice + FOSS tech and educational material for self-hosting a basic home lab. FT: an opportunity to build your profile's reputation for tech expertise. } else { I would like to collect opinions about hardware and software stack options.

I would like to build a home server for basic purposes: file storage and sync (family, work, movies, music, etc.).

Ideally, I would like to use the same machine for self-hosting: (a) a small Lemmy community instance, and (b) a small chat server (e.g., XMPP).

I have accrued decent practice with HTML, CSS, JavaScript, and Linux systems administration. For example, my home lab boasts one laptop file-synced with one smartphone, and I have written a few very basic dynamic web apps.

That being said, the vastness, complexity, and technicality of the various options make them daunting to make sense of, even with some basic, clear goals.

Although I expect to do some more research of my own, I suspect that someone more competent than I am may find it worthwhile to offer a few comments of advisory opinion to narrow and expedite my research, as an opportunity to build their own profile's reputation.

Requirements:

FOSS tech, to the extent that it produces a top-security, top-quality solution.

Beginner friendly budget.

Early estimates of the specs under my consideration: mini PC, ODROID, normal-form-factor PC, or laptop; 16–64 GB RAM. Storage: ideally a minimum of 5 TB, ideally following the 3-2-1 backup rule. OS? Filesystem?

} Sorry about the length of the post, and sorry for the solicitation of advice.

Thanks for the support.

Sincerely,

LinuxTurtle34

  • killabeezio@lemm.ee · 17 hours ago
    You will get different answers. Some people like Proxmox with ZFS; you can run VMs and LXC containers pretty easily. Some people like running everything in a container, using Podman or Docker. Some people like to raw-dog it and just install everything on bare metal (I don't recommend this approach, though).
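    As a sketch of the container route, assuming Podman is installed: here is a single rootless service (Syncthing, purely as an illustration; the image name, ports, and host paths are the official image's defaults, but adjust to taste):

    ```shell
    # Rootless Podman example: run Syncthing with its config and data
    # kept on the host so they survive container upgrades.
    mkdir -p ~/syncthing/config ~/syncthing/data

    podman run -d \
      --name syncthing \
      -p 8384:8384 -p 22000:22000 \
      -v ~/syncthing/config:/var/syncthing/config:Z \
      -v ~/syncthing/data:/var/syncthing/data:Z \
      docker.io/syncthing/syncthing:latest

    # Let systemd restart it across reboots as a rootless user service.
    podman generate systemd --new --name syncthing > \
      ~/.config/systemd/user/container-syncthing.service
    systemctl --user daemon-reload
    systemctl --user enable --now container-syncthing.service
    ```

    Newer Podman versions prefer Quadlet unit files over `podman generate systemd`, but the generated unit still works and is easier to start with.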

    The setup I currently have is three servers: one compute server, where I run all my services; one storage server; and one backup-storage server.

    The compute server is set up with an NFS share that connects to the storage server. These all have 10GbE NICs on a 10GbE switch.
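    A minimal sketch of that NFS wiring, with made-up hostnames, addresses, and paths: export a directory on the storage box, then mount it persistently on the compute box.

    ```shell
    # On the storage server: export the share to the compute server only.
    # (The /etc/exports line; IP and path are illustrative.)
    echo '/mnt/tank/shared 192.168.1.10(rw,sync,no_subtree_check)' | \
      sudo tee -a /etc/exports
    sudo exportfs -ra

    # On the compute server: mount it at boot via /etc/fstab.
    echo 'storage:/mnt/tank/shared /srv/shared nfs defaults,_netdev 0 0' | \
      sudo tee -a /etc/fstab
    sudo mkdir -p /srv/shared
    sudo mount /srv/shared
    ```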

    If I could go back and redo this setup, I would make a few changes. I do have a few NVMe drives in my storage server for the NFS share. The compute server keeps the user home directories there, as well as the persistent files for the containers that have volumes. This makes it easy for me to back that data up to the other server as well.

    With that said, I kinda wish I had gone with less storage and built out a server using mostly NVMe drives. My mobo doesn't do bifurcation on its x16 slots, so I can only get one NVMe per slot. It's a waste. NVMe drives can run somewhat hot, but they are smaller and easier to cool than platters. Plus, rebuilds are faster if something were to happen, so you could probably get away with using one parity drive.

    I would still need a few big drives for my media, but that data is not as critical to me if I were to lose something there.

    What I would look for in a storage system are the following:

    • Mobo with RDIMM memory
    • PCIe slots with bifurcation support, so you can add adapter cards for NVMe drives, or lots of NVMe slots on the mobo
    • If doing 10GbE, use SFP+ NICs and an SFP+ switch (runs cooler); then you would just get SFP+ cables instead of Cat 6/6a
    • Management port (IPMI)
    • As much memory as you can afford

    With those requirements in mind, something like an ASRock server motherboard with an AMD EPYC would normally fit the bill. I have seen bundles go for about 600-700 on AliExpress.

    As far as the OS: I treat the storage server as an appliance and run TrueNAS on it. This is also the reason I have a separate compute server; it makes it easier to manage services the way I want without trying to hack the TrueNAS box. It also makes replication to my backup easy, since that box is also TrueNAS. I take snapshots every hour, and those get backed up. I also run a cloud backup of critical data every hour.
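    The hourly snapshot-and-replicate pattern can be sketched with raw ZFS commands (TrueNAS drives this from its UI; the pool, dataset, snapshot names, and host here are all made up):

    ```shell
    # Take an hourly snapshot of the data dataset.
    zfs snapshot tank/data@hourly-$(date +%Y%m%d-%H00)

    # Incrementally send everything newer than the last snapshot the
    # backup box already has, over SSH, into its receiving dataset.
    zfs send -I tank/data@hourly-20240101-0000 \
        tank/data@hourly-20240101-0100 | \
      ssh backup zfs receive -F backuptank/data
    ```

    Because `zfs send -I` is incremental, only the blocks changed since the previous snapshot cross the wire, which is what makes hourly replication cheap.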

    Last but not least, I have a VPS so I can access my services from the internet. It uses a WireGuard tunnel and forwards traffic from the VPS to the compute server.
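    That VPS-to-home pattern looks roughly like this; keys, addresses, and the forwarded port are placeholders, and the home box would run the matching peer config dialing out to the VPS:

    ```shell
    # /etc/wireguard/wg0.conf on the VPS (keys and IPs are placeholders).
    sudo tee /etc/wireguard/wg0.conf > /dev/null <<'EOF'
    [Interface]
    Address = 10.0.0.1/24
    PrivateKey = <vps-private-key>
    ListenPort = 51820

    [Peer]
    # The home compute server, which dials out to the VPS.
    PublicKey = <home-public-key>
    AllowedIPs = 10.0.0.2/32
    EOF
    sudo systemctl enable --now wg-quick@wg0

    # Forward the public HTTPS port through the tunnel to the compute box.
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A PREROUTING -p tcp --dport 443 \
      -j DNAT --to-destination 10.0.0.2:443
    sudo iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
    ```

    The nice property of this setup is that the home network needs no open inbound ports at all; only the VPS is exposed.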

    For the compute server, I am managing mostly everything with Saltbox, which uses Ansible and Docker containers for most services.

    No matter what you choose, I highly recommend ZFS for your data. Good luck!
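    For the ZFS recommendation, a minimal starting pool on two disks would look like this (device names are illustrative; a mirror gives you redundancy out of the box, and by-id paths survive device reordering):

    ```shell
    # Create a two-disk mirror pool named "tank" (this wipes those disks!).
    sudo zpool create tank mirror \
      /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

    # A dataset with lz4 compression for general file storage.
    sudo zfs create -o compression=lz4 tank/data

    # Verify pool health.
    sudo zpool status tank
    ```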