Transfer speed issues on new Proxmox 8.3 setup - eviltoast

I recently posted about upgrading my media server and migrating off Windows to Proxmox. I’ve been following an excellent guide from TechHut on YouTube but have run into issues migrating my media into the new Proxmox setup.

Both my old Windows machine and the new Proxmox host have 2.5GbE NICs, connected through a 2.5Gb switch on the same subnet. Following the guide, I created a ZFS pool from 7x14TB drives and an Ubuntu LXC running Cockpit to serve the Samba shares.
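For reference, the pool was created with a command along these lines — the device paths are placeholders and raidz2 is only a guess at the layout from the guide, but the capacity math below holds for any two-parity layout:

```shell
# Hypothetical recreation of the pool (paths are placeholders, raidz2 assumed):
#   zpool create -o ashift=12 tank raidz2 /dev/disk/by-id/ata-WDC_... (x7)
# Usable space for 7x14TB in raidz2, with two drives' worth of parity:
echo "usable: $(( (7 - 2) * 14 )) TB"
```

So even after parity overhead there is comfortably room for the 40TB of media.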

When transferring files from Windows, I see about 100MB/s on the initial transfer, but every transfer after that caps out under 10MB/s until I reboot the Cockpit container, and then the cycle repeats.

I’m not very knowledgeable about Proxmox or Linux, but I have run iperf3 tests from Windows > Proxmox and from Windows > Cockpit container, and both show roughly 2.5Gb/s, yet file transfers are still limited.
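For anyone who wants to repeat the test, the iperf3 runs looked roughly like this (the IP is a placeholder for your Proxmox host or container):

```shell
# On the Proxmox host or inside the Cockpit LXC — start a listener:
#   iperf3 -s
# On the Windows machine — 4 parallel streams for 30 seconds:
#   iperf3 -c 192.0.2.10 -P 4 -t 30
#
# For comparison, the theoretical ceiling of a 2.5GbE link in MB/s
# (before protocol overhead):
echo "2.5GbE ceiling: $(( 2500 / 8 )) MB/s"
```

So ~312MB/s is the most the wire could ever do, and ~100MB/s is well under that — which points away from the network.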

Googling the issue brings up some troubleshooting steps, but I don’t understand a lot of it. One suggested fix was to disable IPv6 in Proxmox (I don’t have IPv6 set up on my network); the change applied successfully but didn’t fix anything. I no longer see the interface when running ‘ip a’ in Proxmox, though I do still see it in the SMB container.

Does anybody have experience with this who can offer a solution, or a path toward finding one? I have roughly 40TB of media to transfer, and 8MB/s isn’t going to cut it.

  • tenchiken@lemmy.dbzer0.com · 19 hours ago

    What drives? If they are shingled, your performance will be terrible and the array runs a high risk of failing.

    CMR is the way to go.

    SMR behavior is about like what you describe… Fast until the drive cache is filled then plummets to nothing.

    • CmdrShepard42@lemm.ee (OP) · 13 hours ago

      Five are WD HC530 datacenter drives and two are 14TB WD EZAZ drives from Easystores. I don’t think any of the larger WD drives are SMR, but I don’t have a definitive answer.

      • tenchiken@lemmy.dbzer0.com · 8 hours ago

        Hmm, at a glance those all look to be CMR.

        To rule this out, a tool like iostat (part of the sysstat package) can help. While data is moving and the problem is happening, run something like “iostat -mx 1” and watch for a bit; you may spot an outlier, or see evidence that the drives are overloaded or that data is queueing up.

        Notably, watch the %util column on the right side.

        https://www.golinuxcloud.com/iostat-command-in-linux/ can help here a bit.

        The %util shows how busy the communication to the drive is. If it’s maxed out but the MB written per second is tiny, you may have a single bad disk. If many drives are doing it, you may have a design issue.

        If %util doesn’t stay pegged, and you just see small bursts, then you know the disks are NOT the issue and can then focus on more complex diagnosis with networking etc.
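        To spot an overloaded drive without staring at the screen, the %util column (the last field of iostat’s extended output) can be filtered with awk. A rough sketch — the two sample lines below stand in for live iostat output so the filter itself can be verified; in practice you would pipe “iostat -mx 1” into the same awk one-liner:

```shell
# Flag any device whose %util (last field) exceeds 90.
# Live usage would be:  iostat -mx 1 | awk '$NF+0 > 90 { print $1, "is", $NF "% busy" }'
awk '$NF+0 > 90 { print $1, "is", $NF "% busy" }' <<'EOF'
sdb 12.0 0.0 1.4 0.0 0.0 0.0 4.2
sdc 250.0 0.0 0.3 0.0 0.0 0.0 99.8
EOF
```

        Here only sdc gets flagged — the pattern you’d expect from one bad or saturated disk dragging the whole pool down.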