300TB of data. Dropbox and Google are dead to me. Next options. Cloud? Tape? NAS?

So I run a video production company. We have 300TB of archived projects (and growing daily).

Many years ago, our old solution for archiving was simply to dump old projects off onto an external drive, duplicate that, and have one drive at the office, one offsite elsewhere. This was ok, but not ideal. Relatively expensive per TB, and just a shit ton of physical drives.

A few years ago, we had an unlimited Google Drive and 1000/1000 fibre internet. So we moved to a system where we would drop a project onto an external drive, keep that offsite, and have a duplicate of it uploaded to Google Drive. This worked ok until we reached a hidden file number limit on Google Drive. Then they removed the unlimited sizing of Google Drive accounts completely. So that was a dead end.

So then we moved that system to Dropbox a couple of years ago, as they were offering an unlimited account. This was the perfect situation. Dropbox was feature-rich, fast, integrated beautifully into Finder/Explorer, and was just a great solution all round. It meant it was easy to give clients access to old data directly if they needed it, etc. Anyway, as you all know, that gravy train has come to an end recently, and we now have 12 months' grace with our storage on there before we have to have this sorted onto another system.

Our options seem to be:

  • Go back to our old system of duplicated external drives, with one living offsite. We’d need ~$7500AUD worth of new drives to duplicate what we currently have.
  • Buy a couple of LTO-9 tape drives (2 offices in different cities) and keep one copy on an external drive and one copy on a tape archive. This would be ~$20000AUD of hardware upfront + media costs of ~$2000AUD (assuming we’d get maybe 30TB per tape on the 18TB raw LTO-9 tapes). So more expensive upfront, but would maybe pay off eventually?
  • Build a Linus Tech Tips-style beast of a NAS. Raw drive cost would be similar to the external drives, but it would have the advantage of being accessible remotely. We’d then need to spend $5000-10000AUD on the actual hardware on top of the drives. There’s also the problem of ever-growing storage needs. With this solution we could potentially skip duplicating the data to external drives and live with RAID as the only form of redundancy…
  • Another cloud storage service? Anything fast and decent enough that comes at a reasonable cost?

Any advice here would be appreciated!

  • MrB2891@alien.top · 1 year ago

    NAS.

    Over the last 24 months I’ve built 300TB (a mix of 10 and 14TB disks) for $2500 in disks. I could do that right now for $2100. An 18TB LTO-9 tape is more expensive than what I’m paying per TB for 14TB disks.

    $700 in hardware to build the NAS with 25 bays.

    Glacier would cost you $1080/mo in storage fees alone (300,000GB @ $0.0036/GB), not including the $0.09/GB to get any data back out. Glacier Deep Archive is less (by half, for storage), but comes with strings attached.
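    The monthly figure quoted above is easy to sanity-check. A minimal sketch, using the per-GB rates from this comment (not necessarily current AWS pricing):

```python
# Back-of-envelope check of the Glacier numbers above, using the per-GB
# rates quoted in this comment (verify against current AWS pricing).
TB_IN_GB = 1000  # cloud providers bill in decimal units

def glacier_monthly(tb_stored, rate_per_gb=0.0036):
    """Monthly storage fee in USD for tb_stored terabytes."""
    return tb_stored * TB_IN_GB * rate_per_gb

def retrieval_cost(tb_out, rate_per_gb=0.09):
    """One-off retrieval/egress fee in USD to pull tb_out terabytes back."""
    return tb_out * TB_IN_GB * rate_per_gb

print(glacier_monthly(300))   # ~1080 USD/month for the 300TB archive
print(retrieval_cost(40))     # ~3600 USD to restore a 40TB batch
```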

    Don’t forget to factor in labor hours of what it’s going to cost you to maintain a tape library or a local server in general.

    Are you charging clients for long term storage after a project is complete? If not, you should be.

  • Simple-Purpose-899@alien.top · 1 year ago

    AWS Glacier Deep Archive is designed for this. Something you access a couple of times per year, if that, and it’s $0.99/TB/mo. Price that out compared to a $10k NAS or tape backup that will still need consumables like drives and tapes, and it might be your best option. There are retrieval costs, but since, as you’ve said, this is archive footage that customers might request, you could pass that cost on to them.

  • Yugen42@alien.top · 1 year ago

    If you need fast and regular access to the archive, anything up to 1PB can be handled with HDDs nowadays. If you don’t need that, LTO tape will be much cheaper. For your offsite backup, encryption + archival storage such as GCP Coldline or Archive storage is very cost-effective and can be combined with either. Think about your data and organization: perhaps you only need fast access to part of the data, so combining the two might be the best solution. Consider whether you have an IT department or a data steward to set up a system for organizing that data.

  • SoCleanSoFresh@alien.top · 1 year ago

    I don’t know that I’d take on tape with your use case. There’s a good bit of tech debt involved there.

    NAS (either bought or built) + Amazon glacier or Backblaze for cloud archival backup.

    The NAS (including drives) will probably cost you $7000-8000 USD for 400ish TB of storage with room to grow

    > It was easy to give clients access to old data directly if they needed, etc.

    I hope you charge for this. It would help to offset your storage costs.

    • amarino@alien.top · 1 year ago

      300TB in Backblaze B2, using their online calculator, is $21,600 USD a year. I’m sure you can build / expand a new NAS every year for similar money. But then you have to deal with the overhead of managing it and replacing disks.

  • user3872465@alien.top · 1 year ago

    From the sound of it, you want a NAS and a tape archive.

    So get a device that holds your working projects; you mentioned around 20-40TB, which is no problem nowadays. Can be done for under 1k with off-the-shelf stuff.

    And tape backup for stuff you don’t need regularly. Maybe choose an older generation of LTO; I would look for something that can hold about one project per tape, or thereabouts. LTO-5 is pretty cheap used, can be had for 500 bucks, but is only 1.5TB per tape.

    Disclaimer: with LTO, never look at the compressed number; it’s for compressible data only, which video is not. Thus with LTO-9 you will only get 18TB per tape.
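    Taking the raw-capacity advice above, tape counts for a given archive are simple arithmetic. A quick sketch, using the raw capacities mentioned in this thread:

```python
import math

def tapes_needed(archive_tb, tape_raw_tb):
    """Tapes required for archive_tb, planning on raw capacity only,
    since video is already compressed and won't shrink further."""
    return math.ceil(archive_tb / tape_raw_tb)

# A single copy of the 300TB archive on raw LTO capacities:
print(tapes_needed(300, 18))   # LTO-9 (18TB raw): 17 tapes
print(tapes_needed(300, 12))   # LTO-8 (12TB raw): 25 tapes
print(tapes_needed(300, 1.5))  # LTO-5 (1.5TB raw): 200 tapes
```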

    • bobissh@alien.top · 1 year ago

      This.^

      2 small NASes + 2 LTO drives (LTO-5 may be sufficient for your individual projects, but you also need to back up the NAS, so at least LTO-7 or LTO-8)

    • campster123@alien.top (OP) · 1 year ago

      Yeah, we’ve got a solid situation for our live projects. Each of us works off a 40TB Thunderbolt RAID with local external drives as our backup and a live online backup to Dropbox.

      This is for our archived work, but yeah, of that, we access around 20-40TB fairly regularly. Good to know that tape won’t compress video data at all!

      NAS is sounding more and more like our best bet.

      • user3872465@alien.top · 1 year ago

        Not to be rude or anything, but external RAIDs individual to each user are not really a solid solution. It may work for 1-2 people working on one project at a time, but it just does not scale. What if someone needs to access files from that project? They move the RAID, or plug their laptop into a different workspace? Not really a great solution, IMO.

        Like you say in the last part, having a NAS with a bit of room to grow, so maybe 100TB, might be the best option. That way everyone can access the data and work across projects. More importantly, it would allow working from a different place in the office, or even from home.

        Yeah, with tape the compressed numbers are very misleading. That’s a best-case scenario where the files compress 2:1 with tar+gzip, which literally never happens. The best case I have seen was 1.2:1 on a folder of config files. Basically nothing you will interact with nowadays is compressible, except text files, depending on format. So it’s best to always assume the raw capacity is the space you get.
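        The point about compressed capacities being a best case can be demonstrated directly: gzip barely touches high-entropy data (a rough stand-in for already-encoded video), while repetitive text shrinks enormously. An illustrative sketch:

```python
import gzip
import os

# High-entropy random bytes stand in for already-compressed video;
# repetitive ASCII stands in for config/text files.
video_like = os.urandom(1_000_000)
text_like = b"key = value\n" * 80_000

for label, data in [("video-like", video_like), ("text-like", text_like)]:
    ratio = len(data) / len(gzip.compress(data))
    print(f"{label}: {ratio:.2f}:1")  # random data stays near (or below) 1:1
```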

        • campster123@alien.top (OP) · 1 year ago

          Haha, we’ve been this way for 12 years. Certainly not ideal if we scale, but we won’t ever; it’s only the 4 of us ever needing access. And transferring over the network is not an issue. NAS is too slow for most real-time editing; 10GbE is fine but still fairly slow. Those RAIDs will soon be upgraded to SSD RAIDs for each editor. Thanks though…

  • Ok_Crow_2386@alien.top · 1 year ago

    Have you considered Amazon S3? It’s made for enterprises, with unlimited storage and a lot of pricing options, and it could save you a lot of headaches long term.

    • chili_oil@alien.top · 1 year ago

      S3 is designed with high availability and high throughput in mind; OP needs a cold storage solution like AWS Glacier or Azure cold storage. But even that is not cheap.

  • BryceJDearden@alien.top · 1 year ago

    RAID is not a backup! A single RAID array in a single server is still only one copy, and one very big single point of failure.

  • vinsan98@alien.top · 1 year ago

    I’d recommend the hybrid approach with NAS and Tape Drive

    Build a robust NAS system for remote accessibility, but consider setting up a hierarchical storage management (HSM) system. Frequently accessed or recent projects can reside on the NAS, while older and less accessed ones can be automatically moved to more cost-effective storage.

    Invest in LTO-9 tape drives for archival purposes. While the upfront cost is higher, tapes provide long-term, cost-effective storage. This is particularly useful for archival data that doesn’t require frequent access, and it adds an extra layer of redundancy and security.

  • jkirkcaldy@alien.top · 1 year ago

    I work in a TV production company. Masters and rushes are archived to LTO8.

    Drives are cheap but a real pain to keep around and you can’t keep them indefinitely.

    But you’ll likely want a library. These are expensive; not necessarily to buy, but to license and get the software. I think our entire system (library, 2x LTO-8 drives, server, software, and licenses) cost 15-20k.

    And we only licensed 25 out of the 50 slots, as it’s a real fucker: you have to license the slots twice, once on the library and again in the archive software.

    But it’s been an absolute godsend, having archive projects available makes life so much easier.

  • sandbagfun1@alien.top · 1 year ago

    Raid/NAS, as many others have said, isn’t a backup.

    However, you could have a single NAS and back up to AWS Glacier, where storage costs for larger files are cheap going in; getting data out in a DR scenario is expensive, but maybe covered by your insurance, depending on the DR event.

  • bee_ryan@alien.top · 1 year ago

    I like doing this math.

    A DS1821+ with (2) DX517 expansion bays would cost 4.1K AUD presuming 10% tax, and would give 307TB presuming (18) 22TB drives with a Btrfs file system running SHR-2 (which allows for 2 drive failures).

    (18) 22TB drives @ $22/tb AUD = $9.5K

    So an all in cost for 307TB is 13.6K AUD using that equipment. 27.2K AUD to have a mirrored backup, but it sounds like you’re ready for another 300+ TB right now, so 54.4K AUD to have 1:1 backups and 307TB of runway.

    If AWS Glacier is what you’re comparing to, then you make that up in 6 months.
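    A hedged version of that breakeven: the exchange rate below is an assumption for illustration, and the Glacier rate is the $0.0036/GB/mo quoted earlier in the thread, so treat the result as a rough sketch rather than a quote.

```python
# Months until the NAS outlay beats recurring Glacier storage fees.
AUD_PER_USD = 1.55  # assumed exchange rate -- check the current one

def breakeven_months(nas_cost_aud, tb_stored, glacier_usd_per_gb=0.0036):
    """Hardware cost divided by the monthly cloud fee it replaces."""
    monthly_fee_aud = tb_stored * 1000 * glacier_usd_per_gb * AUD_PER_USD
    return nas_cost_aud / monthly_fee_aud

print(round(breakeven_months(13_600, 307), 1))  # single-copy NAS, ~8 months
print(round(breakeven_months(27_200, 307), 1))  # mirrored pair, ~16 months
```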

    Rack mount would be more convenient, as you can have 1PB volumes and a less cumbersome, tidier setup; the DS1821+ with expansion bays maxes out at 108TB per volume, so you’d have to deal with 6 different volumes, though maybe that’s not a big deal if your filing system is by year/month. But getting into rack mount with Synology, for example, would basically double your infrastructure cost. Or you bite the big bullet now on scalability and use a 60-bay rack mount @ 29.9K AUD for just one, but it’s still roughly the same cost per drive bay as the 16-bay.