Western Digital, SanDisk Extreme SSDs don’t store data safely, lawsuit says
    • TWeaK@lemm.ee
      1 year ago

      That’s not the only issue. Some flash drives have been found to completely misrepresent their sizes. There was something of an epidemic of these a few years ago, so much so that people started testing their drives after purchase with tools like Fight Flash Fraud (f3). You could fill the drive up and it would then simply fail, because it never actually had the storage capacity advertised.
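
      For anyone curious what such a test actually does, here is a minimal sketch of the idea behind f3-style checks: fill the mounted drive with files of deterministic data, then read them back and verify. The mount point, file size and helper names below are illustrative placeholders, not f3’s own, and running this will fill whatever drive it points at.

      ```python
      # Minimal sketch of the idea behind f3 (Fight Flash Fraud): fill the mounted
      # drive with files of deterministic data, then read them back and verify.
      # Paths, file size and helper names here are illustrative, not f3's own.
      import hashlib
      import os

      MOUNT = "/media/suspect-stick"   # hypothetical mount point of the drive under test
      FILE_SIZE = 64 * 1024 * 1024     # 64 MiB per test file
      CHUNK = 1024 * 1024

      def chunk_data(file_no: int, chunk_no: int) -> bytes:
          """Deterministic 1 MiB chunk derived from its position."""
          block = hashlib.sha256(f"{file_no}:{chunk_no}".encode()).digest()
          return (block * (CHUNK // len(block) + 1))[:CHUNK]

      def fill(mount: str) -> int:
          """Write numbered test files until the drive reports it is full."""
          file_no = 0
          try:
              while True:
                  with open(os.path.join(mount, f"{file_no}.test"), "wb") as f:
                      for chunk_no in range(FILE_SIZE // CHUNK):
                          f.write(chunk_data(file_no, chunk_no))
                  file_no += 1
          except OSError:              # "no space left on device"
              pass
          return file_no               # number of fully written files

      def verify(mount: str, files: int) -> int:
          """Re-read the test files; return how many chunks came back corrupted."""
          bad = 0
          for file_no in range(files):
              with open(os.path.join(mount, f"{file_no}.test"), "rb") as f:
                  for chunk_no in range(FILE_SIZE // CHUNK):
                      if f.read(CHUNK) != chunk_data(file_no, chunk_no):
                          bad += 1
          return bad

      if __name__ == "__main__":
          written = fill(MOUNT)
          print(f"wrote {written} files, {verify(MOUNT, written)} bad chunks on re-read")
      ```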

      Suffice it to say, the data storage industry isn’t without its own brand of shady practices.

    • Aceticon@lemmy.world
      1 year ago

      Just as a side note for any reader who doesn’t already know it: the binary units used by computers are powers of 2, specifically 2 raised to a multiple of 10.

      So 1 KiB is 2^10 (which is 1,024) bytes, 1 MiB is 2^20 (1,048,576) bytes, and so on.

      So there is actually some logic behind the weird-looking numbers.
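
      For concreteness, a quick worked example of how that difference shows up at drive scale (the drive sizes are just illustrative):

      ```python
      # Decimal (SI) vs binary (IEC) units: why a "1 TB" drive shows up as ~931 GiB.
      GiB = 2**30                          # 1,073,741,824 bytes

      print(1 * 10**12 / GiB)              # "1 TB" drive: ~931.3 GiB
      print(16 * 10**9 / GiB)              # "16 GB" stick: ~14.9 GiB
      ```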

    • Phoenixz@lemmy.ca
      1 year ago

      True, and adding a filesystem also takes a bit off the top. That, however, doesn’t explain 15 vs 9 GB.
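
      A rough check of how little of that gap the unit conversion alone accounts for, assuming a drive marketed as 15 GB:

      ```python
      # How much of "15 GB vs 9 GB" can units alone explain? Only about 1 GB.
      GiB = 2**30
      advertised_gb = 15

      in_gib = advertised_gb * 10**9 / GiB     # ~13.97, what the OS calls "13.9 GB"
      print(in_gib, advertised_gb - in_gib)    # gap from units alone: ~1.03 GB
      ```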

      • sugar_in_your_tea@sh.itjust.works
        1 year ago

        The next level is that some flash drives reserve part of the space as a spare area (over-provisioning) to replace memory cells as they die. Some keep this reserve separate from the advertised capacity, whereas others report the total memory on the device even though part of it is never available for direct use by the user.

        So it’s a double whammy: GB vs GiB, plus flash held in reserve to keep the drive going as cells die.
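
        A rough sketch of how those two effects stack up; the 7% spare-area figure below is made up for illustration, not a vendor spec:

        ```python
        # Sketch of how the two effects stack; the 7% spare-area figure is
        # purely illustrative, not a vendor spec.
        GiB = 2**30
        advertised = 16 * 10**9            # "16 GB" drive, decimal bytes
        spare_fraction = 0.07              # assumed over-provisioning reserve

        print(advertised * (1 - spare_fraction) / GiB)   # ~13.9 GiB actually usable
        ```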

    • lemmyvore@feddit.nl
      1 year ago

      And then you have to put a filesystem on it, which has its own metadata – file attributes, folder/file names and so on. If you use NTFS, by default about 12.5% of the volume is reserved for the MFT zone, so now you’re down to roughly 11.8 GiB. 😛
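
      A back-of-envelope version of that chain (the ~13.5 GiB starting figure is an assumption picked to line up with the 11.8 GiB above; 12.5% is NTFS’s default MFT zone reservation):

      ```python
      # Back-of-envelope for the 11.8 GiB figure; the 13.5 GiB starting point is
      # an assumption chosen to match the comment, 12.5% is NTFS's default MFT zone.
      raw_gib = 13.5
      mft_zone = 0.125

      print(raw_gib * (1 - mft_zone))    # ~11.8 GiB left for your files
      ```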

      • FaceDeer@kbin.social
        1 year ago

        As an amusing side note, I once came across a joke compression program that could compress any data down to zero bytes. It did this by creating directories filled with zero-sized files whose filenames contained the actual data of the file in question.

        If you right-clicked on the folder and asked the OS how big it was, it’d report 0 bytes. But of course all that data still had to be stored somewhere, in the metadata of the filesystem.
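
        A toy re-creation of that joke in Python; the chunk size and naming scheme are guesses rather than the original program’s:

        ```python
        # Toy re-creation of the joke "compressor" described above: all of the data
        # ends up encoded in zero-byte filenames, so the files themselves add up to
        # 0 bytes while the filesystem metadata quietly stores everything.
        import os

        CHUNK = 100  # bytes per filename; hex-encoding keeps names under 255 chars

        def compress(data: bytes, out_dir: str) -> None:
            os.makedirs(out_dir, exist_ok=True)
            for i in range(0, len(data), CHUNK):
                name = f"{i // CHUNK:08d}_{data[i:i + CHUNK].hex()}"
                open(os.path.join(out_dir, name), "w").close()  # create a 0-byte file

        def decompress(out_dir: str) -> bytes:
            names = sorted(os.listdir(out_dir))                 # index prefix keeps order
            return b"".join(bytes.fromhex(n.split("_", 1)[1]) for n in names)

        if __name__ == "__main__":
            compress(b"all of this is 'stored' in 0 bytes of file contents", "zero_byte_archive")
            print(decompress("zero_byte_archive"))
        ```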

        • sugar_in_your_tea@sh.itjust.works
          1 year ago

          That’s part of why I use du on Linux rather than ls -l to figure out how much space files and directories really take up: du reports the blocks actually allocated on disk, whereas ls -l only shows each file’s apparent size and ignores metadata like the directory entries themselves. (df, meanwhile, works at the whole-partition level.)
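
          A rough Python illustration of the same difference (not a substitute for du; it assumes st_blocks is reported in 512-byte units, as on Linux):

          ```python
          # Apparent size (what ls -l reports) vs blocks actually allocated on disk
          # (closer to what du counts), for everything under a directory.
          import os
          import sys

          def sizes(root: str) -> tuple[int, int]:
              apparent = allocated = 0
              for dirpath, dirnames, filenames in os.walk(root):
                  for name in filenames + dirnames:
                      st = os.lstat(os.path.join(dirpath, name))
                      apparent += st.st_size               # what ls -l would show
                      allocated += st.st_blocks * 512      # space the filesystem really uses
              return apparent, allocated

          if __name__ == "__main__":
              a, d = sizes(sys.argv[1] if len(sys.argv) > 1 else ".")
              print(f"apparent: {a} bytes, allocated: {d} bytes")
          ```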