• mosiacmango@lemm.ee · 17 points · 18 hours ago (edited)

    Not sure what you’re going on about here. Even these disks have plenty of read/write performance for rarely written data like media. They can be used with error-checking filesystems like zfs or btrfs just the same, and they work in raid arrays, which add redundancy against disk failure.
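
    Purely to illustrate the redundancy point, here’s a rough sketch of the raidz math (disk counts and sizes are made-up examples, and zfs metadata overhead is ignored):

    ```python
    # Rough raidz capacity/redundancy math. The 6 x 20 TB layout below is an
    # arbitrary example, not a recommendation, and zfs overhead is ignored.

    def raidz_summary(disks, disk_tb, parity):
        """Approximate usable space and how many disk failures the vdev survives."""
        return (disks - parity) * disk_tb, parity

    for parity, name in ((1, "raidz1"), (2, "raidz2")):
        space, survives = raidz_summary(disks=6, disk_tb=20, parity=parity)
        print(f"6 x 20 TB {name}: ~{space} TB usable, survives {survives} disk failure(s)")
    ```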

    The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disk in an array.

    Your 8-12TB recommendation already has most of these negatives. Adding more space per disk just scales them linearly.
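
    A back-of-the-envelope sketch of that linear scaling for resilver time (the ~200 MB/s sustained throughput is an assumed typical CMR figure, not a spec for any particular drive, and real resilvers are slower once seeks and pool activity get involved):

    ```python
    # Best-case resilver time is roughly capacity / sustained write speed.
    # 200 MB/s is an assumed typical sequential rate, not a measured value.

    def resilver_hours(capacity_tb, throughput_mb_s=200):
        capacity_mb = capacity_tb * 1_000_000  # TB -> MB, decimal as drives are sold
        return capacity_mb / throughput_mb_s / 3600

    for tb in (8, 12, 20, 28):
        print(f"{tb:>2} TB: ~{resilver_hours(tb):.0f} h minimum")
    # Doubling the capacity roughly doubles the minimum rebuild window.
    ```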

    • ricecake@sh.itjust.works · 12 points · 16 hours ago

      Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.

      Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start is next to irrelevant. Particularly when you consider that the OS has likely noticed you have unused RAM and loaded the file into the page cache, bypassing the hard drive entirely.
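
      Rough numbers to show how little the seek matters (the bitrate, chunk size, and drive throughput below are illustrative assumptions, not measurements):

      ```python
      # How much does ~10 ms of access time matter when streaming off a hard drive?
      # All figures are assumed illustrative values, not measurements.

      SEEK_S = 0.010          # typical HDD access time
      THROUGHPUT_MB_S = 200   # assumed sustained sequential read rate

      def disk_busy_fraction(stream_mbit_s, chunk_mb):
          """Fraction of wall-clock time the disk spends serving one stream."""
          playback_s = chunk_mb * 8 / stream_mbit_s             # how long one chunk lasts
          disk_s = SEEK_S + chunk_mb / THROUGHPUT_MB_S          # one seek + sequential read
          return disk_s / playback_s

      # A 10 Mbit/s stream buffered in 8 MB chunks:
      print(f"{disk_busy_fraction(10, 8):.1%} of the time on disk, the rest idle")
      # -> roughly 0.8%, and the 10 ms seek is only a small slice of even that
      ```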