Black Friday is almost upon us and I’m itching to get some good deals on the missing hardware for my setup.

My boot drive will also be VM storage and reside on two 1TB NVMe drives in a ZFS mirror. I plan on adding another SATA SSD for data storage. I can’t add more storage right now, as my M90q can’t be expanded easily.

Now, how would I best set up my storage? I have two ideas and could use some guidance. I want some NAS storage for documents, files, videos, backups, etc. I also need storage for my VMs, namely Nextcloud and Jellyfin. I don’t want to waste NVMe space, so this would go on the SATA SSD as well.

  1. Pass the SSD to a VM running some NAS OS (OpenMediaVault, TrueNAS, plain Samba). I’d then set up different NFS/Samba shares for my needs. Jellyfin or Nextcloud would rely on the NFS share for their storage needs. Is that even possible and, if so, a good idea? I could easily access all files if needed. I don’t know if there would be a problem with permissions or diminished read/write speeds, especially since there are a lot of small files on my Nextcloud.

  2. I split the SSD, pass one partition to my NAS and let Proxmox use the other to store virtual disks for my VMs. This is probably the cleanest option, but I can’t easily resize the partitions later.

What do you think? I’d love to hear your thoughts on this!

  • tvcvt@lemmy.ml

    How about option 3: let Proxmox manage the storage and don’t set up anything that requires drive passthrough.

    TrueNAS and OMV are great, and I went that same VM-NAS route when I first started setting things up many years ago. It’s totally robust and doable, but it’s also a pretty inefficient way to use storage.

    Here’s how I’d do it in this situation: make your zpools in Proxmox, create one dataset for VM storage and another for file sharing, and then make an LXC container that runs Cockpit with 45Drives’ file-sharing plugin. Bind mount the file-sharing dataset into it and you have the best of both worlds: incredibly flexible storage and a great UI for managing Samba shares.
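    On the Proxmox host, the dataset side of this might look roughly like the following (the pool name “tank” and the dataset names are placeholders, not from the thread):

    ```shell
    # One dataset for VM disks, one for file shares
    # ("tank" is an example pool name -- substitute your own)
    zfs create tank/vmdata
    zfs create tank/shares

    # Register the VM dataset as a Proxmox storage for guest disks
    pvesm add zfspool vmdata --pool tank/vmdata
    ```

    The shares dataset then gets bind mounted into the Cockpit container rather than registered as VM storage.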

    • Pete90@feddit.deOP

      That’s also something I was considering briefly. While I’m waiting for hardware, I did basically that, or at least I think I did. I didn’t use a bind mount, though, because I only have one drive for testing, so I created a virtual disk instead.

      What exactly do you mean by bind mount? Mounting the dataset into the container? I didn’t even know that was possible. And what is a dataset? Sorry, I’m quite new to all this. Thanks!

      • daftwerder@lemm.ee

        If you create an LXC, then go to Resources --> Add --> Mount point, you can basically just mount the Proxmox drives/folders as a folder within the LXC environment.
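        The same thing can be done from the host shell (the container ID and paths here are made-up examples):

        ```shell
        # Equivalent of the GUI's Resources -> Add -> Mount point:
        # expose host directory /tank/shares at /mnt/shares inside container 100
        pct set 100 -mp0 /tank/shares,mp=/mnt/shares
        ```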

      • tvcvt@lemmy.ml

        A bind mount kind of shares a directory on the host with the container. To do it, unless something’s changed in the UI that I don’t remember, you have to edit the LXC config file (in /etc/pve/lxc on the host) and add something like:

        mp0: /path/on/host,mp=/path/in/container

        I usually make a sharing dataset and use that as the target.

  • asbestos@lemmy.world

    Definitely option 2 due to its simplicity and speed gains, but take some time to consider your needs and size the partitions accordingly.

    • Pete90@feddit.deOP

      Yeah, that is the hardest part. I don’t know exactly how much space each use case will need. But in the end, I can just copy all my data somewhere else, delete, and resize to accommodate my needs.

      • wittless@lemmy.world

        I personally created the ZFS zpool within Proxmox so I had all the space I could give to any of the containers I needed. Then when you create a container, you add a mount point, select the pool as the source, and specify the size you want to start with. Then as your needs grow, you can add space to that mount point within Proxmox.

        Say you have a 6 TB zpool and you create a dataset that is allocated 1 TB. Within that container, you will see a mount point with a size of 1 TB, but in Proxmox you will see that you still have 6 TB free, because that space isn’t used yet. Your containers are basically just quota’d directories inside the Proxmox host’s filesystem when you use a zpool. And you are free to go into that container’s settings and add space to that quota as your needs grow.
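        Growing such a mount point later is a one-liner on the host (the container ID and size below are examples):

        ```shell
        # Grow container 101's first mount point (mp0) by 100 GiB;
        # Proxmox raises the underlying dataset quota to match
        pct resize 101 mp0 +100G
        ```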

        • Pete90@feddit.deOP

          Ah, very good to know. Then it makes sense to use this approach. Now I only need to figure out whether I can give my NAS access to the drives of other VMs, as I might want to download a copy of that data easily. I suspect there might be a problem with permissions and file locking, but I’m not sure. I’ll look into this option, thanks!

          • wittless@lemmy.world

            I have two containers pointing to the same bind mount. I just had to manually edit the config files in /etc/pve/lxc so that both pointed to the same dataset. I have not had any issues, but you do have to pay attention to file permissions, etc. I have one container that writes and one that is read-only for the most part, so I don’t worry about file locking there. I did it this way because, if I recall, you can’t use NFS shares within a container without giving it privileged status, and I didn’t want to do that.
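            With made-up paths and container IDs, the relevant lines in the two config files might look like this:

            ```shell
            # /etc/pve/lxc/101.conf -- the container that writes
            mp0: /tank/shares,mp=/mnt/shares

            # /etc/pve/lxc/102.conf -- read-only consumer (note ro=1)
            mp0: /tank/shares,mp=/mnt/shares,ro=1
            ```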

            • Pete90@feddit.deOP

              Excellent, I’ll probably do that then. Come to think of it, only one container needs write access, so I should be good to go. Users/permissions will be the same, since it’s Docker and I have one user for it. Awesome!

    • Pete90@feddit.deOP

      That sounds very interesting and I’ll definitely look into it. Thank you!

  • Carunga@feddit.de

    My setup is pretty much option 1 and I have no issues with it. You can easily mount NFS shares as Docker volumes (I’m doing that for Jellyfin and Nextcloud), but you need to get the permissions right. But I am no expert, just a hobbyist not smart enough for a better solution :)
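    Mounting an NFS share as a Docker volume can look something like this (the server address, export path, and names are placeholders):

    ```shell
    # Named volume backed by an NFS export on the NAS VM
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.1.10,rw,nfsvers=4 \
      --opt device=:/export/media \
      media

    # Hand it to a container, e.g. Jellyfin
    docker run -d --name jellyfin -v media:/media jellyfin/jellyfin
    ```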

    • Pete90@feddit.deOP

      It’s good to know that it works. I will probably play around with it for a bit once I get my hardware. Thanks for letting me know!

  • Spaz@lemmy.world

    I could have sworn I read you shouldn’t use ZFS on drives smaller than 2 TB. IDK, maybe I’m going crazy.

      • Pete90@feddit.deOP

        I’m curious. Where is the problem with small drives for RAID5? Too many writes for such a small drive?

        • Trainguyrom@reddthat.com

          It’s actually the opposite: with only a single drive of parity, once your drives are larger than ~2 TB, the resilver time for the array is long enough that there’s an uncomfortable chance of an additional drive failing while it’s resilvering.

          • Pete90@feddit.deOP

            That makes sense, especially when the drives are equally old. Thanks for explaining it!