I have a used Lenovo ThinkCentre 910 with an i5-7500T running Proxmox, with Linux Mint in one VM and AdGuard running in another. I’m just getting started, so I’m still reading and searching for tons of answers.

I was hoping to host Jellyfin within Linux Mint. It works pretty well, but I noticed that while watching a movie the CPU was pretty well pegged. I wanted to enable hardware acceleration, but when I started reading setup guides, hoping to understand what I was doing, I think I may have already painted myself into a corner.

I think I need to tell Proxmox to pass the hardware acceleration on to Linux, and then get Linux to actually use it, but some of the things I have read make it sound like I needed to set up the VM this way from the beginning.

Am I trying to do this the hard way somehow? Does anyone have any suggestions on the best guide to follow for this?
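(For context on what "working" looks like: whichever route you end up on, the guest that runs Jellyfin needs to see the iGPU's /dev/dri render node and have a working VA-API stack. A quick sanity check from inside the guest — package names assume a Debian/Ubuntu-family guest like Mint — looks roughly like this:)

```shell
# Does the guest see a render node at all? (no output = no GPU access yet)
ls -l /dev/dri/

# Install the VA-API info tool and the Intel media driver
# (package names are for Debian/Ubuntu-family guests)
sudo apt install vainfo intel-media-va-driver-non-free

# If the iGPU is usable, this should list H.264/HEVC decode/encode profiles
vainfo
```

If vainfo lists profiles, you can then enable VA-API (or QSV) transcoding in Jellyfin under Dashboard → Playback.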

  • Possibly linux · 10 months ago

    Don’t do that. Run Jellyfin in its own VM with a GPU passed through via PCIe passthrough.
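    On an Intel host, the VM route boils down to enabling the IOMMU and handing the iGPU's PCI address to the VM. A rough sketch of the two host-side pieces (the PCI address 00:02.0 and VM ID 100 are typical examples, not necessarily yours):

    ```shell
    # /etc/default/grub on the Proxmox host: turn the IOMMU on
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
    # then apply it and reboot:
    #   update-grub && reboot

    # /etc/pve/qemu-server/100.conf: give the VM the iGPU
    # (on Intel desktops the iGPU is usually at 00:02.0 — check with lspci)
    hostpci0: 0000:00:02.0
    ```

    The same hostpci entry can be added from the web UI under Hardware → Add → PCI Device.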

    • areyouevenreal@lemm.ee · 10 months ago

      That’s going to be almost impossible to do with an iGPU. It makes way more sense to pass it through to an LXC container.

      • Possibly linux · 10 months ago (edited)

        It takes about 2–3 clicks. What do you mean, impossible? LXC is likely faster, but it takes more setup.

        • areyouevenreal@lemm.ee · 10 months ago

          2-3 clicks? That’s hilarious!

          These are the steps it actually takes: https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/

          That’s the best-case scenario, where it actually works without significant issues, which I am told is rarely the case with iGPUs.

          In my case it was considerably more complicated, as I have two GPUs from Nvidia (one used for the host display output), so I needed to block specific device IDs rather than whole kernel modules.
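          Blocking by device ID means telling vfio-pci to claim just one GPU at boot so the host driver can keep the other. A sketch of what that looks like — the 10de:… IDs here are placeholders; substitute the vendor:device IDs that lspci reports for the GPU you want to pass through:

          ```shell
          # Find the vendor:device IDs of the GPU and its HDMI audio function
          lspci -nn | grep -i nvidia

          # /etc/modprobe.d/vfio.conf — bind only these IDs to vfio-pci,
          # leaving the host's other Nvidia GPU on the normal driver
          options vfio-pci ids=10de:1b80,10de:10f0
          softdep nvidia pre: vfio-pci

          # rebuild the initramfs so the options apply at boot
          update-initramfs -u
          ```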

          Plus, you lose display access to the Proxmox server, which is important if anything goes wrong. You can also only pass the device through to one VM at a time. With LXC, by comparison, you can share the GPU with an almost unlimited number of containers and still have display output on the host system. It almost never makes sense to use PCIe passthrough on an iGPU.

          The main reason to do PCIe passthrough is gaming on Windows VMs. Another is that Nvidia driver support on the Proxmox host is poor.

          This is a guide to do passthrough with LXC: https://blog.kye.dev/proxmox-gpu-passthrough

          It’s actually a bit less complicated for privileged LXC containers, since that guide has to work around the restrictions of unprivileged ones.
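          The LXC route comes down to bind-mounting /dev/dri into the container and allowing the DRM character devices (major number 226) through cgroups. For a privileged container the config addition is short (container ID 101 is illustrative):

          ```shell
          # /etc/pve/lxc/101.conf — share the host's iGPU render node with the container
          lxc.cgroup2.devices.allow: c 226:* rwm
          lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
          ```

          Because this is a bind mount rather than PCIe passthrough, the same two lines can be added to any number of containers, which is the sharing advantage mentioned above.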

          • Possibly linux · 10 months ago

            It’s always worked well for me. I pass through my dedicated graphics and a USB controller to a Pop!_OS VM, and the integrated graphics to the Jellyfin VM. I initially had to enable virtualization extensions, and the dedicated graphics needed a bit more setup, but for the most part it is reasonable.

            • areyouevenreal@lemm.ee · 10 months ago

              My point is that it’s not actually much (or potentially any) simpler to use PCIe passthrough than an LXC container, yet it comes with more resource usage and more restrictions. Some hardware is harder to pass through, especially iGPUs. I’m not even sure all iGPUs use PCIe.

              • Possibly linux · 10 months ago

                iGPUs are incredibly easy to pass through, and they are PCIe devices.
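                On a typical x86 desktop that’s easy to confirm from the host, since the Intel iGPU shows up as an ordinary PCI device:

                ```shell
                # List VGA-class devices with their vendor:device IDs
                lspci -nn | grep -i vga
                # on an Intel desktop this typically prints something like:
                # 00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 630 [8086:5912]
                ```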

                • areyouevenreal@lemm.ee · 10 months ago

                  Not all of them. Have a look at a Raspberry Pi or at Apple Silicon devices: I’m fairly sure most ARM SoCs don’t attach their iGPUs over PCIe. That makes sense when you think about the unified memory architecture some of these devices use. In case you aren’t aware, Proxmox does indeed run on a Raspberry Pi, and I’m sure it will gain support for more ARM devices in the future. An x86 device with unified memory could have the same problem.

                  • Possibly linux · 10 months ago

                    If it weren’t connected via PCIe, how would the CPU talk to the GPU? Anyway, Proxmox does not officially support ARM, so that is a pretty minuscule use case. I’m not even sure why you would want Proxmox on a low-powered device.

                    For me, PCIe passthrough is the easiest. Virtualization adds little overhead in terms of raw performance, so it isn’t a big deal. If you prefer LXC, that’s fine, but my initial statement was based on my own experience.