I have a used Lenovo ThinkCentre 910 with an i5-7500T, running Proxmox with Linux Mint in one VM and AdGuard running in another. I’m just getting started, so I am still reading and searching for tons of answers.

I was hoping to host Jellyfin within Linux Mint. It works pretty well, but I did notice that while watching a movie the CPU was pretty much pegged. I wanted to enable hardware-based acceleration, but once I started reading setup guides in the hope of understanding what I was doing, I think I may have painted myself into a corner already.

I think I need to tell Proxmox to pass the hardware acceleration on to Linux, and then get Linux to use it, but some of the things I have read make it sound like I needed to have set the VM up this way from the beginning.

Am I trying to do this the hard way somehow? Does anyone have any suggestions on the best guide to follow for this?

  • areyouevenreal@lemm.ee · 10 months ago

    2-3 clicks? That’s hilarious!

    These are the steps it actually takes: https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/

    That’s the best-case scenario, where it actually works without significant issues, which I am told is rarely the case with iGPUs.
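    Roughly, the core of that guide is enabling the IOMMU on the kernel command line and loading the VFIO modules. A sketch from memory (Intel example; exact flags vary by kernel and board):

    ```
    # /etc/default/grub: turn on the IOMMU (Intel example)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules: load the VFIO modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci

    # apply and reboot (or proxmox-boot-tool refresh on ZFS/UEFI setups)
    update-grub
    reboot
    ```

    And that is before you even bind the GPU itself to vfio-pci.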

    In my case it was considerably more complicated, as I have two Nvidia GPUs (one used for the host display output), so I needed to blacklist specific device IDs rather than whole kernel modules.
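    For anyone curious, that looks something like this. The IDs below are placeholders, not mine; find yours with lspci -nn:

    ```
    # /etc/modprobe.d/vfio.conf
    # Bind only the passthrough card (GPU + its audio function) to vfio-pci,
    # leaving the other Nvidia card on the host driver.
    # 10de:1b81 and 10de:10f0 are example IDs; use your own from lspci -nn.
    options vfio-pci ids=10de:1b81,10de:10f0
    softdep nvidia pre: vfio-pci
    ```

    Then rebuild the initramfs with update-initramfs -u so it applies at boot.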

    Plus you lose display access to the Proxmox server, which is important if anything goes wrong. You can also only pass the device through to one VM at a time. With LXC, by comparison, you can share it with an almost unlimited number of containers and still keep display output for the host system. It almost never makes sense to use PCIe passthrough on an iGPU.

    The main reason to do passthrough is gaming on Windows VMs. Another reason is that Nvidia support on Proxmox is poor.

    This is a guide to do passthrough with LXC: https://blog.kye.dev/proxmox-gpu-passthrough

    It’s actually a bit less complicated for privileged LXCs, as that guide has to work around the restrictions of unprivileged containers.
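    For a privileged container it boils down to handing /dev/dri through in the container config. A minimal sketch, assuming container ID 101 (the ID is an example):

    ```
    # /etc/pve/lxc/101.conf
    # Allow the DRM character devices (major 226) and bind-mount /dev/dri
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    ```

    Inside the container you then install the usual VA-API driver and point Jellyfin at /dev/dri/renderD128.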

    • Possibly linux · 10 months ago

      It’s always worked well for me. I pass through my dedicated graphics card and a USB controller to a Pop!_OS VM, and the integrated graphics to the Jellyfin VM. I initially had to enable virtualization extensions, and the dedicated graphics took a bit more setup, but for the most part it is reasonable.
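      For the OP’s benefit, the VM side of that is basically one command per device. A sketch, assuming VM ID 100 and the usual Intel iGPU address (both are examples; check yours with lspci):

      ```
      # Attach the iGPU at 0000:00:02.0 to VM 100.
      # pcie=1 requires the q35 machine type.
      qm set 100 -hostpci0 0000:00:02.0,pcie=1
      ```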

      • areyouevenreal@lemm.ee · 10 months ago

        My point is that it’s not actually much (or potentially any) simpler to use PCIe passthrough than an LXC, yet it comes with more resource usage and more restrictions. Some hardware is more difficult to pass through, especially iGPUs. I don’t think all iGPUs even use PCIe.

        • Possibly linux · 10 months ago

          iGPUs are incredibly easy to pass through, and they are PCIe devices.
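          Easy to check, too. On a typical x86 box the iGPU shows up with a PCI address (the output below is an example for an HD Graphics 630, like the OP’s i5-7500T):

          ```
          # List display adapters with PCI addresses and vendor:device IDs
          lspci -nn | grep -Ei 'vga|display'
          # 00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 630 [8086:5912]
          ```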

          • areyouevenreal@lemm.ee · 10 months ago

            Not all of them. Have a look at a Raspberry Pi or at Apple Silicon devices; in fact, I’m fairly sure most ARM SoCs don’t use PCIe for their iGPUs. That makes sense when you consider the unified memory architecture some of these devices use. In case you aren’t aware, Proxmox does indeed run on a Raspberry Pi, and I am sure it will gain support for more ARM devices in the future. An x86 device with unified memory could also have problems here, I believe.
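            You can see it in sysfs: the render device’s parent sits on the platform bus instead of PCI. The Pi path below is from memory and varies by board:

            ```
            # Where does the GPU hang off the device tree?
            readlink /sys/class/drm/card0/device
            # Raspberry Pi (platform bus): ../../../fe8c0000.v3d
            # x86 iGPU (PCI):              ../../../0000:00:02.0
            ```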

            • Possibly linux · 10 months ago

              If it weren’t connected via PCIe, how would the CPU talk to the GPU? Anyway, Proxmox does not in fact officially support ARM, so that is a pretty minuscule use case. I’m not even sure why you would want Proxmox on a low-powered device.

              For me, PCIe passthrough is the easiest. Virtualization adds little overhead in terms of raw performance, so it isn’t a big deal. If you prefer LXC, that’s fine, but my initial statement was based on my own experience.
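              The sanity checks I run before passing anything through are just the standard kernel interfaces, nothing Proxmox-specific:

              ```
              # Confirm the IOMMU actually came up
              dmesg | grep -e DMAR -e IOMMU

              # List IOMMU groups; the GPU should sit in its own isolatable group
              find /sys/kernel/iommu_groups/ -type l
              ```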

              • areyouevenreal@lemm.ee · 10 months ago (edited)

                The AMBA/AXI bus, in the case of the Pi. GPUs existed long before PCIe did, lol.

                On some x86 systems the CPU and GPU aren’t connected with PCIe either. AMD has Infinity Fabric, which it uses for things like the Instinct MI300 and some of its other APUs.

                Edit: Oh yeah, ARM isn’t just low-power anymore; it’s used in data centers and supercomputers these days. And even if it were, there is a lot you can do with a low-power node: file servers, DNS or Pi-hole, web servers, torrent/Usenet downloaders, image and music servers, etc. I have also seen them used to maintain cluster quorum after the loss of a more powerful node. A two-node cluster won’t have quorum if one node fails, so adding a Pi for the third vote makes sense.
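                On stock Proxmox the usual way to get that third vote from a Pi is a QDevice rather than a full node. A sketch from memory, where <pi-address> is a placeholder and the Pi needs corosync-qnetd installed first:

                ```
                # Run on an existing cluster node
                pvecm qdevice setup <pi-address>

                # Verify the quorum state afterwards
                pvecm status
                ```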