• Possibly linux · 11 months ago

    How, and with what card? I have an XFX RX 590, and I just gave up on acceleration because it was slow even after I initially got it set up.

    • Domi@lemmy.secnd.me · 11 months ago

      I use a 6900 XT and run llama.cpp and ComfyUI inside Docker containers. I don’t think the RX 590 is officially supported by ROCm; there’s an environment variable (HSA_OVERRIDE_GFX_VERSION) you can set to enable unsupported GPUs, but I’m not sure how well it works.
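      For reference, the override is set like this. HSA_OVERRIDE_GFX_VERSION is the real ROCm variable, but the value shown is only an example; what actually works (if anything) depends on the card’s architecture, and for an older Polaris card like the RX 590 spoofing a different target is hit-or-miss:

      ```shell
      # Ask the ROCm runtime to treat the GPU as a different (supported)
      # architecture. The value is an example, not a recommendation for
      # the RX 590 specifically -- pick one close to your card's gfx target.
      export HSA_OVERRIDE_GFX_VERSION=10.3.0
      # Then launch llama.cpp / ComfyUI from this same shell, e.g.:
      # ./llama-server -m model.gguf -ngl 99
      ```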

      AMD provides the handy rocm/dev-ubuntu-22.04:5.7-complete image, which is absolutely massive but comes with everything needed to run ROCm without dependency hell on the host. I just build llama.cpp and ComfyUI containers on top of that and run them.
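      A minimal sketch of that kind of container. The base image tag is the one from the comment; the repo URL is llama.cpp’s real one, but the build flag is an assumption (llama.cpp has renamed its HIP option over time, so check the flag name against your checkout):

      ```dockerfile
      # Base image ships the full ROCm 5.7 toolchain, so nothing is needed on the host.
      FROM rocm/dev-ubuntu-22.04:5.7-complete

      RUN apt-get update && apt-get install -y git cmake build-essential

      # Build llama.cpp with the ROCm/HIP backend.
      # NOTE: the flag has been LLAMA_HIPBLAS=ON in older revisions and
      # GGML_HIP=ON in newer ones -- adjust to match your checkout.
      RUN git clone https://github.com/ggerganov/llama.cpp /opt/llama.cpp \
       && cmake -S /opt/llama.cpp -B /opt/llama.cpp/build -DGGML_HIP=ON \
       && cmake --build /opt/llama.cpp/build --config Release -j

      EXPOSE 8080
      ENTRYPOINT ["/opt/llama.cpp/build/bin/llama-server", "--host", "0.0.0.0"]
      ```

      When running it, the container needs the GPU device nodes passed through, e.g. `docker run --device=/dev/kfd --device=/dev/dri …`, since ROCm talks to the kernel driver through those.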