• rutrum@lm.paradisus.day · 7 months ago

    What GPU are you using to run it? And what UI are you using to interface with it? (I know of gpt4all and the generic-sounding text-generation web UI, or something like that.)

    • mynamesnotrick · 7 months ago (edited)

      I am using this: https://github.com/oobabooga/text-generation-webui … It runs great with my AMD 7900XT, and it also ran great with my 5700XT. It sets itself up inside a conda virtual environment, so it takes all the mess out of getting the packages to work correctly. It can use NVIDIA cards too.
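      A rough sketch of that install flow, assuming the repo still ships its one-click start scripts under these names (check the project README first, since script names change between releases):

```shell
# Clone the web UI and let its start script bootstrap everything.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

# On Linux; there are matching start scripts for Windows and macOS.
# The script creates its own conda environment and asks which GPU
# vendor (NVIDIA / AMD / CPU-only) to install packages for.
./start_linux.sh

# When it finishes, the web UI is served locally (port 7860 by default).
```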

      Once you get it installed, you can then get your models from huggingface.co.
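      For example, model weights can be pulled with the Hugging Face CLI; the repo id below is just an illustration, pick any model card you like from huggingface.co:

```shell
# Assumes: pip install "huggingface_hub[cli]"
# Download into the web UI's models/ folder so it shows up in the model list.
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GPTQ \
    --local-dir models/Mistral-7B-Instruct-v0.2-GPTQ
```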

      I’m on arch, btw. ;)

      Edit: I just went and reinstalled it and saw that it supports these GPUs.

    • pflanzenregal@lemmy.world · 7 months ago

      Open-webui is the best self-hosted LLM chat interface IMO. It works seamlessly with Ollama, and AFAIK it also supports other OpenAI-API-compatible backends.
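      "OpenAI-API compatible" just means the backend accepts the standard chat-completions payload, so any client can talk to it. A minimal sketch of what that request looks like, assuming Ollama on its default port (11434) and a model name you have actually pulled:

```python
import json

# Ollama's default OpenAI-compatible endpoint (assumption: default port).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("llama3.2", "Why is the sky blue?")

# To actually send it (requires a running Ollama instance):
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, data=json.dumps(body).encode(),
#                              headers={"Content-Type": "application/json"})
# reply = json.load(urllib.request.urlopen(req))
# print(reply["choices"][0]["message"]["content"])
```

      Any tool that speaks this payload shape (open-webui, Continue, plain scripts) can therefore share one local backend.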

      I’m using the two together, and both downloading and running models is super easy. It also integrates well with the VSCode extension “Continue”, an open-source Copilot alternative (setup might require editing the extension’s config file).
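      For reference, that config edit usually means pointing Continue at the local Ollama server in its `config.json` (typically under `~/.continue/`). A hedged sketch; the field names are from Continue's config format as I recall it, and the model names are placeholders for whatever `ollama list` shows on your machine:

```json
{
  "models": [
    {
      "title": "Ollama (local)",
      "provider": "ollama",
      "model": "llama3.2"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Ollama autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

      Check Continue's own documentation before copying this, since the config schema has changed across versions.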