• ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
      1 day ago

      It’s a way to run models on your local machine while exposing an OpenAI-compatible API, so apps that normally rely on OpenAI’s service can use your local models instead.
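      A minimal sketch of what “OpenAI-compatible” means in practice, assuming a local server such as Ollama (whose default port is 11434) is running; the model name "llama3" and the base URL are illustrative assumptions, not anything specified in the thread:

      ```python
      # Sketch: talk to a local OpenAI-compatible /v1/chat/completions endpoint.
      # Assumptions: a server like Ollama is listening on localhost:11434 and
      # "llama3" is a model it has available.
      import json
      import urllib.request

      BASE_URL = "http://localhost:11434/v1"  # Ollama's default port (assumption)

      def build_chat_request(model: str, prompt: str) -> dict:
          """Build a request body in the OpenAI chat-completions format."""
          return {
              "model": model,
              "messages": [{"role": "user", "content": prompt}],
          }

      def chat(model: str, prompt: str) -> str:
          """POST the request and pull the reply out of the OpenAI-shaped response."""
          body = json.dumps(build_chat_request(model, prompt)).encode()
          req = urllib.request.Request(
              f"{BASE_URL}/chat/completions",
              data=body,
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              data = json.load(resp)
          # Same response shape the hosted OpenAI API uses, which is why
          # existing apps can be pointed at the local server unchanged.
          return data["choices"][0]["message"]["content"]

      # Usage (requires the local server to actually be running):
      #   print(chat("llama3", "Say hello in one word."))
      ```

      Because the request and response shapes match OpenAI’s, switching an existing app over is usually just a matter of changing its base URL.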

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          2 hours ago

          Privacy, and the ability to generate the content you want. Using a commercial service like OpenAI means your data is sent to their servers, so anything you query is known to the company, and their models are often restricted in the content they will let you generate. For example, Google’s Gemini will refuse to deal with many political subjects.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
          2 hours ago

          Right, you can download any publicly available model and run it without using the internet. The caveat is that you need a relatively fast machine for it to perform well.

          • FuckBigTech347@lemmygrad.ml
            14 minutes ago

            For reference, the oldest card I have that Vulkan supports is an RX 560 that I bought in 2017 (I’m on GNU/Linux with amdgpu and the RADV Mesa driver, a.k.a. “the default”). Most medium models on it run at around 6–10 tokens/s. Some crawl to below 6 tokens/s, though, and get slower the longer their output is, probably because part of the model sits in RAM since that card has “only” 4 GB of VRAM. Models that fit fully in VRAM are a lot faster.
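            A rough back-of-envelope way to guess whether a model’s weights fit in a card like that, assuming only parameter count and quantization matter (this ignores the KV cache and framework overhead, so real usage is somewhat higher):

            ```python
            # Back-of-envelope sketch: weight storage in GB for a model with a
            # given parameter count and quantization level. Ignores KV cache
            # and runtime overhead, so treat the result as a lower bound.
            def weight_size_gb(params_billion: float, bits_per_weight: int) -> float:
                return params_billion * 1e9 * bits_per_weight / 8 / 1e9

            # A 7B model at 4-bit quantization needs about 3.5 GB of weights,
            # so it just about squeezes into a 4 GB card; at 8 bits it needs
            # about 7 GB and spills into system RAM, which is the slowdown
            # described above.
            print(weight_size_gb(7, 4))  # → 3.5
            print(weight_size_gb(7, 8))  # → 7.0
            ```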