I’ve recently been playing with the idea of self-hosting an LLM. I’m aware it won’t reach GPT-4 levels, but being free to prompt with confidential data, without restrictions, is a very nice capability to have.

Has anyone got experience with this? Any recommendations? I have downloaded the full Reddit dataset so I could retrain a model on it, since selected communities provide immense value and knowledge (hehe, this is exactly what Reddit, Twitter etc. are trying to prevent…)

    • redcalcium@c.calciumlabs.com · 1 year ago

      The model creator usually mentions it in the readme:

      You will need at least 16GB of memory to swiftly run inference with Falcon-7B.

      Usually the models support CPU inference. It’s tremendously slow, but it works in a pinch.
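
      For example, here’s a minimal sketch of CPU inference using the Hugging Face transformers library (my framework choice for illustration; the readme doesn’t prescribe one). In bf16 the 7B weights alone take roughly 14 GB of RAM, which fits the 16 GB figure quoted above:

      ```python
      # Sketch: plain CPU inference with Falcon-7B via Hugging Face transformers.
      # Assumes ~16 GB of system RAM; no GPU required, just very slow.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "tiiuae/falcon-7b"  # the model from the readme quote above

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype=torch.bfloat16,  # half the memory of fp32 weights
      )  # with no GPU visible, the model loads onto the CPU by default

      inputs = tokenizer("Self-hosting an LLM means", return_tensors="pt")
      output = model.generate(**inputs, max_new_tokens=50)
      print(tokenizer.decode(output[0], skip_special_tokens=True))
      ```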

    • CeeBee@lemmy.world · 1 year ago

      There’s a direct relationship between a model’s parameter count and the precision it runs at (e.g. 7B parameters at f16): together they determine the memory footprint. Using optimized 8-bit or even 4-bit execution will reduce memory usage and usually speed up execution.

      It’s entirely dependent on the model, the framework, and the hardware (CPU vs GPU).

      Generally there should be some indication somewhere in the model’s repo that states what you need.
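
      To make the arithmetic concrete: 7B parameters at f16 is roughly 7e9 × 2 bytes ≈ 14 GB, 8-bit is ≈ 7 GB, and 4-bit is ≈ 3.5 GB, plus overhead for activations and the KV cache. Here’s a rough sketch of 4-bit loading with transformers + bitsandbytes (my assumption for the stack; it needs a CUDA GPU, and the model name is only an example):

      ```python
      # Sketch: 4-bit quantized loading via transformers + bitsandbytes.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

      model_id = "tiiuae/falcon-7b"  # example; any 7B causal LM works the same way

      quant_config = BitsAndBytesConfig(
          load_in_4bit=True,                     # store weights in 4 bit (~3.5 GB for 7B)
          bnb_4bit_compute_dtype=torch.float16,  # do the matmuls in fp16
      )

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          quantization_config=quant_config,
          device_map="auto",  # let accelerate place layers on the available GPU(s)
      )

      prompt = "Explain quantization in one sentence:"
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
      ```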