Self-hosted LLM (ChatGPT)

I’ve recently played with the idea of self-hosting an LLM. I’m aware that it won’t reach GPT-4 levels, but being free to put confidential data into prompts without holding back is a very nice tool for me to have.

Has anyone got experience with this? Any recommendations? I have downloaded the full Reddit dataset, so I could retrain the model on it, as selected communities provide immense value and knowledge (hehe, this is exactly what Reddit, Twitter, etc. are trying to prevent…)

  • CeeBee@lemmy.world

    The best/easiest way to get started with a self-hosted LLM is to check out this repo:

    https://github.com/oobabooga/text-generation-webui

    Its goal is to be the Automatic1111 of text generators, and it does a fair job at it.

    A good model that’s said to rival GPT-3.5 is the new Falcon model. The full-sized version is too big to run on a single GPU, but the 7B version “only” needs about 16GB.

    https://huggingface.co/tiiuae/falcon-7b
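
    A minimal sketch of what that looks like with the Hugging Face transformers library (assuming a GPU with roughly 16 GB of VRAM and the transformers + accelerate packages installed; the prompt is just a placeholder):

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "tiiuae/falcon-7b"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.bfloat16,   # half precision keeps the 7B weights around 14 GB
            device_map="auto",            # needs the accelerate package; places layers on the GPU
            trust_remote_code=True,       # Falcon shipped custom modelling code at the time
        )

        prompt = "Self-hosting an LLM lets you"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=50)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))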

    There’s also the Wizard-Vicuna-Uncensored model, which is popular.

    https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored

    There are a ton of models out there with new ones popping up every day. You just need to search around. The oobabooga repo has a few models linked in the readme also.

    Edit: there’s also h2oGPT, which seems really promising. I’m going to try it out in the next couple of days.

    https://github.com/h2oai/h2ogpt

    • laenurd@lemmy.lemist.de

      Note that when using LLaMA-derived models, such as Vicuna, you are bound by their license to use them only for “research” purposes.

      If you want an unrestricted version, go for open-llama or RedPajama.

      Falcon is less restrictive and only wants a cut of profits if they exceed 1 million dollars, but I’d wager that fully unrestricted is the way to go.

      • redcalcium@c.calciumlabs.com

        The model creator usually mentions the memory requirements in the readme:

        You will need at least 16GB of memory to swiftly run inference with Falcon-7B.

        Usually the models support CPU inference. Tremendously slow, but it works in a pinch.
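
        If you do fall back to the CPU, a minimal sketch with the transformers pipeline API looks something like this (device=-1 means CPU; expect it to need a lot of RAM and to be very slow):

            from transformers import pipeline

            generator = pipeline(
                "text-generation",
                model="tiiuae/falcon-7b",  # the model quoted above; any causal LM works
                device=-1,                 # -1 = run on the CPU in transformers pipelines
                trust_remote_code=True,    # needed for Falcon at the time of this thread
            )
            print(generator("Self-hosted LLMs are", max_new_tokens=40)[0]["generated_text"])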

      • CeeBee@lemmy.world

        There’s a rough correlation between a model’s parameter count and the memory it needs at a given execution precision (e.g. 7B parameters at fp16). Using optimized 8-bit or even 4-bit execution will reduce memory usage further, at the cost of longer execution times.

        It’s entirely dependent on the model, the framework, and the hardware (CPU vs. GPU).

        Generally there should be some indication somewhere in the model’s repo that states what you need.
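
        For example, a rough sketch of that 8-/4-bit optimized loading with transformers + bitsandbytes (the chosen model and exact packages are my assumptions, not a requirement):

            import torch
            from transformers import AutoModelForCausalLM, BitsAndBytesConfig

            model_id = "ehartford/Wizard-Vicuna-13B-Uncensored"  # model linked earlier in the thread
            quant = BitsAndBytesConfig(
                load_in_4bit=True,                      # or load_in_8bit=True
                bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the dequantized matmuls
            )
            model = AutoModelForCausalLM.from_pretrained(
                model_id,
                quantization_config=quant,
                device_map="auto",
            )
            # A 13B model drops from ~26 GB at fp16 to roughly 7-8 GB in 4-bit.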

    • SJ_Zero@lemmy.fbxl.net

      I think I set that up successfully in a VM under Windows.

      It’s obviously a level worse than ChatGPT, but it worked surprisingly well otherwise. Poorer answers, but still not bad.

  • ofcourse@kbin.social

    You can absolutely self-host LLMs. The HELM team has done an excellent job benchmarking the efficiency of different models for specific tasks, so that would be a good place to start. You can balance model performance for your specific task against the model’s efficiency - in most situations, larger models perform better but use more GPUs or are only available via APIs.

    There are currently 3 different approaches to using AI for a custom task and application:

    1. Train a base LLM from scratch - this is like creating your own GPT-style model. This gives you the maximum level of control; however, the amount of compute, time, and data required for training makes it impractical for an end user. There are many open-source base LLMs already published on HuggingFace that can be used instead.

    2. Fine-tune a base LLM - starting with a base LLM, it can be fine-tuned for a certain set of tasks. For example, you can fine-tune a model to follow instructions or to use as a chatbot. InstructGPT and GPT-3.5+ are examples of fine-tuned models. This approach allows you to create a model that understands a specific domain or a set of instructions particularly well compared to the base LLM. However, any time training a large model is needed, it will be an expensive approach. If you are starting out, I’d suggest exploring this as a v2 step for improving your model.

    3. Prompt engineering or indexing using an existing LLM - starting with an existing model, create prompts to achieve your objective. This approach gives you the least control over the model itself, but it is the most efficient, so I would suggest it as the first approach to try. Langchain is the most widely used tool for prompt engineering and supports using a self-hosted base or instruct LLM. If your task is search and retrieval, an embeddings model is used instead: you generate embeddings for all your content and store them as vectors; for a user query, you convert the query to an embedding with the same model and retrieve the most similar content by vector similarity (a minimal sketch follows below). Langchain provides this capability, but IMO sentence-transformers may be a better starting point for a self-hosted retrieval application. Without any intention to hijack this post, you can check out my project - synology-photos-nlp-search - as an example of a self-hosted retrieval application.
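
    A minimal sketch of that embeddings-based retrieval flow with sentence-transformers (the model name and documents are just placeholders):

        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embeddings model

        documents = [
            "Falcon-7B needs about 16 GB of memory for inference",
            "How to expose a self-hosted service over a VPN",
            "Sourdough starter feeding schedule",
        ]
        doc_embeddings = model.encode(documents, convert_to_tensor=True)

        query = "memory requirements for running Falcon"
        query_embedding = model.encode(query, convert_to_tensor=True)

        # Cosine similarity between the query and every document; the highest score wins.
        scores = util.cos_sim(query_embedding, doc_embeddings)[0]
        best = int(scores.argmax())
        print(documents[best], float(scores[best]))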

    To learn more, I have found the recent deeplearning.ai short courses to be quite good - they are short, comprehensive, and free.

  • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one

    I personally use llama.cpp in a VM; however, if you have an Nvidia GPU with lots of VRAM you’ve got more options available, as well as much faster inference (text generation) speeds.
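
    For reference, a rough sketch of running a quantized model through the llama-cpp-python bindings (the model path is a placeholder for whatever GGML/GGUF file you downloaded):

        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/wizard-vicuna-13b.q4_0.gguf",  # placeholder path
            n_ctx=2048,      # context window
            n_gpu_layers=0,  # raise this if llama.cpp was built with GPU support, to offload layers to VRAM
        )

        out = llm("Q: What's a good starter model for self-hosting? A:", max_tokens=64)
        print(out["choices"][0]["text"])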

    Check out the community at !localllama@sh.itjust.works - they’re pretty experienced with running LLMs locally.

      • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one

        At the moment most LLM libraries use CUDA for acceleration, which is a hardware feature exclusive to Nvidia GPUs.

        I believe llama.cpp can make use of AMD GPUs, but double check the project’s GitHub discussions first to confirm this, and see how people set it up

  • kozonak@lemmy.world

    Not sure if you’re asking about already-trained models or about training your own.

    If you just want to have fun, the small to medium models are pretty OK - things like Wizard-Vicuna 13B or the smaller 7B. You just have to try some of them until you find what’s best for your use case. For example, I have a model running Discord bots (with different personalities), but the same model would work badly for my other projects - especially considering that with some models you can just chat, while others need instructions.

    There are also recent models that approach GPT levels. The downside is that they are huge in terms of hardware cost (hundreds of GBs of RAM, multiple GPUs). But they won’t necessarily be better than a smaller, more focused model.

    Get oobabooga (the Automatic1111 of chat LLMs) and then search for TheBloke on Hugging Face for models.

  • CamilleMellom@mander.xyz

    I would advise not training your own model, but instead using tools like LangChain and Chroma in combination with an open model like GPT4All or Falcon :).

    So in general, explore LangChain!
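
    A rough sketch of that stack (LangChain’s API changes quickly, so treat the imports, class names, and the model path as assumptions to verify against the current docs):

        from langchain.llms import GPT4All
        from langchain.embeddings import HuggingFaceEmbeddings
        from langchain.vectorstores import Chroma
        from langchain.chains import RetrievalQA

        # Embed and index your own documents in a local Chroma store.
        docs = ["Note about my homelab backups", "Falcon-7B needs about 16 GB of memory"]
        db = Chroma.from_texts(docs, HuggingFaceEmbeddings())

        # A local GPT4All model file (placeholder path) answers questions over the index.
        llm = GPT4All(model="./models/ggml-gpt4all-j.bin")
        qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever())
        print(qa.run("How much memory does Falcon-7B need?"))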

  • db0@lemmy.dbzer0.com

    If you want to host a text model that is reachable by you (or anyone) securely over the internet, I suggest you turn your PC into a worker for the AI Horde. You would then be able to access the model you’re serving from everywhere, as well as everyone else’s LLM and Stable Diffusion models with priority. You would also be improving the commons.

    • bioemerl@kbin.social

      I can vouch for the Horde. It’s addicting to watch your little point counter go up after you’ve put something out there and to see people use something you are hosting.

      It’s awesome to put a computer out onto the internet and have real-life people getting real benefit within minutes. This is a way you can do it, and there’s so much demand that you are helping people by putting your machine out there.

      However, I will give you a fair warning, it will be used for porn. Not entirely, but it will happen.

  • NXTR@kbin.social

    This project might not be exactly what you’re looking for due to the limited number of prebuilt models, but it’s interesting nonetheless. It seems to run on a variety of hardware (even smartphones); however, you’ll need to compile your own models if a prebuilt one isn’t available. Luckily, Vicuna at least is included as a prebuilt model. There’s another included model called RWKV-Raven, which is actually an RNN instead of a transformer yet approaches its level of performance. Seems pretty interesting.

  • h3ndrik@feddit.de

    KoboldCPP works with and without a GPU, and it’s quite easy to install and use. I’d recommend something like that for a beginner.