What would be the cheapest and most cost-efficient way of self-hosting LLMs

I’ve a minipc running an AMD 5700U where I host some services, including ollama and openwebui.
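For context, here's a minimal sketch (assuming Ollama listens on its default port 11434 on the mini PC) that just lists which models the instance currently serves:

```python
# Quick check of a local Ollama instance; assumes the default bind address.
import requests

OLLAMA_URL = "http://localhost:11434"  # assumption: Ollama's default port

resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])  # e.g. "llama3:latest"
```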

Unfortunately, ROCm support isn’t quite there yet, and support for mobile GPUs even less so.

Surprisingly, the prompts do work when Ollama is configured to use the CPU, but the speed is just… well, not good.
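For what it's worth, here is a rough sketch of how I time such a CPU-only request against Ollama's REST API; the model name is a placeholder for whatever is pulled locally, and "num_gpu": 0 keeps all layers on the CPU:

```python
# Rough timing of a CPU-only generation request against Ollama's REST API.
# Assumptions: Ollama on localhost:11434 and a small model already pulled
# (here "llama3" as a placeholder).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",          # placeholder model name
        "prompt": "Explain in one sentence what an eGPU enclosure is.",
        "stream": False,
        "options": {"num_gpu": 0},  # offload no layers to the GPU
    },
    timeout=600,                    # CPU inference can be slow
)
resp.raise_for_status()
data = resp.json()

# eval_count tokens were generated in eval_duration nanoseconds
tokens_per_second = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{tokens_per_second:.1f} tokens/s")
print(data["response"])
```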

So, what’d be a cheap and energy-efficient setup to run some kind of LLM for personal use, but still get decent speed?

I was thinking about getting an eGPU enclosure, but I’m not sure how solid that would end up being.

  • justdoitlater@lemmy.world · 5 months ago

    Just a noob question: is there any advantage (apart from privacy) to using that setup instead of using ChatGPT-4 from OpenAI’s website?

    • passepartout@feddit.de · 5 months ago

      You can get different results, sometimes better, sometimes worse, and most of the time differently phrased (e.g. the Gemma models by Google like to make bullet lists and sometimes tell me where they got their information from). There are models specifically trained or finetuned for different tasks: mostly coding, but also writing stories, answering medical questions, describing what is in a picture, speaking different languages, running on smaller or bigger hardware, and so on. Have a look at Ollama’s library of models, which is outright tiny compared to e.g. Hugging Face (a sketch of switching between such models via the API follows at the end of this comment).

      Also, I don’t trust OpenAI and the others to be confidential with company data or code snippets from work that I feed them.
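
      To give an idea, here is a small sketch of pointing the same API call at task-specific models; the model names (codellama for code, gemma for general chat) are just examples from Ollama's library, so substitute whatever you have pulled:

      ```python
      # Sketch: routing the same Ollama API call to task-specific models.
      # Model names are examples from Ollama's library; use what you have pulled.
      import requests

      def ask(model: str, prompt: str) -> str:
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={"model": model, "prompt": prompt, "stream": False},
              timeout=600,
          )
          resp.raise_for_status()
          return resp.json()["response"]

      print(ask("codellama", "Write a Python one-liner that reverses a string."))
      print(ask("gemma", "Give me three pros and cons of self-hosting an LLM."))
      ```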