What is this? (It's OC!)

List of icons/services suggested:

  • Calibre
  • Jitsi
  • Kiwix
  • Monero (Node)
  • Nextcloud
  • Pi-hole
  • Ollama (should at least be able to run TinyLlama 1.1B)
  • OpenMediaVault
  • Syncthing
  • VLC Media Player (media server)
  • brucethemoose@lemmy.world

    Here's a tip: most software has the model's default context size set to 512, 2048, or 4096. Part of what makes Llama 3.1 so special is that it was trained with 128K context, so bump that up to 131072 in the settings so it isn't recalculating context every few minutes…

    Some caveats: this massively increases memory usage (unless you quantize the cache with FA), and it also massively slows down CPU generation once the context gets long.

    TBH, just don't keep a long chat history unless you need it.
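
    For example, with Ollama (listed above) the context window can be raised per request via the num_ctx option. A minimal sketch, assuming a local Ollama server with a Llama 3.1 model already pulled; the model name and prompt are placeholders:

    ```python
    import requests

    # Ask a local Ollama server (default port 11434) to use the full
    # 128K context window instead of the small default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1",             # assumes `ollama pull llama3.1` was run
            "prompt": "Summarize my notes: ...",
            "stream": False,
            "options": {"num_ctx": 131072},  # context length in tokens (128K)
        },
        timeout=600,
    )
    print(resp.json()["response"])
    # Note: flash attention / KV-cache quantization, if your build supports it,
    # is configured on the server side rather than per request.
    ```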

    • Smokeydope@lemmy.world (OP)

      Thank you, that's useful to know. In your opinion, what context size is the sweet spot for Llama 3.1 8B and similar models?

      • brucethemoose@lemmy.world

        Oh, I got you mixed up with the other commenter, apologies.

        I'm not sure when Llama 8B starts to degrade at long context, but I want to say it's well before 128K, which is where other “long context” models start to look much more attractive, depending on the task. Right now I am testing Amazon's Mistral finetune, and it seems to be much better than Nemo or Llama 3.1 out that far.

      • brucethemoose@lemmy.world

        > 4-core i7, 16GB RAM, and no GPU yet

        Honestly, as small as you can manage.

        Again, you will get much better speeds out of “extreme” MoE models like DeepSeek V2 Lite Chat: https://huggingface.co/YorkieOH10/DeepSeek-V2-Lite-Chat-Q4_K_M-GGUF/tree/main

        Another thing I'd recommend is running kobold.cpp instead of Ollama if you want to get into the nitty-gritty of LLMs. It's more customizable and (ultimately) faster on more hardware.
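
        If you do try kobold.cpp, it exposes a KoboldAI-style HTTP API once the server is running. A rough sketch of querying it from Python, assuming the default port 5001; the prompt and sampler settings are only illustrative:

        ```python
        import requests

        # kobold.cpp serves a KoboldAI-compatible endpoint, by default on port 5001.
        # Start the server first, pointing it at a downloaded GGUF file.
        payload = {
            "prompt": "Explain what a Mixture-of-Experts (MoE) model is in one paragraph.",
            "max_length": 200,   # tokens to generate
            "temperature": 0.7,  # illustrative sampler setting
        }
        resp = requests.post(
            "http://localhost:5001/api/v1/generate", json=payload, timeout=300
        )
        print(resp.json()["results"][0]["text"])
        ```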

        • Smokeydope@lemmy.world (OP)

          That's good info for low-spec laptops. Thanks for the software recommendation; I need to do some more research on the model you suggested. I think you confused me for the other guy, though. I'm currently working with a six-core Ryzen 2600 CPU and an RX 580 GPU. Edit: no worries, we are good, it was still great info for the ThinkPad users!

          • brucethemoose@lemmy.world

            8GB or 4GB?

            Yeah, you should get kobold.cpp's ROCm fork working if you can manage it; otherwise use their Vulkan build.

            Llama 8B at shorter context is probably good for your machine: it can fit entirely on the 8GB GPU, or at least be partially offloaded if it's the 4GB one.
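
            For what partial offload looks like in practice, here is a minimal llama-cpp-python sketch (my own illustration, not the commenter's setup; kobold.cpp exposes the same idea as a GPU-layers setting, and the model path is only a placeholder):

            ```python
            from llama_cpp import Llama

            # Partially offload a Llama 3.1 8B GGUF: push as many layers as
            # fit in VRAM onto the GPU and leave the rest on the CPU.
            llm = Llama(
                model_path="Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",  # illustrative path
                n_ctx=8192,       # shorter context keeps memory usage down
                n_gpu_layers=20,  # tune until it fits; -1 offloads every layer
            )
            out = llm("Q: What does Syncthing do?\nA:", max_tokens=100)
            print(out["choices"][0]["text"])
            ```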

            I wouldn't recommend DeepSeek for your machine. It's a better fit for older CPUs: it's not as smart as Llama 8B, and it's bigger than Llama 8B, but it runs super fast because it's an MoE.