Code Llama 70b on text-generation-webui?

I was wondering if anyone here has gotten Code Llama 70b running, or knows of any guides/tutorials on how to do so. I tried setting it up myself with a quantized version, and it was able to load, but I think I must have misconfigured it, since I only got nonsensical results. One thing I definitely don’t understand is the prompt templates; did they change those for 70b? Also, if this type of post isn’t allowed or is off topic, please let me know; I have never posted in this sublemmy before.
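From what I can tell, the 70b Instruct variant does use a new “Source: … <step>” style template rather than the [INST] format of the smaller Code Llama models, which could explain the nonsense output if the wrong template is applied. A minimal sketch of the rendered prompt under that assumption (hypothetical helper; verify the exact spacing against the tokenizer’s chat_template on the model card):

```python
# Hypothetical helper sketching the Code Llama 70b *Instruct* chat format
# ("Source:/<step>" style); smaller Code Llama sizes use the older [INST]
# format. Check the exact whitespace against the tokenizer's chat_template.

def build_prompt(system: str, user: str) -> str:
    return (
        f"Source: system\n\n {system.strip()} <step> "
        f"Source: user\n\n {user.strip()} <step> "
        "Source: assistant\nDestination: user\n\n "
    )

print(build_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
))
```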

  • noneabove1182@sh.itjust.works · 9 months ago

    If you’re using text-generation-webui, there’s a bug where, if your max new tokens is equal to your prompt truncation length, it removes the entire input and therefore just generates nonsense, since there’s no prompt left.
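    A minimal sketch of the arithmetic behind the bug, with a hypothetical function name (this is not text-generation-webui’s actual code):

    ```python
    # Hypothetical illustration: the webui reserves max_new_tokens out of
    # the context window, and truncates the prompt to whatever is left.

    def prompt_tokens_kept(truncation_length: int, max_new_tokens: int) -> int:
        """Prompt tokens that survive truncation before generation starts."""
        return max(truncation_length - max_new_tokens, 0)

    print(prompt_tokens_kept(4096, 4096))  # 0 -> whole prompt dropped, output is nonsense
    print(prompt_tokens_kept(4096, 512))   # 3584 -> prompt actually reaches the model
    ```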

    Reduce your max new tokens and your prompt should actually get passed to the backend. This is more noticeable in models with only 4k context, since a lot of people default max new tokens to 4k.
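    One way to pick a value that leaves room for the prompt, as a sketch (hypothetical helper, not a built-in webui setting):

    ```python
    # Hypothetical helper: cap max_new_tokens so the full prompt still
    # fits inside the truncation window.

    def safe_max_new_tokens(truncation_length: int, prompt_tokens: int,
                            desired_new_tokens: int) -> int:
        return max(min(desired_new_tokens, truncation_length - prompt_tokens), 1)

    # 4k context with a 3,000-token prompt: asking for 4k new tokens is too much.
    print(safe_max_new_tokens(4096, 3000, 4096))  # 1096
    ```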