koboldcpp Runpod Template Sep 2023

Read https://rentry.co/uvyqd for a general introduction to Runpod. As for this setup, it is as simple as picking a GPU and letting the template bake for you.

Tested to work with an A6000 as of 11/09/2023. This template uses base Llama 2 70B by default. Hopefully this will streamline the setup process for (You).

By the nature of public templates, I get Runpod credits off people's runtime on it. If you don't feel like using the template, you may copy every config over and make your own template; it should still work.


How to

Link to the template:
https://runpod.io/gsc?template=j1gkrd9xwf&ref=7r5qq208

  1. Since it's a 70B model, pick an A6000 (Previous Gen) or better to start a pod. I recommend either Secure Cloud (Spot) or Community Cloud (On-demand)
  2. (Optional): In Customize Deployment -> click Environment Variables to expand. You may adjust settings like banned tokens (banned_tokens), context length (ctx: 4096/6144/8192), or paste a download link to an alternative model (model; see Alternative Models). Adjust the Volume Disk size for models larger than 70B Q4_K_M. The sketch after this list shows roughly how these variables map to koboldcpp's launch flags
  3. Run the template and wait for the model to download in the background. Disk usage is a good indicator of progress; it should sit at about 97% when the download finishes. If the download gets interrupted, just start the pod again and it will resume where it left off (the sketch under Alternative Models shows how resumable downloads work). The whole thing should take about 15-20 minutes. Delete the pod and start over if it didn't work out
  4. The AI should be ready once you see GPU memory filling up after the download is done. Click Connect -> HTTP Service [Port 7860] to access the web interface.
  5. Once you're in Kobold Lite, go to Settings -> click the textbox next to Max Tokens and type 8192 (or 4096/6144 if you changed ctx)
  6. Adjust other settings, load your prompts from the club with Scenarios, etc.
  7. Enjoy! Note that unlike exllama, llama.cpp takes up to about a minute to process an unseen prompt the first time; your first generation will come through after that.
    Learn more: koboldcpp Wiki -> What is Smart Context?
  8. Stop and terminate the pod once you're done
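If you're curious what those Environment Variables actually do, here's a rough Python sketch of how model, ctx and rope_freq_base could be turned into a koboldcpp launch command inside the container. This is not the template's real startup script - the default path, the GPU layer count and the cuBLAS choice are my assumptions - but the flags themselves (--model, --contextsize, --ropeconfig, --usecublas, --gpulayers, --port) are real koboldcpp options.

```python
import os
import subprocess

# Read the template's environment variables (model, ctx, rope_freq_base
# are documented above; the defaults here are illustrative assumptions).
model = os.environ.get("model", "/workspace/model.gguf")
ctx = os.environ.get("ctx", "4096")
rope_freq_base = os.environ.get("rope_freq_base")  # only set for 12k/16k

cmd = [
    "python", "koboldcpp.py",
    "--model", model,
    "--contextsize", ctx,
    "--usecublas",        # offload through cuBLAS on the pod's GPU
    "--gpulayers", "83",  # assumption: enough layers to fully offload 70B on an A6000
    "--port", "7860",     # matches the HTTP Service port in step 4
]
if rope_freq_base:
    # --ropeconfig takes <freq-scale> <freq-base>
    cmd += ["--ropeconfig", "1.0", rope_freq_base]

subprocess.run(cmd, check=True)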

Alternative models

Other models you may try:

Chronos-70B v2

https://huggingface.co/TheBloke/Chronos-70B-v2-GGUF/resolve/main/chronos-70b-v2.Q4_K_M.gguf

Airoboros L2 70B 2.1 Creative

https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGML/resolve/main/airoboros-l2-70b-2.1-creative.ggmlv3.Q4_K_M.bin
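If you'd rather pull one of these files onto the pod's volume yourself, or just want to see why an interrupted download can resume (step 3), here's a small Python sketch using HTTP range requests. The destination path is made up and this is not the template's actual download script; wget -c <url> achieves the same thing.

```python
import os
import requests

url = ("https://huggingface.co/TheBloke/Chronos-70B-v2-GGUF/"
       "resolve/main/chronos-70b-v2.Q4_K_M.gguf")
dest = "/workspace/model.gguf"  # hypothetical path on the pod's volume

# Resume from wherever the previous attempt stopped.
done = os.path.getsize(dest) if os.path.exists(dest) else 0
headers = {"Range": f"bytes={done}-"} if done else {}

with requests.get(url, headers=headers, stream=True) as r:
    r.raise_for_status()
    # 206 means the server honoured the range, so append; otherwise start over.
    mode = "ab" if done and r.status_code == 206 else "wb"
    with open(dest, mode) as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)
```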


FAQ

Why koboldcpp and not exllama?

Because I run a similar setup with a 6-bit quant, which doesn't exist in exllama. Generation quality outweighs everything for me.

Why would I ever want less than 8k context? What about 12k or 16k?

Llama 2 was natively trained on 4k context, and context extension techniques introduce some quality loss. If you have short stories that won't make use of 8k anyway, consider using 4k or 6k if you're being paranoid like me.

I haven't tested 12k or 16k because I don't have stories that long yet. You may try them if you want by adding rope_freq_base under Environment Variables and changing ctx to 12288 or 16384, etc.
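If you want a starting value for rope_freq_base, a commonly cited rule of thumb is the NTK-aware scaling formula base * alpha^(d / (d - 2)), where d = 128 is the head dimension of Llama 2 and alpha is how far you stretch past the native 4k. This is only a rough guide, not necessarily what koboldcpp computes internally (it may apply its own automatic RoPE scaling), but the little Python sketch below prints candidate values:

```python
# Rough NTK-aware RoPE rule of thumb, not the template's exact logic:
# scaled_base = base * alpha ** (head_dim / (head_dim - 2))
BASE = 10000.0      # Llama 2's default rope_freq_base
HEAD_DIM = 128      # head dimension for Llama 2 70B
NATIVE_CTX = 4096   # context length Llama 2 was trained on

def suggested_rope_freq_base(target_ctx: int) -> float:
    alpha = target_ctx / NATIVE_CTX  # how far we stretch the context
    return BASE * alpha ** (HEAD_DIM / (HEAD_DIM - 2))

for ctx in (8192, 12288, 16384):
    print(ctx, round(suggested_rope_freq_base(ctx)))
```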
