Barebones Local /aids/ Guide - May 3rd Edition
(Disclaimer: I'm a retarded techlet with a 3060 and a 90's kid's knowledge of computers.
That being said, if I can get this shit running, then you'll have no excuse!)
Introduction
I found our current rentry link hard to follow as a total newbie. (For example, it links a fucking 8x7B in the tl;dr. Seriously?!) There are lots of options and recommendations, but not much information on actually getting a specific setup working. I'm going to try to fill that niche by recommending a specific program, specific models, and a specific set of sampler settings that you can mindlessly throw together and get something out of. I'm not saying it's the best setup, just the one I found easiest to get going right away.
I'm running a GeForce RTX 3060 with 12 gigs of VRAM and 16 gigs of DDR4. I think that's a fairly typical setup for a budget gaming PC. If you want to see how the models will run on your computer, use this tool: LLM Model VRAM Calculator
This guide assumes you're running an Nvidia card and Windows, but you can probably tweak it to work for AMD/Linux/Mac. If you have no idea what you're doing, I hope this guide helps. If you're more knowledgeable than I am, I encourage you to PLEASE write a better guide and deprecate mine.
Cool? Cool.
Downloads
Program:
KoboldCpp.exe - There are other programs like llama.cpp and oobabooga, but I haven't downloaded that shit, so I have no idea how they work. This is a simple exe that you can just double-click and use.
Note: I downloaded the default one for Windows (with CUDA), version 1.64.
Models:
Go to the Files and Versions tab and download whichever .gguf best fits in your GPU. You only need one file. VRAM Calculator: (linked again.)
Fimbulvetr - This is the one the other two are (at least partially) built off of. It's good at roleplay and storytelling, but not as eager to get lewd.
Moistral - This is basically Fimbulvetr's dumber, hornier sister. Seems popular, and from my testing it's got decent prose.
Silver Sun - This is a merge of a merge that's good for uncensored storytelling. Less popular than the other two, but seems quite good from my limited use.
Note: The 3 models above are all 11B and use Alpaca formatting, so the guide works the same regardless of your choice. I downloaded the Q6 version, but you can use the calculator to figure out which one fits your memory best. Higher quants give better results at the cost of file size and VRAM usage; from what I've read, Q4 is about where the models start getting noticeably dumber. If you want the rough math, see the sketch below.
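If you'd rather eyeball it than open the calculator, here's a back-of-envelope Python sketch of the same idea. The bits-per-weight numbers are rough approximations, not the calculator's exact formula:

    # Very rough GGUF size estimate: parameters * bits-per-weight / 8.
    # Real quants mix block types, so actual files differ by a few percent,
    # and you still need headroom for context (the KV cache) on top of this.
    bits_per_weight = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5}

    def file_size_gb(params_billion, quant):
        return params_billion * bits_per_weight[quant] / 8

    for q in bits_per_weight:
        print(f"11B at {q}: ~{file_size_gb(11, q):.1f} GB")
    # Q6_K lands around 9 GB, which is why an 11B Q6 fits on a 12 GB card
    # with all layers offloaded and 4k context.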
Prompts:
The Club - If you're on /aids/, you know this site. Most prompts are written in NAI's format, where you get the start of the story plus some background/lore in the memory section, with a bunch of [ square bracket ] stuff we don't need. Since Kayra's a 13B model, chances are stuff that works there will work for our 11B models.
The Chub - If you're familiar with chatbots, then you know this site. Cards are written for instruct/chat purposes, which the models I linked can follow, but you can also convert them into prompts if you prefer to write stories. Our models probably aren't smart enough to handle the fancy formatting and complicated rules of some of these cards.
Note: You can just type whatever into Kobold and generate a story once you've loaded a model. I mean, I think this is obvious, but I wanted to make sure nobody assumed an external prompt was necessary.
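For reference, since all three models use Alpaca formatting, an instruct-style prompt boils down to something like this (KoboldAI Lite's Alpaca preset builds it for you, and the exact wording varies a little between finetunes):

    ### Instruction:
    {your request or scenario goes here}

    ### Response:
    {the model writes from here}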
Getting it Working
- Make a folder somewhere, preferably outside the Windows-controlled folders (Program Files and the like). If you want to follow how I did it, I just made one on my desktop.
- Drag kobold and your models into the folder. Maybe make another folder for the models, to keep things organized?
- Double-click koboldcpp.exe to bring up the launcher, and tweak a few settings (assuming you're using at least a 3060 like me):
- Quick Launch tab settings:
- Under presets, "Use CuBLAS"
- Everything's unticked EXCEPT "Launch Browser" and "Use ContextShift"
- 49 GPU layers
- Context Size: 4096 is the native amount, but it can be RoPE-scaled higher at the risk of some brain damage. Again, the calculator can tell you how much context your GPU can fit.
- Hardware tab settings:
- I enabled "High Priority"
- Back on the Quick Launch tab:
- Load the .gguf model you downloaded and hit Launch. (It should use your default browser; I set mine to Chrome for compatibility. A firewall popup appears on first launch, but I didn't see the need to allow it through the firewall since everything runs locally.) If you'd rather skip the GUI entirely, see the command-line note after this list.
- Under KoboldAI Lite's Settings tab:
- Quick Presets: I got decent results out of the box with Pro Writer 13b, though Fimbulvetr's model card recommends SillyTavern's "Universal Light" preset. I tried replicating those sliders in Kobold; there's a screenshot in the Screenies section if you want to copy it.
- Format: Any of them work. If you use "Instruct Mode," then you'll want to use the Alpaca preset.
- All done.
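Side note: once you know what settings you want, you can pass them as command-line flags and skip the launcher. Something like the line below should match the setup above (this is from my memory of version 1.64, so run koboldcpp.exe --help to double-check the flag names):

    koboldcpp.exe --model models\your-model.Q6_K.gguf --usecublas --gpulayers 49 --contextsize 4096 --highpriority --launch

That's the same CuBLAS + 49 layers + 4096 context + high priority setup, with --launch opening the browser for you.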
Screenies
Here's how my settings look in the launcher:
Here's my attempt at replicating "Universal Light" as a custom preset:
Conclusion
I consider this guide a provisional measure until someone more tech-savvy steps up. As of the time of writing (May 3rd), there haven't really been any impressive Llama 3 8B finetunes. I imagine a better guide will be made once that happens. For now, please enjoy these frankenmerges, presented by a techlet who barely knows the difference between RAM and VRAM.
Feel free to blast me in the threads if you see any errors; I'll probably be lurking even if I don't reply. As far as updates go, I don't think I'll change much. If this becomes too outdated, I'll just delete it. If you disagree about the chosen model/program/settings, write your own guide and it'll likely be better than mine.
I might add a section on SillyTavern if I decide to start playing with chatbots, but it's pretty easy to figure out yourselves, I think: just use "Text Completion" as your API, KoboldCpp as your API type, and paste your URL into the API URL field (default is http://localhost:5001/). Then slap on the Universal Light preset and Alpaca formatting. Boom, bam, done.
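If you want to sanity-check that KoboldCpp's API is actually reachable before pointing SillyTavern at it, a quick Python script like this should do it. It hits the standard KoboldAI generate endpoint that KoboldCpp serves (adjust the port if you changed the default):

    # Minimal check against a running KoboldCpp instance (default port 5001).
    # Standard library only, so there's nothing to pip install.
    import json
    import urllib.request

    payload = {"prompt": "Once upon a time,", "max_length": 40}
    req = urllib.request.Request(
        "http://localhost:5001/api/v1/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
    print(result["results"][0]["text"])  # the model's continuation

If that prints a continuation, SillyTavern will connect fine.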
Edit History:
May 3rd: Initial version.
May 10th: Shamed into adding a bit to the "Models" section, specifying that you don't have to download all the files. Just a single .gguf.