SillyTavern Guide for noobs and weebs (like me, hello)
Disclaimer: I'm a regular laptop user without a GPU (16GB RAM, Ryzen 9 5900X, 8 cores). I've never tried hosting my own models yet.
This is also not a specific guide to setting up image generation for ST.
This is just me sharing my experience getting SillyTavern to run before anything else ><
DOCS & GUIDES
Official one, easiest to follow 👉 https://docs.sillytavern.app/installation/windows/
I suggest you use GitHub Desktop to pull SillyTavern. It has a GUI, way easier than the command prompt.
Make sure you're logged into your GitHub account in a browser.
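If you'd rather skip GitHub Desktop, cloning from a plain command prompt looks roughly like this (just a sketch based on the official docs linked above; assumes Git for Windows from section A is installed, and that the release branch is the stable one you want):
REM download the stable (release) branch of SillyTavern into the current folder
git clone https://github.com/SillyTavern/SillyTavern -b release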
Noob-friendly tutorials:
- How to Github https://www.youtube.com/watch?v=CAwStH0ay-M
- Basic install and explanations https://www.youtube.com/watch?v=yt2FWJt9h1U
- Kobold https://www.youtube.com/watch?v=lPj97Su_HOI
- Memory for chats (later) https://www.youtube.com/watch?v=wOVZ67HuL1Q
- SillyTavern for Android via Termux (only if you're interested haha; get Termux via APK outside of the Play Store) https://www.youtube.com/watch?v=NtltHQN3QQo
A. DEPENDENCIES
When in doubt, use the exe and msi installers (´・ω・`) GitHub is a cooler way to quickly catch up with new releases.
An approximate list of what you may need (official sites: press the big shiny Download button at the top of the page):
- Python, latest install: https://www.python.org/downloads/
- Node JS https://nodejs.org/en
- Git for Windows https://gitforwindows.org/
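Once those three are installed, you can sanity-check that they all landed on your PATH from a Command Prompt; each should print a version number instead of an error:
REM these only confirm the installs are visible to the command line
python --version
node --version
git --version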
B. COMMAND PROMPT PART
LOCAL MODEL DEPENDENCIES (I use Ollama for an R1 distill and the mxbai embedding model, which is mostly for my Qvink / vector summaries. You local LLM users may be able to use it for cooler things.)
- Get Ollama from https://ollama.com/download and just install it as is. To run it, use PowerShell.
- Two useful commands (there's a pull example right after this list):
ollama --help (to see all commands)
ollama pull [MODEL ID NAME] (to download a model)
- ⚠️ Kobold (KoboldCpp), the thingy for running your own local server to host uncensored models: https://github.com/YellowRoseCx/koboldcpp-rocm/releases . This one is the fork for fancy AMD GPUs (ROCm). You need this side by side with Ollama. (Btw, I didn't bother setting this up yet because I got stuck with a CUDA incompatibility for some reason, so this is uncharted territory. But try your luck! Follow the Kobold tutorial above.)
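For example, pulling the two kinds of models I mentioned above looks something like this in PowerShell (the exact model IDs and size tags here are my guesses, double-check the real names on ollama.com before pulling):
# an R1 distill; pick a size tag your RAM can actually handle
ollama pull deepseek-r1:8b
# embedding model used for vector summaries
ollama pull mxbai-embed-large
# list what's installed locally
ollama list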
C. ACTUAL SILLYTAVERN SETUP
- Copy your SillyTavern folder address from File Explorer. (From here you can actually just double-click start.bat in THAT folder instead of doing the rest of these steps lol)
- Run Command Prompt (DON'T use admin access!!!)
- Navigate to the folder by typing
cd [ADDRESS HERE]
without the brackets. The address looks like D:\bla\bla\SillyTavern or something.
- Once it moves directories, type start.bat.
- BE PATIENT! It will open your browser once it's done and will show
localhost:8000
or similar in the URL bar (that's how you open it, by the way!)
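So the whole Command Prompt part in one go looks something like this (the folder path is made up, paste your real one):
REM move into the SillyTavern folder
cd D:\bla\bla\SillyTavern
REM first launch takes a while (it installs dependencies), then your browser opens
start.bat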
If you see the ST interface, congrats! You're in. Now customize that baby with some vanilla plugins in Extensions (7th tab): "Download Extensions & Assets", click the red button, pick what you want :D
There's probably a tutorial on how to change the skin of ST's windows, but you can do it later.
HOW TO USE API KEYS (2ND TAB)
It's the same for most websites like OpenRouter, Ollama, Featherless... there's always a "keys" menu somewhere. Generate your API key, then copy it. Then you gotta set it up in ST.
Basic setup
API: Chat Completion (usually.)
Completion Source: DEPENDS ON PROVIDER! E.g. OpenRouter, DeepSeek, OpenAI, Google AI Studio, or Custom* or KoboldAI**. Apart from Kobold, these are basically your no-server model hosting providers; for some you pay, for others you don't. OpenRouter and AI Studio are free with some request limits per day, read up on it later.
Custom Endpoint (Base URL)*: DEPENDS ON PROVIDER! They always look something like https://openrouter.ai/api/v1/ , https://api.featherless.ai/v1 , https://api.arliai.com/v1/models . Check each provider's documentation.
API key: Paste your key here
Model ID: Always something like meta-llama/Meta-Llama-3.1-8B-Instruct, deepseek/deepseek-chat-v3-0324:free, etc.
Click "Connect", then "Test Message" just in case. If errors show up, Google them or ask (there's also a quick curl check right after this list).
*Usually for custom places like Hugging Face, Featherless, or Arli AI, afaik.
**Only when you start hosting by yourself.
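If you want to double-check a key outside of ST, you can poke the provider's API directly from Command Prompt. Here's a sketch using OpenRouter's models endpoint (swap in your own base URL and key; most OpenAI-compatible providers expose a similar /models route, but that's an assumption, check their docs):
REM should return a JSON list of models if the key and URL are good (replace YOUR_API_KEY)
curl https://openrouter.ai/api/v1/models -H "Authorization: Bearer YOUR_API_KEY"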
IMPORT CHARACTER CARDS (RIGHT MOST TAB)
Download some character cards and import them. chub.ai, or searching for people's presets on Reddit, is a good place to start. Load them in via the Character Card menu, the right-most tab if you're still using default ST.
If you're a former CAI user, use the CAI Tools browser extension to download your old chats and "steal" people's characters lol. Not gonna detail how to use it; it's kind of plug and play for most cases.
IMPORT LOREBOOKS (4TH TAB)
Lowkey helps later with narrative style. These were my chosen starters; pick ONE at a time. https://chub.ai/lorebooks/mochacow/narration-styles-5620015c7698
RESPONSE FORMATTING (3RD TAB)
This affects how the AI reads your prompts. A lot. So depending on your model, you can find the right preset for it. :3 I happen to use this R1 thingy under System Prompt: https://rentry.co/5mrgx5fn (sorry, I forgot the OP's thread)
SAMPLER SETTINGS (1ST TAB) and REGEX (7TH TAB > REGEX)
If you end up using OpenRouter, you gotta modify your sampler settings so you don't get blacklisted early lmao. DeepSeek 0324 is the chillest, most unhinged one I've talked to so far, bro's like CAI without the age restriction.
Read their guides. Some could use an update:
https://rentry.org/cherrybox#install
https://pixibots.neocities.org/#prompts/minnie | https://rentry.org/AiBrainPresets | rentry.co/xo-nara
https://huggingface.co/Sukino/SillyTavern-Settings-and-Presets
My uneducated guess is that finetuned models are based on standard models anyway, so having a few starter settings *may* benefit you in the long run and teach you how to set one up yourself, like which parameters and base prompts to write in.
You can probably read up on sampling in other people's threads in this subreddit, just search "temp", "min P", "penalty", etc.
https://www.reddit.com/r/SillyTavernAI/comments/1aonyir/dynamic_temperature_minp_and_smoothing/
https://www.reddit.com/r/SillyTavernAI/comments/1846it5/which_are_the_best_settings_for_temperature/