Sukino's Findings: A Practical Index to AI Roleplay
Finding learning resources for AI roleplaying can be tricky, as most of them are hidden away in Reddit threads, Neocities pages, Discord chats, and Rentry notes. The scene has a lovely Web 1.0, pre-social-media vibe: nothing is really indexed or centralized, and there's always something cool buried somewhere you haven't discovered yet.
To make things a little easier, I've compiled a list of interesting, up-to-date information about it. Think of it as a crash course to help you get a modern AI roleplaying setup, understand how everything works and where to find things.
Want to know more? Check out my Guides page, where I share botmaking tips and little quality-of-life things I have discovered. If you have any feedback, want to talk, make a request, or share something, reach me at [email protected] or @sukinocreates on Discord.
Latest Updates:
2025-04-14 — Still slowly building the Image Generation section, but today I updated the If You Want to Use an Online AI section with the main providers and new information, and the Local LLM/Models section with more recommended models.
2025-04-10 — For the last few weeks I've been making small additions to the index as I come across something interesting, but I haven't documented any of them, sorry. There are new things in several sections. Highlights for Lawliot's Local LLM Testing (for AMD GPUs) and Baratan's Language Model Creative Writing Scoring Index on the LLM section, the SpookySkelly's The Graveyard index, and more presets I found and people recommended. Today I am also starting an Image Generation section, which I will slowly build up over the next few days.
2025-03-22 — I made conversions of Deepseek presets for Text Completion connections, check them out. Added a warning that Gemini has gotten worse at roleplaying lately.
2025-03-20 — Added debased-AI and theatreJB presets. Added WyvernChat to character card providers.
2025-03-18 — The Local LLM/Models section has been reworked with a new tool, but the information itself is the same.
Getting Started
Picking an Interface
First, you will need a frontend, the interface where the roleplaying takes place and where your characters live.
I will only recommend solutions that are open source, private, secure, well maintained, and don't lock you into a closed ecosystem. So if you've heard of a service that's not listed here, it's probably because it doesn't meet these criteria.
- Install SillyTavern: Repository · How to Install · Easier Guide — SillyTavern is the go-to frontend for AI roleplay. While there are alternatives, it’s the most feature-rich, actively developed, and customizable option, with broad system support and a strong community. It runs on Windows, Linux, Mac, Android, and Docker. iOS users, check the workaround below.
- Access SillyTavern Remotely Via Tailscale: How to Install · Easier Guide — Tailscale creates a secure, private tunnel between your devices, like a LAN, but over the Internet. This allows you to host SillyTavern on one device and access it from any other, anywhere with an Internet connection. You can even share it with your friends. It's the best way to keep it in sync between your PC and phone, and practically the only way to use it on iOS (as long as you have an always-on device to host it on, like a PC, old phone, Raspberry Pi, or home server). If you're tech-savvy, you can also rent an inexpensive VPS to run it remotely; it's pretty lightweight.
- Or Use Agnastic: Official Page — If you can’t install SillyTavern or just want a simpler online option, Agnastic is shaping up to be a solid alternative. It’s free, runs entirely in your browser, and doesn’t require an account. It even includes some free models to try out—though better free options are covered in the next section, so don’t choose it just for that.
- or RisuAI: Official Page — Another online alternative. Has a different set of features than Agnastic, and some users find the UI more friendly, so it may be more to your liking.
There's nothing stopping you from starting with these online frontends and later migrating to SillyTavern if you feel the need for a more complete solution. Just keep in mind that you'll miss out on most of the modern and advanced features, and that most of the content and setups you find online won't apply to you.
Throughout this guide, I'll assume you're using SillyTavern, but the instructions should be easily applicable to the alternatives—you'll just need to look for the equivalent options.
Setting Up an AI Model
If You Want to Run an AI Locally
It's uncensored, free, and private. Requires a computer or server with a dedicated GPU, or a Mac with an M-series chip. If you don't know whether you have a dedicated GPU, Google it or ask ChatGPT how to check on your system.
There are two main local AI model formats to pick from, GGUF and EXL2. If you don't have a preference yet, go with GGUFs: they are easier to find, easier to use, and come in more sizes to fit different amounts of memory.
You'll need a backend, the program that runs your AI models and connects to your frontend via a local API (there's a small sketch after this list showing what that connection looks like). Choose one, then go pick a model and a suitable preset.
- KoboldCPP: Repository · Configuration Guide — Runs GGUF models. Don't know what to pick? Go with this one. Designed with roleplaying in mind, so it has some exclusive features for us roleplayers that will come up later in the guide. Comes with its own roleplaying frontend that you can use if you want to, but you don't have to interact with it. Read the notes on the release page to know which version you need to download.
- TabbyAPI: Repository · Installation Guide — Runs EXL2 models. Will probably be the most performant option if you have enough VRAM to run everything smoothly.
- LM Studio: Official Page — Runs GGUF models. Pretty barebones, but it has its fans for how easy it is to use and for being able to download and manage models within its UI.
- TextGen WebUI/Oobabooga: Repository · Installation Guide — Runs GGUF and EXL2 models. The most versatile option; its strength is having the best integrated UI for chatting with the AI model.
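To make the "local API" part concrete: once a backend like KoboldCPP is running, your frontend just sends HTTP requests to it on your own machine. Below is a minimal sketch of that traffic in Python, assuming KoboldCPP's default address of http://localhost:5001 and its KoboldAI-style endpoints; treat the exact port, paths, and fields as assumptions to check against your own setup rather than a reference.

```python
import requests

# KoboldCPP usually listens on http://localhost:5001 by default; adjust if you changed it.
BACKEND = "http://localhost:5001"

# Ask the backend which model it currently has loaded (KoboldAI-style API).
info = requests.get(f"{BACKEND}/api/v1/model", timeout=30)
print(info.json())

# Send a raw text-completion request, the same kind of call a frontend makes behind the scenes.
payload = {
    "prompt": "You are a narrator. Describe a quiet tavern in one sentence.",
    "max_length": 80,      # how many tokens to generate
    "temperature": 0.8,    # sampler settings travel along with every request
}
result = requests.post(f"{BACKEND}/api/v1/generate", json=payload, timeout=120)
print(result.json()["results"][0]["text"])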
If You Want to Use an Online AI
This is where censorship and privacy become an issue, as you will be sending everything to these services, and they can log your activity, block your requests, or ban you at will. Stay safe: use burner accounts if you feel it would be bad to have your sessions tied to your name, and be careful not to accidentally send sensitive information, as most of the time your data will be used to train new AI models.
Note that you are free to switch between AIs during a roleplaying session, so even if you reach the limits of these APIs or they become too expensive, you can simply use another model for a while.
You'll need a service that provides the AI model of your choice and an API key to connect to it with your frontend. Choose a service and go pick a suitable preset.
- Free Options: These change all the time, but I will try to keep this updated with the options I know of.
- Gemini on Google AI Studio: API Key · Rate Limits — There are several Gemini models, and they are updated frequently, so their quality for roleplaying is constantly changing. It has strict safety checks, so a good preset is essential, and you may still get refusals. Requires a Google account, and unless you're in the UK, Switzerland, or the EEA, your information will be collected and used for training purposes; well, it's Google, can't expect much else.
- Deepseek on OpenRouter: deepseek-r1:free · deepseek-chat-v3-0324:free · Rate Limits — R1 is the flagship thinking model, and V3 is the non-thinking model. Requires opting into data training. If your balance is under 10 credits (1 credit = 1 US dollar), you're limited to 50 requests/day. With 10+ credits, the limit jumps to 1000. This quota is shared across all models tagged :free, so while Deepseek is the top choice, you can try other models too. If you top up credits, be careful not to accidentally use paid models, or you'll need to add more. Some users report problems with the Chutes provider; if you run into repetition, incoherence, or swipes that return the same output, force OpenRouter to use a different provider.
- Mistral on La Plateforme: API Key · Rate Limits — Mistral Large 2411 is their best model. Requires opting into data training and may ask for phone number verification.
- Command on Cohere: API Key · Rate Limits — Command-A and Command-R+ 104B (not 08-2024) are their best models.
- Free LLM API Resources: List on Github — Consistently updated list of revendors offering access to free models via API. However, you cannot verify the real quality of the models; they may provide a very low-quality version to free users.
- KoboldAI Colab: Notebook on Google Colab — You can borrow a GPU for a few hours to run KoboldCPP on Google Colab. It's easier than it sounds: just fill in the fields with the desired GGUF model link and context size, and run. The borrowed GPUs are usually good enough to handle small models, from 8B to 12B, and sometimes even 24B if you're lucky and get a big GPU. Check the section on where to find local models to get an idea of which models are good.
- AI Horde: Official Page · FAQ — A crowdsourced solution that allows users to host models on their systems for anyone to use. The selection of models depends on what people are hosting at the time. It's free, but there are queues, and people hosting models get priority. By default, the host can't see your prompts, but the client is open source, so they could theoretically modify it to see and store them, though no identifying information (like your ID or IP) would be available to tie them back to you. Read their FAQ to be aware of any real risks.
- Paid Options: Most of these options work on a pay-per-request model, so the more you play, the more expensive it gets. Be careful with some services, they can quickly turn into a money sink.
- Corporate Models: Generally the smartest models we have today and will give you the best experience you can get.
- Claude: Official API · AWS API — State of the art, the best experience you can get, but way too expensive.
- Deepseek: Official API · Pricing — The economical option, with a few quirks. Use the official API; it's way cheaper than any other provider, with off-peak discounts and context caching, so you don't have to pay for tokens you've already sent, making long sessions really affordable. Caching is already enabled by default, but you can read the documentation to learn how it works.
- GPT: Official API · Azure API — The one everyone knows; not as good as Claude, better than Deepseek. Don't buy a ChatGPT subscription: it won't give you an API key, so it can't be used with AI roleplaying interfaces.
- Grok: Official API · Pricing
- Jamba: Official API · Pricing
- Revendors: There are vendors that provide you with access to the corporate models and some of the same AI models that people run locally, at every price point. The most famous is OpenRouter, but you can find alternatives if you shop around, including cheaper and subscription-based ones. Here are a few resources to help you find some of them, but these are by no means the only options out there, so do your research as well:
- A Primer on Model Hosts: https://rentry.org/modelhostprimers
- /aicg/ meta: https://rentry.org/aicg_meta — Comparison of how the different services/models perform in roleplay. Don't take this as gospel, they vary depending on the preset and bots you use, but it can help you set your expectations for what you can pay for.
Your model's provider/proxy isn't available via Chat Completion in your frontend?
You'll need to find out if they offer an OpenAI-compatible endpoint. Basically, it mimics the way OpenAI's ChatGPT connects, adding compatibility with almost any program that supports GPT itself. Check their documentation for an endpoint address; it should look something like https://api.provider.ai/v1. If they have one, select Custom (OpenAI-compatible) as your chat completion provider, and manually enter that address and your API key. If the model list loads, you are golden, just select the right model there.
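If you want to sanity-check an endpoint outside your frontend first, a short script like the one below can confirm that the address and key actually work. It's a minimal sketch: the base URL, key, and model id are placeholders (anything "provider.ai" is made up), and it only relies on the standard OpenAI-compatible /models and /chat/completions routes.

```python
import requests

# Hypothetical values; replace with your provider's endpoint address and your own key.
BASE_URL = "https://api.provider.ai/v1"   # the OpenAI-compatible base URL from their docs
API_KEY = "sk-your-key-here"

headers = {"Authorization": f"Bearer {API_KEY}"}

# 1. List the available models; if this works, the endpoint is OpenAI-compatible.
models = requests.get(f"{BASE_URL}/models", headers=headers, timeout=30)
models.raise_for_status()
print([m["id"] for m in models.json()["data"]])

# 2. Send a tiny chat completion request to one of the listed models.
payload = {
    "model": "some-model-id",             # pick an id printed above
    "messages": [{"role": "user", "content": "Say hi in one short sentence."}],
    "max_tokens": 32,
}
reply = requests.post(f"{BASE_URL}/chat/completions", headers=headers, json=payload, timeout=60)
reply.raise_for_status()
print(reply.json()["choices"][0]["message"]["content"])
```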
Where to Find Stuff
Chatbots/Character Cards
Chatbots, or simply bots, come as image files, or more rarely as JSON files, called character cards. The chatbot's definitions are embedded in the image's metadata, so never convert it to another format or resize it, or it will become a simple image. You simply import the character card into your roleplaying frontend and the bot will be configured automatically.
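If you're curious what "embedded in the image's metadata" actually means: card-sharing sites generally follow the community character card spec, which stores the definitions as a base64-encoded JSON blob inside a PNG text chunk, usually under a "chara" key. Here's a rough sketch of peeking inside a card with Python and Pillow, assuming a spec-compliant PNG card (the filename is just a placeholder); it also shows why resizing or converting the image strips the bot, since those operations discard the text chunk.

```python
import base64
import json

from PIL import Image  # pip install pillow

# Path to a character card you downloaded; purely illustrative.
card = Image.open("character_card.png")

# PNG text chunks show up in Pillow's .text dict; cards usually use the "chara" key.
raw = card.text.get("chara")
if raw is None:
    print("No embedded definitions found; this may not be a character card.")
else:
    data = json.loads(base64.b64decode(raw))
    # V2 cards nest the fields under "data"; older cards keep them at the top level.
    fields = data.get("data", data)
    print(fields.get("name"), "-", (fields.get("description") or "")[:200])
```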
- Chub AI: https://chub.ai/ — This is the primary hub for chatbot sharing, but it's overwhelmed with frustratingly low-quality bots. It's hard to find the good stuff without knowing who the good creators are. So, for a better experience, create an account and follow creators whose bots you enjoy.
- Chub Deslopfier: https://gist.github.com/khanonnie/b357f20bfe4e920d8e05fd47f1e6fa75 — Browser script that tries to detect and hide extremely low quality cards.
- Chatbots Webring: https://chatbots.neocities.org/ — A webring in 2025? Cool! Automated index of bots from multiple creators, pulled directly from their personal pages. Could be a great way to find interesting characters without drowning in pages of low-effort sexbots on Chub. I mean, if the creator went to the trouble of setting up a website to host their bots, they must be onto something, right?
- Anchored Bots: https://partyintheanchorhold.neocities.org/ — Consistently updated list of bots shared on 4chan without having to access 4chan at all, what a blessing.
- The meta list of various bot lists from different boards: https://rentry.org/meta_bot_list — More 4chan bots.
- WyvernChat: https://app.wyvern.chat/ — A strictly moderated bot repository that is gaining popularity.
- Character Tavern: https://character-tavern.com/ — Community-driven platform dedicated to creating and sharing AI Roleplay Character Cards.
- AI Character Cards: https://aicharactercards.com/ — Promises higher-quality cards through stricter moderation.
- RisuRealm Standalone: https://realm.risuai.net/ — Bots shared through the RisuRealm from RisuAI.
- JannyAI: https://jannyai.com/ — Archive of bots ripped from JanitorAI. If you are a migrating user, this may be of interest to you.
- PygmalionAI: https://pygmalion.chat/explore — Pygmalion isn't as big on the scene anymore, but they still host bots.
- Character Archive: https://char-archive.evulid.cc/ — Archived and mirrored cards from various sources. Can't find a bot you had, or one that was deleted? Look here.
- Chatlog Scraper: https://chatlogs.neocities.org/ — Want to read random people's funny/cool interactions with their bots? This site tries to scrape and catalog them.
Local LLM/Models
Figuring Out Which Models You Can Run
Want to run a model locally, but are confused by all those names and numbers? No worries! Here's a quick crash course, plus two tools that will help you find the perfect model. First, you just need to understand these four key concepts:
- Total VRAM is the memory available on your GPU, your graphics card. This is different from your regular RAM. If you don't know how much VRAM you have, or whether you have a dedicated GPU at all, Google it or ask ChatGPT how to check on your system.
- In roleplay, the Context Length is how many past messages the AI can hold in memory, measured in tokens, a unit somewhere between a syllable and a word. 8192 tokens is pretty good; users generally prefer 16384 for long roleplaying sessions, but you may need to choose a worse model to be able to fit everything in your GPU. An oversized context is useless if your model can't use all the information, so don't go beyond 16K for now, as most models that run on common domestic hardware can't use it effectively.
- Models have sizes, measured in billions of parameters and represented by a number followed by B. Larger model sizes are generally smarter, but not necessarily better at roleplaying, and require more memory to run. As a rule of thumb, a 12B model is smarter than an 8B model.
- Models are shared in various quantizations, or quants. The lower the number, the dumber the model gets, but the less memory you need to run it. The best balance between compatibility and intelligence for AI roleplaying purposes is a GGUF IQ4_XS (or Q4_K_S if there isn't one available), or an EXL2 between 4.0~4.5 bpw.
Simple, right? Total VRAM, context length, model sizes, and quants. Now we will use this information with one of these two calculators:
- https://sillytavernai.com/llm-model-vram-calculator/ — This tool is the easiest to use. Just enter your Total VRAM and desired Context Size, then click Load Models to see a list of compatible options. Once it loads, sort by Total VRAM and find the highest number followed by B—this indicates the largest model your hardware can run smoothly at IQ4_XS or Q4_K_S. For example, if your system can handle an 8B model, you can run basically any model in that size range or smaller. But I suggest that you choose a Default Recommendation below instead of the ones suggested by the calculator; their algorithm favors older models not fine-tuned for roleplaying, as they are more widely used and have had more time to gather reviews and downloads.
- https://smcleod.net/vram-estimator/ — If you are a bit more tech-savvy, this calculator is pretty self-explanatory and will let you find the perfect model size and quant for your system. Just adjust the values until the FP16 K/V Cache bar fits into your GPU's available VRAM.
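If you'd rather do the back-of-envelope math yourself, the rough idea behind these calculators is simple: the quantized weights take about (parameters × bits per weight ÷ 8) bytes, the context's KV cache adds more on top, and there's some fixed overhead. The sketch below uses made-up example numbers and a deliberately simplified KV-cache term, so treat the output as a ballpark figure, not a guarantee.

```python
def rough_vram_gb(params_b: float, bits_per_weight: float,
                  context: int, kv_gb_per_8k: float = 1.0,
                  overhead_gb: float = 1.0) -> float:
    """Very rough VRAM estimate: quantized weights + KV cache + overhead.

    params_b        model size in billions of parameters (the "12B" number)
    bits_per_weight ~4.25 for IQ4_XS / Q4-class GGUF quants, ~4.0-4.5 bpw for EXL2
    context         context length in tokens
    kv_gb_per_8k    assumed GB of KV cache per 8192 tokens (varies a lot by model)
    """
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    kv_cache_gb = context / 8192 * kv_gb_per_8k
    return weights_gb + kv_cache_gb + overhead_gb

# Example: a 12B model at ~4.25 bits per weight with 16384 tokens of context.
print(f"{rough_vram_gb(12, 4.25, 16384):.1f} GB")  # roughly 9-10 GB on these assumptions
```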
Default Recommendations
These are the most commonly recommended models as of 2025-04. They're not necessarily the freshest or my favorites, and there's no single best model for everyone, but they're tried and true. It's a good idea to test and keep a few models around for variety, as small local models can get repetitive over time and different models tend to have different flavors. Choose a model and go pick a suitable preset.
- 7B: SanjiWatsuki/Silicon-Maid-7B (Alpaca Instruct) — GGUF · EXL2
- 7B: SanjiWatsuki/Kunoichi-7B (Alpaca Instruct) — GGUF · EXL2
- 8B: Sao10K/L3-8B-Lunaris-v1 (Llama 3 Instruct) — GGUF · EXL2
- 8B: Sao10K/L3-8B-Stheno-v3.2 (Llama 3 Instruct) — GGUF · EXL2
- 12B: inflatebot/MN-12B-Mag-Mell-R1 (ChatML Instruct) — GGUF · EXL2
- 12B: LatitudeGames/Wayfarer-12B (ChatML Instruct) — GGUF · EXL2
- 12B: MarinaraSpaghetti/NemoMix-Unleashed-12B (Mistral V3 Instruct) — GGUF · EXL2
- 12B: TheDrummer/Rocinante-12B-v1.1 (ChatML Instruct) — GGUF · EXL2
- 22B: TheDrummer/Cydonia-22B-v1.2 (Metharme Instruct) — GGUF · EXL2
- 24B: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b (ChatML Instruct) — GGUF · EXL2
- 24B: TheDrummer/Cydonia-24B-v2.1 (Mistral V7 Instruct) — GGUF · EXL2
- 32B: Qwen/QwQ-32B (ChatML Instruct) (Thinking Model) — GGUF · EXL2
- 70B: Sao10K/70B-L3.3-Cirrus-x1 (Llama 3 Instruct) — GGUF · EXL2
- 70B: LatitudeGames/Wayfarer-Large-70B-Llama-3.3 (Llama 3 Instruct) — GGUF · EXL2
- 123B: TheDrummer/Behemoth-123B-v1.2 (Metharme Instruct)
- 123B: MarsupialAI/Monstral-123B (Metharme Instruct)
- Sukino's Banned Tokens: https://huggingface.co/Sukino/SillyTavern-Settings-and-Presets/blob/main/Banned%20Tokens.txt — This is a list of clichés and repetitive phrases to ban from your AI’s vocabulary using KoboldCPP's Anti-Slop feature. Try it out, it's easy to undo if you don't like it. Using this with other backends will mess up your AI responses instead. The list is still being updated, so check back from time to time.
Finding More Models
- HuggingFace: https://huggingface.co/models — This is where you actually download models from, but browsing through it is not very helpful if you don't know what to look for.
- Bartowski/mradermacher: https://huggingface.co/bartowski · https://huggingface.co/mradermacher — I don't know how they do it, but these two release GGUF quants of every slightly noteworthy model almost as soon as it comes out. Even if you don't use GGUF models, it's worth checking their profiles to see what new models are being released.
- Baratan's Language Model Creative Writing Scoring Index: https://github.com/Baratan-creates/-image-generation-tables#language-models — Models scored based on compliance, comprehension, coherence, creativity and realism.
- Lawliot's Local LLM Testing (for AMD GPUs): https://rentry.org/lawliot — Models tested on an RX6600, a card with 8GB of VRAM; valuable even for people with other GPUs, since they list each model's strengths and weaknesses.
- HobbyAnon's LLM Recommendations: https://chub.ai/users/hobbyanon — Curated list of models of multiple sizes and instruct templates, along with an easy-to-follow tutorial for getting started with KoboldCPP.
- SillyTavernAI Subreddit: https://www.reddit.com/r/SillyTavernAI/ — Want to find what models people are using lately? Do not start a new thread asking for them. Check the weekly Best Models/API Discussion, including the last few weeks, to see what people are testing and recommending. If you want to ask for a suggestion in the thread, say how much VRAM and RAM you have available, or the provider you want to use, and what your expectations are.
Presets, Prompts and Jailbreaks
Always use a good preset that is appropriate for your model of choice. They are also called prompts or jailbreaks, although this name can be a bit misleading as they are not just for making these AI models write smut and violence — the NSFW part is usually optional.
LLMs are first and foremost corporate-made assistants, so giving them well-structured instructions on how to roleplay and what the user generally expects from a roleplaying session is really beneficial to your experience. Each preset will play a little differently, based on the creator's preferences and the quirks they found in the models, so try different ones to see which is more to your liking.
Presets are listed by the model or instruct template with which they are compatible. If you're using a finetune and the instruct template isn't obvious from the model name alone, you can usually find that information on the model's original creator page.
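If you're wondering what an instruct template actually looks like under the hood: for Text Completion, the frontend wraps every turn of the chat in model-specific markers before sending it off. The sketch below assembles a ChatML-style prompt by hand, purely as an illustration; SillyTavern builds this for you automatically based on the template you select, and the exact wrapping differs for Alpaca, Llama 3, Mistral, Metharme, and the rest.

```python
def build_chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a ChatML-formatted prompt; 'turns' is a list of (role, text) pairs."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, text in turns:  # role is "user" or "assistant"
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # leave the last turn open for the model to complete
    return "\n".join(parts)

print(build_chatml_prompt(
    "You are Rin, a stoic swordswoman. Stay in character.",
    [("user", "*I push open the tavern door.* Mind if I sit here?")],
))
```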
Presets for Text Completion Models
To import these presets on SillyTavern, click on the AI Response Formatting button, the third one with an A in the top bar, and press the Master Import button on the top-right of the window. Make sure the ones you downloaded are selected in the drop-down menus. Always read their descriptions to make sure you don't need to tweak any other setting.
- sphiratrioth666: https://huggingface.co/sphiratrioth666/SillyTavern-Presets-Sphiratrioth — Alpaca, ChatML, Llama, Metharme/Pygmalion, Mistral
- MarinaraSpaghetti: https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings — ChatML, Mistral
- Virt-io: https://huggingface.co/Virt-io/SillyTavern-Presets — Alpaca, ChatML, Command R, Llama, Mistral
- debased-ai: https://huggingface.co/debased-ai/SillyTavern-settings — Gemma, Llama
- Sukino: https://huggingface.co/Sukino/SillyTavern-Settings-and-Presets — ChatML, Deepseek, Gemma, Llama, Metharme/Pygmalion, Mistral
- The Inception: https://huggingface.co/Konnect1221/The-Inception-Presets-Methception-LLamaception-Qwenception — Llama, Metharme/Pygmalion, Qwen — This one is pretty big, so I wouldn't recommend it for small models. Make sure your model is smart enough to handle it.
- CommandRP: https://rentry.org/4y1je_commandrp — Command R/R+
Presets for Chat Completion Models
Unlike text completion presets, this format is much more model agnostic. You can pick any of them and they will probably work fine, but they are almost always designed to deal with the quirks of specific models and to get the best experience out of them. So while it's recommended that you pick one that's appropriate for the model of your choice, feel free to shop around and experiment, or test your favorite preset on the "wrong" models.
To import these presets on SillyTavern, click on the AI Response Configuration button, the first one with the sliders in the top bar, and a window titled Chat Completion Presets should pop up — if it has another name, you aren't connected via Chat Completion, so fix that first. Now, just press the Import preset icon on the top-right of the window, and make sure the one you downloaded is selected in the drop-down menu. Always read their descriptions to make sure you don't need to tweak any other setting.
- pixi: https://pixibots.neocities.org — Claude, Deepseek, Gemini
- momoura: https://momoura.neocities.org/ — Claude, Deepseek, Mistral Large (Outdated)
- AvaniJB: https://rentry.org/avaniJB — GPT, Gemini
- MarinaraSpaghetti: https://rentry.org/marinaraspaghetti — Gemini
- MarinaraClaude: https://rentry.org/marinaraclaude — Claude
- SmileyJB: https://rentry.org/SmileyJB — Claude, GPT
- Pitanon: https://rentry.org/pitanonprompts — Claude, Deepseek, GPT
- XMLK/CharacterProvider: https://rentry.org/CharacterProvider — Claude, GPT
- Holy Edict: https://rentry.org/Writing_Style — Claude, GPT, Gemini
- Lumen: https://illuminaryidiot.neocities.org/presets — Claude, GPT
- Fluff: https://rentry.org/fluffpreset — Gemini
- DeepFluff: https://rentry.org/DeepFluff — Deepseek
- ArfyJB: https://rentry.org/ArfyJB — Claude, Deepseek, GPT
- CherryBox: https://rentry.org/CherryBox — Deepseek
- Quick Rundown on Large REVISED: https://rentry.org/large-qr-revised — Mistral Large
- kira's largestral: https://rentry.org/kiralargestralprompt — Mistral Large
- CommandRP: https://rentry.org/4y1je_commandrp — Command R/R+
- printerJB: https://rentry.org/printerjb — Claude, GPT
- Q1F V1: https://rentry.org/88fr3yr5 — Deepseek
- Minsk: https://rentry.org/minskhub — Gemini
- AIBrain: https://rentry.org/AiBrainPresets — Gemini
- theatreJB/hometheatreJB: https://rentry.org/rekeddeb#theatrejb — Claude, DeepSeek, Nemotron 70B
- Writing Styles: https://rentry.org/deepstyles — Deepseek
- Ashuotaku: https://github.com/ashuotaku/sillytavern — Gemini, Deepseek
- SillyCards: https://sillycards.co/presets.html — Claude, Deepseek, Gemini, GPT, Nous Hermes, Qwen-Max
Universal:
- Greenhu: https://greenhu.space/
- CYOARPG (CHOCORABBIT): https://rentry.org/CharacterProvider-CYOARPG
Your model's provider/proxy isn't available via Chat Completion in your frontend?
Go back to the If You Want to Use an Online AI section to learn how to add it.
Your model wastes time explaining itself before playing its turn?
It means that you are using a reasoning model. This new type of model will always "think" before writing its responses.
This reasoning step shouldn't be visible to you unless you open the Thinking... window above the model's turn.
If it is getting mixed with your bot's actual responses, make sure your frontend is updated to a version that actually supports reasoning models, and that support for them isn't disabled.
In SillyTavern, to find this option, click on the AI Response Formatting button, the third one with an A in the top bar, and expand the Reasoning section to enable the Auto-Parse option.
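Under the hood, reasoning models typically wrap that step in markers such as <think>...</think> (DeepSeek R1 does this, for example), and Auto-Parse simply splits it away from the visible reply. The snippet below is a tiny sketch of that same idea, in case you are on a frontend without the feature and want to see what the parsing amounts to; the exact tag names are an assumption that may differ between models.

```python
import re

def split_reasoning(raw_reply: str) -> tuple[str, str]:
    """Separate a <think>...</think> block from the visible reply, if one is present."""
    match = re.search(r"<think>(.*?)</think>", raw_reply, flags=re.DOTALL)
    if not match:
        return "", raw_reply.strip()
    reasoning = match.group(1).strip()
    visible = (raw_reply[:match.start()] + raw_reply[match.end():]).strip()
    return reasoning, visible

raw = "<think>The user greeted me, so I should respond warmly.</think>\n*She waves.* \"Hello!\""
thoughts, reply = split_reasoning(raw)
print("Hidden reasoning:", thoughts)
print("Visible reply:", reply)
```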
You will see these pages talking about Latte from time to time; it is just a nickname for GPT Latest.
SillyTavern Resources
Extensions
- EmojiPicker: https://github.com/SillyTavern/Extension-EmojiPicker
- Chat Top Info Bar: https://github.com/SillyTavern/Extension-TopInfoBar
- Input History: https://github.com/LenAnderson/SillyTavern-InputHistory
- Quick Persona: https://github.com/SillyTavern/Extension-QuickPersona
- More Flexible Continues: https://github.com/LenAnderson/SillyTavern-MoreFlexibleContinues
- Rewrite: https://github.com/splitclover/rewrite-extension
- Dialogue Colorizer: https://github.com/XanadusWorks/SillyTavern-Dialogue-Colorizer
- Greetings Placeholder: https://github.com/splitclover/greeting-placeholders
- Timelines: https://github.com/SillyTavern/SillyTavern-Timelines
- Tracker: https://github.com/kaldigo/SillyTavern-Tracker
- Stepped Thinking: https://github.com/cierru/st-stepped-thinking
- LALib: https://github.com/LenAnderson/SillyTavern-LALib
- Guided Generations: https://github.com/Samueras/GuidedGenerations-Extension
- Cache Refresh: https://github.com/OneinfinityN7/Cache-Refresh-SillyTavern
Themes
- Moonlit Echoes: https://github.com/RivelleDays/SillyTavern-MoonlitEchoesTheme
- ST-NoShadowDribbblish: https://github.com/IceFog72/ST-NoShadowDribbblish
- Greenhu: https://rentry.org/fbu2t24v
Quick Replies
- CharacterProvider's Quick Replies: https://rentry.org/CharacterProvider-Quick-Replies
- Guided Generations: https://github.com/Samueras/Guided-Generations
Novel Roleplaying Setups
- Proper Adventure Gaming With LLMs: https://rentry.co/LLMAdventurersGuide — AI Dungeon-like text-adventure setup. Interesting way to roleplay that is less focused on individual characters.
- SX-3: Character Cards Environment: https://huggingface.co/sphiratrioth666/SX-3_Characters_Environment_SillyTavern — A complex modular system to generate starting messages, swap scenarios, clothes, weather and additional roleplay conditions, using only vanilla SillyTavern.
Learning How To Roleplay
Basic Knowledge
- Local LLM Glossary: https://gist.github.com/kalomaze/4d74e81c3d19ce45f73fa92df8c9b979 — First we have to make sure that we are all speaking the same language, right?
- LLM Samplers Explained: https://gist.github.com/kalomaze/4473f3f975ff5e5fade06e632498f73e — Quick and digestible read to introduce you to the basic samplers.
- Samplers Settings and You - A Comprehensive Beginner Guide: https://rentry.co/samplersettings — A practical follow-up guide that introduces you to the modern samplers and helps you configure a streamlined sampling setup.
- Your settings are (probably) hurting your model - Why sampler settings matter: https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/ — They really are! A little more context on why you want to streamline your sampler settings.
- DRY: A modern repetition penalty that reliably prevents looping: https://github.com/oobabooga/text-generation-webui/pull/5677 — Technical explanation of how the DRY sampler works, if you are curious.
- Exclude Top Choices (XTC): A sampler that boosts creativity, breaks writing clichés, and inhibits non-verbatim repetition: https://github.com/oobabooga/text-generation-webui/pull/6335 — Technical explanation of how the XTC sampler works, if you are curious.
- LLM Samplers Visualized: https://artefact2.github.io/llm-sampling/ — Tool that lets you simulate what you've learned above. Play with the samplers and see how they affect the generated tokens.
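To make the sampler guides above a bit more concrete, here is a tiny, self-contained sketch of what temperature and top-p (nucleus) sampling do to a token distribution. The logits are made up and real backends work over an entire vocabulary, but the mechanics are the same: temperature reshapes the distribution, top-p trims the unlikely tail, and the next token is drawn from whatever survives.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8, top_p: float = 0.9) -> str:
    """Pick a token: temperature rescales the logits, top-p keeps only the smallest
    set of tokens whose cumulative probability reaches top_p, then we sample from those."""
    # Temperature: lower = sharper (more deterministic), higher = flatter (more random).
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Softmax turns the scaled logits into probabilities.
    max_logit = max(scaled.values())
    exps = {tok: math.exp(v - max_logit) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: v / total for tok, v in exps.items()}
    # Top-p: keep the most likely tokens until their probabilities sum to top_p.
    kept, cumulative = [], 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break
    tokens, weights = zip(*kept)
    return random.choices(tokens, weights=weights, k=1)[0]

# Toy distribution over possible next words.
fake_logits = {"tavern": 2.1, "castle": 1.3, "forest": 0.9, "spaceship": -1.5}
print(sample_next_token(fake_logits))
```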
How Everything Works and How to Solve Problems
The following are guides that will teach you how to roleplay, how things really work, and give you tips on how to make your sessions better. If you are more interested in learning how to make your own bots, skip to the next section and come back when you want to learn more.
- Sukino's Guides & Tips for AI Roleplay: https://rentry.org/Sukino-Guides — Shameless self-promotion here. This page isn't really a structured guide, but a collection of tips and best practices related to AI roleplaying that you can read at your own pace. I recommend that you at least read the sections on how to use your turns and what to do when the AI writes something you don't like.
- Statuo's Guide to Getting More Out of Your Bot Chats: https://rentry.co/statuotwtips#generation-settings-and-you — Statuo has been on the scene for a long while, and he still updates this guide. Really good information about different areas of AI Roleplaying.
- How 2 Claude: https://rentry.org/how2claude — Interested in taking a peek behind the curtain? In how all this AI roleplaying wizardry really works? How to fix your annoyances? Then read this! It applies to all AI models, despite the name.
- onrms: https://rentry.org/onrms
- SillyTavern Docs: https://docs.sillytavern.app/ — Not sure how something works? Don't know what an option is for? Read the docs!
How to Make Chatbots
Botmaking is pretty free-form: almost anything you write will work, and everyone does it a little differently, so don't think you need to follow templates or formats to make good bots. Plain text is more than fine...
- Character Creation Guide (+JED Template): https://rentry.co/CharacterProvider-GuideToBotmaking — ...That said, in my opinion, the JED+ template is great for beginners, a nice set of training wheels. It helps you get your character started by simply filling a character sheet, while remaining flexible enough to accommodate almost any character concept. Some advice in the guide seems a bit odd, especially on how to write an intro and the premise stuff, but the template itself is good, and you'll find different perspectives from other botmakers in the following guides.
- pixi's practical modern botmaking: https://rentry.org/pixiguide — Succinct guide to introduce you to some botmaking good practices, and to what kind of cards you can make.
- Demystifying The Context; Or Common Botmaking Misconceptions: https://rentry.org/Sukino-Guides#demystifying-the-context-or-common-botmaking-misconceptions — Hey look, it's me with a pretentious title. I think this article turned out pretty good. I give you some good practices and warn you about the pitfalls of botmaking.
- BONER'S BOT BUILDING TIPS: https://rentry.org/Bonersbottips — Still relevant as always. While this guide covers the same ground as mine, it is a classic, and its aggressive teaching methods may work better for you.
- How to Create Lorebooks - by NG: https://rentry.co/SillyT_Lorebook — A quick introduction to Lorebooks/World Info. They are a big step up for when you're ready to make your characters deeper and more complex.
- World Info Encyclopedia: https://rentry.co/world-info-encyclopedia — Learn more in-depth about Lorebooks, and how powerful they are.
- Give Your Characters Memory - A Practical Step-by-Step Guide to Data Bank: Persistent Memory via RAG Implementation: https://www.reddit.com/r/SillyTavernAI/comments/1f2eqm1/give_your_characters_memory_a_practical/ — Probably overkill for most people. This is to make your character have long-term memory. I've never experimented with RAG myself, but this guide at least made me understand what it is...sort of.
- Getting to Know the Other Templates: Again, don't think you need to use these formats to make good bots, they have their use cases, but plain text is more than fine these days. However, even if you don't plan to use them, these guides are still worth reading, as the people who write them have valuable insights into how to make your bots better.
- PList + Ali:Chat: This format was really popular before we got models with big contexts. It maximizes token efficiency by combining Python/JSON-style lists for defining character traits with example dialogues to lock in distinct narration and speech patterns. This dual approach is particularly powerful for keeping established characters true to form, expressing subtle personality traits through dialogue, or handling complicated speech patterns. While plain text descriptions can lead to loose interpretations, PList + Ali:Chat provides precise control over character behavior, and prevents your own writing style from bleeding into the character. Just consider whether the added complexity is worth the benefits for your specific use case.
- BONER'S ALI:CHAT GUIDE FOR FOR MORONS LIKE ME: https://docs.google.com/document/d/1PmU7-MA25P41Q45yU0CpA66Jra51LI-WI1PwSXn2FMs/edit
- Trappu's Bot Guide: https://wikia.schneedc.com/bot-creation/trappu
- MinimALIstic (Ali:Chat Lite): https://rentry.co/kingbri-chara-guide
- How to write in PList (Python list) + Ali:Chat: https://rentry.co/plists_alichat_avakson
- StatuoTW's Guide to Making Bots: https://rentry.co/statuobotmakie
- Ali:Chat Style: https://rentry.co/alichat
- W++: Honestly, this format has no redeeming qualities anymore; it is just an inferior PList — use PList instead, or simply Markdown, if you want a structured list. But, as obsolete as it is, you will still see it around, in old cards and from people who still like to use it, so you might want to understand what it does.
- W++ For Dummies: https://rentry.co/WPP_For_Dummies
- Pygmalion Tips: https://rentry.org/pygtips
- Other Templates: Botmakers that shared their own templates.
- Shirohibiki's Bot Creation Template: https://rentry.co/shirohibikis-bot-template
- absolutetrash's Bot Guide and Templates: https://rentry.org/absolutetrashs-bot-guide
- Prompting:
- JINXBREAKS: https://rentry.org/jinxbreaks — Trying to make a crazy character but can't get it to behave the way you want? Maybe this page can help you get an idea of how to prompt it.
- sphiratrioth666's Character Generation Templates: https://huggingface.co/sphiratrioth666/Character_Generation_Templates — Nothing beats a handcrafted bot. But it's handy to be able to have the AI generate characters for you, perhaps to use as a base, or to quickly roleplay with a pre-existing character. These are prompts to be used on any model of your choice.
- Online Editors:
- AI Character Editor: https://desune.moe/aichared/
- Agnastic's Create a Character: https://agnai.chat/editor
- Sharing:
- Tagging & You: A Guide to Tagging Your Bots on Chub AI: https://theunofficialguidetochubai.wordpress.com/2025/01/21/tagging-you-a-guide-to-tagging-your-bots-on-chub-ai/
- Tools:
- How to Scrap Janitor AIs Hidden Definition Character Cards Details: https://github.com/ashuotaku/sillytavern/blob/main/Guides/JanitorAI_Scrapper.md
Image Generation
I like to think of this part as an extension of the Botmaking section, since the card's art is one of the most crucial elements of your bot. Your bot will be displayed among many others, so an eye-catching and appropriate image that communicates what your bot is all about is as important as a book cover. But since this information is useful for all users, not just botmakers, it deserves a section of its own.
Guides
- Skelly's Necronomicon: Art Generating: https://rentry.org/Necronomiconart — An introductory guide that will help you learn the basic concepts and software used in image generation, with a focus on anime models.
- WTF is V-pred?: https://rentry.org/wtfvpred — There are currently two types of SDXL-based models, the classic EPS and the new V-pred. This guide will show you the differences between the two.
- Pony Diffusion XL
- Illustrious XL
  - Bex's Stable Diffusion Tips and Tricks; and General Usage Guide for Illustrious Models: https://rentry.co/bextoper_illustrious_guide
- NoobAI-XL
  - Introducing NoobAI V-Pred 1.0: The Fine Tune That Understands Lighting in Image Generation: https://digialps.com/introducing-noobai-v-pred-1-0-the-fine-tune-that-understands-lighting-in-image-generation/
  - NoobAI-XL User Manual: https://d0xb9r3fg5h.feishu.cn/docx/YpOQdtHTDoetcZxIO9fc33onnee
  - L_A_X's NoobAI-XL Quick Guide: https://civitai.com/articles/8962/noobai-xl-quick-guide
  - Volnovik's NoobAI-XL Guides: https://civitai.com/user/Volnovik/articles?sort=Newest
Models
Currently, there are three main SDXL-based model families competing for the anime aesthetic crowd. This is a list of the base models and some recommended merges for each branch:
- Pony Diffusion XL
  - Base Model: https://civitai.com/user/PurpleSmartAI
  - Recommended Merges
    - AutismMix: https://civitai.com/models/288584
- Illustrious XL
  - Base Model: https://civitai.com/user/ONOMAAI
  - Popular Merges
    - WAI-NSFW-illustrious-SDXL: https://civitai.com/models/827184
    - Better Days: https://civitai.com/models/623891
- NoobAI-XL
  - Base Model: https://huggingface.co/Laxhar
  - Complementary Models — Despite the name, all NoobAI EPS ControlNet models work flawlessly with vPred models.
    - ControlNet: https://civitai.com/models/929685
    - ControlNet OpenPose: https://civitai.com/models/962537
    - ControlNet Inpainting: https://civitai.com/models/1376234
    - IP-Adapter: https://civitai.com/models/1000401
  - Popular Merges
Resources
- Danbooru Tags: https://danbooru.donmai.us/wiki_pages/tag_groups · https://danbooru.donmai.us/related_tag — Most anime models are trained based on Danbooru tags. You can simply consult their wiki to find the right tags to prompt the concepts you want.
- Danbooru/e621 Artists' Styles and Characters in NoobAI-XL: https://www.downloadmost.com/NoobAI-XL/danbooru-artist/ — Catalog of artists and characters in NoobAI-XL's training data, with sample images showing their distinctive styles and how to prompt them. Even if you're using a different model, this is still a valuable page, since most anime models share many of the same artists in their training data.
- Danbooru Tag Scraper: https://gist.github.com/WasitSam37/7c8a2609858057bba20b654e4b8bb6fb — More updated list of Danbooru tags for you to import into your UI's autocomplete. Also has a Python script for you to scrape it yourself.
- AIBooru: https://aibooru.online/ — 4chan's repository of AI-generated images. Many of them have their model, prompts, and settings listed, so you can learn a bit more about other users' preferences and how to prompt something you like.
Other Indexes
More people sharing collections of stuff. Just pay attention to when these guides and resources were created and last updated; some of them may contain outdated practices. A lot of these guides come from a time when AI roleplaying was pretty new and we didn't have advanced models with big context windows, and everyone was still learning and experimenting with what worked best.
- The meta list of various bot making guides: https://rentry.org/meta_botmaking_list
- Chub Discord’s List of Botmaking Resources: https://rentry.co/botmaking
- Bot-Making Resources for JanitorAI: https://rentry.co/jaibotmakingresources
- A list of various Jail Breaks for different models: https://rentry.org/jb-listing
- AICG OP template: https://rentry.org/aicgOP
- SpookySkelly's The Graveyard: https://rentry.org/SpookySkelly