Samplers Settings and You - A Comprehensive Beginner Guide
I've been messing around with LLMs for a while now, and I've acquainted myself with a lot of the samplers that exist out there. Given the obtuse nature of how samplers work and the lack of visual guidance, it can be difficult to know what each one does and how it will affect a model's responses. With this in mind, instead of diving into the raw technicalities of how each sampler works (which you can easily find online if you so choose, and I encourage doing so if you're curious!), I will instead try to explain each sampler I personally believe has strong enough merit for RP and storywriting usage in a simpler, more down-to-earth manner. As a baseline, this guide applies to pretty much every model out there.
Please note that I have not tried out every sampler that exists, and I will simply be sharing what I know works best for my usage. If you have a sampler you like to use that isn't listed here, then feel free to keep using it! I wanted to keep this list to the samplers I personally find make the strongest difference, or those that are distinct enough from the essentials.
Before You Start
Before trying out or testing any new model, I like to neutralise all existing samplers. This will give you the base model's response without any modification, and is a perfect way to get started. For testing, seeded generations are very powerful, and can help you isolate how a sampler changes the output in a more controlled manner. I suggest employing both seeded and random gens to help dial in samplers that work for your use case. If you want a base template to start from, here's one that's been completely neutralised with the correct sampling order for you.
Simpler is better—when adjusting samplers, it's best to keep your active samplers to a minimum. This way, you avoid overcomplicating your setup, and will also be able to properly see how small changes can affect the output. Less is more here. As a result, I suggest starting with merely Min P, temperature, and DRY, which will be explained below.
An incredibly useful, interactive guide to how common samplers affect the token probabilities can be found here. I recommend using this website in conjunction with this guide and tweaking the values as you go. This will give you a great visual on what's actually happening.
Make sure to set temperature last in your sampling order. We want important truncation samplers like Min P to affect the token probabilities before temperature has a chance to modify them further. This allows you to get away with much higher temperature values than you might initially realise. Speaking of...
Essential Samplers
Min P
I believe this to be by far the most important sampler that currently exists. This sampler will, more than any other, affect the coherency and creativity of your model's output.
The best way to think of Min P is as a barrier of entry that culls all the garbage tokens from the distribution. Without this sampler (or a similar one), you will be allowing every possible token a model has, including the terrible ones, to pass through. For RP and storywriting, you want your Min P value to be as low as you can possibly get away with. This way, you keep as many good, coherent tokens in the distribution as possible while cutting off the absolute garbage ones, which is essential for creativity and variety. The key is to find where that cut-off point is for your needs. I have a simple rule of thumb I like to use when adjusting Min P. Starting at a default value of 0.02:
Boring, uncreative gens = lower the min P by 0.005.
Unhinged, incoherent gens = raise the min P by 0.005.
Using this simple rule of thumb, you can gradually tweak and refine your min P value to something that you can be personally happy with.
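If it helps to see the mechanics, here's a rough sketch of what a Min P filter is doing under the hood. This is purely illustrative Python with NumPy (the function name and structure are my own, not any backend's actual code):

```python
import numpy as np

def min_p_filter(logits, min_p=0.02):
    """Illustrative Min P: discard tokens far less likely than the top token."""
    # Convert raw logits into probabilities.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # The barrier of entry scales with the top token's probability:
    # with min_p = 0.02, anything below 2% of the top token's chance is culled.
    cutoff = min_p * probs.max()
    return np.where(probs >= cutoff, logits, -np.inf)
```

The important detail is that the cut-off is relative to the most likely token, so a very confident distribution gets trimmed much harder than a flat, creative one.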
Temperature
Temperature is simply a sampler that rescales the probabilities of all the tokens in the distribution. A temperature of 1 is the neutral value, where the model's probabilities are used as-is: lower temperatures will result in more deterministic and predictable responses, while higher temperatures will result in less deterministic, more creative responses.
This is one of the easiest samplers to adjust and see meaningful results from. Note that increasing the temperature doesn't actually change the variety of tokens in the response (that's Min P's job)—it merely smooths out the probabilities so the less likely tokens in the distribution will have a higher percent chance of appearing. Using the interactive guide posted above, it's easy to see what adjusting the temperature does to the probability chart.
My rule of thumb for temperature is similar to min P, so adjust it with the same methodology. Note that if you adjust your temperature, you will probably want to reconsider your min P value so your ideal token cut off point matches the increased token percentages. With Min P sampled first, you can actually have a ton of room to experiment increasing temperature, as long as you appropriately adjust your Min P value in response. Even high temperatures like 1.8 or higher can still be coherent.
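To make the ordering point concrete, here's a minimal sketch of a Min P-then-temperature pipeline, reusing the hypothetical min_p_filter from the Min P section above (again illustrative only):

```python
def apply_temperature(logits, temperature=1.0):
    # Dividing logits by the temperature flattens (>1) or sharpens (<1) the
    # distribution; it never adds or removes tokens, it only reweights them.
    return logits / temperature

def sample_pipeline(logits, min_p=0.02, temperature=1.5):
    # Min P culls garbage tokens from the unmodified distribution first,
    # so the high temperature below can only boost tokens that already passed.
    filtered = min_p_filter(logits, min_p)
    return apply_temperature(filtered, temperature)
```

If the order were reversed, a high temperature would flatten the distribution before Min P ever measured it, letting garbage tokens sneak above the cut-off.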
I personally find increasing temperature breaks a lot of existing patterns and repetition in a model, as it will encourage different responses from the norm. Some models are unfortunately very prone to getting stuck in habits when the temperature is too low (and if a model is especially confident, even higher temperatures will struggle to break it out of them).
DRY Rep Penalty (Don't Repeat Yourself)
Models are fundamentally strong pattern-seeking machines. This is great for following the context and chat history of your roleplays and story writing sessions; however, there's a very negative side effect that comes alongside this: repetition. You can consider DRY the modern successor to traditional repetition penalty samplers, but instead of taking a sledgehammer to repetition by blanket-penalising any token that has already appeared, DRY penalises tokens only when they would continue a repeating sequence. This leaves less room for error and allows more good tokens to remain in the probability distribution.
The default values as suggested by DRY's author are extremely solid and good values to start with. They are:
0.8 DRY Multiplier, 1.75 DRY Base, 2 DRY Allowed Length, 0 Penalty Range
Most of these default values will be fine for the majority of people. I myself never change the DRY Multiplier or the DRY Base, as I find them to be very well-tested defaults. The main value I would adjust is the DRY Allowed Length parameter (I set mine to 4). The default value of 2 exempts repeated sequences of 2 tokens or fewer from DRY penalties. This is fine for a lot of cases, but I've found that such a low value will result in many proper nouns and longer words getting butchered, as DRY penalises them too harshly. My rule of thumb is:
Broken words or grammar = raise Allowed Length by 1 until you stop noticing issues.
Stubborn repetition = lower Allowed Length by 1 until you see some improvement.
Penalty Range is simply how much of the context DRY will scan for repetition. A value of 0 means the entire context will be considered. If you find this to be too much, you may set a more appropriate value here; 1024-2048 (tokens) is a good place to start. This will still prevent immediate repetition problems while avoiding breaking important long-range patterns too much.
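To give a feel for what those numbers control, below is a heavily simplified sketch of the core DRY idea. This is illustrative Python only; real implementations also handle sequence breakers and penalty range, and are far more efficient:

```python
def dry_penalty(context, candidate, multiplier=0.8, base=1.75, allowed_length=2):
    """Penalty subtracted from `candidate`'s logit if it would extend a repeat.

    `context` is the list of token ids generated so far; `candidate` is a token id.
    """
    # Find the longest suffix of the context that, followed by `candidate`,
    # has already appeared earlier in the context.
    match_len = 0
    for length in range(1, len(context)):
        pattern = context[-length:] + [candidate]
        seen_earlier = any(context[i:i + length + 1] == pattern
                           for i in range(len(context) - length))
        if not seen_earlier:
            break
        match_len = length
    # Repeats up to Allowed Length tokens are free; beyond that the penalty
    # grows exponentially with Base, scaled by Multiplier.
    if match_len < allowed_length:
        return 0.0
    return multiplier * base ** (match_len - allowed_length)
```

You can also see from the exponential term why the Multiplier and Base rarely need touching: the penalty already ramps up very quickly as a repeat grows longer.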
Sequence Breakers are sequences of tokens that will be completely excluded from DRY's algorithm. You will want very common strings—think certain punctuation and special model tokens—set here. It's important that you set the correct ones here, depending on your model, to avoid potentially breaking responses. Since it can be difficult to find the correct strings to add here, you may use the following ones depending on your model to get you started:
For ChatML-based (Qwen) models
For Llama3-based models
For Metharme-based models
For modern Mistral-based models
If you wish to add the correct sequences for models that use instruction templates not listed here, simply add the model's various special tokens as sequence breakers, following the formatting from any of the above templates. It can be tricky to find the correct tokens, but a good place to start looking is on a particular model's HuggingFace page.
Note: For character cards that have stat tracking or similar mechanics, I would turn off DRY completely, as it can interfere with these mechanics quite severely.
Optional Samplers
These samplers are not essential for an optimal LLM experience, IMO. You can safely ignore this section if you're a beginner. Still, they can have merit, and some models will benefit from them. If you want to try any of these out, make sure not to use one that conflicts with or achieves the same thing as another sampler (remember: simpler is better). I'll detail what conflicts with each entry.
Dynamic Temperature
Conflicts with: Temperature
This one was difficult for me to understand initially. Instead of setting a static temperature value, you can use dynamic temperature to set a temperature range; the temperature used for each token is then scaled within that range based on how certain the model is. If the model is very confident about the next token (one outcome dominates), dynamic temperature scales towards your minimum temperature to preserve that coherency. If a token has many plausible outcomes, dynamic temperature scales towards your maximum temperature to make the model consider more creative alternatives.
I find this parameter works very well as an alternative to temperature. If you wish to use it, I've found setting the following values to work well:
1 minimum temp, 1.2-2.0 maximum temp, exponent 1.
Minimum and maximum temp are self-explanatory. I would initially set the maximum temperature to the same value you were using as your static temperature, then raise it in small increments until you see coherency issues. A minimum temp of 1 is a very safe value.
Exponent is tricky to understand without visual aid. I suggest looking at this post to get an idea of what changing the exponent does to the curve. I personally prefer the simplicity of a linear curve as found with an exponent of 1, but you may adjust this if you want to experiment.
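For the curious, this is roughly the entropy-based mapping that common backends use, written as an illustrative NumPy sketch (my own naming and structure, not any backend's exact code):

```python
import numpy as np

def dynamic_temperature(logits, min_temp=1.0, max_temp=1.6, exponent=1.0):
    # Measure how undecided the distribution is via its normalized entropy:
    # 0 = one obvious token, 1 = all tokens equally likely.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    norm_entropy = entropy / np.log(len(probs)) if len(probs) > 1 else 0.0
    # Confident (low entropy) -> towards min_temp; uncertain -> towards max_temp.
    # The exponent bends this curve; an exponent of 1 keeps it linear.
    temp = min_temp + (max_temp - min_temp) * norm_entropy ** exponent
    return logits / temp
```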
XTC (Exclude Top Choices)
Conflicts with: Nothing, but try neutralising your Temperature
XTC is a controversial sampler, and it's understandable why. The most basic way to describe it is as a random chance to exclude the most probable tokens (according to a threshold value) from being generated. In practice this is a sledgehammer approach to dealing with overly confident responses and patterns. It can result in great creativity and prose, as it breaks the most common sloppy patterns a model usually has to offer (slop tends to be overrepresented in the top tokens). Unfortunately, it can also make a model feel incoherent or unable to track details properly, as the most confident tokens also tend to be the most logical. A visual is provided here.
Less is more with XTC. I would start with a value of 0.15 Threshold and 0.3 Probability, and adjust according to your taste.
Threshold is simply the cut-off value for where tokens start getting excluded. A value of 0.15 means any token with a probability of 15% or higher qualifies; when XTC triggers, every qualifying token except the least probable one among them is discarded in favour of the tokens below the threshold.
Probability is the chance that XTC activates at all on a given token. A value of 0.3 equates to a 30% chance, so it should be easy to adjust this value accordingly.
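Here's an illustrative sketch of that behaviour (hypothetical Python again, not the sampler's actual source):

```python
import numpy as np

def xtc_filter(logits, threshold=0.15, probability=0.3, rng=None):
    rng = rng or np.random.default_rng()
    # Only trigger on a fraction of tokens, controlled by `probability`.
    if rng.random() >= probability:
        return logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Every token at or above the threshold qualifies for exclusion...
    qualifying = [i for i in np.argsort(-probs) if probs[i] >= threshold]
    # ...but the least probable qualifying token always survives, so the model
    # is never left with nothing plausible to say.
    if len(qualifying) < 2:
        return logits
    filtered = logits.copy()
    filtered[qualifying[:-1]] = -np.inf
    return filtered
```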
Traditional Rep Pen
Conflicts with: DRY
Although DRY is a more modern take on repetition penalties, there is still merit to traditional repetition penalty settings. Traditional repetition penalty better respects existing formats, which makes it better suited for stat tracking, codeblocks, and the like. There are also services and backends that simply do not support DRY, in which case you will want to use traditional rep pen instead.
Modern models don't need much rep pen. I would suggest values between 1.03 and 1.08; any more and you'll risk blunting your responses too much. Set the value as low as you can get away with while avoiding signs of repetition in the model's responses.
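For completeness, the classic mechanism looks roughly like this, sketched in the style of the CTRL-paper penalty (illustrative Python; real backends differ in details such as penalty range):

```python
import numpy as np

def repetition_penalty(logits, context_token_ids, penalty=1.05):
    # Every token that already appears in the context gets its logit pushed
    # down, regardless of whether repeating it would actually be a problem;
    # this blanket approach is exactly why DRY is usually the gentler choice.
    penalized = logits.copy()
    for tok in set(context_token_ids):
        if penalized[tok] > 0:
            penalized[tok] /= penalty
        else:
            penalized[tok] *= penalty
    return penalized
```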
Guide written by Geechan. Suggestions? Corrections? Feel free to contact me on Discord!