LLM Sampling Parameters Explained
The parameters you've listed control how a language model generates text by affecting the randomness, creativity, and repetitiveness of the output. Let me explain each one clearly and help you visualize how they work at different values.
Core Sampling Parameters
Temperature
Temperature controls the randomness of predictions by scaling the logits (raw prediction scores) before applying softmax.
- Low temperature (0.1-0.3): Makes the model more confident and deterministic. The model strongly favors high-probability tokens, leading to more predictable and conservative text.
- Medium temperature (0.7-0.8): Provides a balance between determinism and creativity.
- High temperature (1.0+): Makes outputs more random and diverse. The model is more likely to select lower-probability tokens, leading to more surprising and creative text.
Visualize temperature as adjusting the "peaks and valleys" in the probability distribution:
- Low temperature: Sharp peaks, deep valleys (very concentrated probabilities)
- High temperature: Flatter landscape (more evenly distributed probabilities)
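As a rough sketch (the function name and the NumPy setup are purely illustrative), temperature is a single division applied to the logits before the softmax:

```python
import numpy as np

def apply_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Scale logits by temperature, then convert to probabilities with softmax."""
    scaled = logits / max(temperature, 1e-8)   # low T sharpens, high T flattens
    exps = np.exp(scaled - scaled.max())       # subtract max for numerical stability
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
print(apply_temperature(logits, 0.2))  # sharply peaked on the top token
print(apply_temperature(logits, 1.5))  # much flatter distribution
```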
Top-k
Top-k sampling restricts token selection to only the k most likely next tokens.
- Low k (5-10): Very constrained outputs, highly focused on only the most probable tokens
- Medium k (40-50): Balanced approach
- High k (100+): Allows more diversity but still eliminates the long tail of very unlikely tokens
Visualize top-k as placing a hard cutoff on the number of words the model can choose from at each step, like limiting your vocabulary to only the k most obvious word choices.
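A minimal sketch of that hard cutoff, again over an illustrative NumPy probability array:

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Zero out everything except the k most probable tokens, then renormalize."""
    keep = np.argsort(probs)[-k:]          # indices of the k largest probabilities
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(top_k_filter(probs, 3))              # only the three most likely tokens survive
```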
Top-p (Nucleus Sampling)
Top-p sampling selects from the smallest set of tokens whose cumulative probability exceeds threshold p.
- Low p (0.1-0.3): Very constrained, only the most likely tokens
- Medium p (0.7-0.9): Balanced, preserves most of the probability mass while cutting unlikely options
- High p (0.95-0.99): More diverse, includes more lower-probability tokens
Visualize top-p as drawing a line at probability p and including only tokens above that line, working from most to least probable until hitting the threshold.
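Sketched the same way, nucleus sampling walks down the sorted probabilities until their running total reaches p (names here are just for illustration):

```python
import numpy as np

def top_p_filter(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    order = np.argsort(probs)[::-1]              # most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # how many tokens are needed to reach p
    filtered = np.zeros_like(probs)
    filtered[order[:cutoff]] = probs[order[:cutoff]]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(top_p_filter(probs, 0.8))                  # keeps tokens until 80% of the mass is covered
```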
Typical-p
Typical-p selects tokens whose information content (negative log probability) is close to the expected information content of the distribution, i.e. its entropy.
- Low typical-p (0.1-0.3): Focuses on very typical, expected tokens
- Medium typical-p (0.7-0.9): Good balance of typical and slightly unexpected tokens
- High typical-p (0.95+): Includes more surprising tokens
Visualize typical-p as selecting tokens that are neither too predictable nor too surprising, focusing on those with "average" predictability.
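A hedged sketch of the idea behind locally typical sampling: rank tokens by how far their surprisal is from the distribution's entropy, then keep the most typical ones up to the chosen probability mass:

```python
import numpy as np

def typical_p_filter(probs: np.ndarray, typical_p: float) -> np.ndarray:
    """Keep the tokens whose surprisal is closest to the distribution's entropy."""
    surprisal = -np.log(probs + 1e-12)                 # information content of each token
    entropy = np.sum(probs * surprisal)                # expected information content
    order = np.argsort(np.abs(surprisal - entropy))    # most "typical" tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, typical_p) + 1
    filtered = np.zeros_like(probs)
    filtered[order[:cutoff]] = probs[order[:cutoff]]
    return filtered / filtered.sum()
```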
Min-p
Min-p sets a probability floor below which tokens are excluded from sampling. The floor is expressed as a fraction of the most likely token's probability, so it rises when the model is confident and falls when it is uncertain.
- Low min-p (0.01-0.05): Allows many low-probability tokens
- Medium min-p (0.1): Moderate filtering
- High min-p (0.2+): Aggressive filtering of unlikely tokens
Visualize min-p as setting a minimum bar for token probability – anything below this threshold is simply removed from consideration.
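A sketch of the usual min-p rule, where the cutoff is min_p times the probability of the top token:

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float) -> np.ndarray:
    """Drop tokens whose probability is below min_p times the top token's probability."""
    threshold = min_p * probs.max()              # the bar scales with the model's confidence
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

probs = np.array([0.6, 0.2, 0.1, 0.06, 0.04])
print(min_p_filter(probs, 0.1))                  # cutoff is 0.1 * 0.6 = 0.06; the last token is dropped
```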
Top-a
Top-a drops tokens whose probability falls below a dynamic cutoff derived from the most likely token (commonly a times the square of the top token's probability).
- Low a (0-0.1): Little or no filtering; a value of 0 disables top-a entirely
- Medium a (0.2-0.5): Moderate filtering
- High a (0.8+): Aggressive filtering that keeps only tokens close to the top choice
Visualize top-a as creating a dynamic threshold that changes based on how confident the model is about its top prediction.
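A sketch of top-a as it is commonly implemented (the exact formula can vary between backends); the cutoff here is a times the squared top probability:

```python
import numpy as np

def top_a_filter(probs: np.ndarray, a: float) -> np.ndarray:
    """Drop tokens below a threshold of a times the squared top probability."""
    threshold = a * probs.max() ** 2             # a confident top token raises the bar sharply
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()
```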
Advanced Sampling Methods
TFS (Tail-Free Sampling)
TFS eliminates the "tail" of the distribution by looking at how quickly the sorted token probabilities fall off (their second derivative).
- Low TFS (0.1-0.3): More conservative, predictable text
- Medium TFS (0.5-0.7): Balanced results
- High TFS (0.9+): More diversity while still avoiding the extreme tail
Visualize TFS as intelligently removing low-probability tokens where the probability drops off sharply, keeping a more natural cutoff point.
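A simplified sketch of tail-free sampling; edge cases such as tiny vocabularies are ignored, and the exact offset handling differs between implementations:

```python
import numpy as np

def tail_free_filter(probs: np.ndarray, z: float) -> np.ndarray:
    """Cut the tail where the sorted probability curve stops bending."""
    order = np.argsort(probs)[::-1]                     # most probable first
    sorted_probs = probs[order]
    curvature = np.abs(np.diff(sorted_probs, n=2))      # second differences of the sorted curve
    curvature = curvature / curvature.sum()             # normalize into a distribution
    keep_count = np.searchsorted(np.cumsum(curvature), z) + 2  # +2 because diff shortens the array twice
    filtered = np.zeros_like(probs)
    filtered[order[:keep_count]] = probs[order[:keep_count]]
    return filtered / filtered.sum()
```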
Top-nsigma
Top-nsigma keeps only the tokens whose logits fall within n standard deviations of the highest logit.
- Low nsigma (1-2): Very focused sampling
- Medium nsigma (3-4): Moderate diversity
- High nsigma (5+): High diversity
Visualize top-nsigma as measuring how spread out the logits are and keeping only the tokens that sit within n standard deviations of the best one.
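A sketch of that rule (following the description in the top-nsigma paper; the function name is illustrative):

```python
import numpy as np

def top_nsigma_filter(logits: np.ndarray, n: float) -> np.ndarray:
    """Keep tokens whose logits fall within n standard deviations of the best logit."""
    threshold = logits.max() - n * logits.std()
    masked = np.where(logits >= threshold, logits, -np.inf)
    exps = np.exp(masked - logits.max())         # softmax over the surviving tokens
    return exps / exps.sum()
```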
Repetition Control Parameters
Repetition Penalty
Repetition penalty reduces the probability of tokens that have already appeared in the text.
- Low penalty (1.0-1.1): Minimal repetition control, more natural-sounding
- Medium penalty (1.1-1.2): Balanced approach
- High penalty (1.3+): Aggressive prevention of repetition, can lead to less coherent text
Visualize repetition penalty as placing a flat "tax" on any word the model has already used, making it less likely to be chosen again.
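A sketch of the classic multiplicative repetition penalty (in the style popularized by the CTRL paper; details vary by backend):

```python
import numpy as np

def apply_repetition_penalty(logits: np.ndarray, previous_tokens: list[int],
                             penalty: float) -> np.ndarray:
    """Damp the logit of every token that has already appeared in the context."""
    logits = logits.copy()
    for token_id in set(previous_tokens):
        if logits[token_id] > 0:
            logits[token_id] /= penalty          # shrink positive logits
        else:
            logits[token_id] *= penalty          # push negative logits further down
    return logits
```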
Rep Pen Range
Rep pen range limits how far back in the context the repetition penalty applies.
- Low range (64-128 tokens): Only penalizes very recent repetitions
- Medium range (512-1024 tokens): Balanced approach
- High range (2048+ tokens): Penalizes repetition across a very large context
Visualize rep pen range as determining how much of the previous text the model "remembers" when avoiding repetition.
Rep Pen Slope
Rep pen slope controls how quickly the repetition penalty diminishes for tokens further back in context.
- Low slope (0.1-0.3): Penalty decreases slowly with distance
- Medium slope (0.7-0.9): Balanced decay
- High slope (1.5+): Penalty decreases rapidly with distance
Visualize rep pen slope as determining how quickly the model "forgets" about previously used words as it generates more text.
Frequency Penalty
Frequency penalty reduces the probability of tokens based on how frequently they appear in the generated text.
- Low penalty (0.1-0.3): Subtle effect on frequent words
- Medium penalty (0.5-0.7): Noticeable diversity boost
- High penalty (0.9+): Strong bias against repeating the same words
Visualize frequency penalty as a progressive tax that grows each time a word is used, unlike repetition penalty, which applies the same penalty whether a word appeared once or many times.
Presence Penalty
Presence penalty reduces the probability of tokens that have appeared at all, regardless of frequency.
- Low penalty (0.1-0.3): Minimal effect
- Medium penalty (0.5-0.7): Balanced approach
- High penalty (0.9+): Strong push for new vocabulary
Visualize presence penalty as the model keeping a checklist of words it has used and trying to avoid reusing them.
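Frequency and presence penalties are usually applied together as simple additive adjustments to the logits; this sketch follows the formula documented for OpenAI-style APIs, with illustrative names:

```python
import numpy as np
from collections import Counter

def apply_freq_presence_penalties(logits: np.ndarray, previous_tokens: list[int],
                                  frequency_penalty: float,
                                  presence_penalty: float) -> np.ndarray:
    """Additive penalties: frequency scales with how often a token appeared, presence is flat."""
    logits = logits.copy()
    for token_id, count in Counter(previous_tokens).items():
        logits[token_id] -= count * frequency_penalty   # grows with each repeat
        logits[token_id] -= presence_penalty            # applied once if the token appeared at all
    return logits
```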
Min Length
Min length forces the model to generate at least a specified number of tokens, typically by blocking the end-of-sequence token until that count is reached.
- Low min length (1-5): Minimal effect
- Medium min length (10-20): Encourages more developed outputs
- High min length (50+): Forces more detailed, longer outputs
Visualize min length as a minimum token count the model must reach before it is allowed to stop.
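A sketch of that mechanism (the token id and function name are placeholders):

```python
import numpy as np

def enforce_min_length(logits: np.ndarray, tokens_generated: int,
                       min_length: int, eos_token_id: int) -> np.ndarray:
    """Block the end-of-sequence token until at least min_length tokens have been produced."""
    logits = logits.copy()
    if tokens_generated < min_length:
        logits[eos_token_id] = -np.inf           # the model cannot stop yet
    return logits
```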
Smooth Sampling Parameters
Smoothing Factor
Smoothing factor controls how strongly the quadratic ("smooth sampling") transform reshapes the probability distribution; higher values sharpen the distribution around the top tokens rather than flattening it.
- Low factor (0.1-0.3): Mild effect, the distribution stays close to the original
- Medium factor (0.5-0.7): Noticeably sharper distribution
- High factor (0.9+): Strongly favors the top tokens, approaching greedy decoding
Visualize smoothing factor as pulling tokens down in proportion to how far they already sit below the top choice: the peak is barely touched while the tail gets squashed.
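A very rough sketch of one possible quadratic smoothing transform; real backends differ in the exact equation and in how the curve parameter enters, so treat this only as an intuition aid:

```python
import numpy as np

def quadratic_smoothing(logits: np.ndarray, smoothing_factor: float) -> np.ndarray:
    """Pull each logit down by its squared distance from the top logit (still logits, softmax later)."""
    max_logit = logits.max()
    return logits - smoothing_factor * (logits - max_logit) ** 2
```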
Smoothing Curve
Smoothing curve determines the shape of the smoothing function.
- Low curve (0.1-0.3): Less aggressive smoothing of high-probability tokens
- Medium curve (0.5): Balanced approach
- High curve (0.7-0.9): More aggressive smoothing of high-probability tokens
Visualize smoothing curve as controlling exactly how the peaks and valleys of the probability distribution are reshaped.
XTC (Exclude Top Choices) Parameters
Threshold
XTC threshold sets the probability a token must exceed to count as a "top choice" and become eligible for exclusion.
- Low threshold (around 0.1): Many tokens clear the bar, so more of the top choices can be removed
- Medium threshold (around 0.3): Only fairly confident tokens are eligible for exclusion
- High threshold (0.5+): At most one token can clear the bar, so exclusion effectively never triggers
Visualize threshold as the bar a word must clear to count as an "obvious" choice; when at least two words clear it, all but the least likely of them can be set aside.
Probability
XTC probability controls how often the top choices are excluded.
- Low probability (0.1-0.3): Rarely excludes top choices
- Medium probability (0.5): Excludes top choices about half the time
- High probability (0.7-0.9): Frequently excludes top choices
Visualize probability as determining how often the model is forced to think "outside the box" rather than selecting the most obvious words.
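A sketch of XTC as I understand the original implementation (parameter names and the exact trigger condition may differ slightly between backends):

```python
import numpy as np

def xtc_filter(probs: np.ndarray, threshold: float, probability: float,
               rng: np.random.Generator) -> np.ndarray:
    """Occasionally remove all 'top choices' except the least likely of them."""
    top_choices = np.where(probs >= threshold)[0]
    if len(top_choices) < 2 or rng.random() >= probability:
        return probs                              # fewer than two top choices, or XTC not triggered
    survivor = top_choices[np.argmin(probs[top_choices])]  # the weakest top choice stays
    filtered = probs.copy()
    filtered[top_choices] = 0.0
    filtered[survivor] = probs[survivor]
    return filtered / filtered.sum()
```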
DRY (Don't Repeat Yourself) Parameters
Multiplier
DRY multiplier controls how strongly repetition is penalized.
- Low multiplier (0.1-0.5): Gentle repetition control; 0 disables DRY
- Medium multiplier (around 0.8, a commonly recommended default): Balanced approach
- High multiplier (1.5+): Aggressive repetition prevention
Visualize multiplier as controlling the strength of the "tax" on repeated words.
Base
DRY base controls how quickly the penalty grows as a repeated sequence gets longer; the penalty scales roughly as multiplier × base^(sequence length − allowed length).
- Low base (1.1-1.5): Penalties grow slowly with sequence length
- Medium base (around 1.75, a commonly recommended default): Balanced growth
- High base (2.5+): Penalties grow very quickly
Visualize base as determining how steeply the "tax rate" climbs as the model keeps extending a phrase it has already written.
Allowed Length
DRY allowed length determines how many tokens can repeat without penalty.
- Low allowed length (1-2): Almost all repetition is penalized
- Medium allowed length (3-5): Common phrases allowed
- High allowed length (10+): Longer repeated phrases allowed
Visualize allowed length as determining how many words in a row can be repeated before the penalty kicks in.
Penalty Range
DRY penalty range controls how far back the repetition check goes.
- Low range (128-256 tokens): Only checks recent text
- Medium range (512-1024 tokens): Balanced approach
- High range (2048+ tokens): Checks far back in the text
Visualize penalty range as how much of the previously generated text the model considers when looking for repetitions.
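Putting the DRY parameters together: the sketch below only computes the penalty magnitude for a token that would extend a repeated sequence of a given length; a real implementation also has to find that longest repeated suffix within the penalty range, which is omitted here, and the defaults shown are commonly cited values rather than guarantees for any particular backend.

```python
def dry_penalty(match_length: int, multiplier: float = 0.8,
                base: float = 1.75, allowed_length: int = 2) -> float:
    """Penalty applied to a token that would extend a repeated sequence of match_length tokens."""
    if match_length < allowed_length:
        return 0.0                                 # short matches are never penalized
    return multiplier * base ** (match_length - allowed_length)

for length in (2, 4, 8):
    print(length, round(dry_penalty(length), 2))   # the penalty grows exponentially with length
```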
Mirostat Parameters
Mode
Mirostat mode selects between different versions of the Mirostat algorithm.
- Mode 0: Disabled
- Mode 1: Mirostat (original)
- Mode 2: Mirostat 2.0 (improved version)
Visualize mode as selecting which version of the adaptive sampling algorithm to use.
Tau
Tau sets the target surprise level for generated text.
- Low tau (1-3): More predictable text
- Medium tau (5-7): Balanced approach
- High tau (9+): More surprising text
Visualize tau as setting how surprising or creative the model should be, similar to temperature but with dynamic adjustment.
Eta
Eta controls how quickly Mirostat adjusts to reach the target surprise level.
- Low eta (0.1-0.3): Slow adjustment
- Medium eta (0.4-0.6): Moderate adjustment speed
- High eta (0.7+): Fast adjustment
Visualize eta as determining how quickly the model corrects itself if it's being too predictable or too random.
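A rough sketch of a single Mirostat 2.0 step showing how tau and eta interact; this is simplified, and real implementations differ in details such as exactly where the observed surprise is measured:

```python
import numpy as np

def mirostat_v2_step(probs: np.ndarray, mu: float, tau: float, eta: float,
                     rng: np.random.Generator) -> tuple[int, float]:
    """Truncate by surprise, sample a token, then nudge mu toward the target tau."""
    surprise = -np.log2(probs + 1e-12)             # surprise (in bits) of each candidate
    allowed = surprise <= mu                       # drop tokens more surprising than mu
    if not allowed.any():
        allowed[np.argmax(probs)] = True           # always keep at least the top token
    filtered = np.where(allowed, probs, 0.0)
    filtered = filtered / filtered.sum()
    token = int(rng.choice(len(probs), p=filtered))
    observed = -np.log2(probs[token] + 1e-12)      # how surprising the chosen token was
    mu -= eta * (observed - tau)                   # steer future steps toward surprise tau
    return token, mu

# mu is typically initialized to 2 * tau and carried from one generation step to the next.
```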
Practical Tips
- For factual, information-heavy text: Use lower temperature, higher top-p, and moderate repetition penalties.
- For creative writing: Use higher temperature, moderate top-p, and lower repetition penalties.
- For balanced outputs: Start with temperature 0.7, top-p 0.9, and repetition penalty 1.1.
- For troubleshooting repetition issues: Adjust repetition penalty first, then try DRY parameters if available.
- For controlling output diversity: Temperature and top-p are your primary tools, with top-k as a backup.
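As a starting point, the presets above might look like the following in an API request; the values are only illustrative and the parameter names differ between backends (for example "repetition_penalty" vs "rep_pen"), so map them to whatever your API expects:

```python
# Illustrative starting points only; tune from here for your model and task.
balanced = {"temperature": 0.7, "top_p": 0.9, "repetition_penalty": 1.1}
factual = {"temperature": 0.3, "top_p": 0.9, "repetition_penalty": 1.15}
creative = {"temperature": 1.0, "top_p": 0.95, "repetition_penalty": 1.05}
```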