Understanding Sampler Load Order

The load order of LLM samplers can significantly affect how your text generation behaves. Each sampler transforms the probability distribution in its own way, so the sequence in which they run matters. Let me explain the logical principles behind an efficient sampler load order and recommend an approach.

The Fundamental Principle of Sampler Ordering

When arranging samplers, we need to think about the transformation each one performs on the probability distribution. A good mental model is to visualize samplers as filters that progressively refine the raw probability distribution until we arrive at the final token selection.

The most logical order follows this general principle: apply broader modifications to the distribution first, then apply more targeted modifications, and finally apply your selection mechanism.

Here's a recommended load order, with an explanation of why each sampler belongs where it does; a minimal code sketch of the full chain follows the list:

  1. Temperature (first)
    Temperature fundamentally rescales the entire logit distribution and should happen first because it alters the basic "landscape" that all other samplers will work with. It determines the overall entropy level.
  2. Frequency/Presence Penalties
    These penalties adjust token probabilities based on their prior usage in the generated text. They should be applied early because they modify the base distribution according to global text properties.
  3. Repetition Penalties (standard, DRY)
    Apply repetition penalties after the global adjustments. These penalties target specific tokens that have appeared in previous text, making more local modifications to the distribution.
  4. Smoothing (Smoothing Factor/Curve)
    Smoothing helps even out irregularities in the distribution after the previous modifications. This creates a more balanced distribution before applying cutoff-based methods.
  5. Min-p
    Apply Min-p to establish your baseline threshold, eliminating unlikely tokens whose probability falls below a set fraction of the most likely token's probability.
  6. Distribution-shaping methods (TFS, Top-nsigma, Typical-p)
    These methods reshape the distribution based on statistical properties and should be applied after the basic thresholding but before final candidate selection.
  7. Top-p (Nucleus Sampling)
    Top-p provides a dynamic cutoff that adapts to the distribution's shape, making it an excellent intermediate filter.
  8. Top-k
    Top-k applies a hard limit on the number of candidates, further constraining the selection pool after Top-p has created a more adaptively filtered distribution.
  9. Top-a
    Top-a filters out tokens whose probability falls below a threshold derived from the square of the highest token probability, acting as a refined filter near the end of the chain.
  10. XTC (Exclude Top Choices)
    XTC should be applied near the end because it works on what would otherwise be the final selection candidates, potentially removing the highest probability tokens.
  11. Mirostat (last)
    Mirostat is an adaptive sampling method that dynamically adjusts sampling based on previous outputs. Because it's adaptive and makes final decisions about entropy, it should be the last modification before token selection.
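
To make the chain concrete, here is a minimal Python sketch of the order above, working on raw model logits and a list of previously generated token ids. The function names and signatures are illustrative assumptions rather than any particular library's API, and the smoothing, TFS, typical-p, top-a, XTC, and Mirostat stages are omitted to keep it short.

# A minimal sketch of the chain described above, assuming raw logits (a 1-D
# numpy array over the vocabulary) and a list of previously generated token
# ids. Names are illustrative, not a specific library's API; smoothing, TFS,
# typical-p, top-a, XTC, and Mirostat are omitted for brevity.
import numpy as np

def apply_temperature(logits, temperature=1.0):
    # Step 1: rescale the whole logit landscape; higher values flatten it.
    return logits / temperature

def apply_presence_frequency(logits, prev_tokens, presence=0.0, frequency=0.0):
    # Step 2: penalize tokens by whether / how often they already appeared.
    counts = np.bincount(np.asarray(prev_tokens, dtype=np.int64),
                         minlength=logits.shape[0])
    return logits - presence * (counts > 0) - frequency * counts

def apply_repetition_penalty(logits, prev_tokens, penalty=1.0):
    # Step 3: classic repetition penalty on tokens seen in the context.
    for t in set(prev_tokens):
        logits[t] = logits[t] / penalty if logits[t] > 0 else logits[t] * penalty
    return logits

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def apply_min_p(probs, min_p=0.0):
    # Step 5: drop tokens below min_p times the top token's probability.
    probs = np.where(probs >= min_p * probs.max(), probs, 0.0)
    return probs / probs.sum()

def apply_top_p(probs, top_p=1.0):
    # Step 7: keep the smallest set whose cumulative probability reaches top_p.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    mask = np.zeros_like(probs)
    mask[order[:cutoff]] = 1.0
    probs = probs * mask
    return probs / probs.sum()

def apply_top_k(probs, top_k=0):
    # Step 8: cap the number of candidates (ties at the cutoff are kept).
    if top_k <= 0 or top_k >= probs.size:
        return probs
    kth = np.sort(probs)[::-1][top_k - 1]
    probs = np.where(probs >= kth, probs, 0.0)
    return probs / probs.sum()

def sample_token(logits, prev_tokens, temperature=1.0, presence=0.0,
                 frequency=0.0, rep_penalty=1.0, min_p=0.0, top_p=1.0,
                 top_k=0, rng=None):
    # Apply the stages in the recommended order, then sample from what is left.
    rng = rng or np.random.default_rng()
    logits = apply_temperature(np.array(logits, dtype=np.float64), temperature)
    logits = apply_presence_frequency(logits, prev_tokens, presence, frequency)
    logits = apply_repetition_penalty(logits, prev_tokens, rep_penalty)
    probs = softmax(logits)
    probs = apply_min_p(probs, min_p)
    probs = apply_top_p(probs, top_p)
    probs = apply_top_k(probs, top_k)
    return int(rng.choice(probs.size, p=probs))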

Practical Implementation Examples

Here's how this might work in a few practical scenarios:

For Creative Writing

Temperature (0.8) → Presence Penalty (0.3) → Repetition Penalty (1.1) → 
Smoothing (0.7) → Top-p (0.9) → Top-k (40) → XTC (occasional, low threshold)

This sequence allows for high creativity (temperature) while the penalties prevent repetition, and Top-p and Top-k keep the output coherent.
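
In terms of the sketch shown earlier (illustrative code, not a specific backend's API), this recipe corresponds roughly to the call below; the smoothing and XTC stages are not implemented in that sketch, so they are left out.

# Creative-writing settings from the chain above (smoothing and XTC are not
# part of the sketch, so they are omitted from this call).
next_token = sample_token(logits, prev_tokens,
                          temperature=0.8, presence=0.3,
                          rep_penalty=1.1, top_p=0.9, top_k=40)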

For Factual Text Generation

Temperature (0.3) → Frequency Penalty (0.2) → DRY Repetition (1.5) → 
Min-p (0.05) → Typical-p (0.8) → Top-k (15)

This lower-temperature approach with typical-p helps maintain factual consistency while still allowing for natural-sounding text.
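
Mapped onto the same sketch (which implements neither DRY nor Typical-p, so those stages are dropped), the call would look roughly like this:

# Factual-generation settings; DRY and Typical-p are outside the sketch.
next_token = sample_token(logits, prev_tokens,
                          temperature=0.3, frequency=0.2,
                          min_p=0.05, top_k=15)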

For Code Generation

Temperature (0.2) → Repetition Penalty (1.15) → Top-p (0.95) → 
Top-k (50) → Mirostat (Mode 2, Tau 5)

Code generation benefits from higher determinism, with Mirostat adding some adaptivity to adjust to the complexity of different code sections.
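
Mirostat is easiest to try through an existing inference backend rather than the minimal sketch above. As a hedged illustration, a llama.cpp-style /completion request might carry these settings; the parameter names, endpoint, and response fields here are assumptions about that style of server, so check the documentation of whatever backend you actually run.

# Hypothetical request to a llama.cpp-style server; parameter names, the
# endpoint, and the response field are assumptions, not a verified API.
import requests

payload = {
    "prompt": "Write a Python function that merges two sorted lists.",
    "n_predict": 256,
    "temperature": 0.2,
    "repeat_penalty": 1.15,
    "top_p": 0.95,
    "top_k": 50,
    "mirostat": 2,        # Mirostat Mode 2
    "mirostat_tau": 5.0,  # target entropy (Tau)
}
response = requests.post("http://localhost:8080/completion", json=payload)
print(response.json()["content"])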

Understanding Sampler Interactions

It's important to understand that samplers don't just operate in isolation—they interact with each other in complex ways:

  1. Temperature and Top-p/Top-k: Higher temperatures make Top-p/Top-k selection more diverse, while lower temperatures concentrate probability mass on fewer tokens even before Top-p/Top-k are applied (see the short demonstration after this list).
  2. Repetition penalties and Frequency penalties: These can work together but might over-penalize if both are set high. The repetition penalty typically applies a flat penalty to any token that has already appeared in the context, while the frequency penalty scales with how many times each token has been used.
  3. Min-p and Top-p: Min-p establishes a floor for token probabilities, while Top-p creates a ceiling for the cumulative distribution. Using both provides more control over both ends of the distribution.
  4. Mirostat and Temperature: If using both, keep in mind that they both control randomness. Mirostat will try to achieve its target entropy level regardless of temperature, so they may work against each other. Generally, if using Mirostat, keep temperature neutral (around 1.0) to let Mirostat do its work.
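
A quick way to see the first interaction is to count how many tokens survive a fixed Top-p cutoff at different temperatures. The snippet below uses random stand-in logits, so only the trend matters, not the exact counts.

# Interaction 1: the same top_p keeps more candidates at higher temperature,
# because the softmax distribution is flatter after milder logit scaling.
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=32000)  # stand-in logits over a 32k-token vocabulary

for temperature in (0.5, 1.0, 1.5):
    scaled = logits / temperature                  # temperature applied first
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    sorted_probs = np.sort(probs)[::-1]
    nucleus_size = np.searchsorted(np.cumsum(sorted_probs), 0.9) + 1
    print(f"temperature {temperature}: top-p 0.9 keeps {nucleus_size} tokens")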

The optimal load order isn't just about theory but also about what produces the best results for your specific use case. Some experimentation will likely be necessary, but starting with this logical progression gives you a solid foundation based on how these samplers transform probability distributions.
