How ComfyUI-GGUF Loads and Uses Q8 T5 Text Encoders
Overview
This document explains how ComfyUI-GGUF implements GGUF text encoder loading and how our test accurately simulates it.
ComfyUI-GGUF Implementation
1. Loading Phase (loader.py)
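ComfyUI-GGUF keeps the tensors in their quantized form at load time and defers dequantization until the weights are actually used. A minimal sketch of that loading idea, using the `gguf` Python package from the llama.cpp project rather than ComfyUI-GGUF's actual `loader.py` (the function and file names here are illustrative):

```python
# Sketch: enumerate the still-quantized tensors in a GGUF text-encoder file.
# Assumes the `gguf` package from llama.cpp; not ComfyUI-GGUF's real loader code.
from gguf import GGUFReader

def list_gguf_tensors(path: str):
    reader = GGUFReader(path)
    for tensor in reader.tensors:
        # tensor.data holds the raw quantized bytes; dequantization is deferred
        # until the weight is needed in a forward pass.
        print(tensor.name, tensor.tensor_type, tuple(tensor.shape), tensor.data.nbytes)

if __name__ == "__main__":
    list_gguf_tensors("t5xxl-q8_0.gguf")  # hypothetical file name
```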
2. Forward Pass (ops.py)
When the T5 model processes text, the quantized weights are dequantized on the fly: each custom linear op converts its Q8_0 weight back to a floating-point tensor just before the matrix multiplication, so only the compact quantized data lives in VRAM between calls.
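A minimal sketch of that dequantize-on-forward pattern (this is not ComfyUI-GGUF's actual `ops.py`; the class name and buffer layout are illustrative, assuming Q8_0 weights stored as int8 values plus one FP16 scale per 32-value block):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DequantLinear(nn.Module):
    """Illustrative linear layer that stores a Q8_0-style weight
    (int8 values + one FP16 scale per 32-value block) and rebuilds
    the floating-point weight on every forward pass."""

    def __init__(self, q_weight, scales, bias=None):
        super().__init__()
        self.register_buffer("q_weight", q_weight)  # int8, shape (out, in)
        self.register_buffer("scales", scales)      # float16, shape (out, in // 32)
        self.bias = bias

    def dequantize(self, dtype=torch.float16):
        out_f, in_f = self.q_weight.shape
        blocks = self.q_weight.view(out_f, in_f // 32, 32).to(dtype)
        w = blocks * self.scales.unsqueeze(-1).to(dtype)  # int8 * scale
        return w.view(out_f, in_f)

    def forward(self, x):
        # The FP16 weight exists only transiently; VRAM holds the Q8 data.
        return F.linear(x, self.dequantize(x.dtype), self.bias)
```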
3. Dequantization (dequant.py)
For Q8_0 specifically, each block packs one FP16 scale followed by 32 int8 values (34 bytes per block), and dequantization is simply `int8 * scale`.
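A hedged sketch of that block layout in NumPy, assuming the standard GGUF Q8_0 encoding of one FP16 scale followed by 32 int8 values per 34-byte block (not ComfyUI-GGUF's actual `dequant.py`):

```python
import numpy as np

BLOCK = 32        # values per Q8_0 block
BLOCK_BYTES = 34  # 2-byte FP16 scale + 32 int8 values

def dequantize_q8_0(raw: np.ndarray) -> np.ndarray:
    """Turn a flat uint8 buffer of Q8_0 blocks back into float32 values."""
    blocks = raw.reshape(-1, BLOCK_BYTES)
    scales = blocks[:, :2].copy().view(np.float16).astype(np.float32)  # (n_blocks, 1)
    quants = blocks[:, 2:].view(np.int8).astype(np.float32)            # (n_blocks, 32)
    return (quants * scales).reshape(-1)                               # int8 * scale
```

ComfyUI-GGUF performs the equivalent operation in PyTorch on the GPU, so the full-precision weight only exists while the layer is computing.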
The Complete Flow in a Flux/HunyuanVideo Pipeline
Why Q8 Has Quality Loss
The Quantization Process
- Original FP16 weight: `0.023456789` (full precision)
- Find the block scale: `scale = max(abs(block_weights)) / 127`
- Quantize to int8: `round(0.023456789 / scale)`, giving an integer in the range [-127, 127]
- Precision loss: ~0.78% per weight (127 levels instead of a continuous range)
- Store: the int8 value plus one FP16 scale per 32-value block
- Dequantize: `int8 * scale`, giving `0.023437500` (different from the original!)
- Error: `0.023456789 - 0.023437500 = 0.000019289`
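A small PyTorch sketch of this quantize/dequantize round trip for a single 32-value block (the block here is random, so the printed error will differ from the worked numbers above):

```python
import torch

def q8_0_roundtrip(block: torch.Tensor) -> torch.Tensor:
    """Quantize one 32-value block to Q8_0 and immediately dequantize it."""
    scale = (block.abs().max() / 127.0).half().float()   # FP16 block scale
    q = torch.round(block / scale).clamp(-127, 127).to(torch.int8)
    return q.float() * scale                             # dequantize: int8 * scale

block = torch.randn(32) * 0.03          # one 32-weight block
recovered = q8_0_roundtrip(block)
print((block - recovered).abs().max())  # worst-case rounding error in this block
```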
Accumulated Errors
- T5-XXL has ~4.7 billion parameters
- Each quantized weight has small rounding error
- Errors accumulate through 24 transformer layers
- Final embeddings have measurable quality degradation
Our Test Methodology
What We Do
Why This Is Accurate
- Same quantization formula: We use the exact Q8_0 formula from ComfyUI-GGUF
- Same block size: 32 values per block
- Same scale type: FP16 (torch.float16)
- Same dequantization: `int8 * scale`
- Same final weights: the dequantized weights are identical to what ComfyUI-GGUF uses (see the sketch below)
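A minimal sketch of what such a simulation can look like (an assumed workflow, not the actual test script): round-trip every weight through Q8_0 before loading it, so the standard FP16 T5 runs with exactly the weights a GGUF pipeline would dequantize.

```python
import torch

def simulate_q8_0(t: torch.Tensor, block: int = 32) -> torch.Tensor:
    """Round-trip a tensor through Q8_0 (int8 values + one FP16 scale per block)."""
    flat = t.float().flatten()
    pad = (-flat.numel()) % block
    if pad:  # pad the tail so the tensor splits evenly into 32-value blocks
        flat = torch.cat([flat, flat.new_zeros(pad)])
    blocks = flat.view(-1, block)
    scales = (blocks.abs().amax(dim=-1, keepdim=True) / 127.0).half().float()
    scales = torch.where(scales == 0, torch.ones_like(scales), scales)
    q = torch.round(blocks / scales).clamp(-127, 127)
    deq = (q * scales).flatten()[: t.numel()]
    return deq.view(t.shape).to(t.dtype)

# Hypothetical usage: quantize-then-dequantize every 2-D weight in a T5 state dict.
# state_dict = {k: simulate_q8_0(v) if v.ndim == 2 else v for k, v in state_dict.items()}
```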
What We Measure
- Cosine similarity: How similar are embeddings?
- MSE/RMSE: Mean squared error and its square root
- MAE: Mean absolute error
- SNR: Signal-to-noise ratio
- Relative error: Error as % of magnitude
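A sketch of how these metrics can be computed from two embedding tensors of the same shape (the function name and the exact relative-error definition here are illustrative choices):

```python
import torch
import torch.nn.functional as F

def embedding_metrics(ref: torch.Tensor, test: torch.Tensor) -> dict:
    """Compare a baseline embedding against a quantized-encoder embedding."""
    ref32, test32 = ref.float(), test.float()
    err = test32 - ref32
    mse = err.pow(2).mean()
    return {
        "cosine_similarity": F.cosine_similarity(ref32.flatten(), test32.flatten(), dim=0).item(),
        "mse": mse.item(),
        "rmse": mse.sqrt().item(),
        "mae": err.abs().mean().item(),
        "snr_db": (10 * torch.log10(ref32.pow(2).mean() / mse)).item(),
        "relative_error": (err.norm() / ref32.norm()).item(),
    }
```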
Results Summary
FP16 Fast (TF32 accumulation)
- Quality: 99.99999% similar to baseline (essentially identical)
- Speed: 16% faster than baseline
- VRAM: Same as baseline (10.76 GB)
- Artifacts: None - still FP16 precision
Q8 GGUF
- Quality: 99.96% similar to baseline (0.035% worse than FP16 Fast)
- Speed: Slightly faster than FP16 Fast (~1%)
- VRAM: 50% less (4.4 GB vs 10.76 GB)
- Artifacts: Permanent quantization rounding errors
Key Metrics
| Metric | FP16 Fast | Q8 GGUF | Winner |
|---|---|---|---|
| Cosine Similarity | 0.99999946 | 0.99964816 | FP16 Fast |
| MSE | 0.000000e+00 | 1.490116e-06 | FP16 Fast |
| MAE | 3.43e-05 | 8.27e-04 | FP16 Fast (24x better) |
| SNR | ∞ dB | 31.48 dB | FP16 Fast |
| VRAM | 10.76 GB | 4.44 GB | Q8 GGUF |
Conclusion
FP16 Fast Should Always Be Enabled
FP16 + Fast Accumulation (TF32) is:
- ✅ Free performance boost (16% faster)
- ✅ Zero quality loss (99.99999% similar)
- ✅ No artifacts
- ✅ Native hardware support
- ✅ Just one line of code: `torch.set_float32_matmul_precision('high')`
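For example, set it once at startup, before the text encoder runs; the second flag is an optional, related PyTorch setting for FP16 matmuls, not something the measurements above depend on:

```python
import torch

# Allow TF32 math in FP32 matmuls (the "fast accumulation" setting above).
torch.set_float32_matmul_precision("high")

# Optional related switch: allow reduced-precision reductions in FP16 matmuls.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = True
```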
Q8 GGUF Is Only Worth It For VRAM-Limited Systems
Q8 GGUF should only be used if:
- ❌ You literally cannot fit FP16 in VRAM
- ❌ You're willing to accept 0.035% quality loss
- ❌ You're okay with 24x more error than FP16 Fast
- ❌ You don't mind slightly worse prompt adherence
For Modern GPUs (RTX 4090, H100, etc.)
Always use FP16 + Fast Accumulation. The quality is better and you have enough VRAM.
References
- ComfyUI-GGUF Repository: https://github.com/city96/ComfyUI-GGUF
- GGUF Specification: https://github.com/ggerganov/llama.cpp
- Our Test Script: `comprehensive_t5_evaluation.py`