This is a simplified version of this article: https://rentry.org/zimage-base

Z-Image Base LoRA Training Guide

This guide summarizes community findings for training LoRAs on the Z-Image Base (6B) model. While the "Turbo" version is faster for generating images, the Base model is the superior choice for training because it hasn't been distilled, allowing it to learn new concepts more effectively.

Base vs. Turbo

  • The Base Model: This is the "full" model. It supports standard CFG (Classifier-Free Guidance) and Negative Prompts. Training on this yields more flexible LoRAs.
  • The Turbo Model: Optimized for 8-step generation. It is difficult to train on directly without specialized "distillation" techniques.
  • Cross-Compatibility: LoRAs trained on the Base model will work on Turbo, though you may need to increase the LoRA strength (e.g., to 1.2 or 1.5) to see the same results.

Recommended Training Settings

If you are using tools like AI-Toolkit (Ostris) or OneTrainer, these are the current "best practice" parameters (a minimal code sketch of how they map onto a generic training script follows the table):

Parameter     | Suggested Value | Note
Learning Rate | 1e-4 (0.0001)   | Lower to 5e-5 if the model overfits too quickly.
Optimizer     | AdamW8Bit       | Prodigy is also a great "set and forget" alternative.
Rank / Alpha  | 16 or 32        | Use 64 only for complex subjects like specific faces.
Resolution    | 1024 x 1024     | The model is native to 1024px; use bucketing for other ratios.
Batch Size    | 1 to 4          | Depends on your VRAM availability.
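
AI-Toolkit and OneTrainer each use their own config files, but as a rough illustration, here is a minimal sketch of how the table's values map onto a generic peft + bitsandbytes setup. The DummyBlock model and the to_q/to_k/to_v module names are placeholders for illustration, not the real Z-Image module layout.

```python
import torch
from torch import nn
from peft import LoraConfig, get_peft_model
from bitsandbytes.optim import AdamW8bit

# Stand-in for the Z-Image Base transformer; your trainer loads the real model.
class DummyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.to_q = nn.Linear(64, 64)
        self.to_k = nn.Linear(64, 64)
        self.to_v = nn.Linear(64, 64)

model = DummyBlock()

lora_config = LoraConfig(
    r=16,                                     # rank from the table (32/64 for complex subjects)
    lora_alpha=16,                            # alpha, commonly set equal to rank
    target_modules=["to_q", "to_k", "to_v"],  # assumed attention projection names
)
model = get_peft_model(model, lora_config)

# Only the injected LoRA parameters are trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = AdamW8bit(trainable, lr=1e-4)     # drop to 5e-5 if the LoRA overfits
```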

Dataset Preparation

  • Image Count: 20–50 high-quality images are usually sufficient for a character or style.
  • Captions: Use descriptive tags or natural language. Including a unique "trigger word" (like ohwx) helps the model isolate your concept; a small caption-prep sketch follows this list.
  • Diversity: Ensure your dataset has different backgrounds and lighting so the model doesn't "bake" those into your subject.
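
As a rough illustration of the captioning tip, here is a minimal sketch that prepends a trigger word to sidecar .txt caption files. The dataset/ folder layout and the ohwx trigger are assumptions taken from the bullet above; adjust both to your setup.

```python
from pathlib import Path

TRIGGER = "ohwx"               # unique trigger word from the captioning tip
DATASET_DIR = Path("dataset")  # assumed layout: image.png + image.txt caption pairs

for caption_file in sorted(DATASET_DIR.glob("*.txt")):
    text = caption_file.read_text(encoding="utf-8").strip()
    if not text.startswith(TRIGGER):
        # Prepend the trigger word so every caption ties the images to one concept.
        caption_file.write_text(f"{TRIGGER}, {text}", encoding="utf-8")
```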

Fedora & Hardware Considerations

Since you are running Fedora, you are likely using ComfyUI or local CLI training scripts:

  • VRAM Requirements: You need at least 16GB of VRAM for 1024px training. If you have 24GB, you can train without using aggressive memory-saving optimizations.
  • Optimization: Use bf16 (Brain Floating Point) if your hardware supports it (RTX 30-series and up) to improve training stability and speed.
  • Dependencies: Ensure your python-venv is properly set up with the latest torch and CUDA libraries compatible with your Fedora kernel. A quick capability check is sketched after this list.
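
Before starting a run, a small PyTorch check can confirm the VRAM and bf16 points above. This is a sketch using standard torch.cuda calls and assumes a single-GPU setup.

```python
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible; check your driver/torch install.")

props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / (1024 ** 3)

print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
print(f"bf16 supported: {torch.cuda.is_bf16_supported()}")

# Guidance from this guide: 16 GB is the floor for 1024px training,
# and 24 GB lets you skip aggressive memory-saving options.
if vram_gb < 16:
    print("Under 16 GB: expect to rely on memory-saving optimizations or lower resolution.")
```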

Testing Your LoRA

Always test your LoRA on the Base model first to verify the likeness:

  • Steps: 30–50 steps.
  • CFG Scale: 3.5 – 5.0.
  • Sampler: Euler or DPM++ SDE.

If the LoRA works well on Base but looks "weak" on Turbo, simply turn up the LoRA weight in your Turbo workflow.
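
Below is a hedged sketch of such a test, assuming a diffusers-compatible pipeline for Z-Image Base exists with the usual text-to-image call signature and LoRA loading; the checkpoint path, LoRA path, and prompt are placeholders.

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder paths: point these at your local Z-Image Base checkpoint and trained LoRA.
pipe = DiffusionPipeline.from_pretrained(
    "path/to/z-image-base",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("path/to/my_lora.safetensors")  # assumes the pipeline supports LoRA loading

image = pipe(
    prompt="ohwx person standing in a park",  # include your trigger word
    negative_prompt="blurry, low quality",    # Base supports CFG + negative prompts
    num_inference_steps=40,                   # 30-50 steps suggested above
    guidance_scale=4.0,                       # CFG 3.5-5.0 range suggested above
).images[0]
image.save("lora_test.png")

# If the LoRA looks weak on Turbo, raise its weight in that workflow
# (e.g., a strength of 1.2-1.5 instead of 1.0).
```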
