AFTR: Another Fine-Tuning Rentry
My notes on the current state of finetuning Stable Diffusion (particularly SDXL).
Written by Horace Wimp. Updated 2023-10-31.
Important resources:
Kohya's sd-scripts
https://github.com/kohya-ss/sd-scripts
This repository contains non-graphical training, generation, and utility scripts for Stable Diffusion. Kohya's sd-scripts has been one of the gold-standard implementations for finetuning SD. Most new training methods and features are eventually added to this repo, so it often acts as a "core" for various training GUIs. The most officially supported GUI for sd-scripts is https://github.com/bmaltais/kohya_ss.
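For orientation, a bare-bones SDXL LoRA run with sd-scripts looks roughly like the sketch below. All paths, dims, and rates are placeholder values, not recommendations, and the flag list here is only a subset; check the repo's docs for the full set.

```
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path "sd_xl_base_1.0.safetensors" \
  --train_data_dir "dataset/" \
  --output_dir "output/" --output_name "my_lora" \
  --resolution "1024,1024" \
  --network_module networks.lora --network_dim 16 --network_alpha 8 \
  --train_batch_size 1 --max_train_epochs 10 \
  --learning_rate 1e-4 --optimizer_type AdamW8bit --lr_scheduler cosine \
  --mixed_precision bf16 --cache_latents \
  --save_model_as safetensors
```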
The State of SDXL Fine-Tuning:
Okay, let's break this down. There haven't been many decent resources for learning how to fine-tune SDXL, and even fewer of them reflect the current state of the technology. Most guides (if you can call them that) are vague collections of personal anecdotes and scattered notes. I want to keep things short and sweet in this rentry, so I'm just gonna jump straight in with the assumption that the reader understands the basics of Stable Diffusion LoRA training.
Name | Acronym | Notes | Caveats |
---|---|---|---|
LoRA (Low-Rank Adaptation) | LoRA | The most widely-used finetuning technique. Suitable for general use. | Not parameter-efficient; produces unnecessarily large SDXL files at higher dims. |
LoRA for Convolution Network | LoCon | An extension of the original LoRA concept to include convolutional layers. Made by KohakuBlueleaf, who later expanded the concept into the LyCORIS project. Suitable for general use. | Same issues as LoRA above, often slower to train and hit-or-miss depending on what you are training. |
LoRA with Hadamard Product / Kronecker Product | LoHa/LoKr | Less common parameter-efficient LoRA variations. Suitable for simpler characters. Part of LyCORIS. | Unconventional variations on LoRA that are finicky to train depending on LR/dim; high VRAM requirements for training. |
Dynamic Search-free LoRA | DyLoRA | Trains across a range of ranks at once, so a good rank can be found without a manual search. Part of LyCORIS. | Very slow to train, high dampening. |
(IA)^3 | IA3 | Requires high learning rates (8-10x the usual). Extremely small output sizes (~1MB for an SDXL IA3). Part of LyCORIS. | Not very portable across models, has some flexibility issues, and REQUIRES GOOD PROMPTING! |
Orthogonal Finetuning | OFT | Extremely new method; preserves model knowledge and hypothetically will not burn up while training. No additional inference cost. sd-scripts has added support for OFT training in its dev branch. | Requires additional testing; may not express new concepts as well as other finetuning methods, though OFTs are a potential GODSEND for style training. Unsupported in most SD UIs (WIP support in the https://github.com/vladmandic/automatic dev branch). |
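For the LyCORIS variants above, the usual route in sd-scripts is the lycoris.kohya network module, with the algorithm picked via --network_args. A hedged sketch (the algo names are the ones the LyCORIS README documents; conv_dim/conv_alpha values are placeholders, and availability depends on your installed LyCORIS version):

```
--network_module lycoris.kohya \
--network_args "algo=loha" "conv_dim=4" "conv_alpha=1"
```

Swapping "algo=loha" for "algo=lokr", "algo=dylora", or "algo=ia3" selects the other variants from the table.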
LoRA Training Settings:
- Train batch size: (1-2)
- The number of images that get processed at once while training
- Increasing this leads to more VRAM usage, and a more "smoothed" out learning process
- If too many images get processed at once with a high LR, your model can become confused and fail to learn the concepts you want it to learn
- Set batch size to 1 if you have a dataset smaller than 100-200 images. If you have a few hundred images, batch size 2 should be alright. If your dataset is larger than ~1,000 images, you can try batch size 3-4 or higher.
- NOTE: This is just a rough guideline. If you have experimented and fine-tuned your training settings, you can turn this up as high as your VRAM allows. Theoretically, larger batch sizes should make the LoRA converge on a better solution over time (see the quick step-count arithmetic below).
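To make the batch-size tradeoff concrete, here is the arithmetic relating dataset size, batch size, and optimizer steps (all numbers are made-up example values, not recommendations):

```python
import math

num_images = 400   # images in the dataset
repeats = 2        # per-image repeats (kohya-style folder repeats)
batch_size = 2
epochs = 10

# Each optimizer step consumes one batch, so doubling the batch size
# halves the number of steps per epoch.
steps_per_epoch = math.ceil(num_images * repeats / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 400, 4000
```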
- Epochs
- Up to you, depends on the dataset and your training settings. Experiment to find out where your settings start overfitting the model.
- LoRAs have a nasty habit of burning up at high step counts; to avoid this, use smaller LRs and/or larger batch sizes
- Number of CPU threads per core: 2-4
- 2 is the default and seems to work fine for me.
- Mixed precision
- If you have a 30xx-40xx series GPU, you can set this to BF16 for a modest performance improvement (a quick support check is sketched below). This setting changes what datatype is used for mixed-precision calculations. If you have an older GPU, you can leave this on FP16.
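If you're not sure whether your card supports BF16, a one-liner with PyTorch will tell you (assumes a CUDA build of torch):

```python
import torch

# True on Ampere (30xx) and newer; if False, stick with FP16.
print(torch.cuda.is_available() and torch.cuda.is_bf16_supported())
```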
- Cache latents: Enable
- Enable Cache latents to speed up training slightly and reduce VRAM usage. May have a small quality impact?
- This setting saves all the latents (basically the model's representation of the training images) into your system RAM. If Cache latents to disk is enabled, then it will save them to your training dataset folder as .npz files (at least with kohya's sd-scripts)
- Cache latents to disk also lets you restart training faster at the same resolution. However, if you change your dataset at all between training runs, be sure to recalculate/delete the old cached latents. Most training UIs do this automatically (a quick way to inspect the cached files is sketched below).
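If you want to sanity-check what actually got cached, the .npz files are plain NumPy archives. A sketch ("0001.npz" is a hypothetical file name, and the stored keys depend on your sd-scripts version, so this just lists whatever is inside):

```python
import numpy as np

cache = np.load("dataset/0001.npz")
for key in cache.files:
    print(key, cache[key].shape, cache[key].dtype)
```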
- Optimizer
- The Optimizer is the core of the training process.
- It seeks to minimize the difference (loss) between what the model will be able to generate, and your training images.
- To do this, it takes small steps, where each step modifies our LoRA model very slightly.
- Each step moves towards the Optimizer's best guess of how to make our LoRA model better at generating what we want.
- Different Optimizers have different strategies on deciding how big of a step to take at any given moment, and some work better than others.
- Some Optimizers are Adaptive, meaning that they can pick their own learning rate: using some complex math, they can understand when to speed up if they are not learning enough. However, most of the classic optimizers are NOT adaptive.
- The current state-of-the-art adaptive optimizer is Prodigy, but it has a high VRAM requirement
- A lower-VRAM adaptive optimizer is Adafactor; however, I have not experimented with it enough to comment on its effectiveness. (A minimal sketch of the optimizer loop follows below.)
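A minimal sketch of that loop, using Prodigy as the adaptive optimizer (requires the prodigyopt package; the parameters and loss here are stand-ins, not a real LoRA or a real diffusion loss):

```python
import torch
from prodigyopt import Prodigy  # pip install prodigyopt

# Stand-in for the LoRA's trainable weights.
lora_params = [torch.nn.Parameter(torch.randn(16, 16))]

# Prodigy is adaptive: leave lr at 1.0 and it scales the step size itself.
optimizer = Prodigy(lora_params, lr=1.0)

for step in range(100):
    optimizer.zero_grad()
    loss = (lora_params[0] ** 2).mean()  # stand-in for the real diffusion loss
    loss.backward()                      # gradients: the best guess at how to improve
    optimizer.step()                     # nudge the weights slightly in that direction
```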
(These are my own notes on the different methods, purely from experimentation. I could be wrong, but these are my observations.)
LR Scheduler:
- Basically this is a part of the training process that modulates your learning rate over the course of training.
- Full Name: Learning Rate Scheduler
Constant:
- Just like it sounds, a constant LR multiplier of 1.0. This changes nothing about the learning rate while training.
Constant with Warmup:
- Same as constant, but with a warmup period. This means that the initial learning rate starts at 0, and warms up to 1.0 linearly over the course of the warmup period. The warmup can be set as a percentage of all steps, or as a constant number of steps from the start.
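As a sketch, the warmup multiplier is just a linear ramp (warmup_steps is a placeholder value):

```python
def warmup_multiplier(step, warmup_steps):
    # 0.0 at step 0, rising linearly to 1.0 at warmup_steps, constant after.
    return min(1.0, step / warmup_steps)

print(warmup_multiplier(50, 200))   # 0.25
print(warmup_multiplier(500, 200))  # 1.0
```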
Cosine:
- Starts the learning rate multiplier at 1.0, then gradually modulates it down along a smooth cosine curve to 0.
- Ideally, you want the LR to slow down as training goes on, because it allows the model to focus on capturing fine details. However, Cosine can be overkill: for the second half of the training process, you are learning at a multiplier of less than 0.5x.
- That gradual slowdown to an LR of 0 may waste some steps of your overall training with an extremely low LR.
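The curve itself is just half a cosine wave; a sketch of the multiplier most implementations use:

```python
import math

def cosine_multiplier(step, total_steps):
    # Starts at 1.0, decays smoothly to 0.0 by the end of training.
    return 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

# Halfway through training the multiplier is already down to 0.5,
# and everything after that runs below 0.5x:
print(cosine_multiplier(500, 1000))  # 0.5
```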
Cosine with Restarts:
- Same as cosine, except it restarts a certain number of times while training. Most of the time this is done to make the LoRA more robust, and fix some of the issues with low LR in the latter half of training with cosine.
- A potential problem with restarts is that depending on the implementation, when the restart happens it can make your learning rate spike massively! Going from an LR of 0 to 1 in a single step can be catastrophic for LoRA training, and it can pretty much fry the LoRA right away. If you notice your loss has spiked massively after an LR restart, this is likely the problem. My advice would be to lower the learning rate or try a different scheduler.
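A sketch of the hard-restart variant, which shows exactly where that spike comes from (num_cycles is a placeholder; real implementations, such as the schedulers in diffusers, differ in details):

```python
import math

def cosine_restarts_multiplier(step, total_steps, num_cycles):
    # Progress within the current cycle, wrapping back to 0 at each restart.
    progress = (step / total_steps * num_cycles) % 1.0
    # Each cycle decays from 1.0 toward 0.0, then snaps back up to 1.0.
    return 0.5 * (1.0 + math.cos(math.pi * progress))

# The jump right at a restart boundary:
print(cosine_restarts_multiplier(499, 1000, 2))  # ~0.0 (end of cycle 1)
print(cosine_restarts_multiplier(500, 1000, 2))  # 1.0  (restart: back to full LR)
```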