DAdaptation: The best LoRA optimizer for lazy people

Tired of searching for the best learning rate and guessing cosine repeats? Have Dadaptation do it for you.
Dadaptation (D-Adaptation, where the D is short for Distance) is an adaptive algorithm applied on top of an existing optimizer that automatically finds the ideal learning rate for your dataset. For the Kohya trainer, the Dadaptation option applies on top of the beloved AdamW.

TLDR, Use these settings:

LR scheduler - constant with warmup
% of warmup: 10
Learning rate - 1.0
TE LR - 0.5
Unet LR - 1.0
--optimizer_args "decouple=True" "weight_decay=0.02"
Network Dim and Alpha must be the same (1:1)
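
In Kohya these are just command-line options, but it can help to see what they roughly map to. Below is a minimal sketch using the facebookresearch dadaptation package directly rather than the Kohya trainer, with dummy Linear layers standing in for the real Unet and text encoder:

```python
import torch
import dadaptation  # pip install dadaptation

# Dummy modules standing in for the real Unet and text encoder.
unet = torch.nn.Linear(8, 8)
text_encoder = torch.nn.Linear(8, 8)

optimizer = dadaptation.DAdaptAdam(
    [
        {"params": unet.parameters(), "lr": 1.0},          # Unet LR ratio = 1.0
        {"params": text_encoder.parameters(), "lr": 0.5},  # TE LR ratio = 0.5
    ],
    decouple=True,      # from --optimizer_args "decouple=True"
    weight_decay=0.02,  # from --optimizer_args "weight_decay=0.02"
)
```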

Here are the Pros for Dadaptation:

-Ideal LRs found automatically by optimizer
-Earlier epochs are usable compared to AdamW
-Use only one set of settings for all your LoRAs

Here are some drawbacks:

-Training speed is slower than AdamW
-Higher risk of overtraining at earlier epochs
-No improvement if you have already found your favorite AdamW settings
-Higher VRAM usage

What the settings mean

Dadaptation works differently from the other optimizers. Instead of absolute values for the LR, it uses ratios: a Learning Rate of 1.0 means 100% of whatever speed the optimizer determines is optimal. If you set the LR to 0.5, training would run at half of that optimum.

Since we normally run the Text Encoder at half the Unet's speed with AdamW, set the Text Encoder LR to 0.5.

The Unet LR is supposedly unused, since Dadaptation uses the Learning Rate value to adjust the Unet's speed. To be safe, we set the Unet LR to the same value as the Learning Rate, i.e. 1.0.

You can check your "real" LR with Tensorboard. The lr/d*lr value is the LR that Dadaptation settled on.
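
If you want to peek at that value outside Tensorboard, here is a small sketch (reusing the optimizer from the first snippet). It assumes the optimizer keeps its current distance estimate under the "d" key of each param group, which is how the lr/d*lr log value is typically computed:

```python
# d * lr is the effective learning rate Dadaptation has settled on so far.
for i, group in enumerate(optimizer.param_groups):
    print(f"group {i}: lr/d*lr = {group['d'] * group['lr']:.2e}")
```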

We typically set the scheduler to constant because we do not want our LR ratios to fluctuate during training the way they would with cosine. I like to dampen the training rate at the start, so I use constant with warmup and 5-10% warmup steps.
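Constant with warmup is nothing Dadaptation-specific; it just scales the LR ratio. A minimal plain-PyTorch sketch (reusing the optimizer from the first snippet, with made-up step counts):

```python
from torch.optim.lr_scheduler import LambdaLR

total_steps = 1000                      # hypothetical run length
warmup_steps = int(total_steps * 0.10)  # 10% warmup

def constant_with_warmup(step: int) -> float:
    # Ramp the LR ratio up to 1.0, then hold it flat for the rest of training.
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    return 1.0

scheduler = LambdaLR(optimizer, lr_lambda=constant_with_warmup)
```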

You must use the same Network Alpha as your Network Dim. With Dadaptation, an alpha smaller than dim ends up increasing the learning rate instead of dampening it.
This means that setting dim 128 / alpha 1 multiplies the LR by 128x and causes NaN errors.
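
To see where the 128x figure comes from, here is the arithmetic spelled out (a toy sketch of the claim above, not of Dadaptation's internals):

```python
# LoRA scales its update by alpha/dim; per the warning above, Dadaptation
# ends up compensating for that damping instead of respecting it.
dim, alpha = 128, 1
scale = alpha / dim   # 1/128 ≈ 0.0078: strong damping baked into the LoRA
print(1 / scale)      # 128.0: the multiplier the effective LR picks up

dim, alpha = 32, 32
print(alpha / dim)    # 1.0: dim == alpha, so no extra scaling on the LR
```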

Extra optimizer arguments are required for Dadaptation to work properly. Add --optimizer_args "decouple=True" "weight_decay=0.02"

decouple=True enables decoupled weight decay, i.e. the AdamW-style behavior where weight decay is applied separately from the adaptive gradient update.

Weight decay is typically used to prevent overfitting. However, in my testing I have not seen a significant difference between weight decay values of 0.001 and 0.6.

The default weight decay for AdamW is 0.01. For the 2-4x stronger decay that Facebook recommends in their Dadaptation whitepaper, we set weight decay between 0.02 and 0.04.

Smaller weight decay might lead to faster baking-in of details. If so, a smaller weight decay could be desirable for replicating details, like characters and objects, and a higher weight decay for concepts and styles. More testing is needed to prove this; for now, I use weight decay 0.02.
