Good
| LoRA | Setting 1 | Setting 2 (character) | Setting 3 |
|---|---|---|---|
| dim | 128 | 128 | 128 |
| alpha | 1 | 1 | 64 |
| scheduler | cosine with restarts | cosine with restarts | --- |
| cosine restarts | 5 or 10 | 5 | --- |
| warmup ratio | 0.1 | 0.1 | --- |
| learning rate | 3e-3 or 4e-3 | 4e-3 | 6e-5 |
| text encoder lr | 8e-5 | 8e-5 | 3e-5 |
| unet lr | 3e-3 or 4e-3 | 4e-3 | 16e-5 or 15e-5 |
| num_workers | --- | 1 | --- |
| batch size | 3 | 3 | 2 |
| tag dropout | --- | 0.1 | --- |
| caption_dropout | --- | 0.03 | --- |
| res | 512 | --- | 960 |
| Images | --- | --- | --- |
| epoch | --- | --- | --- |
| repeats by image | --- | --- | --- |
| Preview | MEGA link | Image (used for characters, with very good results) | Image |
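To make the columns concrete, here is a minimal sketch of how Setting 2 might map onto kohya-ss sd-scripts' `train_network.py` flags. The paths, total step count, and resolution are placeholders (Setting 2 leaves res unset), and flag names can differ between sd-scripts versions, so treat this as an illustration rather than a confirmed recipe.

```python
# Sketch only: Setting 2 expressed as a kohya-ss sd-scripts invocation.
# Paths and total_steps are placeholders; check the flags against your version.
import subprocess

total_steps = 2000                      # placeholder; see the step-count formula below
warmup_steps = int(0.1 * total_steps)   # "warmup ratio 0.1" converted to steps

cmd = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "/path/to/base_model.safetensors",
    "--train_data_dir", "/path/to/dataset",
    "--output_dir", "/path/to/output",
    "--network_module", "networks.lora",
    "--network_dim", "128",
    "--network_alpha", "1",
    "--lr_scheduler", "cosine_with_restarts",
    "--lr_scheduler_num_cycles", "5",
    "--lr_warmup_steps", str(warmup_steps),
    "--learning_rate", "4e-3",
    "--unet_lr", "4e-3",
    "--text_encoder_lr", "8e-5",
    "--train_batch_size", "3",
    "--max_data_loader_n_workers", "1",
    "--caption_tag_dropout_rate", "0.1",   # "tag dropout" in the table
    "--caption_dropout_rate", "0.03",
    "--resolution", "512,512",             # Setting 2 leaves res blank; 512 borrowed from Setting 1
]
subprocess.run(cmd, check=True)
```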
Anon1's note
UNCONFIRMED: multiply 1e-4 by your batch_size to get your unet_lr, set learning_rate equal to unet_lr, and multiply 1e-5 by your batch_size to get your text_encoder_lr. If this math scares you, just type it into Google or whatever to get the answer, e.g. 1e-4 * 12.
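For example, with a batch size of 12 the arithmetic in the note works out like this (a quick sketch of the calculation, not a confirmed rule):

```python
batch_size = 12

unet_lr = 1e-4 * batch_size          # 0.0012
learning_rate = unet_lr              # set equal to unet_lr
text_encoder_lr = 1e-5 * batch_size  # 0.00012

print(unet_lr, learning_rate, text_encoder_lr)  # 0.0012 0.0012 0.00012
```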
Anon2's note
From my experience, having trained LoRAs for a while, I feel like almost any parameter value will be okay as long as it is not absurd, since the priorities are: 1. your dataset, 2. your tags, 3. your parameter settings.
Not as good
| LoRA | Setting2 |
|---|---|
| dim | 128 |
| alpha | 128 |
| scheduler | constant |
| cosine restarts | --- |
| warmup ratio | --- |
| learning rate | 1e-4 |
| text encoder lr | 5e-5 |
| unet lr | 1e-4 |
| num_workers | --- |
| batch size | 2 |
| tag dropout | --- |
| caption_dropout | --- |
| res | 512 |
| Images | 35 |
| epoch | 1 |
| repeats by image | 100 |
| Preview | Image (anus). Apparently not satisfactory. |
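For reference, assuming the usual kohya-style step count formula (images × repeats × epochs ÷ batch size), this run works out to a fairly high step count even for a single epoch. A quick check:

```python
images, repeats, epochs, batch_size = 35, 100, 1, 2

steps_per_epoch = images * repeats // batch_size   # 1750
total_steps = steps_per_epoch * epochs
print(total_steps)  # 1750
```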