Efficient Dreambooth Training Guide

Please note that this is not a guide on Stable Diffusion, setting up the webui, or installing Dreambooth; it is an optimization guide along with details and results from training a large dataset (large by Dreambooth standards). It assumes you have some knowledge of the technology behind SD and DB.

Standard Setup - Dreambooth UI Settings (24GB)

  • 100 Steps Per Image (Epochs)
    • You can get away with 80 or fewer for larger datasets; I created a 210k-step model with 77 steps per image on approx. 2.7k images
  • Save model frequency (Epochs)
    • Up to you; check "Generate a .ckpt file when saving during training" if you actually want a checkpoint saved rather than only the diffusers model
  • Save Preview(s) Frequency (Epochs)
    • Set this to your max epochs (100 in the example case), as it's buggy and I've experienced crashes when it's enabled alongside bucketing.
  • 1e-6 LR with a 500-step warmup. Scaling the warmup to 10x your image count (i.e. roughly 1/10th of your total steps at 100 epochs per image) seems to work well; see the warmup sketch after this list.
    • In theory, for 50 images use 500 warmup steps, for 10 use 100, etc.
    • 2e-6 can work as well; if you go higher, lower your LR over time. If you are not familiar with doing this, I would stick to a constant schedule
    • I have not tested other LR schedulers in enough depth to give input
  • Amount of time to pause between epochs, in seconds: 2
    • This defaults to 60s, which seems unnecessary and will slow down your training
  • EMA
    • Improves model quality
  • Train Text Encoder
    • Newest version has a scale, keep it at 1
  • Pad Tokens off
    • I don't see the purpose of limiting your token length, even though the logic is "You probably want to do this in the GUI" (???)
  • Shuffle Tags on/off - preference (if tagging/captioning)
    • Shuffling doesn't seem to be a good idea for a character. I had significantly varied results, i.e. hair color, clothing, and accessories that lost the original "look" of the character.
    • It might be useful for an art style. More on this will be covered in the real world training example.
  • Image Resolution
    • Default is 512
    • If you already cropped your images to 512x512 (or any higher 1:1 resolution), training will run without bucketing; otherwise, images will be bucketed within the maximum resolution you set.
    • I would not go over 1024 as training time scales with resolution
  • Horizontal Flip disabled
    • I have heard mixed reports about enabling this
  • Everything else default
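
For the warmup scaling above, here is a minimal sketch of a constant-with-warmup schedule using diffusers; the dataset size, optimizer setup, and stand-in parameters are assumptions, not the extension's actual code.

```python
import torch
from torch.optim import AdamW
from diffusers.optimization import get_constant_schedule_with_warmup

num_images = 50                  # assumption: your dataset size
warmup_steps = num_images * 10   # e.g. 50 images -> 500 warmup steps

# stand-in for the UNet (and optionally text encoder) parameters you would train
params = [torch.nn.Parameter(torch.zeros(1))]

optimizer = AdamW(params, lr=1e-6)
lr_scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps)

# inside the training loop, call lr_scheduler.step() after optimizer.step()
```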

Webui-user CLI Parameters / Info

  • Standard
    • --opt-split-attention
  • Reduce VRAM
    • --xformers
  • medvram/lowvram do not seem to have any effect on dreambooth training, only inference (generating images)
  • If you are going to be doing a lot of training, I would disable/remove extensions that you don't use or aren't critical, as they make Gradio heavier and the GUI more prone to freezing

Optimization Parameters for 12-16gb

  • If you have a 24GB card, you won't need to worry about this, but if you want to use your computer while training, it might be worth the extra time it takes to complete a model
  • Enable xformers - this alone will get you to ~14GB VRAM with the "Standard Setup" - the following can reduce your VRAM usage further (a rough sketch of where these options live in training code follows this list)
    • Use bf16 or fp16 (bf16 is only supported on Ampere and newer RTX cards and certain datacenter cards)
  • Cache Latents Disabled
    • Disabling this saves VRAM at the cost of some speed; it's fine to disable
  • Disable EMA
    • This improves model quality so you might want to leave it enabled if you can spare the memory
  • Disable Train Text Encoder
    • NOT recommended; disabling this is a substantial hit to quality, and you are better off renting compute
  • Enable 8 bit Adam
    • NOT recommended, as training is very slow and does not produce images anything like fp32, bf16, or fp16 alone
    • I have only used it once, so I'm not sure if you just need to train longer and lower the LR to get better results, but given how slow it was, I won't be doing any more tests. Just rent compute if this is the only way you can run Dreambooth locally.
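
For context, here is a rough sketch of where the options above live in a plain diffusers-style training script; the model ID and optimizer setup are assumptions, not the webui extension's actual code.

```python
import torch
from diffusers import UNet2DConditionModel

# assumption: a plain diffusers-style training setup, not the webui extension's code
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# xformers memory-efficient attention: the biggest single VRAM saving mentioned above
unet.enable_xformers_memory_efficient_attention()

# 8-bit Adam (bitsandbytes) reduces optimizer memory, with the quality caveats above
try:
    import bitsandbytes as bnb
    optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=1e-6)
except ImportError:
    optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-6)

# bf16/fp16 is usually handled as mixed precision by accelerate, e.g.
#   accelerate launch --mixed_precision=fp16 train_dreambooth.py ...
```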

(My) Tagging/Captioning Workflow

  • Comma delimited text file tags from multiple boorus matching the file name
    • First instance of the artist is put in the first index
    • All instances of character tags follow
    • Remaining tags are sorted alphabetically
    • [artist], [character(s)…], [tags…]
    • I have a script to export from Hydrus, an image downloader/tagger/organizer application; the conversion script and an importer for AI-generated images are located here: official-elinas/hydrus-auto-api (github.com). A simplified sketch of the tag ordering follows this list.
    • The way I did this is optional; you don't have to tag/caption at all
    • You can also enable shuffle tags, though as mentioned before, not for a character, but maybe an object, concept, art style, multiple characters, or all of these combined
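
Below is a minimal sketch of the caption ordering described above. The tag categories, example tags, and file paths are assumptions for illustration; it is not the actual Hydrus export script.

```python
from pathlib import Path

def build_caption(tags, artists, characters):
    """Order as [artist], [character(s)...], then the remaining tags alphabetically."""
    artist = next((t for t in tags if t in artists), None)
    chars = [t for t in tags if t in characters]
    rest = sorted(t for t in tags if t != artist and t not in characters)
    return ", ".join(([artist] if artist else []) + chars + rest)

# assumed example data -- in practice these come from the booru/Hydrus metadata
artists = {"some_artist"}
characters = {"some_character"}
tags = ["long_hair", "some_character", "some_artist", "blue_eyes"]

# write the caption next to the image, matching the file name
image_path = Path("dataset/0001.png")
image_path.with_suffix(".txt").write_text(build_caption(tags, artists, characters))
```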

Training a Large Model - Initial Setup

  • Dataset consisted of approximately 2700 images
  • This was executed on runpod.io and cost about $30 to train in total. It would have cost less if not for bugs in the version of the Dreambooth extension used at the time that caused training to stop whenever checkpoints were generated.
  • Comma delimited text file tags from multiple boorus (Hydrus export) - see hydrusnetwork/hydrus: A personal booru (github.com) for more info
    • First instance of the artist is put in the first index
    • All instances of character tags follow
    • Remaining tags are sorted alphabetically
    • [artist], [character(s)…], [tags…]

Dreambooth Extension Parameters

  • 1e-6 LR (last 30k steps at 2e-6) - This was done backwards; the final stretch of training should have been around 5e-7
  • Originally 38 Steps (Epochs) per image - [auto generated based on initial 100k step input which no longer exists in the latest version of the dreambooth extension]
    • The initial target was 100k steps but ultimately was trained to 210k steps which ended up being ~77 epochs per image
  • Amount of time to pause between epochs, in seconds: 2
    • If left at the default 60s, significant time will be wasted [60s * 100 epochs = 1.7 hours of pausing]
  • 500 LR Warmup steps
    • This was a new parameter at the time the extension was updated; based on experimentation afterward, it should have been increased to roughly 1/10th of the total steps, as mentioned before
  • 100k steps initial target, upped to 210k in the end to try to finetune details
  • EMA
  • Train Text Encoder
  • Pad Tokens off, Horizontal Flip off (more on this below)
  • Everything else default

Using Runpod

Modify relauncher.py to pipe logs to a file so you can tail them; otherwise no training logs will be present and the webui will be the only source (a rough Python sketch of the same idea follows below):
python webui.py --port 3000 --xformers --ckpt /workspace/stable-diffusion-webui/v1-5-pruned-emaonly.ckpt --opt-split-attention --listen --enable-insecure-extension-access 2> /workspace/sd_logs
Note: Only use xformers if you are using a <24GB card. You can then follow the logs with tail -f /workspace/sd_logs in another terminal. Runpod might have fixed the logging issue after this bug was reported.
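
For reference, a hypothetical sketch of a relauncher-style loop that redirects the webui's stderr (where the training output ends up) to a log file; the actual relauncher.py in the Runpod template differs, and the checkpoint path is omitted here.

```python
# hypothetical relauncher-style loop; adjust arguments to match your setup
import subprocess

LOG_PATH = "/workspace/sd_logs"

while True:
    with open(LOG_PATH, "a") as log:
        subprocess.run(
            [
                "python", "webui.py", "--port", "3000", "--xformers",
                "--opt-split-attention", "--listen",
                "--enable-insecure-extension-access",
            ],
            stderr=log,  # training progress is printed to stderr, so capture it
        )
```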

Resulting model - 210k steps

Pros

  • Reproduces the styles of artists that are heavily represented in the dataset well
  • Follows both included (trained) tags and non-trained tags from the base model well
  • Detailed backgrounds behind subject when prompted
  • Merges with other models decently

Cons

  • Artists and tags that are not frequent in the dataset are weakly weighted and don't generate as expected
  • Hands are not good in general, especially for a specific artist [bad prompt v2 / bad-artist embeddings fix this to an extent]
  • Use of some trained tags is required to get good results (this might be a pro in some cases, as it allows you to curate images with heavier-weighted tags); extracting the tags from the txt files and getting their frequencies will help

Metrics

  • A script was written to parse the files and get artist and tag frequencies (a rough sketch follows this list). Characters are not possible to quantify due to varying tag length and the lack of context about what a tag represents after processing; they would have to be counted before the tags are converted to the Dreambooth-accepted format, which will not be done right now.
    • Artists
      • 11 artists with > 50 instances
      • 15 artists between 10 and 49 instances
      • 273 artists < 10 instances
      • Not all are artists as some artist tags were not found
    • Tags
      • 31 tags with > 1000 instances (5 tags > 2000 instances)
        • With the top 10 frequencies being 426, 338, 212, 204, 172, 167, 144, 128, 74, 71
      • 47 tags between 500 and 999 instances
      • 83 tags between 250 and 499 instances
      • 221 tags between 100 and 249 instances
      • 268 tags between 50 and 99 instances
      • 1354 tags between 10 and 49 instances
      • 3576 tags between 2 and 9 instances
      • 3084 tags with 1 instance
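
The frequency script itself isn't included here; the following is a minimal sketch of the same idea, with the dataset path as an assumption.

```python
from collections import Counter
from pathlib import Path

# assumption: comma-delimited caption .txt files sit next to the images in this folder
dataset_dir = Path("dataset")

tag_counts = Counter()
for caption_file in dataset_dir.glob("*.txt"):
    tags = [t.strip() for t in caption_file.read_text().split(",") if t.strip()]
    tag_counts.update(tags)

# print tags by descending frequency
for tag, count in tag_counts.most_common():
    print(f"{count:6d}  {tag}")
```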

Takeaways

  • Dreambooth specs - The common recommendation is around 100 steps/epochs per image, but the model began producing good results around 70k steps. The belief that 100 steps are needed might not apply for larger datasets (or even smaller ones) as another model was trained in the past with ~250 images for 12k steps with very good results.
  • Tag weight - This needs to be revisited in the future. Is it a good idea to place the artist and characters in front for heavier weighting, or should all tags be randomized? Even with randomization, there will still be underrepresented artists, which leads to the belief that this might not help very much.
  • Alternative ideas
    • Weight artists better: discard images from artists that are lacking in samples, and don't overweight 5-10 artists while the rest appear only a few times.
    • Imageboard Scoring - Use imageboard scores, which can be scraped from the sites themselves, as a quality indicator. Currently in the process of doing this using a modified Hydrus Gelbooru parser and a few other parsers.
  • Image choice - The dataset was extremely varied in style. Neither the inputs nor the outputs (in general) resemble NAI or many other common anime models, which have a consistent, "curated" look that is frankly boring, for lack of better words. This is a mixed issue, as it requires more training to flesh out the styles (specifically hands…) versus sticking to one art style or one character with defining features.
  • Base model choice - NAI was used as the base model due to its consistency across the tens of Dreambooth models created in the past. Anything V3 has also been used as a base model with good results; this, along with Elysium, produces consistent and fairly accurate outputs, including better hands than NAI. Another option is SD 2.1, the 768x768 version (continued below)
  • Image Size - The ~2700 images were "focal point cropped" to 512x512, which uses an algorithm to approximate the subject of the image. The standard method is a plain letterbox/crop to 512x512, which often leads to the subject's head and other important features being cut off or missing from the image.
  • Image technique - As mentioned, "focal point cropping" was used, but alternatives exist. Simply cropping the images to a larger resolution provides more clarity in the output. This has been tested with the aforementioned 12k 250 image model which was trained at native 1024x1024.
    • Bucketing - NovelAI released a technique to "bucket" images so they fit within a budget such as 768x768 or 1024x1024, resizing each image to dimensions that are multiples of 64 to avoid cropping entirely (a rough sketch of this follows the list). A model was trained on AnythingV3 with 60 images and it produced good results, though not using their bucketing implementation but a simpler one that accomplishes the same output.
    • Bucketing limitations - Training images that are double the size are VRAM heavy and require a GPU with 48 GB of VRAM, like an A6000. The memory usage was around 36 GB during training but may be reduced with optimization techniques such as 8-bit Adam, xformers, and some other tweaks. This is no longer true with the way bucketing now handles latents, so higher resolution mainly costs training time rather than memory.
    • Random Horizontal Flip - The Dreambooth extension was vague about the impact on the model, but in a sense it might have some logic to it. It might provide more variety in subject orientation and reduce "noise" such as extra limbs in cases where one artist has many images in similar orientations while others differ. Random flipping could alleviate this in theory, but it's untested at this time and could lead to worse model results.
  • Learning Rate - The model was trained at 1e-6 to slow the process in the hope of improving output quality. Learning stagnates at some point in the process, and the LR should be decreased to fine-tune details.
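
As a rough illustration of the bucketing idea above (not NovelAI's actual implementation), here is a sketch that resizes an image, without cropping, to the nearest resolution whose sides are multiples of 64 and whose area stays near a target budget; the budget, rounding rules, and example path are assumptions.

```python
from PIL import Image

def bucket_resolution(width, height, max_area=1024 * 1024, step=64):
    """Pick a resolution with both sides divisible by `step` that roughly
    preserves aspect ratio while keeping the pixel count near `max_area`."""
    scale = min(1.0, (max_area / (width * height)) ** 0.5)  # never upscale
    new_w = max(step, round(width * scale / step) * step)
    new_h = max(step, round(height * scale / step) * step)
    return new_w, new_h

# usage: resize (no cropping) before training
img = Image.open("dataset/0001.png")  # assumed example path
img = img.resize(bucket_resolution(*img.size), Image.LANCZOS)
```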

Next Steps and Possible Improvements

  • Base model
    • NAI will likely not be used again for the reasons mentioned above; of the contenders below, all except SD 2.1 are based on NAI but are improved in general.
    • Possible contenders are Elysium V2 or SD 2.1
      • If SD 2.1 were to be used, images would be processed at 768x768 to match the trained model, whether focal-point cropped or bucketed. Bucketing should apply the same way in a 768x768 space as in a 1024x1024 space, as both are divisible by 64. The con is that the model lacks anime context and NSFW elements have been removed. Tests with smaller datasets might prove beneficial.
  • Image Processing
  1. Retrain at 512x512 using focal point cropping on a different model (Elysium V2)
  2. Retrain using bucketing at either 768x768 or 1024x1024 on a different model, also Elysium V2
  3. Retrain at 768x768 using focal point cropping OR bucketing on SD 2.1 (most difficult but possibly most rewarding?)
    Option 2 seems like the most reasonable middle ground as far as future models go.
  • Tag Processing (Order, shuffle, etc.)
    • This is still a difficult problem to solve. CLIP weights are tricky. Order definitely matters as well as frequency.

Final Comments

  • Why train such a large Dreambooth model?
    • The Dreambooth paper recommended 3-5 images, but almost no one uses that few now. Generally 20-30 is the sweet spot for a small model, 50-250 for a medium model, and past that are large models, of which there have been many good examples.
    • Dreambooth functions very similarly to fine-tuning when at scale. With the webui, it was painless to set up, minus host issues at first which were resolved.
    • For a very large dataset? Use a CLI script optimized for it. The GUI freezes sometimes, so unless you're logging to a file, you won't know where you're at.

If you have any questions on the process outlined or have suggestions specific to this document, you are welcome to contact Elinas#5898 on Discord.

Pub: 22 Dec 2022 01:18 UTC
Edit: 07 Jan 2023 19:34 UTC