
LoRA Training Guide

Written by StyleAnon with some help from a few others and the Thread!
Links to other Collaboration Edition Guides/Resources: PromptAssist | LoRA Repo



What is a LoRA?

From: https://github.com/cloneofsimo/lora

  • Fine-tune Stable Diffusion models twice as fast as the Dreambooth method, using Low-rank Adaptation

LoRAs are much smaller than 2GB+ Dreambooth models and can be permanently fused into a model or dynamically loaded like hypernetworks.
They train fast; you can reliably replicate an art style in about 20 minutes.


Using LoRAs

For dynamically loading:
Install this into WebUI https://github.com/kohya-ss/sd-webui-additional-networks
Find the new UI options at the bottom marked "Additional Networks" in txt2img or img2img.
Check the Enable checkbox, select a safetensors LoRA, and generate an image.

  • A dropdown selector is now provided for LoRAs, as long as they are placed in the correct directory.
  • For example: X:\SD\voldy\extensions\sd-webui-additional-networks\models\lora, where models\lora may need to be created.

For permanently fusing to a model:
Install this into WebUI: https://github.com/d8ahazard/sd_dreambooth_extension
Find the new UI options in a new Dreambooth tab.


Training LoRAs

  1. Install this: https://github.com/kohya-ss/sd-scripts.
  2. Grab an existing training script, for example: https://mega.nz/file/900TgBgJ#RBd2ofleyHobh5MeelCjcesZdQVAoJ25LMXiF67t5Qg (DEAD LINK)
  3. Dump that into your sd-scripts install folder and configure it; you probably only need to change the following (a minimal example script is sketched after this list):

    • $ckpt - usually your NAI animefull-final-pruned.ckpt
      • If you have a .safetensors version of the model, it appears to work just fine and may even make training faster.
    • $image_dir - points to a folder of folders, notes in the script
    • $reg_dir - regularisation images; point this at an empty folder unless you know what you're doing.
    • $output - where safetensor files are produced
    • $train_batch_size - configure according to your VRAM; 8GB cards like the 3070 can train at batch size 3 without crashing.
  4. Create a directory layout as shown below:
    • Example directory layout: https://mega.nz/folder/p5d3haJR#SmDSpaldBGcYzvZOx8sqbg
    • You can have one concept subfolder or ten, but you must have at least one.
    • Concept folders follow this format: <number>_<name>
      • The <number> determines the number of repeats your training script will do on that folder.
      • The <name> is purely cosmetic as long as you have matching .txt caption files.
      • Caption files are effectively mandatory; without them, the LoRA will train using the concept folder name as the caption.
      • See the Captions section below for how to write caption files.
  5. Optional: Configuring your training script:
    • $num_epochs - more is not necessarily better, but 4-8 is sensible depending on images/repeats/batch size
    • $save_every_n_epochs - purely preference; how often a safetensors file pops out (larger is less often)
    • $network_dim - how large the LoRA is; 4 can work, but larger means higher fidelity (and a larger file). Most people use 128.
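
Putting those options together, here is a minimal sketch of what such a training script might look like. All paths, folder names and values below are placeholders, and the train_network.py flags are the ones used by kohya-ss/sd-scripts at the time of writing, so double-check the repo README if any of them error out:

    # training_command.ps1 - minimal LoRA training sketch (placeholder paths and values)
    $ckpt = "X:\SD\models\animefull-final-pruned.ckpt"    # base model to train against
    $image_dir = "X:\SD\training\my_dataset"              # folder of concept folders, e.g. 10_mycharacter
    $reg_dir = "X:\SD\training\reg"                       # regularisation images; an empty folder is fine
    $output = "X:\SD\training\output"                     # where the safetensors LoRA files are written

    .\venv\Scripts\activate

    accelerate launch --num_cpu_threads_per_process 8 train_network.py `
        --pretrained_model_name_or_path=$ckpt `
        --train_data_dir=$image_dir `
        --reg_data_dir=$reg_dir `
        --output_dir=$output `
        --resolution=512 --enable_bucket `
        --train_batch_size=3 `
        --max_train_epochs=8 --save_every_n_epochs=2 `
        --network_module=networks.lora --network_dim=128 `
        --mixed_precision=fp16 --save_model_as=safetensors `
        --learning_rate=1e-4 --use_8bit_adam --xformers

Each safetensors file written to $output is a usable LoRA on its own, so you can compare epochs and keep whichever looks best.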

Captions

Generating Caption Files
  • Caption file generation can be automated using the WD1.4 Tagger extension in WebUI.
  • Alternatively, you can scrape the tags from boorus along with the images.
  • It's up to you which method you want to use, but automated tagging has become extremely accurate with WD1.4 Tagger, and it won't append metadata tags like Translation Request that you want to remove later.
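
For reference, a caption file is just a plain text file with the same name as its image (e.g. 001.png and 001.txt) containing comma-separated booru-style tags. A hypothetical 001.txt might look like:

    1girl, solo, red_dress, smile, looking_at_viewer, simple_background
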
Mass Editing Captions
  • To facilitate this process you can use https://github.com/starik222/BooruDatasetTagManager
  • This tool can load folders of captioned images and allows you to edit them individually or in batch. The batch part of the UI allows you to check "common tags", tags that appear across all images.
Pruning Captions
  • If you're training for an artist's style, you'd want to remove that artist's name from captions. If you were training a character, you could remove the character name or series. In this way, the style or character becomes implicit to other tags.
  • It's become common in TI training to remove all features common to a character, like outfit or hair style. While this allows you to recall those features quite well, it leads to overfitting.
  • The general rule applies: strip out any tags that should be implicit to all generated results or any tags that would stray from your training data (like series name, artist_name, english_commentary, translation_request, etc.).
  • Training will generally replace one tag's result with another. Consider training against many pictures of a character wearing a red_dress. Over time, your training will guide that tag to more closely resemble the red_dress in your images instead of the base model's. If this behaviour is undesirable, it is possible to replace the tag with some other tag, for example zyxdress, but results are divisive; more testing is needed to determine whether or not this helps. A PowerShell sketch for stripping a tag from every caption file in bulk follows below.
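
If you want to prune a tag across the whole dataset from the command line instead of clicking through BooruDatasetTagManager, a small PowerShell sketch like the following does the job; the folder path and tag name are placeholders:

    # Remove one tag (here: "artist_name") from every .txt caption file in a concept folder
    $folder = "X:\SD\training\my_dataset\10_mystyle"
    Get-ChildItem -Path $folder -Filter *.txt | ForEach-Object {
        $tags = (Get-Content $_.FullName -Raw) -split ',' | ForEach-Object { $_.Trim() }
        $kept = $tags | Where-Object { $_ -ne 'artist_name' -and $_ -ne '' }
        Set-Content -Path $_.FullName -Value ($kept -join ', ')
    }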

Instructions for Running the Training Script

Not Guaranteed to Work on all Systems

  1. Open an administrator PowerShell window
  2. Type Set-ExecutionPolicy Unrestricted and answer A
  3. Close the admin PowerShell window
  4. Install sd-scripts; instructions are below and on the repo.
    • git clone https://github.com/kohya-ss/sd-scripts.git
      cd sd-scripts
      
      python -m venv --system-site-packages venv
      .\venv\Scripts\activate
      
      pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
      pip install --upgrade -r requirements.txt
      pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
      
      cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes
      cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
      cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
      
      accelerate config
      
  5. Once you get the accelerate config prompt, answer as follows:
    • This machine
    • No distributed training
    • NO
    • NO
    • NO
    • 0
    • FP16
  6. Once that's good to go, let's see if it works. Throw this training_command.ps1 into your sd-scripts folder and edit the directories in it with Notepad++ or any text editor.
    • $ckpt can be any SD model checkpoint/safetensors file, e.g. animefull-final-pruned.ckpt
    • $image_dir is described above.
    • $reg_dir would contain regularization images, or it can be empty, but it must exist.
    • $output is wherever you want your safetensors LoRA to be produced, for example a new folder in your SD install
  7. Now, drop your images and captions into the training folder(s).
    • You can generate caption files automatically using WD1.4 Tagger.
      • Doing it this way is preferable since you get the fewest useless tags to prune, and machine tagging is more consistent at noticing details than humans are.
      • You can still add more captions later if you want to.
    • You can get images (and their booru tags) with an image scraper such as Grabber.
    • Pruning tags can be done with BooruDatasetTagManager
  8. Running the script in the PowerShell terminal is a matter of ensuring that you're in the correct folder (you'll see the current directory in the terminal prompt, e.g. PS C:\Users\Administrator>). Make sure you cd into your sd-scripts folder, then check that you can see the script with ls. If you can, run it with the following: .\whatever_my_script_is_called.ps1
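
For example, assuming sd-scripts lives in X:\SD\sd-scripts and your script is named training_command.ps1 (both placeholders), a run from a fresh PowerShell window looks something like this:

    cd X:\SD\sd-scripts
    .\venv\Scripts\activate      # activate the venv created during install
    ls *.ps1                     # confirm the training script is visible
    .\training_command.ps1       # start training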

If you start getting errors about something missing called libcudart.so, the fix is to take the bitsandbytes_windows files and, instead of dragging them into venv\Lib\site-packages\bitsandbytes, drag them into AppData\Local\Programs\Python\Python310\Lib\site-packages\bitsandbytes (your local Python folder). Also move main.py into the cuda_setup folder located there. From that point, everything should be working.

SOURCE, rewritten a tad to fix some typos and formatting, and to simplify.


Diffing two models

It's possible to approximate a LoRA from the difference between two existing models.
This requires the sd-scripts repo to be installed.
You can find a script to run this process here.
Configure the input models, output location and network dimension as required.
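
As a rough sketch, if you use the extraction script that ships with sd-scripts (networks\extract_lora_from_models.py), the invocation looks something like this; the paths are placeholders and flag names may differ between versions, so check the repo if it complains:

    # Approximate a LoRA from the difference between a base model and a finetuned model
    python networks\extract_lora_from_models.py `
        --model_org "X:\SD\models\animefull-final-pruned.ckpt" `
        --model_tuned "X:\SD\models\my_finetuned_model.ckpt" `
        --save_to "X:\SD\output\extracted_lora.ckpt" `
        --dim 128
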
This process appears to generate a legacy .ckpt file, which can be converted with the following addon.
This addon is readily available from within the WebUI as sd-webui-model-converter.
To use it with your .ckpt LoRA, move the .ckpt file to your models\Stable-diffusion folder, refresh the model list from within the converter tab, and run the conversion.


After this conversion, a .safetensors file is saved to your models\Stable-diffusion folder which can then be moved to a more convenient place.


Saving and resuming training

This requires editing the training command script.
Instructions can be found here.
State data is saved out according to $save_every_n_epochs; due to the large file size, it is recommended to do this infrequently.
If you have high VRAM and can complete training without saving, this is preferred.
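
As a hedged example, with kohya's train_network.py this usually means adding two flags to the accelerate launch line in your training script: --save_state, which writes a resumable state folder alongside each epoch save, and --resume, which points a later run at one of those folders. Flag names may differ between sd-scripts versions, and the state folder path below is a placeholder:

    # Sketch: resuming inside training_command.ps1 (reuses the $ckpt/$image_dir/$reg_dir/$output
    # variables from the training sketch earlier in this guide).
    accelerate launch --num_cpu_threads_per_process 8 train_network.py `
        --pretrained_model_name_or_path=$ckpt --train_data_dir=$image_dir --reg_data_dir=$reg_dir `
        --output_dir=$output --resolution=512 --enable_bucket --train_batch_size=3 `
        --max_train_epochs=8 --save_every_n_epochs=2 --network_module=networks.lora --network_dim=128 `
        --mixed_precision=fp16 --save_model_as=safetensors --learning_rate=1e-4 --use_8bit_adam --xformers `
        --save_state `
        --resume="X:\SD\training\output\my_lora-000004-state"   # placeholder: pick an existing saved state folder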


Collab Instructions

The above link should basically do everything for you. Rentry Write-Up by an Anon.


Common Issues and Solutions

Click the above link for troubleshooting common issues.


Notes:

  1. Images do not need to be cropped or manipulated at all; set your target training resolution in the script (default 512).
    • Aspect ratio bucketing is enabled by default.
  2. The --medvram fix has been implemented by Kohya; you don't need to do anything besides update.