Okay, so here's what I do. I run everything locally, which works great if you have a modern NVIDIA card (I use an RTX 3070). Otherwise you'll need to find instructions for a web-hosted setup; I can't help with that. I train LoRAs. Apparently the hip new thing is LyCORIS, but I already have a process that works and refuse to try anything new.
- First, you need this: https://github.com/AUTOMATIC1111/stable-diffusion-webui#automatic-installation-on-windows
- Then you need this: https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
- Then you need a model to generate images. I train on NovelAI and use Anything V3 for generating
- NovelAI: https://aituts.com/run-novelai-image-generator-locally/ (see the part on torrenting)
- Anything V3: https://huggingface.co/Linaqruf/anything-v3.0/tree/main/unet
- Then you need the autotagger extension for webui: https://github.com/toriato/stable-diffusion-webui-wd14-tagger
- If you want to scrape images from gelbooru get this: https://github.com/Bionus/imgbrd-grabber
That's basically it. Hopefully you already know how to use the stable diffusion webui, but if not you can google plenty of basic guides for it.
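One thing those links don't spell out: the webui only sees models in specific folders. Roughly, it ends up looking like this (just a sketch, your exact filenames will differ):

stable-diffusion-webui/
  models/
    Stable-diffusion/   <- full model checkpoints (.ckpt / .safetensors) you generate with
    Lora/               <- your finished LoRAs go here later (see the training steps below)
  extensions/           <- the tagger extension, or just install it from the webui's Extensions tab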
How to get a training set:
You can either use your own images and autotag them, or scrape other people's art from gelbooru.
Scraping from gelbooru:
- Get the Imageboard Grabber (see link above)
- Go to Tools > Options > Save and set the Folder to wherever you want to put your training images
- Go to Save -> Separate Log Files and click "Add a separate log file"
- Set location type to "Path and Filename" and set Folder to the same folder you picked before
- Set Filename to: javascript:md5 + '.txt'
- Set Text file content to: %all:underscores%
- Hit OK and close the settings (there's a quick check script right after this list if you want to make sure the tag files actually show up)
- Click "Sources" at the bottom and uncheck everything and then check Gelbooru.com
- Search for whatever and click on things to download them
- For best results you should have about 100 images. Any more than that won't make much difference. You can probably go as low as 10 or 15, but that might produce nightmare abominations instead of what you want (unless that is what you want)
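Here's a rough Python sketch to double-check the scrape before moving on. The folder path is a placeholder, and it assumes you used the md5 + '.txt' filename setup from above; all it does is flag any image that didn't get a tag file:

# quick sanity check: every image should have a matching .txt tag file next to it
from pathlib import Path

folder = Path(r"C:\training_images")  # placeholder: point this at your Grabber save folder
image_exts = {".png", ".jpg", ".jpeg", ".webp"}

images = [p for p in folder.iterdir() if p.suffix.lower() in image_exts]
missing = [p.name for p in images if not p.with_suffix(".txt").exists()]

print(f"{len(images)} images, {len(missing)} missing tag files")
for name in missing:
    print("no tags for:", name)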
Autotagging images:
- Put all of your images you want to train on in a folder
- Launch the web gui and go to the Tagger tab (if you don't have a Tagger tab, install and enable the tagger extension)
- Go to "batch from directory" and point it at your folder of images
- Pick wd14-vit-v2-git as your Interrogator
- Hit interrogate and let it go
- (Optional) Get the Dataset Tag Editor extension and use it to clean up the tags. It's recommended to remove artist and character name tags (or use the quick script right below this list).
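If you'd rather not install another extension, a rough Python sketch like this does the same cleanup. The folder path and the tag list are placeholders, and it assumes comma-separated tags, which is what the wd14 tagger writes for me:

# strip unwanted tags (artist names, character names, etc.) from every .txt caption file
from pathlib import Path

folder = Path(r"C:\training_images")  # placeholder: your training image folder
unwanted = {"some_artist_name", "some_character_name"}  # placeholder: tags you want gone

for txt in folder.glob("*.txt"):
    tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
    kept = [t for t in tags if t and t not in unwanted]
    txt.write_text(", ".join(kept), encoding="utf-8")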
Once you have your training set and the tags, all you need to do is run the trainer. Close the web gui if it's open (kill the command prompt too to free up RAM and VRAM).
- Run LoRA_Easy_Training_Scripts/run.bat (run update.bat first if you haven't recently)
- File -> Load Toml and pick your Toml file (see below for my settings)
- Change the Base Model link to wherever you put your NovelAI model
- Change Saving Args -> Output folder to wherever you want to put your output
- Hit "Start Training" and wait patiently. It takes me about an hour to train a model. If training seems to not be making any progress at all, you might be out of VRAM. In that case, close everything else that you're running. If that doesn't help, kill the trainer and try again with a smaller network dimension (under Network args)
- Once it's done, go to your output folder and look for last.safetensors. Give it a real name and copy it to stable-diffusion-webui/models/Lora
- Launch webui and start prompting! To use the lora, use <lora:whatever_you_named_it:1> as a tag, or pick it from the "extra networks" menu (click the orange icon below the Generate button)
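For example, if you renamed the file to coolstyle.safetensors (made-up name, obviously), a prompt might look something like this:

masterpiece, best quality, 1girl, solo, <lora:coolstyle:1>

If the effect comes on too strong, drop the number at the end (0.7, 0.5, etc.) to weaken it.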
Yeah, that's basically it. If you have questions or suggestions or want to say hi, DM @sheepfeet1 on twitter.
Here's my .toml file. Copy and paste this and call it "trainer.toml". Change numbers if you want to experiment, but this seems to work well for me.
[[subsets]]
num_repeats = 5
keep_tokens = 0
caption_extension = ".txt"
shuffle_caption = false
flip_aug = false
color_aug = false
random_crop = false
is_reg = false
image_dir = "PUT TRAINING IMAGE PATH HERE"[noise_args]
[sample_args]
[logging_args]
[general_args.args]
pretrained_model_name_or_path = "PUT NOVELAI MODEL PATH HERE"
mixed_precision = "fp16"
seed = 23
clip_skip = 2
xformers = true
max_data_loader_n_workers = 1
persistent_data_loader_workers = true
max_token_length = 225
prior_loss_weight = 1.0
max_train_epochs = 15
[general_args.dataset_args]
resolution = 512
batch_size = 2
[network_args.args]
network_dim = 64
network_alpha = 64.0
[optimizer_args.args]
optimizer_type = "AdamW"
lr_scheduler = "cosine_with_restarts"
learning_rate = 0.0001
lr_scheduler_num_cycles = 5
unet_lr = 4.5e-5
text_encoder_lr = 0.0001
[saving_args.args]
output_dir = "PUT OUTPUT DIRECTORY HERE"
save_precision = "fp16"
save_model_as = "safetensors"
save_every_n_epochs = 1
[bucket_args.dataset_args]
enable_bucket = true
min_bucket_reso = 256
max_bucket_reso = 1024
bucket_reso_steps = 64
[optimizer_args.args.optimizer_args]
weight_decay = "0.1"
betas = "0.9,0.99"
Here are some guides that are way better than this one but use too many words: