Schneedcord Disciple LoRA Guide
Installation
To get started, install this:
https://github.com/bmaltais/kohya_ss
If you've installed the a1111 Stable Diffusion WebUI, then this should be a no-brainer. Follow the Windows installation guide on the page if you're on Windows; you'll have to scroll down a bit to find it. Then run setup.bat to install. It'll ask a few questions: answer 'This Machine', 'None', and then 'No' to the rest of the questions.
Dataset
Some words of advice that I didn't feel like putting in the other categories. The training script will automatically downscale/compress your images to fit your training settings: if your settings are at 512x512, your images will be downscaled to 512x512. You don't need to process the images yourself once you have them; no cropping or downscaling needed.
I can't speak to the effectiveness of cropping things yourself, other than that it's an opportunity to make the AI learn only what's inside the crop, which can be beneficial.
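If you want to sanity-check sizes anyway, here's a minimal sketch that flags images smaller than the training resolution, since those may end up upscaled instead of downscaled (assumes Pillow is installed and a flat image folder; the folder name is a placeholder):

```python
# size_check.py - flag dataset images below the training resolution.
# Assumes Pillow (pip install Pillow); "dataset" is a placeholder path.
from pathlib import Path
from PIL import Image

DATASET = Path("dataset")  # point this at your image folder
TRAIN_RES = 512            # match your training settings

for img_path in sorted(DATASET.iterdir()):
    if img_path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
        continue
    with Image.open(img_path) as im:
        w, h = im.size
    if w < TRAIN_RES and h < TRAIN_RES:
        print(f"{img_path.name}: {w}x{h} is below {TRAIN_RES}px, consider a bigger version")
```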
Gathering a dataset
There are two methods I use for gathering a dataset. I either surf the web and gather images manually (self-explanatory), or I use an image grabber. This grabber is capable of scraping images off of websites automatically. It isn't compatible with every website, but it works with most booru-based sites, like Gelbooru, Danbooru, and Rule 34. (Note: Danbooru needs a bit of extra setup to work.) There are plenty of guides on how to use the grabber, but I learned off of this one.
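If you'd rather script it than click around, most boorus also expose a JSON API. Here's a rough sketch against Gelbooru's public DAPI (assuming the requests library is installed; the endpoint, params, and response keys are my reading of that API and can differ on other sites):

```python
# grab.py - rough sketch of pulling images via Gelbooru's JSON API.
# Endpoint/params follow Gelbooru's public DAPI and may differ elsewhere
# (Danbooru in particular needs extra setup, as noted above).
import requests
from pathlib import Path

TAGS = "banana sort:score -animated"  # the query tips below apply here too
OUT = Path("dataset")
OUT.mkdir(exist_ok=True)

resp = requests.get(
    "https://gelbooru.com/index.php",
    params={"page": "dapi", "s": "post", "q": "index",
            "json": 1, "tags": TAGS, "limit": 50},
    timeout=30,
)
resp.raise_for_status()
posts = resp.json().get("post", [])  # key name per current DAPI responses
for post in posts:
    url = post.get("file_url")
    if not url:
        continue
    name = url.rsplit("/", 1)[-1]
    (OUT / name).write_bytes(requests.get(url, timeout=60).content)
    print("saved", name)
```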
Gathering a dataset is intuitive. You grab images you think would be beneficial for the AI to learn from. You want to focus on the topic as much as possible and leave anything irrelevant out. If you were making a LoRA that generates bananas, then you probably wouldn't want any other fruit in the pictures you get, nor would you want a picture that has a banana surrounded by a bunch of other fruit.
And if you were making a Banana LoRA, you'd want a bunch of varied pictures. If you had 100 pictures of bananas with an identical background, or painted in an identical artstyle, then that will bake into the LoRA, which is no good. Who would want AI banana pictures with only one background?
To clarify without the banana example: the LoRA is largely driven by the dataset. You want your pictures to be both varied and high quality. When a LoRA is trained, it doesn't know the difference between good and bad images; it just learns what it sees. The phrase "garbage in, garbage out" fully applies. If you were training a fictional character for anime generation, you wouldn't want to use a cosplay/real-life photo of the character. If you were training a real-life person, it wouldn't be very beneficial to train off a caricatured or cartoon appearance.
Pro tip: use 'sort:score -animated' when getting any Rule 34 stuff. You CANNOT have animated stuff in your dataset, and sorting by score means you get higher quality images.
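If any animated files slip through anyway, here's a quick sketch that clears them out of your dataset folder (the extension list is my guess at the usual suspects):

```python
# purge_animated.py - delete animated files that slipped into the dataset.
from pathlib import Path

DATASET = Path("dataset")  # placeholder, change to your folder
ANIMATED = {".gif", ".webm", ".mp4", ".apng"}  # my guess at the usual suspects

for f in DATASET.iterdir():
    if f.suffix.lower() in ANIMATED:
        print("removing", f.name)
        f.unlink()
```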
Tagging
Every image in your dataset must have a .txt file with tags corresponding to it. So if you had "image_1.jpg" in your dataset, then you'd need "image_1.txt" in the folder as well. This is so the LoRA can tell what the hell is going on. Don't train without tags. I did it once on accident and used the result out of curiosity. It simply doesn't work well.
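If you want to double-check the pairing before you train, here's a minimal sketch that lists every image missing its .txt (the folder name is a placeholder):

```python
# check_pairs.py - report images without a matching .txt tag file.
from pathlib import Path

DATASET = Path("dataset")  # placeholder, point this at your folder
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

missing = [p.name for p in sorted(DATASET.iterdir())
           if p.suffix.lower() in IMAGE_EXTS
           and not p.with_suffix(".txt").exists()]

if missing:
    print("Missing tag files for:", ", ".join(missing))
else:
    print("All images have tag files.")
```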
I use an SD extension that tags images in the booru style. It is linked here. If you want to tag a bunch of images (which is the feature you'll mainly be using), go to 'Batch from Directory'. Use the settings I give here. I have no idea if they are the best settings, but they've worked well for me.
Here are the exclude tags written out as text so you can copy them: 0_0, (o)_(o), +_+, +_-, ._., <o>_<o>, <|>_<|>, =_=, >_<, 3_3, 6_9, >_o, @_@, ^_^, o_o, u_u, x_x, |_|, ||_||
You can also see that it has a feature to add tags, and to delete any tags that might be generated. This is a feature you might make use of later.
Once you have any extra parameters set, you can interrogate, and suddenly your images have tags with corresponding file names. Hooray.
Pruning and Keywords
You can stop here if all you want is a big ass folder of images and tags to go along with those images. I save a lot of my image folders with the tags included just so I can have tags to generate with.
However, for the purposes of making a LoRA, you will need to prune/remove tags, and also provide keywords (the keyword part is optional, but it can make things a lot better). If you're thinking "wow, this sounds like a pain in the ass", you're right. This is my least favorite part of making any LoRA. I enjoy doing artstyle LoRAs the most because you don't have to prune tags at all; you can just generate tags and start training the LoRA.
The software I use is the Booru Dataset Tag Manager, purely because it's intuitive to me. Most likely, you will want to use the SD extension tag manager instead; it's just better, but I stick with what I know. The Booru Dataset Tag Manager comes with a tutorial on its GitHub page. I have no idea how to use the SD extension one.
Whichever one you use, the key is to prune all the tags relevant to your concept. If you were doing a Banana LoRA, you would want to prune the tags 'banana', 'fruit', 'plant', etc. This will make the LoRA learn the concept better. If you don't think that's enough, and you want the LoRA to be more effective and more responsive, you are welcome to add a keyword.
A keyword can be anything that isn't a regularly occurring tag. If we were making a Banana LoRA, I could make my keyword 'banana_keyword', and since there's no booru-style tag that is 'banana_keyword', the tag will be unfamiliar to any model you generate with. This is important.
Pruning tags and adding keywords shifts what the LoRA handles versus what your model handles. When you get rid of all the tagged features of a particular character, the LoRA will be able to pull its weight and generate what it saw. If your keyword is unfamiliar to a model, then the responsibility of using it will lie with the LoRA, which will make it 'activate', so to speak.
This doesn't mean you can skip corresponding tags for characters entirely. If you pruned the 'purple hair' tag from a set of images depicting purple hair, you will still need to tag 'purple hair' when generating to make sure it works best.
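Doing this by hand across a couple hundred .txt files is exactly the pain in the ass I mentioned, so here's a rough sketch that prunes a tag list and prepends a keyword across a whole folder (the folder and tag names are just the banana example, and it assumes comma-separated booru-style tags):

```python
# prune_tags.py - prune concept tags and prepend a keyword in every .txt file.
# Tag names below are just the banana example; adjust to your concept.
from pathlib import Path

DATASET = Path("dataset")
PRUNE = {"banana", "fruit", "plant"}  # tags the LoRA should absorb
KEYWORD = "banana_keyword"            # must not be a regularly occurring tag

for txt in DATASET.glob("*.txt"):
    tags = [t.strip() for t in txt.read_text(encoding="utf-8").split(",")]
    tags = [t for t in tags if t and t not in PRUNE]
    txt.write_text(", ".join([KEYWORD] + tags), encoding="utf-8")
    print("updated", txt.name)
```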
Training Settings
This is the subject I'm least familiar with. I will be honest: for a majority of my LoRA making, I never fucked with the settings I had, because each LoRA took two and a half hours to make, so I never had the time to experiment.
The Formula
You're in the home stretch. You will need to rename your dataset folder to include a number of repeats, which would look like "2_dataset". This is how many times training will go over your dataset each epoch. Remember this formula: (Number of repeats) x (Number of images) x (Number of epochs) = Total Step Count
So if I had two (2) repeats, a hundred (100) images, and eight (8) epochs, I would have a total of 1600 steps. Use a calculator if you don't wanna do that shit on paper. It's just basic arithmetic, but I can't possibly know the experience of every reader here, so... there ya go.
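Here it is as a quick sketch with the example numbers (as far as I know the trainer divides by batch size, so this assumes a batch size of 1):

```python
# steps.py - the step-count formula from above, assuming batch size 1.
repeats = 2    # the number prefixed to your dataset folder, e.g. "2_dataset"
images = 100   # pictures in the folder
epochs = 8     # epochs in your training settings

total_steps = repeats * images * epochs
print(total_steps)  # -> 1600, right around the character sweetspot
```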
I am going to provide the settings I use for making a character LoRA and for making an artstyle LoRA. Note that the character settings could also apply to concepts, since it's a similar level of learning, but I haven't fully figured out settings for concepts or whether they'd even need to be different.
The sweetspot for training a character with these settings is around 1500 steps. The sweetspot for training an artstyle is 2000 steps. However, there are things to note about this.
- More epochs are always good. 8-12 is a good number. Going lower is fine, but you'll have fewer epochs to test, which can be a problem.
- Don't be afraid of a bigger dataset. It can't hurt for the AI to see more pics.
- That being said, don't waste your time with too many pictures. I wouldn't go below 30 and wouldn't go above 200.
- Repeats impart basically nothing onto the final result by themselves. I'd just adjust them to land on the sweetspot (see the sketch after this list).
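Since repeats are the knob you twist to land on the sweetspot, here's the same formula solved backwards:

```python
# repeats.py - solve the step formula for repeats to hit a target step count.
target_steps = 2000  # artstyle sweetspot; use ~1500 for characters
images = 100
epochs = 4

repeats = round(target_steps / (images * epochs))
print(repeats)  # -> 5, i.e. rename the folder to "5_dataset"
```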
The settings I have for characters are here. (Train on the NovelAI model. ATM I don't have a link for it, but ask someone else who does. These settings are automatically set at 8 epochs, but you can change this. I usually go with 2 repeats and 100 images with 8 epochs.)
https://files.catbox.moe/fo83nq.json
The settings I have for artstyles are here. (Train on the AnyLoRA model, pruned with no VAE. The settings are automatically set at 4 epochs, but you should change this. I'm just lazy and go with 5 repeats and 100 images with 4 epochs so I don't have to test too much.)
https://files.catbox.moe/nylj7l.json
To load these settings, boot up the GUI for the LoRA training script and go to 'Dreambooth LoRA', then open a configuration file: the JSON files I have provided above.
Testing epochs
When training, your model will pop out epochs. These are LoRA files. Congratulations. If you're at this part of LoRA making, you just made a LoRA. These epochs will have different results, so you will need to test them.
Using the Additional Networks SD Extension, you can use an X/Y/Z Plot to test all of them at once! Hooray.
In the extension's folder, there's a directory for LoRAs. Put all your epochs in it. Then follow the pictures below.
Select one of your epochs.
Open the X/Y/Z Plot.
Select Addnet Model 1 (which corresponds to the epoch you selected), then click the book icon. Next, select Addnet Weight 1 and set the weight of your LoRA. Then generate with your prompt and hope for the best! It'll cycle through all the selected epochs.
Coming up with your own settings
Using the Additional Networks tab in the a1111 GUI, you can select whatever LoRA is in the extension's LoRA directory and load its training settings. So... you can load any LoRA and see how it was trained. If you like a LoRA, find out how they made it; if you like a concept LoRA, definitely find out.
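If you'd rather read the metadata directly, kohya-trained LoRAs carry their training settings in the .safetensors header. A minimal sketch with the safetensors library (assuming a kohya-trained file; the 'ss_'-prefixed keys are kohya's convention, and other trainers may differ):

```python
# peek_metadata.py - print the training settings baked into a LoRA file.
# Assumes `safetensors` (and torch, for framework="pt") is installed and
# a kohya-trained .safetensors file; the path is just an example.
from safetensors import safe_open

with safe_open("my_lora.safetensors", framework="pt") as f:
    meta = f.metadata() or {}

for key, value in sorted(meta.items()):
    if key.startswith("ss_"):  # kohya stores its settings under "ss_" keys
        print(f"{key}: {value}")
```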
Picture Guide!
Gathering my dataset.
Tagging my dataset.
Pruning tags.
Time to train! Select the model you'll be training on, and go to 'folders' to set your output folder, output name, and dataset folder... which should look something like this:
As soon as your settings are selected, you're done! Good job. You can start training.