Training Loras for Noobs: A Comprehensive Guide


1: Training AI, and by extension training Loras, is by far the most nebulous topic in image gen. This guide will always be 'flawed', because hardly anyone knows what 'optimal' looks like. There are different strategies that lead to a good result.
2: As such, I am also still learning. Lora making is an experimental process and different settings are good for different datasets. This guide should serve as a baseline and you may incorporate different settings and values as you see fit.
3: There will be less handholding compared to the other guides. You should have some knowledge of SD as a whole and know how to clone a GitHub project.
4: I have not tested these configs for SD1.5.
5: My experience with lora training is primarily on NoobAI, which is very stable for Lora training. As such I may be biased towards the efficiency of some of these settings.

▶Current config download◀

Updated: 03/02/25

Summary

The primary goal of this guide isn't to make you a lora training master, it's to make you train loras.
This guide is split into two main categories: a TL;DR step-by-step checklist, and a more extensive explanation of topics.

  • The Checklist is meant to be used in tandem with the extensive guide. Each step has a longer section in the main guide. I recommend at least skimming the full guide first.
  • The more extensive portion will also mostly focus on the values and dials that (you) will actually use. There are like 70 separate values inside of training programs, and you really don't need to use the majority of them for a functioning Lora.

A lot of this guide is researched and based on the works of a few others, so a few shoutouts.
Valstrix Lora Guide goes far deeper into the nitty-gritty of the many dials you can turn, and it remains the best guide on the subject imo. Ctrl+F makes it a good glossary for terms.
Various people on the Furry Diffusion Discord.
Various Anonymous posters on that basket weaving forum.
CosmicElement for sanity checking several parameters and helping with optimizations.

The Checklist:

  • Collect your dataset. Aim for ~30 images for a well-functioning lora. Value quality over quantity whenever possible. Use either a program like Grabber or manually download the images.
  • Clean the dataset. This includes cropping and potentially centering characters if those are vital, for example dissecting a model sheet into individual images. This also includes potentially cleaning signatures and so on.
  • Create a folder called !{LORANAME}, inside this folder create another called 1_{TRIGGER}, where your loraname can be anything and your trigger should be identical to your trigger phrase. As an example: !Hymonie > 1_hym0n1e.
  • Place your images into the 1_{TRIGGER} folder.
  • Tag your images. Refer to the extended guide for proper manual and automatic tagging. Make sure your {TRIGGER} is the first word in the taglist.
  • Open Kohya_SS, click the lora tab, expand the configuration tab, then import an appropriate config file by clicking the 📂 icon.
  • Adjust these values:
    • Under model: The model you are training on, the name of the lora you are about to train, the !{LORANAME} folder for your imageset.
    • Under Metadata: Whatever you see fit to include.
    • Under Output: The folder where your loras will go. I prefer an "output" folder inside of Kohya_SS.
    • Under Parameters and Advanced: Batch size and Gradient Accumulation depend on your VRAM and RAM capabilities.
      • 8GB VRAM = 2 & 4
      • 12GB VRAM = 4 & 2
      • 24GB VRAM = 8 & 2
        On 8GB or 12GB you may choose the Grad Accum so that Batch*Grad = 16 for improved accuracy, like 12GB 4 & 4, at the cost of training time.
        If training crashes, reduce to a lower batch.
  • Hit 'start training' at the bottom.
  • ???
  • Profit
  • Make a grid in the WebUI to compare your different lora steps. I tend to use the '200' step one.

The Extended Guide

Training Loras isn't particularly difficult. Finding actually good info on it, however, can be. Additionally, info on training can often look outright contradictory; this does not mean one source is wrong and the other is right, at least most of the time.
Lora training has evolved quite a bit over the past years (months, actually). By now this stuff is fairly robust and not likely to explode outright. Whereas during the SD1.5 era the hyperparameters (the values in KohyaSS) were super important, these days it's all about good datasets instead. The hyperparameters primarily dictate the efficiency of training and getting "more bang for your buck" out of weaker datasets.
The fact that 1-image loras come out pretty usable should tell you that this stuff is fairly hard to fuck up, but it can always be a little better.

Datasets

Collecting your dataset is your most important step. You want to aim for ~30 high-quality images; fewer can be fine but results in a more rigid lora, and more is good if you aren't sacrificing quality.
Whether you are doing a character, concept, or style lora does not matter here, though some factors may be more important for your use-case.
As a general rule, you get out what you put in. If all your images contain simple backgrounds, then even with good tagging it may be difficult to not bias the lora towards simple backgrounds. That is not to say that you shouldn't include images with simple backgrounds, just that they ideally would not be in the majority (unless you desire this).

Some general quality markers

Good:

  • High resolution PNGs
  • No transparent backgrounds. If you have transparent backgrounds then you may need to turn those to black. Make sure that the backgrounds do not artifact too much in this process.

Bad:

  • JPGs. Anything with compression artifacts.
  • Sketches, lineart. I recommend just not using these.
  • Nightshade and/or Glaze. These aren't widely in use, contrary to what Xitter may make you believe, so don't panic over them.

Character/Concept loras:

  • If you don't want to have a style bias, try to collect from a variety of sources.
  • Make sure the characters/concept features are correct in each picture.

Style loras:

  • Style loras benefit more from a larger dataset. That being said, quality over quantity: if you have access to a literal hundred fully rendered pics, then you shouldn't include lineart or sketches.

On Training on AI output

This is perfectly fine. The whole "recursive training of AI on AI is the death of it" thing is largely a myth, perpetuated by people with unrealistic assumptions about how training works. The myth comes from the era where models output hands with 6 fingers, and training on those 6 fingers would further embed them into the model. Turns out people can just filter those duds out.
On the contrary, training on AI output is generally speaking extremely stable, which has benefits. At worst you reinforce existing biases inside of a model, such as samey posing and the like. At best you reinforce the better parts of your model.

How to collect your dataset

Lmao right click and save as.
Programs such as Grabber can automate right-click-save-as in industrial quantities. I won't go into detail on how to use this program.
If you are training on AI output, then make sure you are keeping artifacts at a minimum.
While your images will get downsized to a regular SDXL resolution for training, proper AI i2i upscaling nevertheless improves details, and those details survive the downscaling.

Create a folder called !{LORANAME}, inside this folder create another called 1_{TRIGGER}, where your loraname can be anything and your trigger should be identical to your trigger phrase. As an example: !Hymonie > 1_hym0n1e.
Inside this 1_{TRIGGER} folder you then place all your training images.
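
If you prefer doing this from code, here's a minimal sketch of the same layout, using the !Hymonie example from above (the leading "1_" is kohya's per-epoch repeat count for the folder):

```python
from pathlib import Path

lora_name = "Hymonie"   # can be anything
trigger = "hym0n1e"     # must match your trigger phrase

# !Hymonie/1_hym0n1e -- the "1_" prefix is the repeat count
dataset_dir = Path(f"!{lora_name}") / f"1_{trigger}"
dataset_dir.mkdir(parents=True, exist_ok=True)
print(f"Place your training images in: {dataset_dir}")
```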

Clean your dataset

I don't have too much experience with semi-automated solutions for dataset cleaning. I would not recommend manually cleaning 100 signatures out of your dataset, however you definitely should clean them; the keyword to search for here is "Lama Cleaner".
Here are a few other things to keep an eye out on:

  • Aforementioned removal of signatures.
  • Transparent backgrounds -> Make them solid colors. Make sure this does not create artifacts around the edges. Again, prefer automated solutions; see the sketch after this list.
  • You may want to remove SFX, text, speech bubbles, close-ups, and multiple panels. Some datasets definitely have them in excess, but these rarely become problems; if they are too hard to remove then don't bother.
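
For the transparent-background fix, here's a minimal sketch using Pillow, assuming RGBA PNGs. Pasting through the alpha channel avoids most of the edge fringing that naive conversion causes, but still eyeball the results:

```python
from pathlib import Path
from PIL import Image

def flatten_alpha(path: Path, color=(0, 0, 0)) -> None:
    """Flatten a transparent PNG onto a solid background color."""
    img = Image.open(path)
    if img.mode != "RGBA":
        return  # no alpha channel, nothing to do
    background = Image.new("RGB", img.size, color)
    background.paste(img, mask=img.split()[3])  # alpha channel as paste mask
    background.save(path)

for png in Path("!Hymonie/1_hym0n1e").glob("*.png"):
    flatten_alpha(png)
```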

Some specific extras-tab models can be beneficial to reduce noise or sharpen images.
Heavy duty ones like 4xNomos8kSCHAT-L or simpler ones like 1x-ReFocus-Cleanly are some I have used on jpg data before. You don't even have to scale up, you can just do a 1x scale with some of these.

Tagging your images

Tagging your images is the second most important aspect of lora training. There is, functionally speaking, no difference between a style, a concept, or a character lora; it's all just data. If I train a Mickey Mouse lora on only Disney art, I automatically create a Disney style lora as well. Tagging can help reduce unintended impacts of the data, and this is mostly what differentiates the different goals we have.

Tagging principles

Tagging images depends on your use case. Fundamentally speaking, character and concept loras are tagged the same way, while style loras benefit from slightly different tagging.

The basic concept goes something like this: you are trying to teach the AI a new concept, which means creating a new tag for it to learn; we call this the activation or trigger phrase. This tag is supposed to "absorb" everything the AI cannot directly identify during training, in accordance with your supplied tagging file.

Character/Concept Loras

[Example training image with its tagging and mask]

Taglist:

hym0n1e, anthro, female, solo, kobold, bust portrait, jungle, pointing at self, fingerpads, ears down, blushing profusely, sweatdrop, embarrassed, kimono, black clothing, traditional media (artwork)

My goal here is a character lora, so I want my activation phrase hym0n1e to absorb everything about the character. Here are the things I want to bake in:

  • Purple horns, purple nails, scale textures, red eyes, the style.

And here are the things I don't want to bake in:

  • Basically everything tagged. This includes background, facial expression, clothing, and pose.

The first 4 tags are mostly non-negotiable: (trigger), anthro/1girl, female/male, solo/duo, plus species (optional, but from experience it helps).
Example for tagging a duo image: hym0n1e, anthro, human, duo, kobold, etc.
This isn't super strict, but the consistency helps later when we are training.
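
If you want to double-check that ordering programmatically, here's a small sketch that forces the trigger to the front of every caption, assuming kohya-style .txt caption files sitting next to the images:

```python
from pathlib import Path

trigger = "hym0n1e"  # your activation phrase

for caption_file in Path("!Hymonie/1_hym0n1e").glob("*.txt"):
    tags = [t.strip() for t in caption_file.read_text().split(",") if t.strip()]
    if trigger in tags:
        tags.remove(trigger)
    tags.insert(0, trigger)  # trigger always goes first
    caption_file.write_text(", ".join(tags))
```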

If you were the AI, your line of thought would be something like:

"Hmm these things on her head are nothing I can recognize from this taglist, this means it must be hym0n1e."

Consequently if I were to tag the aforementioned purple horns, then those would not appear through the activation phrase.

There is such a thing as "undertagging" and also "overtagging". Should I include "plant"-related tags in addition to "jungle"? Should I include "penetration" even though I already have "vaginal penetration"? From my experience, generally no; derived tags should be excluded. But if you are noticing biases towards jungles or plants, then you may need to include more descriptors. Generally speaking, we want to explain the scene with the most efficient set of tags possible.

E621 vs Danbooru tags

Either can be used and also mixed and matched. I would not use two tags that describe the same concept though.

Style loras

Basically the same thing, but you tag everything. Automatic tagging is a bit more useful here imo.
You still use an activation phrase here. Advanced users may want to look into DoRA training methods; supposedly these are very good for style loras, but I haven't used one yet.

Manual Tagging

Get the BooruDatasetTagManager.
Again, not going too much into detail here since the program is relatively intuitive.
Useful hotkeys include Ctrl+E to create a new tag on the given image. Ctrl+W lets you add tags to all images, very useful for the first 4. The most important thing is that your 4 mandatory tags are at the top, with your activation tag first.
Manual tagging is good for control freaks, but can be a little tedious. This is what I use most of the time.

Automatic Tagging

Automatic tagging is a double-edged sword. Furry taggers especially are very good, but for lora training purposes they can have a tendency to overtag. Nevertheless, they are incredibly useful for style loras or loras where you have large amounts of training data (in which case tagging matters less). There is also very little preventing you from automatically tagging your dataset, then cleaning the tags manually with the BooruDatasetTagManager from above.

RedRocketAutoTagger
Unzip the folder somewhere.
Run: Install.bat
Run: Run.bat
At the end you will see a link like "Running on local URL: http://127.0.0.1:7860", simply copy this url to your web browser.
This tagger is already set up to maintain the "Anthro, Solo, Species" order. Put your lora activation tag in the "Prepend" field to put it before everything.
Afterwards just drag your folder in, the default threshold is fine.
If your computer fans start going absolute apeshit, that's normal.

From my experience I usually have to prune redundant tags. You don't need "penetration, vaginal penetration, feral penetrating female", etc.; just pick "vaginal penetration".
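
A crude sketch of that pruning pass: drop any tag that is contained in a more specific one. This is a blunt substring heuristic, so review what it removes:

```python
def prune_derived(tags: list[str]) -> list[str]:
    # keep a tag only if no other tag contains it as a substring
    return [
        tag for tag in tags
        if not any(tag != other and tag in other for other in tags)
    ]

print(prune_derived(["penetration", "vaginal penetration", "kobold"]))
# -> ['vaginal penetration', 'kobold']
```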

Setting up Kohya_SS

There are multiple frontends for Lora training. Onetrainer is more straightforward; KohyaSS is kinda clunky, but I know it functions well enough.
The hyperparameters (smart-boy word for machine learning variables) are what can turn a mediocre dataset into a great lora.
Kohya offers up-to-date parameters.

Install: KohyaSS. Follow the installation instructions carefully.
(Optional: Replace (Not overwrite!) the sd-scripts folder generated by the above, with the one from Feffy here. This has a bunch of fixes, especially for vpred training.)
(Alternatively the Dev branch for Kohya's sd-scripts).

Once installed, you run it via 'setup-3.10.bat'. Install with option '1'; I believe '4' and '5' improve speeds, but I could be wrong. You launch with '6'.

Kohya Basics

Kohya isn't the most user friendly interface, so here's some tips right away.
1: By default you are on the "Dreambooth" tab and not "Lora". This is the most common source of errors.
2: Download the config files at the top of the rentry. By default they are configured for 12GB VRAM, but we can adjust this in a bit.
3: Import the config you want (eps or vpred) via Lora > Training > Configuration. Here you click the 📁 icon, point it to the config, and it will import the settings. If you adjust your config template, you can rename it and hit the 💾 to save it as a separate preset, useful for filling out paths to your model.

4: Here's all the values you need to change:
- Under model: The model you are training on, the name of the lora you are about to train, the !{LORANAME} folder for your imageset.
- Under Metadata: Whatever you see fit to include.
- Under Output: The folder where your lora epochs will go. I prefer an "output" folder inside of Kohya_SS.
- Under Parameters and Advanced: Batch size and Gradient accumulate steps depend on your VRAM and RAM capabilities.
- 8GB VRAM = 2 & 4
- 12GB VRAM = 4 & 2
- 24GB VRAM = 8 & 2
On 8GB or 12GB you may choose the Grad Accum so that Batch*Grad = 16 for improved accuracy, like 12GB 4 & 4, at the cost of training time.
If training crashes, reduce to a lower batch. From my experience, with a 3060 12GB and 32GB RAM I can run 4&4 comfortably.

Once you have done this you should be able to hit The Orange Button (tm) at the very bottom and it will train.

Kohya Values

There's a ton of values, so I will only go over the most important ones.
As a general rule, most of the values chosen are for efficiency, and there are caveats.
Again, if you are curious about things like optimizers, learning rates, Network Rank & Alpha, and all that good stuff, check Valstrix Lora Guide.

Model to train on:
Generally speaking you train on the model you intend to use. However a few notes:
1: Training on a root model is better for overall compatibility. For example I could train on Noob 1.0 and still use it on PersonalMerge just fine.
2: Never train on lightning models. Always train on the closest source model.
3: Vpred training is iffy; there are a couple of pitfalls I have noticed so far. For example, disabling zsnr can improve image quality on a surface level, but for the wrong reasons. I've seen models like this that are seemingly designed to be trained on instead, to reduce discrepancies, but given they recommend not enabling zsnr, I'm not sure they aren't in that same pitfall. More testing is needed. That being said, overall they train fine. If your training images are all very bright, then getting darker scenes may require you to reduce the lora weight.

Name: I follow a nomenclature for loras. Name(Model)Version. For example Hymonie(NoobVpred1.0)v2.

Parameters:
Steps & Epochs:
The strategy you usually see is to calculate your epochs and steps. However, an 'epoch' is just an arbitrary multiplier: running 50 epochs of 50 steps is 50*50 = 2500 steps, and simply running 2500 steps would give you the same lora.
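
As a back-of-the-envelope, assuming steps per epoch is roughly images * repeats / batch size (kohya works this out for you; this is just to build intuition):

```python
images = 30       # dataset size
repeats = 1       # the "1_" prefix on your 1_{TRIGGER} folder
batch_size = 4
epochs = 50

steps_per_epoch = (images * repeats) // batch_size  # 7 here
total_steps = steps_per_epoch * epochs              # 350 optimizer steps
print(total_steps)
```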

Batch and Gradient Accumulation (under advanced):
The batch size decides how many images you train on in each step. This directly increases lora stability, as the images in a batch get cross-compared, and higher batches have more to compare against. The batch is primarily where your VRAM limitations come in.

Gradient Accumulation is a multiplier that determines how many separate batches the training should run before it cross-compares them and moves on to the next step. Batch & Grad at 4&2 means it processes 4 images, then another 4, then cross-examines these 8, then moves to the next step.
In short, Gradient Accumulation allows you to get the benefits of higher batches, on lower VRAM, at the cost of training speed.
As a general rule, your Batch*Grad should never exceed half your dataset. For a 4&2 setting I would want at least 16 training images; if you have fewer, set these values lower, starting with the Grad.
There's been some investigation into optimal values. Valstrix suggests a total of 16 (he runs 8*2 on a 24GB card). The equivalent for me would be 4*4; however, training this way is far slower and I haven't noticed significant improvements over 4*2, though it does seem to improve details.
For your first loras and when testing I would suggest a lower grad to save you time.
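
The half-your-dataset rule from above as a quick sanity check, with effective batch = Batch * Grad:

```python
def check_batch_settings(num_images: int, batch: int, grad_accum: int) -> int:
    effective = batch * grad_accum
    if effective > num_images / 2:
        raise ValueError(
            f"effective batch {effective} exceeds half of {num_images} images; "
            "lower the Grad first, then the Batch"
        )
    return effective

print(check_batch_settings(num_images=30, batch=4, grad_accum=2))  # 8, fine
```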

Advanced:
Additional parameters: --v_parameterization --zero_terminal_snr for vpred, otherwise keep empty.
Save every N steps: Keep at 50 to save multiple versions of the lora. Since this setup usually has its best version at ~200 steps, saving every 50 steps makes sense. You can theoretically save every 25 if you want finer control.
Keep n tokens: Keeps your first N tags from being shuffled around during training. We want our activation phrase at the top, so keep at least 1 token; 2-4 seems reasonable (see the sketch after this list).
Flip augmentation: Doubles your dataset by flipping it during training. Disable if your character has asymmetrical features.
Noise offset type: The EPS config has Multires selected; importantly, vpred needs this set to Original with 0 values.
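
Conceptually, what Keep n tokens does during caption shuffling looks like this (a sketch, not kohya's actual code):

```python
import random

def shuffle_caption(tags: list[str], keep_tokens: int = 1) -> list[str]:
    head, tail = tags[:keep_tokens], tags[keep_tokens:]  # first N tags stay pinned
    random.shuffle(tail)
    return head + tail

print(shuffle_caption(["hym0n1e", "anthro", "female", "solo", "kobold"], keep_tokens=4))
```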

After Training

Copy your loras to your webUI lora folder. Write a simple prompt:

Prompt:

(masterpiece, best quality, newest:1),
hym0n1e,
solo, female, kobold, sitting,
<lora:Hymonie(Vpred1.0)-step00000050:1>
NEG:
worst quality, worst aesthetic,

Navigate to the bottom, open Script, X/Y/Z plot, for X choose "Prompt S/R", then include all your lora versions separated by commas like so:
<lora:Hymonie(Vpred1.0)-step00000050:1>, <lora:Hymonie(Vpred1.0)-step00000100:1>, <lora:Hymonie(Vpred1.0)-step00000150:1>, <lora:Hymonie(Vpred1.0)-step00000200:1>, <lora:Hymonie(Vpred1.0)-step00000250:1>, <lora:Hymonie(Vpred1.0):1>
Your first entry should correspond to the tag in your prompt that it cycles through, in this case <lora:Hymonie(Vpred1.0)-step00000050:1>
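
If typing that list out is tedious, here's a throwaway sketch that generates it, assuming kohya's -stepXXXXXXXX checkpoint naming:

```python
name = "Hymonie(Vpred1.0)"
steps = range(50, 300, 50)  # 50, 100, 150, 200, 250
entries = [f"<lora:{name}-step{s:08d}:1>" for s in steps]
entries.append(f"<lora:{name}:1>")  # the final lora without a step suffix
print(", ".join(entries))
```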

For this lora I didn't specifically intend for all of Hymonie's features to get absorbed into the activation phrase, as I wanted it to be usable generically for kobolds. I reinforce her features through prompting.


Generally speaking, the idea is you "choose the version before they get samey", which in my case on this config is the 200 version. The higher versions may still be useful if you need the lora to be 'stronger', for example when using a different model.
I also like using this lora at 0.8 weight, something you may want to grid too.

Miscellaneous Stuff

Resizing Loras

You can resize Loras with minimal quality loss with this: Resizing Loras.
I haven't messed with this yet, as this setup doesn't produce very large loras. This may be useful for resizing some chonkers in your library though.

Lora Metadata

Loras actually save all their training-related metadata into the file itself. In the webUI you can hover over one in the lora tab, then click the ❗.
You can theoretically also open a lora with Notepad++ like you would an image with metadata, though it's not as readable.
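
If you'd rather read it directly, here's a small sketch assuming the lora is a .safetensors file (pip install safetensors); kohya prefixes its training keys with ss_:

```python
from safetensors import safe_open

with safe_open("Hymonie(NoobVpred1.0)v2.safetensors", framework="pt") as f:
    metadata = f.metadata() or {}  # dict of strings, e.g. ss_learning_rate

for key, value in sorted(metadata.items()):
    print(f"{key}: {value}")
```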

Combating Style bias on Character loras (Regularization Dataset)

The keyword here is Regularization Datasets. These are especially useful for loras with AI output as training data, since generating more is trivial.
The main idea is this: you have your main dataset for a character in a specific style. You then add an equal number of images, in the same style but without the character, into a second folder. This helps the AI differentiate by comparing the two: "Hmm, this style is similar but the character is not here, I will subtract the stuff that's similar then!", something along those lines. This Regularization Dataset does not need to be tagged.
As far as I understand, you will need to double your step count when using these, so the lora will take twice as long to train. As such, I have not tested them much; I've heard good things but haven't verified them.

Multi-Concept Loras

Haven't done one yet. Will update when I get to it.

1 Image Loras

Don't have toooo much experience with this yet aside from 1 lora I trained.
Set your batch to 1 and Gradient Accumulation to 1 (i.e. no accumulation). Train.
Your resulting lora is gonna be RIGID. Use this lora to gen more images for your dataset, then train a new lora, adjusting batch and grad. Congrats, it's slightly less rigid now! Continue as needed.
I've seen settings floating around specifically for these 1 image loras, but this setup seemed to work fine too. Something something reducing Network rank and Alpha primarily.

Other resources

Valstrix Lora Guide Comprehensive. Most of this config is just his setup. Nice guy. Good taste in mons.
THE OTHER LoRA TRAINING RENTRY Outdated: Last update 18 Aug 2023. Very well written explanations, just take any 'meta' claims with a grain of salt. This was primarily written during SD1.5's peak before SDXL.
Quick and Dirty LoRA Training Writeup
No Hardware CivitAi Lora Poorfriend guide for training on Civit.
