Yet Another /hbg/ Training Rentry

Last updated: see at the bottom

This guide is meant to replace https://rentry.org/59xed3, an outdated piece of crap at the time of writing.

I'll make an attempt to generalize, compile everything anons have already tried (besides just LoRAs), explain how things turned out the way they did, and give an overview. All the info is out there, but it's incredibly scattered. Therefore this guide will not handhold you through every little thing, but it will give you more model-agnostic knowledge about ML and the various techniques and tricks used to train diffusion models specifically. If you just want to train a lora, see GETTING STARTED. Read this rentry from top to bottom once and then go back to the parts you didn't understand if you want to immerse yourself (a 100% understanding is not necessary anyway).

Reply to the /hgg/ OP if you want to see something changed/added here or if you want to discuss something.

CURRENT AGE NINJA WISDOM

To be completely honest, I'm tired of seeing anons inventing new shit/using made-up shit to communicate anything. I'll stick to using more formal (yet still human) language than your usual baking guide, so if you see some unknown word or weird phrasing, there's a chance you may know it under some alias. This section won't make a lot of discoveries, but it'll show you another, consistent point of view which is somewhat required to understand a lot of what's coming next, so please read it even if you think you already know everything.

First, let's settle on what a "model" is. There are many things called a model. Generally in ML, a "model" or "network" is just a scheme, a draft, a diagram or a plan of data transformation (also called a function), as it describes all operations made on input data given some closely tied weights. Weights are what we have in our .safetensors files, a weight dictionary of sorts. Each entry has its own key (a name) and an associated tensor. Each such entry is usually used for one operation.

In case of diffusion models, a model receives text input and some random noise, and transforms it to an image using pre-trained weights. However, often anons use "weights", "model", and sometimes "checkpoint" interchangeably. I'll continue doing that, but I'll try to make a distinction where it matters.

Glossary

Below is a glossary table. Some of these intentionally describe things in a very general way. If you don't yet understand some details, that's okay.

| Term | Description for dummies | Details |
|---|---|---|
| Tensor | Yet another way to store an array of numbers/a vector | Used in PyTorch because it allows storing gradients conveniently (for training) |
| Baking | Household name for training a model | |
| Model/Network | The thing you load in your UI | Describes how input data is transformed using various operations and weights |
| Weights | The thing you load in your UI | A specific dictionary of tensors which are used by the model |
| Checkpoint | The thing you load in your UI | A snapshot of the model weights saved during training |
| Trainer | Software which trains the models | |
| Diffusion | A way of reversibly converting an image into noise. Diffusion models attempt to reverse the diffusion process (i.e. to remove the noise) | To make this process controllable and to have a way to reverse the diffusion, various mathematical models exist, such as DDPM, EDM or Flow Matching |
| Timestep | Similar to just "step" during inference, it represents how much noise there is in the image, except this term is mainly used for training: 1000 means the image is fully noised, 0 means there's no noise | This term describes the diffusion process, not the reverse diffusion. The ACTUAL amount of noise in the image depends on the noise schedule; the timestep shows where exactly we are on that schedule |
| Noise Scheduler | DDPM, EDM, Flow Matching, and many many more. Used to control the specifics of adding or removing noise (the noise schedule) | A lot of math, differential equations specifically |
| v-pred | Should not be confused with ZTSNR. Velocity prediction, usually opposed to eps prediction; essentially a better way to represent "noise" for the model. Good and easy-to-read rentry: https://rentry.org/wtfvpred | A different way to parametrize the model, so that it doesn't predict the noise to remove, but rather the velocity. A few steps in the right direction and you'll get "modern" Flow Matching |
| ZSNR/ZTSNR | Should not be confused with v-pred. A noise schedule which destroys the image completely on the last timestep; this schedule can't be reached with eps prediction | Zero Terminal Signal-to-Noise Ratio. Since SNR measures the amount of signal coming from the current image, by ensuring it's 0 you make sure you train the model to predict from pure noise |
| Kohya | https://github.com/kohya-ss | This name is also sometimes used to refer to his trainer, sd-scripts, the most popular and widely used SD trainer |
| Inference | The process of computing the model outputs (predictions) | During training the same thing occurs, except it's called the forward pass and it also captures gradients for backprop |
| Backpropagation | The process in which gradients are computed from given examples | https://en.wikipedia.org/wiki/Backpropagation |
| Gradients | Changes proposed to the model, computed through backprop | They're actually passed to an optimizer which then applies the changes to the model. Gradients are usually very big, since there is a gradient for every trained parameter |
| Latent space | A way to represent various meanings using tensors | The set of all possible embeddings in this space https://wikipedia.org/wiki/Latent_space |
| Embedding | A tensor representing some specific meaning in a given latent space. For example, you can represent a "cat" as [0.2, −0.4, 0.7] | https://wikipedia.org/wiki/Embedding_(machine_learning) |
| VAE | A model for lossy compression of images from RGB space into a latent space (you can call the outputs of the encoder an embedding too) | Consists of two parts: encoder and decoder. The encoder translates from RGB space into latent space, and the decoder translates from latent space back to RGB space https://en.wikipedia.org/wiki/Variational_autoencoder |
| U-net | A model which converts noise to an image given a certain condition | Denoising happens in the latent space of the VAE. In SD it's CNN-based with cross-attention mixed in, but nothing stops one from making a U-shaped transformer |
| Condition | A "request" to the model of sorts | Often represented by an embedding |
| CLIP | A set of models which convert text or images to an embedding | It consists of two parts: a vision encoder and a text encoder. Both share the same latent space. However, in SD only the text encoder is used, and the vision encoder is discarded |
| Text encoder | A model which converts text to an embedding | See CLIP |
| Vision encoder | A model which converts an image to an embedding | See CLIP |
| Loss | The result of comparing the model output to an example. Depending on the context this may also mean "loss function", the function used to obtain the loss | Typically gradient descent aims to minimize the loss. Gradients are calculated based on the loss function, and obviously you can't calculate gradients for a non-differentiable function, so the loss function must also be differentiable |
| Dreambooth | Incredibly old thing. Despite many suspicious rumors, it's just a way to preprocess your dataset, attempting to make the model learn a keyword. If someone says Dreambooth there's a possibility they had a "finetune" in mind | Many old guides/trainers refer to Dreambooth as if it were a finetune, however you can also train a Dreambooth-style LoRA, since the finetuning method doesn't actually matter. https://dreambooth.github.io/ |
| LoRA | A collective name for a small dependent model (!), targeted at modifying the main model, obtained through the process of low-rank adaptation | Initially targeted only at layers that are easy to decompose, i.e. linear layers https://arxiv.org/abs/2106.09685 |
| LoRA distillation | Extracting the difference between two models and saving it as a LoRA | Usually this involves SVD to "compress" the difference |
| Δ/Delta | A difference between amounts of a thing or things. Differential | |
| Hypernetwork | Outdated way to add knowledge, used at NovelAI | Essentially a streamlined way to stack more weights/layers onto the base model, so the architecture doesn't stay the same. For one reason or another, no one uses it anymore |

All possible ways to modify existing models

For now let's draw a line between fundamental methods of modifying the weights and specific implementations. This is important, because people often fail to make this distinction and mix everything into a huge mess. For example, a LoRA is often considered its own separate entity, when in truth it's just another model.

| Method | Description | Notable examples |
|---|---|---|
| Model fine-tuning | A process in which the weights are modified through backpropagation and gradient descent | A regular full finetune, LoRA training |
| Model merging | Merging is a way to create a new model based on a "recipe" for how to modify the existing weights or parts of them. This is an incredibly broad way to change the weights, and many things fall under this category. You can argue that even fine-tuning technically follows this definition, but the main difference from fine-tuning is that during merging the data doesn't come from training examples directly, and there's no backprop, no gradients | Applying a LoRA, applying a textual inversion, SVD LoRA extraction |
| Model "surgery" | Removing or adding some parts of the model, or even integrating other models. Usually followed by fine-tuning to adapt the model and make it usable again | Chroma, Hypernetworks |

LoRA? LoCon? LyCORIS? Why are there so many algorithms?

The original LoRA only targets dense layers (also called linear or fully connected layers), because the updates for this layer are easy to decompose since the whole dense layer operation is just a matmul. This is very useful for Transformers, but not so much for the CNN-like U-net of SD/SDXL.

LoCon argued that the convolution operation can also be decomposed, and therefore it should be possible to create an adapter for it. Thus, by applying LoCon you can train convolutional layers of SD/SDXL in your LoRAs.

LoHa, LoKr and other algorithms implemented in LyCORIS attempt to replace the original matrix factorization mechanism behind LoRA, however they all are still essentially doing the same thing (attempting to reduce the number of trained parameters), but they have slightly different properties.

If someone talks about a "lora", there's a good chance they mean a LoCon, because this is the de facto LoRA for SDXL. Talking about one of the LyCORIS algorithms is also a possibility.

Below is a more technical explanation of how LoRAs work. If you aren't ready to read it, you can skip it.

Gradient Descent background

To understand what LoRA is, we need to understand how gradient descent/regular finetuning works first. Well, for all practical needs the only thing you need to know is how the weights actually get updated. Don't worry, it's quite simple. To simplify it even further, let's imagine our imaginary model consists of exactly 1 parameter (1 number).

Let's denote:

  • W(step s) is our trainable parameter, at any given step s during training, our weights. Let's suppose it's equal to 0.4.
  • ΔW is a proposed change to the weight W we've obtained in one step of training. Let's say we've obtained a number 0.1.
  • W(step s+1) is the result of applying ΔW to W, aka the updated weights.

So...yeah, it's gradient descent. The update rule is kind of obvious:

W(step s+1) = W(step s) + ΔW
W(step s+1) = 0.4 + 0.1 = 0.5

Since this all is actually linear algebra, all variables here are secretly matrices. Every parameter in the model gets updated according to this rule.
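
If code reads easier for you, here's the same update rule as a minimal PyTorch sketch with a made-up one-parameter "model" and plain SGD; real optimizers (AdamW and friends) compute ΔW from the gradient in a smarter way, but the W + ΔW part stays the same.

```python
import torch

W = torch.tensor(0.4, requires_grad=True)   # our single trainable parameter, W(step s)

# a pretend forward pass and loss on one training example
x, target = torch.tensor(2.0), torch.tensor(1.0)
loss = (W * x - target) ** 2

loss.backward()                              # backprop: gradient of the loss w.r.t. W

lr = 0.1
delta_W = -lr * W.grad                       # the optimizer turns the gradient into ΔW

with torch.no_grad():
    W += delta_W                             # W(step s+1) = W(step s) + ΔW
    W.grad = None                            # clear the gradient before the next step
```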

So what's the deal with LoRA?

The original LoRA method proposes to freeze the weights of the main model W and to train the weight update matrix ΔW decomposed into two low-rank matrices. So what does that mean? ΔW is decomposed into two matrices: A (let's refer to it as W_down) and B (W_up). Both of these matrices must be convertible back into ΔW, and to get ΔW back, you matmul the two matrices together.

ΔW = W_down x W_up

Matrix A is a low-rank down-projection matrix W_down, and matrix B is a low-rank up-projection matrix W_up, and this is the stuff that actually gets trained, this is your LoRA. But what does the "low rank" part mean exactly?

Rank is a "complexity" of the matrices, or a minimum possible number of columns or rows in the matrix to represent the data. You see, when you try factor out a matrix ΔW of shape (n, m), you need matrix A W_down to be of shape (n,r), and matrix B W_up to be of shape (r,m) where r is rank. This is required so that you can then perform matmul ((n,r) x (r,m) -> (n, m)) and then merge the LoRA back to the weights. Incidentally, the largest possible (full) rank r = min(n, m).

So with rank you're introducing a third variable which controls how much information your model can learn. If you set rank to a small value, you will need to train far fewer parameters, which allows you to modify the model (albeit with less precision) using much less memory.
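
To put numbers on it, here's the arithmetic for a single made-up SDXL-sized linear layer; the 1280×1280 shape and rank 16 are arbitrary illustrative choices.

```python
n, m, r = 1280, 1280, 16        # layer shape (n, m) and LoRA rank r

full_delta = n * m              # training ΔW directly: 1,638,400 parameters
lora_delta = n * r + r * m      # W_down (n, r) plus W_up (r, m): 40,960 parameters

print(full_delta / lora_delta)  # -> 40.0, i.e. 40x fewer trained parameters for this layer
```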

For the LoRA weights, i.e. W_up and W_down, the regular gradient descent update rule W(step s+1) = W(step s) + ΔW applies!

W_up(step s+1) = W_up(step s) + ΔW_up
W_down(step s+1) = W_down(step s) + ΔW_down

So for the entire model you can imagine the training process iteratively in your head like this:

W_effective(step s) = W + [W_down(step s-1) + ΔW_down] x [W_up(step s-1) + ΔW_up]

where W_effective(step s) is a model which you do the forward pass with, W is frozen, W_down(step s-1) and W_up(step s-1) are THE trainable parameters modified via ΔW_down and ΔW_up obtained from the backward pass of the last step, respectively.

In reality, W_down(step s-1) and W_up(step s-1) aren't separated or anything, and gradients for them are computed at the same time. And you never really merge the LoRA into the weights during training, you compute the effective weights on the fly whenever you need to use them.
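
Here's what that looks like as a wrapped linear layer, in a simplified PyTorch sketch that follows the naming and shapes used above. Real implementations (sd-scripts, LyCORIS) also deal with things like network alpha and dropout, and they never materialize the full ΔW, but the idea is the same.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch only: frozen dense layer W plus a trainable low-rank ΔW = W_down @ W_up."""
    def __init__(self, base: nn.Linear, rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                              # W (and bias) are frozen
        n, m = base.out_features, base.in_features
        self.W_down = nn.Parameter(torch.zeros(n, rank))         # (n, r), zero-init so ΔW starts at 0
        self.W_up = nn.Parameter(torch.randn(rank, m) * 0.01)    # (r, m)

    def forward(self, x):
        delta_W = self.W_down @ self.W_up                        # (n, m): W_effective = W + ΔW
        return self.base(x) + x @ delta_W.T
```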

If you want to merge a LoRA into a model, you do this:

W_new = W + lora_strength * W_down x W_up
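
In code, merging one layer is just this (same toy shapes and names as above; real LoRA files also carry a per-layer alpha that scales this, which I'm skipping here):

```python
import torch

def merge_lora_layer(W: torch.Tensor, W_down: torch.Tensor, W_up: torch.Tensor,
                     lora_strength: float = 1.0) -> torch.Tensor:
    # W: (n, m) base weight, W_down: (n, r), W_up: (r, m)
    return W + lora_strength * (W_down @ W_up)
```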

Why do you need LoCon?

Everything you read above is 100% correct only for linear/dense/fully connected layers. Convolution layers are not dense layers, therefore you can't simply decompose their updates into two matrices. It's not so straightforward to represent the convolution operation as a matmul, but it can be done by unrolling and rearranging the data and weights in a specific way, and this is exactly what LoCon does. It allows creating a LoRA for convolutional layers, but it's not the only way to do so either.
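
For reference, one common way this is implemented (roughly what LyCORIS-style LoCon does, heavily simplified here) is to split the adapter into two small convolutions: a "down" conv with the same kernel size but only r output channels, followed by a 1×1 "up" conv back to the original channel count.

```python
import torch
import torch.nn as nn

class LoConConv2d(nn.Module):
    """Sketch of a low-rank adapter for a conv layer (simplified LoCon-style)."""
    def __init__(self, base: nn.Conv2d, rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                              # freeze the original conv
        # "down": same kernel/stride/padding as the base conv, but only `rank` output channels
        self.down = nn.Conv2d(base.in_channels, rank, base.kernel_size,
                              base.stride, base.padding, bias=False)
        # "up": a 1x1 conv mapping the `rank` channels back to the original output channels
        self.up = nn.Conv2d(rank, base.out_channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)                           # zero contribution at the start

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))
```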

Hopefully all this will make schizo scribbles of Kohaku a little bit easier to understand. Also, just because you can train convolutional layers, it doesn't necessarily mean you should. (<- this statement shouldn't stop you from training convolutional layers either)

todo: need to add a note about network alpha
todo: maybe elaborate on specifics of locon or other lycoris algorithms? unlikely

A brief history of relevant models

Stable Diffusion

THE THING that started it all. Used DDPM as its noise scheduler. Go read the ancient rentry if you wish to dig into old stuff. For archiving purposes, I'll also leave this link here: https://rentry.org/RentrySD-backup

Stable Diffusion diagram

There were various attempts to make it generate anime images, but after someone posted an archive of NovelAI models and stuff on 4chan, it all became irrelevant. Leaked models were trained on top of Stable Diffusion 1.4.

Furries were also training their own models, and long after the release of SD 1.5, near the release of NAIv3, HLL-anon attempted to bake an anime finetune of fluffyrock. Like NAIv3, these models used a v-prediction diffusion target with a ZTSNR schedule and a high base image resolution. https://rentry.org/5exa3

Stable Diffusion XL

A very similar model to SD except it's newer, bigger, and has two text encoders. Initially it also featured a refiner, which was quickly thrown out. Despite having a larger base resolution than SD (1024x vs 512x), it used exactly the same DDPM noise scheduler, which had some problems. All derived models inherited these flaws unless specifically mentioned otherwise.

Stable Diffusion XL diagram

Derivatives of SDXL

NovelAI Diffusion Anime v3

https://blog.novelai.net/introducing-novelai-diffusion-anime-v3-6d00d1c118c3
The first anime finetune of SDXL. Didn't leak this time, so it's paid. Featured a number of improvements over vanilla SDXL, described in a technical report published long after the model was released:
https://arxiv.org/abs/2409.15997

Pony V6

https://civitai.com/models/257749/pony-diffusion-v6-xl
Trained on 2.6M images, with roughly even split between anime, cartoons, furry and pony. The first SDXL model that could generate "porn" locally.
https://rentry.org/ponyxl_loras_n_stuff

Animagine

https://huggingface.co/cagliostrolab
https://cagliostrolab.net
V3 is the first "usable" version trained on 1.2M images and it came out at the same as Pony V6. Didn't get very popular due to very well defined AI-generated style and weak knowledge.
V4.0 is a lot better, and it was exposed to much more images but it still loses to other models.

Illustrious

https://civitai.com/models/795765/illustrious-xl
Trained by OnomaAI on a 7.5M-image danbooru dataset. Resumed from Kohaku XL-Beta Revision 5. The first and most popular version, 0.1, initially got leaked via a torrent. The subsequent versions were released after reaching certain donation goals, which mostly dried up due to the weird monetization drama around Illustrious v3.5.
There's a technical report available for versions up to 2.0: https://arxiv.org/pdf/2409.19946
There are blogs available for other versions: https://www.illustrious-xl.ai/blog
https://rentry.org/illustrious_loras_n_stuff

NoobAI

https://huggingface.co/Laxhar
https://civitai.com/models/833294
Trained by Laxhar Lab on a mix of ~13M danbooru and e621 images. Comes in two flavors: eps and v-pred. Below is a somewhat simplified table describing rough relations between versions.

| Version | Resumed from | Description |
|---|---|---|
| eps 0.5 | Illustrious 0.1 | Only the U-net was trained, the text encoder is the same as in Illustrious 0.1. Some consider it the last model "untouched" by furry scribbles due to that. |
| eps 1.0 | eps 0.5 | The text encoder was trained. The "final" version, and the base model for every subsequent v-pred model. |
| eps 1.1 | eps 1.0 | A minor refresh of eps 1.0. |
| v-pred 0.5 | eps 1.0 | The very first v-pred model, trained for roughly 5 epochs in order to adapt the existing knowledge to the new prediction target. Due to a bug in timestep weighting, the last (0 SNR) timestep wasn't trained at all. Related issues in the sd-scripts repo: [1] [2] |
| v-pred 1.0 | v-pred 0.5 | The final v-pred version of NoobAI. |

Nothingburger/DOA "weights available" models

  • Stable Diffusion 2
  • Waifu Diffusion 1.5
  • WDV
  • Stable Diffusion 3
  • Stable Cascade
  • Resonance Cascade
  • Flux
  • Chroma
  • PixArt series
  • Sana
  • Kolors
  • Qwen Image
  • Lumina series
  • Neta Lumina

Common training styles

todo: describe concept, style, character training
todo: describe keyword, illustrious and noob-style training
todo: borrow from https://civitai.com/articles/138/making-a-lora-is-like-baking-a-cake
todo: what to do if data is scarce

Uncommon training styles

todo: block weighted training, describe copier method https://rentry.org/Copier_Method_Notes, lora extraction from ft, multiconcept training, etc

Regularization images (aka Dreambooth)

Dreambooth is an old-ass technique which aims to teach the model a specific concept by binding it to a token (you can also bind it to a sequence of tokens). It is supposed to work well in situations where obtaining high-quality captions is not possible (this is not a problem now, we have taggers and VLMs, and you can always manually caption the dataset if it's sufficiently small). It does not inherently imply fine-tuning the model directly; you can train LoRAs using Dreambooth as well.

If you don't follow Dreambooth like a gospel, some ideas from here may give you some more options you can test if you are not satisfied with your current results.

This is a pretty strict method, and in truth no one follows it to the letter because the results you get are pretty stiff. However, the ideas here are pretty interesting and naturally evolve into things like tag-frequency-based weighting or multi-level tag dropout. Even words like "bleedover" go back to the concept first discussed here.

Authors propose a "subject" thing instead of just "a concept", and they search for a token to associate the subject with. That token must be extremely rare, in order to prevent accidental bleeding, like "sks" (this is retarded outside of autistic experiments and you should just name the thing like a normal person would or give it a short name). However, you can as well throw the CLIP out and generate images using IDs or shit, because this is what your model would effectively become if you train it just like that.

So they introduce an "autogenous" "class-specific" "prior preservation" loss (function) to help avoid that.

  • "prior preservation" part means that this loss (function) must act as a sort of a limiter, which prevents the model from forgetting "prior" knowledge. Allegedly this also allows the model to compare the existing knowledge and new one during learning.
  • "class-specific" part means that our subject losses (functions) have to be grouped per class. In practice, this (function) thing expands into having two (or more) sets of images, and you set different loss multiplier for each set. The first set is for "generic class", and the other sets are for "specific subjects"

    | Class | Subject(s) |
    |---|---|
    | Car | Bugatti Veyron EB 16.4, Toyota AE86 |
    | Hatsune Miku | yuki miku, sakura miku, rabbit hole miku |
    | Breasts | Pointy breasts |

    So if you want to train a model using Dreambooth to generate a Bugatti Veyron EB 16.4, aside from having a Bugatti dataset, you also must have a generic "car" dataset. This dataset is called the regularization ("prior") dataset, and you typically set a low loss weight for it. But how do you obtain such a dataset?

  • "autogenous" part means that all the images for the class (not the subject!) are obtained via the model itself, before training. Sounds weird, I know, but you are supposed to generate some images for the class using the model you are going to train on. Pair the images you generated with the prompts you used to do it, and you get your regularization dataset.

    Obviously you absolutely must not stack 100500 snake oils here: use a fairly clean inference config, a large number of steps, no meme samplers, no negative prompt, no loras, no cherrypicking, etc., otherwise you're going to distill your own shit into the model instead of preserving the prior.

    And you don't necessarily have to fixate on generating images yourself, if you have access to the dataset the model was trained on in the first place. You can use a small part of it; 10 regularization images for a 10-image dataset, for example, may be a good start. I imagine this wouldn't make much sense unless you have a way to exactly recreate the base model's training conditions. Either way, the core of the method boils down to a weighted sum of two losses, see the sketch below.
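
Trainers expose this under different names (prior loss weight, regularization weight, etc.), so here's only a minimal, hypothetical sketch of the idea: two batches, the same diffusion loss on each, and one multiplier. The batch layout and the `model(...)` signature below are made up for illustration, not any specific trainer's API.

```python
import torch.nn.functional as F

def dreambooth_step_loss(model, subject_batch, class_batch, prior_weight=1.0):
    # both batches are assumed to be (noisy_latents, timesteps, text_cond, target) tuples;
    # the exact plumbing depends entirely on your trainer, this is just the idea
    noisy_s, t_s, cond_s, target_s = subject_batch
    noisy_c, t_c, cond_c, target_c = class_batch

    # regular diffusion loss on your subject images (the Bugatti, your character, ...)
    subject_loss = F.mse_loss(model(noisy_s, t_s, cond_s), target_s)

    # the same loss on the self-generated "class"/regularization images
    prior_loss = F.mse_loss(model(noisy_c, t_c, cond_c), target_c)

    # prior_weight is what trainers usually call "prior loss weight";
    # it keeps the model from forgetting the generic class
    return subject_loss + prior_weight * prior_loss
```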

todo: experiments

GETTING STARTED (WIP)

You're new and don't know what to do, what you can even do in the first place, or what hardware you need? Follow along.

Setup

Well, hardware needs vary wildly. With some understanding you can cut corners here and there, but generally you DO need a fairly recent and powerful GPU to train something.

| GPU VRAM | System RAM | Expected experience |
|---|---|---|
| 8GB | 16GB | The bare minimum for LoRA training, you can't go any lower than that. Pretty slow, but can get things done |
| 12GB | 16GB | Unlocks some more options for training |
| 16GB | 16GB | You can feel the freedom. Fairly comfortable LoRA training experience |
| 24GB | 32GB | You can natively fine-tune SDXL's U-net with some semblance of comfort and quality |
| 32GB | 32GB | You can do practically everything |

Having a second GPU with a different VRAM amount doesn't usually benefit you all that much, however you can still utilize it in some ways. For example, you can test your results mid-training, which certainly gives you some edge. If you want to speed up training by spreading the load, you must use a homogeneous setup, for example 24GB+24GB. Having many weak GPUs is kind of pointless; consider going multi-GPU only if you already own a 24GB+ GPU.

If you have the opportunity to run your PC 24/7 and access it over the network, do it. It's more convenient that way, and besides that, you can dive into self-hosting and have access to everything on it from anywhere in the world. Just watch your electricity bills and try not to burn your house down.

You can train and self-host on Windows, but Linux can offer you some considerable advantages, especially when it comes to self-hosting and having remote access to it. And besides that, it just trains faster. If you've never used Linux, you can choose literally any popular Linux distro, it will get the job done. Dual-booting is a pain due to Windows not supporting good filesystems, so I'd recommend wiping everything (at least on your boot drive) before you attempt to use Linux. Windows filesystems also suck on Linux, and may screw up your files, be aware of that.

This guide will assume usage of Linux, however you can do most of the stuff on Windows without any problem.

Training environment

For standalone trainers like easy scripts or OneTrainer, you typically don't need anything besides the installation script. However, python virtual environments can sometimes break, especially if your system changes its Python version. Therefore you should use uv to manage Python and its installations. It's also faster than pip at creating working venvs since it downloads packages in parallel; you won't regret using it for any Python project listed here. Install it right now, because we will be relying on it.

For a basic training setup, having sd-scripts and easy scripts installed is sufficient. So let's start with LoRA Easy Training Scripts. If you have your own server and consider yourself somewhat advanced, you can start by setting up a headless backend.

todo: an easy-to-reproduce lora training setup for various trainers, with test dataset and everything
todo: a finetune setup

I want to make a good LoRA...

Do it. Don't stop.
Decide what you want to train, collect the dataset and put it in the oven. Read the rest of this rentry while it's baking. Expect to pour in a lot of your time; you will not get good results quickly. Make an excel sheet with your parameters and note your results. Trust me, this is going to help you a lot. Look at the results, change the training options and repeat the autistic cycle until you're onto something. Remember this: what you train on and how you train is 95% of the model, and the training config is only the remaining 5%.

...and I'm carefully arranging all the elements until it's perfect and...

Don't do it. Just don't. Stop. ML is never about perfection or good numbers, it's about finding the right thing to do as quickly as possible. Look above.

Trainers

todo: describe specifics of each one and check for sdxl support

https://github.com/kohya-ss/sd-scripts (sd3 branch has some more features)
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/67372a/LoRA_Easy_Training_Scripts/tree/flux
https://github.com/67372a/sd-scripts
https://github.com/bmaltais/kohya_ss
https://github.com/Nerogar/OneTrainer
https://github.com/Mikubill/naifu
https://github.com/bghira/SimpleTuner
https://github.com/tdrussell/diffusion-pipe
https://github.com/ostris/ai-toolkit

Headless remote setup with LoRA Easy Training Scripts

So you have a laptop and a home lab server, just like me, huh? You can run all the computations on your home lab while still having a nice GUI on a laptop. Since easy scripts is not a web ui, it doesn't provide an easy way to do this out of the box and you have to resort to using good old server/client arch. This will also work with colab, so do whatever you want with that knowledge.

In this section I'll show you how to set up easy scripts to work on your local home headless server, while leaving the GUI on another PC. This should work with either Windows or Linux server/client, but what the fuck are you doing, if you are going this far yet still using Windows on a server???

Models, datasets and other stuff must be present on a remote machine! You will have to specify remote paths inside the GUI to make the training work.

Backend

On the remote machine with GPU(s) you can use either a docker image, or do a manual setup. We'll not discuss advantages of podman vs docker here, so we are going to do a manual setup. We only need the backend:

git clone --recurse-submodules https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
cd LoRA_Easy_Training_Scripts
cd backend

uv venv --seed --python 3.12 --relocatable sd_scripts/venv
source sd_scripts/venv/bin/activate
uv run installer.py colab

Edit config.json:

  • if you want to expose the backend to the internet via a tunnel, set up cloudflared/ngrok on your remote machine (find how to do it on your own please)
  • if you just want to run it on LAN, set remote to false (this is what I do)
$ nano config.json

Finally, run the backend

$ uv run main.py

In the future, you can run the backend without activating the venv like this:

$ uv run sd_scripts/venv/bin/python main.py

Validate it's working

Find out the LAN address of the machine (the command may vary depending on the distro):

$ ip a | grep inet

On a separate shell, make sure backend is working and you've got the right URL by curling http://<local ip>:<port in config.json>:

$ curl http://192.168.0.2:8000/is_training
{"training":false,"errored":false}

Client

On local "weak" machine we are only going to need the frontend:

git clone https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
cd LoRA_Easy_Training_Scripts

uv venv --python 3.12 --relocatable
uv pip install -r requirements.txt

Run it like this:

$ uv run main.py

Just above the "START TRAINING" button you are going to see an URL. Make sure you can access the server from this machine, and the URL points to your server, in the case above it would be http://192.168.0.2:8000.

Dataset prep

So, to finetune a model you must have a dataset. Simple enough, you may think, however the devil is in the details. What does it mean "to have a dataset", precisely? Right, precisely speaking there's no single thing you can call a dataset.

Difficulties

Imprecisely, in our context, a dataset is a set of text-image pairs. Anything that can be described that way may be called a dataset. For example, this folder is a dataset:

.
├── char_A
│   ├── image1.png
│   ├── image1.txt
│   ├── image2.png
│   └── image2.txt
└── style_B
    ├── image1.png
    └── image1.txt

Why am I telling you this? Because there's literally no standardized way to handle datasets. Every piece of software invents a completely new approach to editing, storing or exposing them, or whatever else. On top of that, you will haphazardly edit them yourself. That's just the reality and you need to accept it, but actually dealing with it is not so hard if you come prepared. So to make our life easier, let's first look at the typical workflow of small-scale baking:

  1. You obtain a dataset
  2. You curate the dataset
  3. You train on the dataset
  4. You look at the results
  5. If you are not satisfied, go back to step 2.

Obtaining the "first draft" of a dataset usually isn't a hurdle. However, note: danbooru stores tags using "underscore_format", without commas. For more details, see Annotation types and Scraping.

Curating the dataset is... a bit tedious, alright, but it seems doable. You just filter out the obvious garbage and go ahead. What can go wrong with that?

Training is the most time-consuming process, depending on the dataset and your hardware. You convert your dataset to whatever format the trainer understands and go ahead.

You load the model, and your first thought is... It's shit! Like, complete and utter trash! This is where it's at.

Schizo babble ahead.

You immediately start noticing some weird patterns: on every gen there's some kind of object you don't remember being in the dataset, or it just doesn't work at all unless you prompt some random text parts from the dataset. You sift through the images and notice the thing that appeared on all your gens. You delete it and rebake, only to see that object on every gen again! At this point you read somewhere that all you need to do is to tag this object on every image you can find it on to "negate" it. So you're thinking of adding that deleted image back, but the problem is: it's deleted! You didn't save it anywhere. So you go search for it again wherever you got it from the first time and grab it, along with the tags, but now you run into a name conflict: 123.png already exists in the dataset! So you rename both the .png and the .txt to image123 or something to resolve it. Now you train again, and the model still fucking sucks. Looking through the trainer messages you read something along the lines of "could not find a caption file for image123.png, proceeding with empty captions", but you don't pay it much attention yet. In the meantime you've found another incredibly good image you want to add to the dataset, and without it the bake just can't be good. But there's a problem: it has a transparent background, and you don't know how the trainer would react to it. You decide not to gamble and google how to fill a transparent background, so you spend the next 15 minutes learning how to replace a transparent background in something like Photoshop. You put that image into the dataset and bake. Finally, the model starts taking good shape, but it's still kind of weird. You randomly glance at the dataset folder and notice: there's no image123.txt. Instead, there's only imag123.txt. You think about it and then realize that this is what that message in the trainer earlier was about. You quickly rename it and proceed to bake. You see the old problem reappear: that object started appearing everywhere AGAIN! What's the deal with that? Since the bake took so long (you aren't training on an H100, right?), you've already forgotten what the fuck you did to the dataset. You patiently look through all of the images again, but every image with that object is tagged well. Or at least, you think so. After carefully inspecting the dataset twice, on the third pass you notice: you forgot to tag the image you added back! You fix it and train the model. It's your best model yet, but it's still a bit iffy. And you give up, because you're too tired of dealing with this bullshit.

And this is just some hypothetical scenario I just came up with, without any weird shit like merging/balancing datasets or using taggers to tag images, and I didn't even mention anything about finding the right hyperparameters! And god forbid, someone updates the tags for the image you downloaded on danbooru itself!


Anyway. The point is, you should track all your changes through git or something like that, and divide the dataset preparation into stages. You don't have to do it, but this will make dataset curation SIGNIFICANTLY easier. Specifically, you should store at least this much in different folders:

  1. The original untouched files, put conveniently in one place. If there are no tags, that's okay. Who knows when you'll need them
  2. Very obvious garbage thrown out, and the rest put here
  3. Basic-ass editing. If you're dealing with a bad dataset, this is where all pre-processing should happen
  4. Fully captioned pairs
  5. Manually cleaned-up images if there are any + updated tags
  6. The WIP dataset you're actively iterating on
  7. A copy of the final dataset in a trainer-digestible format

If you want to add a new image to the dataset, it should go through all of the stages. If you want to remove an image, you can do it as long as it's tracked through git. Make sure every change you do is logged and you can navigate through the history without much difficulty.

Some tips:

  • If you are on Linux, look up various filesystems and their features. BTRFS, for example, allows storing duplicated files without filling up the space.

Annotation types

The "text" part in "image-text pair" refers to an annotation/caption of the corresponding image. Various sources use its own format to describe the image, which makes it non-trivial to use such data directly as a prompt for a diffusion model.

Storage formats

This depends entirely on the source you are using to obtain images. Pixiv uses its own format, danbooru uses its own format, sadpanda uses its own format. Not to mention that there is no single tagging system, and even the various boorus (danbooru, konachan, e621) use their own, mostly incompatible tags to describe similar things. However, let's focus on the danbooru flavor, since most anime models use some kind of danbooru tags.

Example danbooru entry https://danbooru.donmai.us/posts/189185:

tsukumo_(soar99) 
yotsubato! 
ayase_ena ayase_fuuka hayasaka_miura hiwatari koiwai_yotsuba 
commentary_request 
5girls bedroom multiple_girls raglan_sleeves school_uniform window 

As you can see, danbooru uses an underscore _ to connect the words within a tag. The tags are separated by spaces, and special characters such as '!' or '(' as well as digits are allowed inside a tag.
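
If you ever need to turn a raw booru tag string into something closer to what anime SDXL models are usually prompted/trained with (comma-separated, no underscores, escaped parentheses; more on that in the next sections), the gist is roughly this. It's a rough sketch only, since the exact rules depend on the model and your trainer:

```python
def booru_tags_to_caption(tags: str) -> str:
    out = []
    for tag in tags.split():                  # danbooru tags are space-separated
        tag = tag.replace("_", " ")           # underscores become spaces
        # many UIs treat bare ( ) as emphasis syntax, so prompts usually escape them;
        # whether your trainer wants them escaped depends on its emphasis support
        tag = tag.replace("(", r"\(").replace(")", r"\)")
        out.append(tag)
    return ", ".join(out)

print(booru_tags_to_caption("tsukumo_(soar99) 5girls raglan_sleeves school_uniform"))
# tsukumo \(soar99\), 5girls, raglan sleeves, school uniform
```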

Prompt formats

In SD/SDXL, the CLIP text encoder is used. CLIP was designed to accept natural text, i.e. any text data. Anything that can be encoded using the tokenizer will be taken into account by CLIP. However, the original CLIP has a number of flaws, most notably its 75-token max length.

The most important bit here is that you can't just transfer text prompts from one piece of software that inferences the model to another.

todo: describe how nai adopted comma-separated tags without underscore, backslash escape for parentheses, etc

Emphasis

todo: outline what is emphasis, nai, a1111 and comfy formats for it

Prompt length

CLIP is limited to 75 tokens per embedding (excluding the bos and eos tokens). This is quite a limitation, since you probably can't fit everything you want in just under 75 tokens. Back in the SD 1 days, in order to train the model efficiently, NovelAI proposed a way to extend the prompt length to any arbitrary number by splitting the original prompt, calling CLIP multiple times and combining the embeddings. This was quickly adopted virtually everywhere after their model leaked, including in base SDXL training.

However, again, various implementations differ and may handle edge cases ungracefully. For example, what are you going to do if you hit the limit of 75 tokens IN THE MIDDLE OF A TAG?

[ ... open mouth, quad tails, raglan<EOS>] [<BOS>sleeves, shirt, short hair, shout lines ... ]

Nowadays many UIs have silently switched under the hood to another version of the scheme which does a better job of handling tag boundaries.

[ ... open mouth, quad tails<PAD><EOS>] [<BOS>raglan sleeves, shirt, short hair, shout lines ... ]
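
For the curious, the naive version of the trick looks roughly like this with the HuggingFace CLIP classes. This is a conceptual sketch: real UIs and trainers add tag-aware splitting, padding tricks and weighting on top, and the model name here is just an example.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt: str) -> torch.Tensor:
    ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + 75] for i in range(0, len(ids), 75)]   # naive split: can cut a tag in half!
    embeds = []
    for chunk in chunks:
        chunk = [tokenizer.bos_token_id] + chunk + [tokenizer.eos_token_id]
        chunk += [tokenizer.pad_token_id] * (77 - len(chunk))  # pad every piece to 77 tokens
        with torch.no_grad():
            out = text_encoder(torch.tensor([chunk]))
        embeds.append(out.last_hidden_state)                   # (1, 77, hidden_dim)
    return torch.cat(embeds, dim=1)                            # (1, 77 * num_chunks, hidden_dim)
```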

Training specifics

Training is putting the text through the model, seeing the results, and adjusting the model. During training, failing to provide captions resembling the ones you are going to use during inference will result in an unusable model. Treat each caption as if it were a prompt for the model.

You should understand which features your trainer supports. Most trainers do not support emphasis in any way, so if for some reason you put direct prompts into the captions, you should remove emphasis. You should not escape any characters using backslash "\" either, if you did not enable emphasis.
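
If your captions were copy-pasted from UI prompts, a quick pass like this illustrates the kind of cleanup meant here. It's just a sketch that strips the (tag:1.2) weight syntax and the escaping backslashes; adjust it to whatever actually ended up in your captions.

```python
import re

def clean_caption(caption: str) -> str:
    caption = re.sub(r"\(([^()]*):[0-9.]+\)", r"\1", caption)   # (foo:1.2) -> foo
    caption = caption.replace(r"\(", "(").replace(r"\)", ")")   # \( \) -> ( )
    return caption

print(clean_caption(r"(masterpiece:1.2), 1girl, tsukumo \(soar99\)"))
# masterpiece, 1girl, tsukumo (soar99)
```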

Scraping

todo: describe specifics of each one and check if it even works

Scraping Boorus:

Editing

todo: describe specifics of each one and check if it even works
There are many standalone tools to edit datasets:

Some tools that may be useful:

Evaluation

todo: try to come up with a sensible way to evaluate the results, at least for some specific stuff
todo: borrow from https://github.com/spacepxl/demystifying-sd-finetuning
todo: fill the list of useful tools

Some useful tools:

Training options

todo: look through all trainers and consolidate available options

Beyond using premade trainers

todo: show how you can modify trainers


Pub: 13 Aug 2025 12:31 UTC

Edit: 01 Sep 2025 22:59 UTC
