LoRA Training Guide (Dataset Creation) – v1 – Feb 2025
This is a rentry-formatted version of this excellent tutorial
- Requirements
- Overview
- Step 0: Determine LoRA Concept
- Step 1: Gather Training Data
- Step 2: Curate & Edit Dataset
- Step 2: Curate & Edit Dataset (cont.)
- IF IMAGE IS TOO SMALL, DON’T GIVE UP RIGHT AWAY!
- Step 2: Curate & Edit Dataset (cont.)
- Step 3: Caption Your Images
- Captioning Tools:
- Captioning Best Practices:
Requirements
- This guide is for training LoRAs locally
- This guide is for training LoRAs on SDXL (not yet for video, but one day soon)
- GPU VRAM >= 16GB
- Kohya SS (https://github.com/bmaltais/kohya_ss)
Overview
Training a LoRA is basically creating a mini-model mapped onto a larger checkpoint (base model).
The better the base model understands what it is you are trying to train, the better the results.
Example: training an NSFW LoRA on a porn-tuned SDXL checkpoint will tend to produce much better results than training it on the vanilla SDXL base model.
There are (provisionally) 2 types of LoRAs:
- Character: people.
- Concept: ex: add detail, epiNoiseoffset, IP Adapter, body parts/body shapes
Some of the more complex Concept LoRAs I do not know how to train, so we will stick to just body LoRAs
Step 0: Determine LoRA Concept
Sounds obvious, but make sure you have in mind exactly what you want to train. The AI is a tool. It only knows what you give it.
- The MOST important part about training a LoRA is building the dataset (training images). Hands down. 100%.
- You can have the best settings, but if the training images do not vary enough or are too low quality, the AI will simply train on what you give it (shit)
Step 1: Gather Training Data
Gather training images. We will be using BigLust v16 as the base checkpoint for this guide because, IMO, it's the best right now.
- Concept: at least 30 images. No more than 150 (arbitrary, IMO too many doesn't really add value, starts to blend together too much).
- Character: at least 35 images. No more than 150 (^same)
SDXL has these native resolutions (these are especially useful as we learn about 'buckets' during training):
- square: 1024x1024
- widescreen: 1152x896, 1216x832, 1344x768, 1536x640
- portrait: 896x1152, 832x1216, 768x1344, 640x1536
BigLust v16 can create images in other resolutions, but these are just the default SDXL native resolutions
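If you want a quick way to see which of these native resolutions an image is closest to, a small helper like the sketch below works. This is only an illustration: the `closest_bucket` helper and the sample filename are made up for this guide, and picking the nearest bucket by aspect ratio is a simplification of what the trainer actually does.
```python
# Rough helper: which SDXL native resolution is this image's aspect ratio closest to?
# Not Kohya's actual bucketing logic, just a sanity check before you crop/resize.
from PIL import Image

SDXL_BUCKETS = [
    (1024, 1024),                                        # square
    (1152, 896), (1216, 832), (1344, 768), (1536, 640),  # widescreen
    (896, 1152), (832, 1216), (768, 1344), (640, 1536),  # portrait
]

def closest_bucket(path):
    w, h = Image.open(path).size
    ratio = w / h
    # pick the bucket whose width/height ratio is nearest to the image's ratio
    return min(SDXL_BUCKETS, key=lambda b: abs(b[0] / b[1] - ratio))

print(closest_bucket("some_photo.jpg"))  # e.g. (832, 1216) for a tall portrait shot
```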
Gathering Process:
My personal process is to create an image folder template to work from (copy+paste for every new LoRA):
- Begin gathering all the photos. RAW photos, just gather everything.
- Save them to a sub-folder within this folder template called zimage
- Once you have all the images, start to evaluate them (a quick triage script is sketched right after this list)
- Are they too small?
- Are they too blurry?
- Are 2 images too similar to each other (backgrounds or other)?
- Are you missing anything else you want to add to the LoRA?
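To speed up that first pass over the zimage folder, here is a rough triage sketch that flags files that are too small or look blurry. It assumes you have OpenCV installed; the 768 px size rule matches the criterion used later in this guide, and the blur threshold is an arbitrary starting point, so treat the output as hints, not verdicts.
```python
# Quick triage of the raw "zimage" dump: flag files that are too small or look blurry.
# Thresholds are arbitrary starting points; always eyeball the flagged files yourself.
from pathlib import Path
import cv2

MIN_DIM = 768           # matches the size rule used later in this guide
BLUR_THRESHOLD = 100.0  # variance of Laplacian; lower = blurrier (tune to taste)

for path in sorted(Path("zimage").glob("*")):
    img = cv2.imread(str(path))
    if img is None:
        continue  # skip non-image files
    h, w = img.shape[:2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    if max(h, w) < MIN_DIM:
        print(f"{path.name}: small ({w}x{h})")
    if sharpness < BLUR_THRESHOLD:
        print(f"{path.name}: possibly blurry (score {sharpness:.0f})")
```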

Remember: the LoRA is like a mini-model, so anything you put into it will show up later during prompting. If you are making a LoRA of a character, you can add concepts (ex. body parts) that are not native to them but include them in the training dataset anyway, so the LoRA learns to associate those concepts with that character (more on this later)
Step 2: Curate & Edit Dataset
Now we begin to become more selective with our images and edit them to get them ready for LoRA training.
- We're still in the folder structure working only on images, nothing yet with Kohya or captions.
When gathering data, make sure to remove watermarks, (crop/edit out), and avoid conflicting concepts (unless you want that).
Target Dataset Image Composition:
Characters:
1) 50% of the images include the face and upper body (waist up)
2) 20% of the images are face close-ups (great for inpainting)
3) The remaining 30% of images are the rest of the body
- ex. facing away, butt, side profile, distant shot, far away, full body with feet, etc.
- You can include a NSFW image in the dataset (ex. another person's nude pussy); this will associate that concept with this character
*These percentages are approximations but are generally a good rule of thumb for a balanced ratio of components for the LoRA to be flexible
Concepts:
1) 50% are the core concept at varying distances
2) 50% are the concept with some variations
- ex. different orientations (front vs. back), lighting & background variation, etc.
- you can make this % much smaller if your concept is very precise/specific
*These percentages are approximations but are generally a good rule of thumb for a balanced ratio of components for the LoRA to be flexible
Examples of Image Types:
Face with Upper Body (50%)

Face Close-Up (20%)

Remaining Images (30%)

Step 2: Curate & Edit Dataset (cont.)
Now we start to evaluate the images. Which ones make the cut?
- You can create another subfolder to start putting in A-tier images before you begin to edit them
- Editing includes photoshopping watermarks, tattoos, undesired pimples, etc.

Good Image Criteria:
- Image Quality
- Does this image have the desired quality you want? If not, do not include it in A-tier images.
- Image Variation
- Does the image have a useful purpose?
- Is it unique enough?
- Will it add value to the LoRA?
- If yes, add it to your A-tier folder of training images and begin the editing process
- Image Size
- The images should have at least one dimension (height or width) of at least 768 px.
- I use 768 px because it's generally the smallest dimension I prompt during generation, and it is an SDXL native resolution
IF IMAGE IS TOO SMALL, DON’T GIVE UP RIGHT AWAY!
You can resize an image in Photoshop (probably Gimp too) and still include it in your A-tier dataset.
This is especially useful if you don’t have that many images to begin with, and can’t afford to discard too many small ones.
Example:
Sample Image (600x800 px)

Open image in Photoshop, Image --> Image Size...

We want this image to end up close to those SDXL native resolutions.
So let's scale it up so both width and height are a little closer to those ranges.
Change the width to (at least) 768 px. The height automatically adjusts (to 1024 px for this 600x800 example):

Now, the image kinda sucks. How do we improve it?

In the Layers panel on the right
Right Click --> Convert to Smart Object

Then, Filter --> Camera Raw Filter...

Navigate to 'fx' tab
- Add noise
- Amount: 5 (preference)
- Size: 50 (preference)
- Roughness: 50 (preference)
You can also mess with Reduce Noise, Sharpen, and other editing settings to try and improve the quality

We do this stretch-and-add-noise fix to improve the overall aesthetic of the stretched image and improve LoRA quality:
- Most images contain a little bit of grain
- LoRA training uses 'buckets' that group images by size, and making your training images close to default SDXL resolutions improves output substantially
- This is especially useful if you are stuck with smaller resolution RAW photos and have no large images to train on
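If you don't have Photoshop, the same stretch-plus-grain trick can be roughly approximated with Pillow and NumPy, as in the sketch below. The Camera Raw filter does more than this; the `atier` folder, the file names, and the grain amount are just assumptions to tune by eye.
```python
# Rough Pillow/NumPy stand-in for the Photoshop workflow above:
# upscale so the short side reaches 768 px, then add a little grain to hide the softness.
import numpy as np
from PIL import Image

def upscale_with_grain(src, dst, min_side=768, grain=6):
    img = Image.open(src).convert("RGB")
    w, h = img.size
    scale = min_side / min(w, h)
    if scale > 1:  # only upscale, never shrink
        img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    arr = np.asarray(img).astype(np.float32)
    noise = np.random.normal(0, grain, arr.shape)  # mild Gaussian grain
    out = np.clip(arr + noise, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(dst, quality=95)

# hypothetical file names for the 600x800 example image
upscale_with_grain("atier/sample_600x800.jpg", "atier/sample_768x1024.jpg")
```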
Step 2: Curate & Edit Dataset (cont.)
Editing & Improving Photos:
Now we edit to remove unwanted artifacts.
As mentioned earlier, when gathering data, make sure to remove watermarks (crop/edit out) and avoid conflicting concepts (unless you want that).
In our example image, we have an OnlyFans watermark. Not good. Time to crop it out.
Here is the final image:

Additional edits for this image included:
- Cropping the bottom watermark
- Resizing again to make sure width = 768 px (because it is a smart object, it's OK to do this again)
- Added some clarity to remove softness. Clarity = 10.
This image is now ready for training!
Additional Editing & Curating Tips:
Other editing methods include:
- Using Content-Aware to remove unwanted artifacts
- Using AI to remove unwanted artifacts
When removing unwanted artifacts, think about what you don't want to see...
For example, if you are making a large dataset of a concept, consider removing all tattoos if they are easy to remove.
Otherwise, you will have tattoos come up in generations, which may not be the intended purpose of the LoRA.
Another tip is to vary the image aspect ratios.
- If you have some great pics, but the backgrounds are the same, you can crop one to be 1024x1024 (square) and another to be 896x1152 (portrait)
- This makes it so that when you prompt using the LoRA, it will not be heavily biased toward specific backgrounds at specific resolutions or with specific tags/trigger words.
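A center crop is the quickest way to get those aspect-ratio variants. Below is a rough Pillow sketch; the file names are hypothetical and the crops only match the bucket ratios (training bucketing handles the final resize), so double-check nothing important gets cut off.
```python
# Center-crop one photo to two different SDXL aspect ratios (square and portrait)
# so near-duplicate backgrounds don't all end up with the same composition.
from PIL import Image

def center_crop_to_ratio(img, target_w, target_h):
    w, h = img.size
    target_ratio = target_w / target_h
    if w / h > target_ratio:            # too wide: trim the sides
        new_w = round(h * target_ratio)
        left = (w - new_w) // 2
        box = (left, 0, left + new_w, h)
    else:                               # too tall: trim top and bottom
        new_h = round(w / target_ratio)
        top = (h - new_h) // 2
        box = (0, top, w, top + new_h)
    return img.crop(box)

# hypothetical file names; only the ratio matters here, bucketing handles the rest
img = Image.open("atier/beach_photo.jpg")
center_crop_to_ratio(img, 1024, 1024).save("atier/beach_square.jpg")
center_crop_to_ratio(img, 896, 1152).save("atier/beach_portrait.jpg")
```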
FOR BEST RESULTS, AIM FOR APPROXIMATELY 75-100 TOTAL IMAGES
Step 3: Caption Your Images
Next, we move all the A-tier images into the image folder.
Within the image folder, we create a subfolder called "1_(INSERT CELEBRITY NAME)":

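The leading "1_" in that folder name is the per-image repeat count Kohya reads from the folder name (1 repeat per image per epoch). If you prefer to script this step, the sketch below creates the folder and copies your curated images into it; the `atier` source folder and the example trigger name are assumptions for this guide.
```python
# Create the Kohya-style "<repeats>_<name>" training folder and copy the A-tier images in.
# Folder names here are just this guide's example layout; adjust to your own setup.
import shutil
from pathlib import Path

REPEATS = 1
NAME = "b3lledelph1ine"          # your trigger word / unique identifier
src = Path("atier")              # wherever your curated A-tier images live (assumed name)
dst = Path("image") / f"{REPEATS}_{NAME}"
dst.mkdir(parents=True, exist_ok=True)

for f in src.glob("*"):
    if f.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}:
        shutil.copy2(f, dst / f.name)
print(f"Copied images into {dst}")
```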
Now we boot up Kohya SS to create .txt files for the images (captions).
To auto-caption images, we have a set of choices under the 'Utilities' tab. Each one comes with pros and cons:

Captioning Tools:
- BLIP Captioning (Beginner)
- Less detailed captions
- Easy to use & understand
- Keeps captions simple
- Requires heavy edits
- WD14 Captioning (Recommended)
- More detailed captioning
- Uses less natural language and more brief/short tags
- Uses anime tagging system
- Generally requires adding 'Undesired tags' which can be hit or miss
Captioning Best Practices:
The captions for your images are the text that describes what is inside the image.
If you ONLY automate captions and do not manually review them, you will be stuck with the results of whichever vision system captioned it. Sometimes this is not ideal and misses out on important information.
Make sure to include a prefix for your tagging such as the celebrity name or a unique identifier that is not understood by the model (ex. belle delphine or b3lledelph1ine). Either one works, and it's more up to you on how you like to trigger your LoRA.
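Once the .txt caption files exist, you can prepend that trigger word to every caption in one pass, as in the sketch below. Kohya's captioning utilities can usually add a prefix for you as well; this is just a manual fallback or double-check, and the folder name is the example from the previous step.
```python
# Prepend the trigger word to every caption file, if it isn't there already.
from pathlib import Path

TRIGGER = "b3lledelph1ine"
folder = Path("image/1_b3lledelph1ine")  # the training folder from the previous step

for txt in folder.glob("*.txt"):
    caption = txt.read_text(encoding="utf-8").strip()
    if not caption.startswith(TRIGGER):
        txt.write_text(f"{TRIGGER}, {caption}", encoding="utf-8")
```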
Example:
- When captioning an image of a person, you want to include tags for anything that is unique to that image, but omit words that are inherent to the subject
- For our female subject in the image to the left, if she ALWAYS has brown hair, then we would not want to include tags such as "brunette hair" or "brown hair", but if she has multiple hairstyles and colors, then we WOULD want to include those tags
- A tag such as "nose" is generally inferior to a tag like "face" (because face includes nose), UNLESS the nose is unique to that image
Another best practice is to CAPTION THE EYES AND MOUTH!
- Auto-captions do not always capture tags for eyes and mouth, which can be super frustrating when generating images
- Examples include "makeup, natural, squinting, open mouth, closed mouth, smiling, smirking, laughing, teeth, lips, puckered lips" etc.
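To catch captions where the auto-tagger skipped the eyes or mouth entirely, a small check like the one below can list the files worth revisiting. The keyword lists are loose guesses based on the example tags above, so extend them to match your own tagging style.
```python
# Flag caption files that never mention the eyes or mouth, so you can fix them by hand.
from pathlib import Path

EYE_WORDS = {"eye", "eyes", "squinting", "makeup", "eyeliner"}
MOUTH_WORDS = {"mouth", "lips", "smile", "smiling", "smirking", "laughing", "teeth"}

folder = Path("image/1_b3lledelph1ine")
for txt in folder.glob("*.txt"):
    tags = txt.read_text(encoding="utf-8").lower()
    missing = []
    if not any(w in tags for w in EYE_WORDS):
        missing.append("eyes")
    if not any(w in tags for w in MOUTH_WORDS):
        missing.append("mouth")
    if missing:
        print(f"{txt.name}: no {' or '.join(missing)} tags")
```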

