LoRA training tutorial for artist style
This is a personal tutorial describing an example of my actual workflow to train a LoRA model of an artist style for Stable Diffusion. For this tutorial we will train a LoRA model on Takasugi Kou's artist style, using the Anything v3 checkpoint model.
For this training we will need to use 3 different programs:
- An image editor like Gimp, Photoshop or Photopea for source image editing and rescaling. I'm using Gimp for this tutorial.
- AUTOMATIC1111 stable-diffusion-webui for source image editing and upscaling, the image captioning process, and LoRA model testing.
- and Kohya's GUI for the LoRA model training itself.
1.1 Image selection
This is the most important step of the training. For a good training you must have a really good set of source images. These images must reflect as much as possible the style you want the model to learn, but they must also be as diversified as possible: not the same character or subject on all the images, not the same point of view, not the same backgrounds, not the same poses, etc. The only thing the images must have in common with each other is the artist's style.
Images must have a good resolution (no pixelated images, no blurry images), contain as little text as possible, and the subject must be as centered as possible in the image. For the training we must prepare 512 x 512 pixel square images. In the case of an artist style, I suggest having between 50 and 100 images.
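If you have gathered a large candidate folder, a small script can flag which images will need upscaling later. A minimal sketch using Pillow; the folder path is only an example:

```python
from pathlib import Path
from PIL import Image

SOURCE_DIR = Path(r"D:\Stable Diffusion\training\takasugikou\raw")  # example path

for file in sorted(SOURCE_DIR.iterdir()):
    if file.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
        continue
    with Image.open(file) as img:
        w, h = img.size
    # Anything under 512 px on its shortest side will need upscaling (see 1.4)
    if min(w, h) < 512:
        print(f"{file.name}: {w}x{h} -> needs upscaling")
```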
The best way to obtain an artist's images is to visit their official homepage, their social media pages (Twitter, Instagram), artist-dedicated websites (DeviantArt, Pixiv, ArtStation) or any obscure fan boards and websites (e-Hentai, Rule 34, F95zone, etc.).
1.2 Image cropping
LoRA model training must be done on square images, so we have to crop our images to obtain squares. To do this I use Gimp's rectangular selection tool with the proportion fixed to 1:1, then select a square area on the image. You can use the highlighting option to better see the selected area. When it is done, go to the Image menu, select the Crop to Selection option and save your image as a .PNG file.
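If you prefer batching this step instead of cropping every file by hand in Gimp, a center crop can approximate it, assuming the subject is roughly centered. A minimal Pillow sketch; the paths are examples:

```python
from pathlib import Path
from PIL import Image

SRC = Path(r"D:\Stable Diffusion\training\takasugikou\raw")     # example input folder
DST = Path(r"D:\Stable Diffusion\training\takasugikou\square")  # example output folder
DST.mkdir(parents=True, exist_ok=True)

for file in SRC.glob("*.png"):
    with Image.open(file) as img:
        side = min(img.size)  # largest square that fits in the image
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img.crop((left, top, left + side, top + side)).save(DST / file.name)
```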
1.3 Image editing
Sometimes it is not possible to find clean images from an artist. Images may contain text or comic speech bubbles, and it is important to remove them. Otherwise the model will be trained on that text, and when you use the LoRA model you risk seeing text appear on the generated images.
To remove this text from images, I use 2 different methods: the clone and heal tools in Gimp, and/or the inpaint option in AUTOMATIC1111 stable-diffusion-webui. The clone and heal tools in Gimp are useful when you want to remove text present on uniform areas or backgrounds. The clone tool duplicates a surface area, while the heal tool smooths the transition between 2 different areas. I usually use these tools one after the other. See the example below:
[Before/after example: text removed with the clone and heal tools]
The inpaint option in AUTOMATIC1111 stable-diffusion-webui is useful when you want to remove text present over a non-uniform area. In that case, select the img2img tab in AUTOMATIC1111 stable-diffusion-webui, then select the inpaint option. Upload the image you want to modify and, with the inpaint tool, paint in black the areas where you want the text to be removed. Leave the prompt empty and click on the Generate button.
You may not get a clean image on the first try. If that is the case, click on the Generate button again. If you need to clean several parts of the image, you will generally get better results by cleaning them one after the other instead of all at the same time. When your image is clean, click on the Save button to save your image.
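The same cleanup can also be scripted through the webui API if you start AUTOMATIC1111 stable-diffusion-webui with the --api flag. A rough sketch; the payload fields are my assumptions about the img2img endpoint, so check them against your local /docs page:

```python
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("page.png")],
    "mask": b64("mask.png"),      # mask: white marks the regions to regenerate (invert if your setup differs)
    "prompt": "",                 # empty prompt, as in the manual workflow
    "denoising_strength": 0.75,
    "steps": 28,
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("cleaned.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```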
1.4 Image enlarging and rescaling
Sometimes it is also not possible to find an artist's images at 512 pixel resolution. In this case I do not recommend using the Scale Image option in Gimp to enlarge those images, because you will almost certainly get blurry results like the one below:
[Before/after example: image enlarged with Gimp's Scale Image option, showing the blurry result]
In this case it is better to use the resize options in AUTOMATIC1111 stable-diffusion-webui. To do this, select the Extras tab in AUTOMATIC1111 stable-diffusion-webui, then select the Scale to tab. Set the Width and Height values to 512. For the Upscaler 1 option, select SwinIR_4x and finally click on the Generate button. It will automatically enlarge your image to 512 pixels without creating any blurry effect, as in the example below:
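If you want to script this upscaling step, the Extras tab is also exposed through the webui API (again assuming the --api flag; field names are assumptions to verify against your local /docs page):

```python
import base64
import requests

with open("small.png", "rb") as f:
    img = base64.b64encode(f.read()).decode()

payload = {
    "image": img,
    "resize_mode": 1,            # 1 = resize to explicit width/height
    "upscaling_resize_w": 512,
    "upscaling_resize_h": 512,
    "upscaler_1": "SwinIR_4x",   # name as it appears in the Upscaler 1 dropdown
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/extra-single-image", json=payload)
r.raise_for_status()
with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(r.json()["image"]))
```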
All the other images, larger than 512 px, can be rescaled directly in Gimp. Go to the Image menu, select the Scale Image option, set the Width and Height values to 512 px and then click on Scale.
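Downscaling is harmless and easy to batch if you do not want to open every file in Gimp. A minimal Pillow sketch, assuming the images are already square; the path is an example:

```python
from pathlib import Path
from PIL import Image

SRC = Path(r"D:\Stable Diffusion\training\takasugikou\square")  # example folder

for file in SRC.glob("*.png"):
    with Image.open(file) as img:
        if min(img.size) > 512:
            # LANCZOS resampling gives a sharp result when reducing resolution
            img.resize((512, 512), Image.LANCZOS).save(file)
```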
Once we have collected between 50 and 100 source images at 512 x 512 pixel resolution, put them all into the same folder. We can then move on to the next step of the training: image captioning.
The goal of this image captioning step is to create a prompt description, using tags, of each source image. Those tags will then be used, together with the images, during the training of the LoRA model.
2.1 Captioning parameters
To do this, select the Train tab in AUTOMATIC1111 stable-diffusion-webui, then select the Preprocess images tab. In the Source directory field, enter the path to the directory containing your source images. In the Destination directory field, enter the path to the directory where the caption text files and the source images will be saved. Check the Use BLIP for caption option. Check the Use deepbooru for caption option only if you want to train your LoRA model on a checkpoint using Danbooru tags, like Waifu-Diffusion, Anything v3, NAI Diffusion or derived checkpoints. When it is done, click on the Preprocess button.
After a few seconds, AUTOMATIC1111 stable-diffusion-webui will create the caption text files and copy the source images into the destination directory. Each caption text file contains a description of the corresponding image, in prompt format, based on the BLIP and deepbooru tagging methods. Example:
a cartoon picture of a woman with big breast holding a knife in her hand and looking at the camera, 1girl, black_hair, breasts, brown_eyes, clothes_lift, door, hair_bun, large_breasts, lifted_by_self, looking_at_viewer, mature_female, nipples, no_bra, pink_shirt, shirt_lift, smile, solo, upper_body
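Before training, it is worth skimming the generated tags for obvious mistakes. A small sketch that counts tag frequencies across all caption files; the folder path is an example:

```python
from collections import Counter
from pathlib import Path

CAPTION_DIR = Path(r"D:\Stable Diffusion\training\takasugikou\process")  # example path

counts = Counter()
for file in CAPTION_DIR.glob("*.txt"):
    # Captions are comma-separated; note the leading BLIP sentence also gets split here
    counts.update(tag.strip() for tag in file.read_text(encoding="utf-8").split(","))

for tag, n in counts.most_common(30):
    print(f"{n:4d}  {tag}")
```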
Once we have generated all the caption text files, we can move on to the next step of the training: the LoRA model training.
For the LoRA model training we are going to use Kohya's GUI, as AUTOMATIC1111's Dreambooth extension currently generates trained LoRA models that are not compatible with AUTOMATIC1111 stable-diffusion-webui.
3.1 Learning parameters
Before starting to train the model, we first need to estimate how many training steps will be necessary. According to my experience so far, this number can vary between 5,000 and 16,000 steps. It depends on the complexity of the artist style we want to train (the simpler the style, the fewer training steps we need) and on how different the artist style is from the default style of the checkpoint model we train on (the closer it is, the fewer training steps we need). I generally set this number of training steps around 20,000. The number of steps is given by this formula:
number of steps = number of images × number of epochs × number of repeats
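With the numbers used later in this tutorial, that gives 100 images × 100 epochs × 2 repeats = 20,000 steps.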
First, download this lora_gtsuyastudio_v3.json file and save it on your computer. This JSON file contains all the training parameters that I use for my LoRA model trainings. To load these training parameters into Kohya's GUI, select the Dreambooth LoRA tab in Kohya's GUI, then click on the Configuration File menu and select the lora_gtsuyastudio_v3.json file. On the Source Model tab, click on the page icon located to the right of the Pretrained model name or path field and select the checkpoint model on which you want to train the LoRA model. For this tutorial it is Anything v3.
For this tutorial we have 100 source images. The epoch number is set by the lora_gtsuyastudio_v3.json configuration file and is equal to 100. So, according to the previous formula, if we want around 20,000 steps, we need a number of repeats equal to 2. To set this number of repeats, we have to rename the folder containing the caption text files and source images by adding, at the beginning of the folder name, the number of repeats followed by an underscore. For this tutorial, the caption text files and source images are located in the folder D:\Stable Diffusion\training\takasugikou\process, so we rename the folder process to 2_process; they are now located in D:\Stable Diffusion\training\takasugikou\2_process. When it is done, go back to Kohya's GUI and click on the Folders tab. In the Image folder field, select the parent of the folder containing the caption text files and source images; for this tutorial it is D:\Stable Diffusion\training\takasugikou.
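If you script your preparation steps, the rename and a quick sanity check might look like this minimal sketch; the paths and repeat count mirror this tutorial's example:

```python
from pathlib import Path

ROOT = Path(r"D:\Stable Diffusion\training\takasugikou")
REPEATS = 2

# Rename "process" to "2_process" so the trainer reads the repeat count
src = ROOT / "process"
dst = ROOT / f"{REPEATS}_process"
if src.exists():
    src.rename(dst)

# Sanity check: every image should have a matching caption text file
for img in dst.glob("*.png"):
    if not img.with_suffix(".txt").exists():
        print(f"missing caption for {img.name}")
```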
Then, in the Output folder field, indicate where you want the LoRA models to be saved during the training (D:/Stable Diffusion/training/lora for this tutorial). In the Model output name field, indicate under which name you want to save the LoRA models (lora_takasugikou_a3 for this tutorial). Finally, in the Logging folder field, indicate where you want the training log files to be saved (D:/Stable Diffusion/training/logs for this tutorial).
3.2 LoRA model training
When it is done, click on the Train model button to start the training. As we are using a high number of training steps, the training will take several hours depending on the power of your graphics card. Once it is done, Kohya's GUI training process will have generated 100 safetensors LoRA model files (one per epoch) in your output folder. It is now time to test all those files to find which one is the closest to the artist style. That is the next step.
To test all the safetensors LoRA model files we are going to use AUTOMATIC1111 stable-diffusion-webui. First, copy all the safetensors LoRA model files from the output folder into AUTOMATIC1111's Lora folder (stable-diffusion-webui\models\Lora). All the files are numbered from 000001 to 000099, except one which has no number: the last generated safetensors file (lora_takasugikou_a3.safetensors for this training). Rename it by adding -000100 (lora_takasugikou_a3-000100.safetensors for this training).
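If you script the copy step, the equivalent rename is a one-liner; the path is an example:

```python
from pathlib import Path

LORA_DIR = Path(r"stable-diffusion-webui/models/Lora")  # adjust to your install
last = LORA_DIR / "lora_takasugikou_a3.safetensors"
last.rename(LORA_DIR / "lora_takasugikou_a3-000100.safetensors")
```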
4.1 Set testing parameters
Then, in AUTOMATIC1111 stable-diffusion-webui, select the txt2img tab. Select the Stable Diffusion checkpoint you want to use (Anything v3 for this tutorial). Write a simple prompt and negative prompt. I personally use portrait of a beautiful woman, upper_body, realism as prompt and low resolution, bad quality, bad anatomy as negative prompt. Click on the Show extra networks icon, select Lora, click on the Refresh button and select the first generated safetensors LoRA model file (lora_takasugikou_a3-000001 for this tutorial). In the prompt, edit the LoRA weight value from 1 to 1.0 (<lora:lora_takasugikou_a3-000001:1> becomes <lora:lora_takasugikou_a3-000001:1.0>) to add a digit to the weight; this matters for the search-and-replace step below. Click on the Close button. Set the Sampling steps to 28, the Width to 344, the Height to 512 and the CFG Scale to 13. Check the Hires. fix option, select Latent for the Upscaler option and set Upscale by to 1.25.
Then, in the Script option, select the X/Y plot entry. As X type select Prompt S/R and write 001,010,020,030,040,050,060,070,080,090,100 as X values. As Y type select Prompt S/R and write 1.0,0.9,0.8,0.7,0.6,0.5 as Y values. Prompt S/R performs a plain text search-and-replace in the prompt: the first listed value (001 on the X axis, 1.0 on the Y axis) is searched for and replaced by each of the following values, which is why the prompt must contain 001 and 1.0 exactly.
When it is done, click on the Generate button. Stable Diffusion will generate a grid with the number of the LoRA model training steps on the X axis and the weight of the LoRA model on the Y axis.
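If you prefer scripting the sweep rather than using the X/Y plot UI, the same loop can be written against the webui txt2img API (assuming the --api flag; field names are assumptions to check against your local /docs page). A fixed seed keeps the renders comparable:

```python
import base64
import itertools
import requests

STEPS = ["000030", "000045", "000061"]  # example epoch suffixes to compare
WEIGHTS = [0.9, 0.8, 0.7, 0.65]

for suffix, weight in itertools.product(STEPS, WEIGHTS):
    payload = {
        "prompt": f"portrait of a beautiful woman, upper_body, realism "
                  f"<lora:lora_takasugikou_a3-{suffix}:{weight}>",
        "negative_prompt": "low resolution, bad quality, bad anatomy",
        "steps": 28,
        "width": 344,
        "height": 512,
        "cfg_scale": 13,
        "seed": 12345,  # fixed seed so the images stay comparable
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    r.raise_for_status()
    with open(f"test_{suffix}_{weight}.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))
```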
4.2 Set new testing parameters
Looking at the resulting image grid, we have to find the area where the images are the closest to the artist style. We also have to find for which training steps the model is under-trained or over-trained, and for which model weight the style is applied too weakly or too strongly.
In this tutorial example, training steps below 20 seem too weak. A model weight of 1.0 seems too strong (colors look over-saturated) and a model weight of 0.5 seems too weak. Following these observations, we can change the testing parameters: set 030,035,040,045,050,055,060,065,070,075,080,085,090,095,100 as X values and 0.9,0.8,0.7,0.6 as Y values. We also need to change the prompt accordingly (it must start from the new first X and Y values) and can modify it slightly to render different images: portrait of a beautiful woman, upper_body, realism, outdoor <lora:lora_takasugikou_a3-000030:0.9>
Then click on the Generate button again to generate a new image grid. Repeat these steps several times until you find the training step and the model weight giving you the best result. As you repeat these tests, it will become more and more difficult to differentiate good and bad images. If that happens, do not hesitate to retry the tests with the same parameters but with a different prompt.
4.3 Test and retry, test and retry, etc...
According to the previous result, we can set new testing parameters and a new prompt.
Prompt : portrait of a beautiful woman, upper_body, large_breasts, cleavage, red_dress, realism <lora:lora_takasugikou_a3-000045:0.85>
X Values : 045,049,053,057,061,065,069,073,077,081,085,089
Y Values : 0.85,0.8,0.75,0.7,0.65
Finding bad images in the previous result is difficult. We can slightly adjust the testing parameters and use a prompt containing more description (more tags).
Prompt : portrait of a beautiful woman, brown_hair,upper_body, large_breasts, streetwear, realism <lora:lora_takasugikou_a3-000044:0.8>
X Values : 044,047,050,053,056,059,062,065,068,071,074
Y Values : 0.8,0.75,0.7,0.65,0.6
Then test, test, and retest... until you find the best training step with the best model weight. For this tutorial, I found that a training step of 61 and a weight of 0.65 gave me the best results.
This Takasugi Kou style LoRA model can be downloaded from the Civitai website.