Stable Diffusion GUI Guide (Condensed Version)
See the retard guide for an information dump once you are comfortable with Stable Diffusion.
This document was written primarily for my own reference to keep resources/information in a centralized place.
Optimal Stable Diffusion Setup
The most up-to-date and most actively contributed-to WebUI for Stable Diffusion.
1. Download Git Bash, or use your terminal on Linux or macOS with Git installed.
2. Download Python 3.9+.
3. Run `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git` in your terminal.
4. Download and drop all the models (found right below under the Models header) in the base directory. You can use SD 1.4 or the anime model, but not both at the same time. This must be done before going to step 7.
5. (Optional but highly recommended) Download and drop all of the upscaling/tuning models found below, or you will lack those features and get "not found" errors.
6. (Optional) Reduce VRAM usage, which lets you generate larger images or batches for a <10% loss in raw generation speed: edit `webui-user.bat` and change `COMMANDLINE_ARGS=` to `COMMANDLINE_ARGS=--medvram` (see the sketch after this list).
7. Run `webui.bat` and be patient while it installs all of the dependencies.
8. Go to http://127.0.0.1:7860 and have fun.
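For reference, here is a minimal sketch of what `webui-user.bat` looks like after the step-6 edit. The surrounding lines may differ between versions; only the `COMMANDLINE_ARGS` line matters here:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --medvram trades under 10% generation speed for much lower VRAM usage
set COMMANDLINE_ARGS=--medvram

call webui.bat
```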
WebGUI Feature Showcase
See the showcase to get an idea of what the SD webui is capable of
https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
Models - Add to your base directory
Base SD Model
https://drive.yerf.org/wl/?id=EBfTrmcCCUAGaQBXVIj5lJmEhjoP1tgl
Waifu Diffusion v1.1 model (1.1 was taken down for "unknown" reasons, keeping for archival purposes)
https://mega.nz/file/uw0zRTLJ#yEtoptafu3RTFLCfHnDEJK1Mg_mHZOfEioqAtA4Q85s
WD 1.2+ can be found in their discord
https://discord.gg/APHsTQuXDT
NOTE: use one model or the other, RENAME it to `model.ckpt`, and place it in the main directory.
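For example, in Git Bash (the source filename here is hypothetical; use whatever your download is actually called):

```sh
# rename the downloaded checkpoint to the name the webui looks for
mv sd-v1-4.ckpt model.ckpt
```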
Upscaling/Tuning Models
- RealESRGAN - up to 4x upscaling
- RealESRGAN_anime - up to 4x upscaling for anime
- GFPGAN - eye/facial correction, as SD is weak in that department
All models above are provided here https://mega.nz/folder/flVEwKIC#CQkFu93vMBUTJa3yYxpG-Q
Add these 3 files to the base directory as well, NOT the ESRGAN directory, as these models are not compatible with it.
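After steps 4-5, the base directory should look roughly like this (the model filenames below are examples; yours may differ):

```
stable-diffusion-webui/
├── model.ckpt                       <- SD 1.4 or Waifu Diffusion, renamed
├── GFPGANv1.3.pth                   <- facial correction
├── RealESRGAN_x4plus.pth            <- 4x upscaling
├── RealESRGAN_x4plus_anime_6B.pth   <- 4x anime upscaling
├── webui-user.bat
└── ...
```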
Optimal Settings
Set the sampling method to `k_euler_a` (it should be the default), with a step range of 20-50. Keep in mind that `k_euler_a` can produce highly varied results depending on the number of steps you use.
CFG controls how strongly the model follows the prompt. <10 is recommended unless you want a lot of variation. See below for more on CFG settings and how they impact your prompt output.
Feel free to play around with other sampling methods.
CFG (Classifier-Free Guidance) Tuning
- CFG 2 - 6: Let the AI take the wheel.
- CFG 7 - 10: Let's collaborate, AI!
- CFG 11 - 15: This is a good prompt. Just do what I say, AI.
- CFG 16 - 20: DO WHAT I SAY OR ELSE, AI.
As mentioned before, don't go above 10-12 unless you have a REALLY good prompt that you've tested a lot. I've found 7-10 is a good starting point to figure out whether your prompt is good or not.
Prompt guide and getting good results
If you're not creative or need inspiration, use a prompt search engine like Lexica or the SD model browser (details below).
Creating good prompts
Some considerations
- Adding sufficient detail and context to prompts
- Order of prompts
- Max length of prompts
- Prompts that SD knows
Adding sufficient detail to prompts
Detail is important...
Order of prompts
The order of terms in your prompt, from left to right, determines how much weight SD gives each one. For example, the first term will have the highest weight, while the last term will have the least weight in the final image.
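A hypothetical example: here, `portrait of a knight` shapes the image the most, while `sunset` at the end has the least pull:

```
portrait of a knight, intricate armor, oil painting, sunset
```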
Max length of prompts
SD has a maximum prompt length, and the webui will let you know when your prompt has been truncated. The limit is measured in tokens, not characters: CLIP's text encoder accepts 77 tokens, of which about 75 are usable for the prompt.
Prompts that SD knows
SD does not have context for every word. It is trained on a dataset of art, and some words it won't recognize, as they were not in the original training data. This could be obscure artists, characters, or concepts. I am not aware of a way to know which prompt terms it's ignoring, but if you put a character or artist weighted heavily at the front of your prompt, you'll know from the output whether the context is there.
Prompt search engines
https://lexica.art - prompt/image search engine based on CLIP
...more to come
SD Model Browser (laion-aesthetic-6pls)
https://laion-aesthetic.datasette.io/laion-aesthetic-6pls
This is useful for finding individual sources and tags/sentences which the base SD model is based on. It is a fully explorable database of the images used as source material.
Search "booru" or similar to find anime tag sources, if that's your thing.
Seeds and creating variations on an original image
A seed is created for every image you generate and can be reused with the same prompt, a modified prompt, or an entirely different prompt. The benefit of reusing a seed is that when you find an image you like, you can use it as a starting point and tune toward the final output you want, instead of just continuing to generate fresh images from the same or a modified prompt.
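The webui stores the seed with the rest of the generation parameters (in the output filename and the image's PNG info). A hypothetical example of that parameter line is below; paste the `Seed` value into the Seed field (replacing `-1`, which means random) and then tweak the prompt or CFG to make variations:

```
portrait of a knight, intricate armor, oil painting
Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 1234567890, Size: 512x512
```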
VRAM Testing
Currently only tested widescreen images for specific VRAM usage. Will do 1:1 testing eventually...
| Resolution | VRAM usage (avg.) |
|---|---|
| 832x256 | 7GB |
| 960x384 | 11GB |
| 1024x448 | 14GB |
| 1152x576 | 23GB |
You can also try portrait images with the above ratios.
On 1:1, I was able to max out around 896x896 on an RTX 3090 (24GB VRAM), so use that as a reference.
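As a rough rule of thumb from the table above, VRAM usage grows roughly linearly with pixel count: 1152x576 is 663,552 pixels, about 3.1x the 212,992 pixels of 832x256, and it used about 3.3x the VRAM (23GB vs. 7GB). That lets you estimate whether a resolution will fit before trying it.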
Once you have found a good prompt, I HIGHLY recommend generating at the largest resolution your GPU can handle to avoid upscaling artifacts later on.
img2img testing/tuning
Coming soon
img2img inpainting
Coming soon
img2img outpainting
Coming soon
Alternative upscaling solutions
Upscayl - https://github.com/upscayl/upscayl - nice GUI for upscaling based on RealESRGAN
Plugins
Photoshop outpainting plugin (closed beta atm)
Site - https://www.getalpaca.io/
Other SD Forks
WebUI Optimized - requires more setup, optimized for low VRAM (not recommended for now due to setup complications; just use the main webui in low-VRAM mode)
https://github.com/neonsecret/stable-diffusion
Other Resources
https://github.com/awesome-stable-diffusion/awesome-stable-diffusion
Disclaimer: This was not all written by me; it is a compilation of many sources.