--ULTIMATE GUI RETARD GUIDE--

UNIFIED

(9/9) This guide is no longer updated!

Please visit https://rentry.org/voldy for the latest guide with new features

---NEW FEATURE SHOWCASE & HOWTO---
Notable: Inpainting/Outpainting, Live generation preview, Tiling, Upscaling, <4gb VRAM support, Negative prompts

Special thanks to all anons who contributed
Note: This is in active development; there may be some bugs

What does this add?

Gradio GUI: A retard-proof, fully featured frontend for both txt2img and img2img generation
No more manually typing parameters; just write your prompt and adjust the sliders
ESRGAN Upscaling (NEW): Boosts the resolution of images with a built-in RealESRGAN option
Mask painting (NEW): Powerful tool for re-generating only specific parts of an image you want to change
Loopback (NEW): Automatically feed the last generated sample back into img2img
Prompt Weighting (NEW): Adjust the strength of different terms in your prompt
GFPGAN Face Correction: Automatically correct distorted faces with a built-in GFPGAN option; it fixes them in less than half a second
Multiple K-diffusion samplers: Far higher-quality outputs than the default sampler, with less distortion and more accurate results
CFG: Classifier-free guidance scale, a setting that controls how strongly the output follows your prompt
Memory Monitoring: Shows VRAM usage and generation time after each output
Word Seeds: Use words instead of seed numbers (see the sketch after this list)
Launcher: An automatic shortcut to load the model; no more typing in Conda
Lighter on VRAM: 512x512 img2img & txt2img tested working on 6 GB, and also possible on 4 GB (see under LINKS/NOTES/TIPS)
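
To illustrate the Word Seeds feature: a common way to turn a word into a seed is to hash the text to a deterministic integer. This is only a sketch of one such approach; the WebUI's exact method may differ.

```python
import zlib

def word_seed(text: str) -> int:
    # crc32 maps the same word to the same 32-bit integer on every run,
    # so "doggo" always yields the same reproducible seed
    return zlib.crc32(text.encode("utf-8"))

print(word_seed("doggo"))  # use this number as the generation seed
```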

Guide

(Updated 8/29) Alternate Prepack Installer available Here
(Use Megabasterd for downloading large MEGA files without an account)
Alternate guide for Linux users available Here
(Basic) CPU-only guide available Here
Japanese guide available here: 日本語ガイド

GUIDE START

Step 1: Git clone or download the WebUI repo HERE and extract

Step 2: Download the 1.4 AI model from huggingface (requires signup), or HERE, or via the torrent magnet
It weighs about 4 GB.
(NEW 9/7) Alternate 1.4 Waifu model trained on an additional 56k Danbooru images HERE (mirror)
(torrent magnet)
(Note: Uncompressed; 3 GB larger than the normal model)
comparison

Step 3: Rename your .ckpt file to "model.ckpt", and place it in stable-diffusion-webui-master/models/ldm/stable-diffusion-v1

Step 4: Edit environment.yaml and change the "name" value on line 1 from "ldm" to "ldo", so the first line reads: name: ldo
(This is to prevent conflict with the existing "ldm" environment/dir)

Step 5: Download the Miniconda3 installer for Python 3.8 HERE.

Step 6: Install Miniconda in the default location. Install for all users.
Uncheck "Register Miniconda as the system Python" unless you want to.
Windows 7 note: The Python 3.8 build can actually run on Windows 7, but the installer fails halfway through because it uses a Python 3.9 DLL internally. You can run the installer in a Windows 10 VM and then transfer the Miniconda3 folder to the Windows 7 machine.

Step 7: Run "webui.cmd" from /stable-diffusion-webui-master
Wait patiently while it installs dependencies (will take up about 14 GB locally) and does the first time run.
It may seem "stuck", but it isn't; the first run can take 10-15 minutes.

And you're done!

Important info about Latent Diffusion (high-quality, resource-heavy upscaling)

Even though LDSR is technically listed along with other dependencies, WebUI won't be able to use it because both the code and the AI model are missing. To manually add it:
1. Run webui.cmd at least once to generate the necessary folders
2. Download latent-diffusion-main.zip,
extract it to /src, and rename the folder to latent-diffusion.
3. Download project.yaml and last.ckpt, then rename last.ckpt to model.ckpt.
4. Place both project.yaml and model.ckpt into /src/latent-diffusion/experiments/pretrained_models/
5. Run webui.cmd; you shouldn't see any errors at startup or during use! (A file-placement check appears after the ESRGAN section below.)

ESRGAN and GFPGAN support

  • GFPGAN (Face correction)
    1. Run webui.cmd at least once to generate the necessary folders
    2. Download the 332 MB GFPGAN pre-trained model
    and place it in /src/gfpgan/experiments/pretrained_models/
    3. Run webui.cmd and GFPGAN options should be available!
    Note: the first time you use this function, the WebUI will automatically download two AI model weight files totaling 185 MB.
  • ESRGAN (Upscaling)
    1. Run webui.cmd at least once to generate the necessary folders
    2. Download the 64 MB RealESRGAN_x4plus.pth and the 17 MB RealESRGAN_x4plus_anime_6B.pth
    and place them in /src/realesrgan/experiments/pretrained_models
    3. Run webui.cmd and ESRGAN options should be available!
    Note: If you plan on running with 4 GB of VRAM, it is recommended not to add GFPGAN and ESRGAN support, since they mildly raise memory usage (a quick file-placement check follows below)
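
If you want to double-check that the optional model files from this section and the LDSR section above landed in the right places, a minimal check like the one below can be run from the stable-diffusion-webui-master folder. The script is a hypothetical helper, not part of the repo, and the GFPGAN filename is an assumption: use whatever file you actually downloaded.

```python
from pathlib import Path

# expected locations, relative to stable-diffusion-webui-master;
# the GFPGAN filename below is an assumption, match your download
expected = [
    "src/latent-diffusion/experiments/pretrained_models/project.yaml",
    "src/latent-diffusion/experiments/pretrained_models/model.ckpt",
    "src/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth",
    "src/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus.pth",
    "src/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth",
]
for rel in expected:
    status = "OK     " if Path(rel).exists() else "MISSING"
    print(status, rel)
```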

Usage

  • Open webui.cmd and wait
  • After loading the model, it should print a local address such as 'http://127.0.0.1:7860'
  • Open your browser and enter the address
  • You should now be in an interface with txt2img and img2img tabs
  • Have fun!
    Note: You might get "prefix already exists: ldo" when running webui.cmd. This is not an error; it just means the environment was already installed the first time you ran the script.

LINKS/NOTES/TIPS

-----RUNNING ON 4GB-----

It is possible to run the Stable Diffusion WebUI on 4 GB of VRAM with some modifications:

  1. Edit /scripts/relauncher.py in your preferred text editor
  2. Change line 8 in relauncher.py FROM "python scripts/webui.py" to the following:
    "python scripts/webui.py --optimized" and save
    (This optimizes VRAM use by generating incrementally; note that it sacrifices generation speed. See the sketch after this list.)
  3. Launch webui.cmd like normal
  • If you are still getting an 'Out of Memory' error:
    delete or rename the ESRGAN and GFPGAN models (so they don't load into memory) and relaunch
    (You can still use them both as external programs anyway)
  • If your output is a solid green square (known problem on GTX 16xx):
    Add --precision full --no-half to the launch parameters above, so it looks like this:
    "python scripts/webui.py --precision full --no-half --optimized"
    Unfortunately, the full-precision fix raises VRAM use drastically, so you may have to reduce your output to 448x448 if on 4 GB
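
For reference, here is roughly what the edited launch line looks like, assuming the stock scripts/relauncher.py starts the UI through an os.system call (your copy may differ slightly):

```python
import os

# scripts/relauncher.py, line 8: launch with incremental low-VRAM generation;
# GTX 16xx users add --precision full --no-half as described above
os.system("python scripts/webui.py --optimized")
```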

-----Running Online-----

  1. Edit /scripts/relauncher.py in your preferred text editor
  2. Change line 8 in relauncher.py FROM "python scripts/webui.py" to the following:
    "python scripts/webui.py (options)" and save
  • Use the --share option to run online. You will get an xxx.gradio.app link. This is the intended way to use the program in Colab notebooks. (See the sketch after this list.)
  • Use --listen to make the server listen for network connections. This allows computers on the local network to access the UI, and, if you configure port forwarding, computers on the internet as well.
  • Use --port xxxx to make the server listen on a specific port, xxxx being the wanted port. Remember that all ports below 1024 need root/admin rights, so it is advised to use a port above 1024. Defaults to port 7860 if available.
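
As a concrete example, a publicly shared instance on a non-default port might use a launch line like this (same assumption that relauncher.py starts the UI via os.system):

```python
import os

# --share creates a public xxx.gradio.app link; --port picks the local port
os.system("python scripts/webui.py --share --port 7861")
```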

-----TROUBLESHOOTING-----

  • If your previous installation is missing anything referenced in this guide, it may be best to start from scratch with the new repo
  • "I keep getting X not found!":
    You may have a different conda installation path.
    If your conda installation is somewhere other than \Programdata\miniconda3, adjust the path in webui.cmd accordingly
  • If you want to delete your environment for reinstallation, run "conda env remove -n ldo" from Miniconda
  • Double check that your environment.yaml file says "ldo"
  • If you're on Linux, just run python scripts/webui.py directly instead of using the .cmd
  • If your output is a jumbled rainbow mess, your image resolution is set TOO LOW
  • Too high a CFG level will also introduce rainbow distortion; don't set your CFG above 20
  • If you are upgrading from an old environment that doesn't meet current dependencies (such as waifu-diffusion),
    delete all folders inside /src before running webui.cmd
  • (Fixed) If your generations are unusually slow, disable hardware acceleration in the browser running the WebUI
  • On older systems, you may have to change cudatoolkit=11.3 to cudatoolkit=9.0 in the environment.yaml file
  • This guide is designed for NVIDIA GPUs only, as stable diffusion requires CUDA cores.
    AMD users should try https://rentry.org/sdamd

-----TIPS-----

  • You can drag your favorite result from the output tab on the right back into img2img for further iteration
  • The k_euler_a and k_dpm_2_a samplers give vastly different, more intricate results from the same seed & prompt
  • Unlike other samplers, k_euler_a can generate high quality results from low steps. Try it with 10-25 instead of 50
  • If you have more VRAM but are still forced to use the optimized parameter, you can try --optimized-turbo for a faster experience
  • The seed for each generated result is in the output filename if you want to revisit it
  • Using the same keywords as a generated image in img2img produces interesting variants
  • It's recommended to have your output be at least 512 pixels in one dimension, or a 384x384 square at the smallest
    Anything smaller will have heavy artifacting
  • 512x512 will always yield the most accurate results as the model was trained at that resolution
  • Try Low strength (0.3-0.4) + High CFG in img2img for interesting outputs
  • You can use Japanese Unicode characters in prompts
  • You can prune a v1.3 weight model by running "python scripts/prune.py" in waifu-diffusion-main
    Pruning shrinks the file from about 7 GB to 2 GB; output remains largely equivalent (see the sketch after this list)
    Comparison: https://i.postimg.cc/ZRKz4tJv/textprune.png
  • (prune.py does not work on the new model, but this doesn't matter, as v1.4 is less heavy than v1.3)
  • You can run GFPGAN and ESRGAN on your CPU instead of your GPU by adding the following parameters: --gfpgan-cpu --esrgan-cpu
    (This does not work for everyone and may produce errors)
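
Regarding the pruning tip above: this is a rough sketch of what checkpoint pruning does, not the actual prune.py. It copies only the model weights (the optimizer state is what makes the full-EMA file so large) and stores float32 tensors in half precision. The key names are assumptions; inspect your checkpoint before relying on this.

```python
import torch

# load the full checkpoint onto the CPU (it is just a pickled dict)
ckpt = torch.load("sd-v1-3-full-ema.ckpt", map_location="cpu")

# keep only the model weights, storing float32 tensors as float16;
# optimizer state and other training metadata are simply not copied over
state = ckpt["state_dict"]  # assumed key, standard for SD checkpoints
pruned = {
    k: v.half() if v.dtype == torch.float32 else v
    for k, v in state.items()
}

torch.save({"state_dict": pruned}, "model-pruned.ckpt")
```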

--OLD MODEL--
The original v1.3 leaked model from June can be downloaded here:
https://drinkordiecdn.lol/sd-v1-3-full-ema.ckpt
Backup Download: https://download1980.mediafire.com/3nu6nlhy92ag/wnlyj8vikn2kpzn/sd-v1-3-full-ema.ckpt
Torrent Magnet: https://rentry.co/6gocs

--OLD GUIDE--
The previous guide (replaced as of 9/9/22) is here: https://rentry.org/GUItard

RENDER TIME BY GPU (50 steps): see the render-time chart
SAMPLER COMPARISON: see the sampler comparison chart

Pub: 09 Sep 2022 17:28 UTC