Stable Diffusion Guide by Ninja <3

Guide updated 10/31/2022. The new UI is very feature-rich. Please refer to the GitHub repo for a showcase.

Please read all of it!

Prerequisites

To begin you'll need the following:

  • Python 3.7+:
    3.10+ is recommended and the latest release works; download here. Make sure to check "Add Python to PATH" in the installer.
  • Git:
    The 64-bit Standalone Installer is what you need. Download here.
  • Stable Diffusion Model Checkpoints:
    A Google Drive backup so you don't have to google it or sign up for an account on HuggingFace's website. Download here. If you'd like other models, they are linked below.
  • GFPGANv1.3:
    Download here.
    GFPGANv1.3 is used to restore any faces that you create, taking them from AI nightmare to beauty-filter Instagram post depending on settings. More info can be found here.
  • Recentish Nvidia GPU With Latest Nvidia Drivers:
    Found here.
  • (OPTIONAL) Alternate Model with a focus on Anime and Manga:
    Found here.
  • (OPTIONAL) Another alternate model trained on mostly NSFW Manga/Hentai called Waifu Diffusion (Trained on Danbooru):
    Found here.
  • (OPTIONAL) Can't decide between Waifu Diffusion or Stable Diffusion? Here is a model with both of them merged!:
    HAS A HIGHER SYSTEM RAM REQUIREMENT: 12GB MINIMUM. Found here.

To use other Stable Diffusion models, simply place them in the ~/models/Stable-diffusion directory in the WebUI's install location. You no longer need to rename them or use command-line args to access them; a drop-down box at the top left of the WebUI now lets you swap between models.
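
For reference, the relevant folders end up looking something like this (the extra filenames here are just examples):

    stable-diffusion-webui/
      models/
        Stable-diffusion/
          model.ckpt
          your-other-model.ckpt    (any extra models go here)
        ESRGAN/
          lollypop.pth             (upscaler models, covered below)
      webui-user.bat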

  • (OPTIONAL) Additional ESRGAN Models for Upscaling:
    Found here.
    Lollypop is good for people/portraits.
    Remacri/Remacri_ExtraSmoother is good for landscapes.
    To install these, simply place them in the folder labeled "ESRGAN" in the following directory: ~/stable-diffusion-webui/models/ESRGAN. They will be available to use in the "img2img" and "extras" tabs.

More SD Models YAY!!!

Here is a nice repo with a lot of links and info on models that have been made by the community or leaked from SD/NovelAI. Found here. There are both NSFW- and SFW-focused models available.

Installation

Create a folder in the directory where you want to install. (The total install varies from 10-20 GB.)

Open Command Prompt

Type CD "PATH_TO_YOUR_FOLDER_HERE" (put the path to your folder in the " " ex CD "C:/Users/Ninja/yourfolder" )

Type git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
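
Put together, those two commands look like this (the path is just an example):

    cd "C:/Users/Ninja/yourfolder"
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git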

Place the previously downloaded model.ckpt (or other model) into the stable-diffusion-webui/models/Stable-diffusion directory

Place the previously downloaded GFPGANv1.3.pth into the stable-diffusion-webui directory, next to webui.bat.

Run webui-user.bat from Windows Explorer (Double Click). Run it as normal user, not as administrator.

Follow instructions that are being displayed, if any.

The script will progress through installing dependencies for Stable Diffusion and the WebUI. The process, specifically for the torch install, can take a while.

When it's done and running, it'll display an IP address and port (x.x.x.x:xxxx), usually 127.0.0.1:7860. Navigate to that address in your browser of choice.

Done!

If you ever want to update you can either do so manually or you can automate it.

Manual Updating:

Open Command Prompt

Type CD "stable-diffusion-webui directory" (ex: "CD C:/Users/Ninja/stable-diffusion-webui")

Type git pull

It will then update; you may have to confirm some things in the terminal.
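
In full, with an example path:

    cd "C:/Users/Ninja/stable-diffusion-webui"
    git pull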

Automatic Updating

Open webui-user.bat in a text editor.

Before the line call webui.bat, add git pull on its own line and save.

Now every time you launch the WebUI via webui-user.bat it will pull whatever newest updates are available.
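
After the edit, webui-user.bat should look roughly like this (the exact defaults can differ between versions):

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=

    git pull
    call webui.bat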

Now that you are up and running, I'd highly recommend checking out the features showcase on the GitHub page for the WebUI. Found here.

Making images, especially larger-resolution ones, WILL make your GPU heat up a lot. I recommend either turning your fans to 70-100% or having a curve that ramps up at ~60-70 degrees. Higher resolution means more VRAM usage and more for the model to diffuse.

Troubleshooting Common Issues

According to reports, installation currently does not work in a directory whose path contains spaces.

If your version of Python is not in PATH (or if another version is), edit webui.bat and change the line set PYTHON=python to the full path of your Python executable: set PYTHON=B:\soft\Python310\python.exe. You can do this for Python, but not for Git.

The installer creates a Python virtual environment, so none of the installed modules will affect your system installation of Python if you had one prior to installing this.

webui.bat installs requirements from the file requirements_versions.txt, which lists module versions specifically compatible with Python 3.10.6. If you choose to install for a different version of Python, editing webui.bat to have set REQS_FILE=requirements.txt instead of set REQS_FILE=requirements_versions.txt may help (but I still recommend you just use the recommended version of Python).

If you feel you broke something and want to reinstall from scratch, delete the venv and repositories directories.
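
From a Command Prompt in the stable-diffusion-webui folder, that would be:

    rmdir /s /q venv
    rmdir /s /q repositories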

Running on 4GB VRAM (And less!)

These parameters are also useful for regular users who want to make larger images or batch sizes!
It is possible to drastically reduce VRAM usage with some modifications:

Open webui-user.bat in a text editor

After COMMANDLINE_ARGS=, enter your desired parameters:

Example: COMMANDLINE_ARGS=--medvram

Done!

If you have 4GB VRAM and want to make 512x512 (or maybe larger) images, use --medvram.

If you have 2GB VRAM, use --lowvram.

If you are getting 'Out of memory' errors on either of these, add --always-batch-cond-uncond to the other arguments.

If you get a green/black screen instead of generated pictures, you have a card that doesn't support half-precision floating point numbers (a known problem on 16xx cards):
You must use --precision full --no-half in addition to the other flags, and the model will take much more space in VRAM.
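
For example, a 4GB card that still hits memory errors and also has the 16xx half-precision problem would end up with a line like this in webui-user.bat:

    set COMMANDLINE_ARGS=--medvram --always-batch-cond-uncond --precision full --no-half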

Make sure to disable hardware acceleration in your browser and close anything that might be occupying VRAM if you are getting out-of-memory errors, and possibly remove GFPGAN (if you have it).

Otherwise, these flags can be used to generate larger-resolution images on cards with a lot of VRAM.

Some Info About Prompts

  • Your prompt must be 75 tokens (words, symbols, or emojis) or fewer; anything above will result in "prompt truncated after tokenization"
  • Your prompt is case insensitive
  • There are about 30,000 tokens understood by the AI, which means it won't know some weird word unused since the 1600s
  • A token is roughly a word, a punctuation, or a Unicode character
  • The same prompt, same seed, and same modifiers will produce the same result
  • To make variations of an image, it's recommended to keep the seed and slightly alter the prompt / modifiers
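
One bit of prompt syntax worth knowing up front (explained in detail on the features page of the repo): this WebUI lets you weight parts of a prompt with brackets:

    (word)    increases attention to "word"
    [word]    decreases attention to "word"
    ((word))  stacking multiplies the effect

You'll see this used in some of the prompts below, like (cabin) in the D&D prompt and ((cute anime)) in the img2img example.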

Here are some prompts that you can try to see what kind of result you get!

A prompt that I really like (Possible character portraits):

male model, muscles, chest hair, seated pose, hyperrealist, sharp focus, cinematic lighting, digital painting by gaston bussiere, j. c. leyendecker, craig mullins, alphonse mucha

[example output]
Settings used: 35 Sampling Steps, CFG Scale of 9.5, Euler A, 512x512

A prompt that I like that includes camera information and lighting effects as modifiers:

award winning photo of an empty urban street in tokyo japan on a rainy night , Hideyuki Kikuchi , Canon 1DX Mk III , Nikon D5 , screen space reflections, ray traced global illumination, ambient occlusion

[example output]
Settings used: 35 Sampling Steps, CFG Scale of 9.5, Euler A, 512x512

An example of combining artists in a prompt:

illustration of markets in a cyberpunk tokyo, sharp focus, sharp details, by Makoto Shinkai + Studio Ghibli

[example output]

Here is a grid showing the effect of the artists on the prompt and combining them. Done on a different seed:
[artist comparison grid]
Settings used: 35 Sampling Steps, CFG Scale of 9.5, Euler A, 512x512

A prompt that includes camera lens, film, and has a focus on detail:

photo of a bowl of apples, ultra detailed, hyper realistic, photo booth, light blue background, soft skylight, soft shadows, high depth of field, bloom, fibonacci, tilt-shift, f 1.9, 35mm

[example output]
Settings used: 35 Sampling Steps, CFG Scale of 9.5, Euler A, 512x512

A prompt of a comfy D&D cabin using higher steps. Also features artist combining and using weights (explained on the features page of the repo for this webui):

d&d fantasy winter pine tree forest background (cabin). by Gabriele Deli'Otto + Jeff Easley + Larry Elmore + William O'conner + Tyler Jacobson + Gerald Brom + Heonhwa Choe + E.M. Gist + artstation + deviantart, soft studio lighting, ultra realistic

[example output]
Settings used: 50 Sampling Steps, CFG Scale of 10, Euler A, 512x512

Example Of Using img2img

Let's start by going to the img2img tab

[screenshot of the img2img tab]

We'll click on the left input box labeled "Image for img2img" and select our starting image. This will be ours:

[starting image]

First we'll run it through img2img with this prompt and these settings:

((cute anime)) girl with (short bright blue hair) wearing a black shirt and puffy grey pants, by Makoto Shinkai + Studio Ghibli + Shichiro Kobayashi, white background

[screenshot of prompt and settings]
After entering the prompt and selecting the image and settings, press Generate. Note the high value for denoising; the higher this value, the less of the original image will remain.

First pass results are:

[first-pass results]

The results look like what we want, so we'll take note of the seed shown in the dialog box below the output box. We'll then feed this generated image back into img2img using the "Send to img2img" button with these settings. (A note about this step: the actual number of steps (increments of generation) that happen when using img2img is (Sampling Steps) * (Denoising Strength), so this image will only be processed 8 times in this example.)

[second-pass settings]

The result is a slightly altered version. Because we set our denoising strength and steps low, the image was only altered slightly:

[second-pass result]

This is just a basic example of what can be done with img2img. You can also run img2img on selected areas of an image with the inpainting button labeled "Inpaint part of an image". Simply draw on the image to mask the area that will be run through img2img. Always remember to provide a prompt or it will not really alter your image.

Guides On Some Features

Shared by kind Anons

Using the inpainter/outpainter:

Using SD Upscale:

Using textual inversion:

Got a lot of VRAM or access to a Google Colab or other cloud compute? Here is a guide for training your own models:

Hypernetworks and how to train/use them:

Another really good 4chan guide that I took some parts from. If my guide is unclear or misses any questions you have I highly recommend checking it out:

Bonus Resources

Here is a chart that shows the difference between step counts with different samplers that are available:
[sampler/step-count comparison chart]

Plugin that allows you to use Stable Diffusion in the open source art software Krita, generating anything you can imagine into your image right in app:

How to booba (tips for standard SD model and waifu/trinart model):

A website that lets you search terms against the images in the dataset for the standard model (Loads slowly):

Websites that show what artists are trained into the standard model:

Websites that help you visualize prompt terms without rendering them first:

Archives of a lot of good info regarding Stable Diffusion:

Another archive/wiki

I recommend checking out the github page of the webui/install we are using in this guide:

Also this page showcases this webui/install's features:

Absolutely insane SD log/hoard of info:

Collection of merged models:

Create 3D models to base img2img on (if you don't want to use blender or paint):

More resources:
https://github.com/bes-dev/stable_diffusion.openvino - (CPU Version of SD)
https://github.com/xinntao/Real-ESRGAN (An amazing upscaler that is featured in this. Downloading this would let you upscale outside of the SD webui)
https://github.com/n00mkrad/cupscale (A nice GUI interface for ESRGAN/RealESRGAN; works on photos, gifs, and videos)
https://github.com/nagadomi/waifu2x (Another amazing upscaler GUI for photos, gifs and videos, tons of options and upscalers, annoying popups to promote premium version)
https://www.youtube.com/watch?v=f3oXa7_SYek (Video talking about what Stable Diffusion is)
https://github.com/CompVis/stable-diffusion (Base code that has been modified/adapted)
https://rentry.org/ayymd-stable-diffustion-v1_4-guide - (Windows DirectML Adaptation Guide for AMD GPUs (CPU is almost as fast so not recommended))
https://rentry.org/sdamd - (Updated AMD guide from an Anon; works on Linux only, I assume.)
https://www.youtube.com/watch?v=d_CgaHyA_n4 - (AMD Linux Docker Guide)
https://rentry.org/Stable-Diffusion-Training - (Guide on training. Requires 24GB VRAM minimum. Can be run on cloud compute services like Colab)
https://rentry.org/sd-nativeisekai - NATIVE AMD LINUX GUIDE

I'll try to keep this up to date <3
