Stable Diffusion spoonfeed installation guide

Image by anon

Last edit: 18/08/2024

TODO:

None

Updates:

Edit: 17/07/2024
Forge announced that they will be discontinuing its current direction as a performance WebUI and will instead focus on experimental changes. However, we now have a maintained and actively developed fork of Forge called reForge that has implemented most new features. Rejoice! See the Installing Forge section below for how to switch to it.
Edit: 18/08/2024
Forge has started development again, mostly focusing on new features. Some promising stuff; I keep a main install on reForge and a separate install of Forge to do Flux stuff in. It's generally too unstable right now to recommend for most purposes. The new inpaint UI looks neat.

What hardware do I need?

Nvidia GPUs are the primary supported manufacturer. AMD has support, but is harder to set up. This guide may get AMD users towards a functioning install, but more dedicated AMD guides are FAR preferable. You WILL run into issues.

The most important variables are as follows:

1st VRAM size: The primary decider of generation size. The more VRAM, the bigger your images can be, or the more you can generate at once. Once VRAM is saturated, your model will be offloaded onto RAM, which is far, far slower.
For SDXL: 6GB minimum, 8GB average, 12GB good. Anything above is luxury for most purposes. I've seen people manage with as low as 3GB, but you will have to use tricks and can expect long generation times. Various optimizations in recent months have helped to gradually push the VRAM requirements down, which is cool.

2nd Card speed: Your actual card's "strength". Somewhat self-explanatory, as regular gaming benchmarks more or less apply. If VRAM isn't overloaded, this will be your primary decider for generation speed.

3rd RAM size: The speed of your RAM isn't as relevant as the available size. If your model offloads to RAM, you better have enough to load it. To reiterate: once your VRAM is full, even if only small amounts offload to RAM, your generation speed WILL take a massive hit.
More RAM is still useful for some non-essential things you might want to play with, like refiners.

I highly, highly recommend the Nvidia 3060 12GB as a budget or midrange card. With 12GB of VRAM it consistently beats cards at twice its price point. The closest competitor I would consider is the Nvidia 4070 12GB, which should perform noticeably faster.

Recommendations from the thread:

A few actual card recommendations:
>3060 12GB
Best budget option
>3090 used
Best budget 4090; get this if you're into training and/or chatbots. Only downside is low power efficiency compared to the 4000 series
>4060 Ti 16GB
"Budget" 16GB VRAM, faster than the 3060 by a fair amount, shitty price point for non-SD stuff
>4070 super
Best non-budget price/performance ratio, good for gaming too, 12GB VRAM
>4070 ti super
Slightly faster than a 3090, lower TDP, but only 16GB VRAM. If you can snag a 3090 around the same price and have a good PSU, go for that instead
>4090
If you're considering this, you obviously have enough cash to burn

Benchmarks for several cards

Additionally, you will require a good chunk of free hard drive space: I recommend at least 20-40GB for comfortable genning. Usage can grow very large very quickly from downloaded models and generated images.

What Frontend?

Nowadays you get several choices:

WebUI (Automatic1111)
+ Widely used and supported
+ Relatively simple setup
- Slowest generation times

WebUI (reForge)
+ WebUI with ComfyUI speeds
+ Currently just a better A1111 WebUI
+- Official support ended, maintained by the community!

ComfyUI
+ Fast
+ Tons of individual support, gets cutting edge features first
+- Harder to grasp, but actually teaches you how things function under the hood
- Hardest to set up and, more importantly, to understand

StabilityMatrix
+ Basically a frontend to install other frontends
- Highly discouraged, paywalls access to free software as of late

Invoke
+ Nicest looking UI
+- Aimed at professionals. Great UI, lacks in features

Online Generators
+- Not the focus of this guide. Keep in mind many charge as much in ~3 months as the entire price of a 3060. Most are straight up ripoffs. CivitAI somewhat has merit, as the on-site generator lets you use most models on the site, and the lora training features remain popular, even for local genners. NAI (NovelAI) is one service that's genuinely good. Throughout AI's young history, NAI had its moments where it straight up beat local image generation. Currently (01/01/24) its image gen looks pretty damn good, and is pretty damn smart. Paid service, but fair pricing. Personally I paid for several months for the LLM features before image gen was even a thing.

TL;DR
At the time of writing (17/07/2024), Forge, specifically reForge, remains the best general option: it combines the ease of use of the WebUI with speed and performance comparable to ComfyUI. As the long-term goals of Forge have shifted towards experimental changes, ComfyUI might become the most widely adopted option in the future. However, the community-maintained fork reForge has so far implemented most of the new A1111 stuff (and more), so there is no reason to switch back to A1111.
"Power users" may consider ComfyUI, however as I have little experience with it, it's outside the scope of this guide. Another Anon wrote a quick primer for it here, specifically for people switching to it from the WebUI.

'WebUI'

You will see the term 'WebUI' used interchangeably with the Automatic1111 version pretty often; naming conventions aren't AI's strong spot. Forge was made with the WebUI as its basis, so while it is still technically a 'WebUI', people usually just refer to it as Forge.

Installing Forge

There are two ways: through Git, or raw from the Zip. I do not recommend a raw install through the Zip at all anymore.
Git is HIGHLY recommended.

Git
1: Install Git.
2: Navigate to a folder where you want to install Forge, right click the empty space inside the folder to bring up the context menu, and choose Git Bash.
3: Copy git clone https://github.com/Panchovix/stable-diffusion-webui-reForge.git, right click into the Git Bash window and paste.
4: Press Enter and it should download into the folder.
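For reference, all you actually end up running in the Git Bash window is the single command below; it creates a stable-diffusion-webui-reForge subfolder containing everything, including webui-user.bat:

git clone https://github.com/Panchovix/stable-diffusion-webui-reForge.git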

After installing:
Run webui-user.bat. You may see other executables and such, but you can ignore those.
This will download several dependencies and will take quite a while. It also creates the venv folder, which is the virtual environment the UI runs in. In the end it should open a window in your browser.
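For completeness: webui-user.bat is also where optional launch flags go, should you ever need them. The stock file looks roughly like the lines below; the --port flag is only an illustrative example (e.g. for running a second instance), and Forge/reForge manage VRAM on their own, so you normally don't need any flags at all:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--port 7861

call webui.bat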

How to switch your current A1111/Forge install to reForge
Note: I highly recommend just doing a fresh install of reForge into a new folder instead of switching branches; switching seems to give a bunch of users problems that are not worth the time to troubleshoot.
If you have either a working A1111 or Forge install, this will work if you installed via Git before.
You get two choices, a main and an unstable branch. Personally I'm currently using the unstable one. In my case it complained that I should 'commit my local changes' since my webui-user.bat had a few extra args; I took them out for a second and then ran the commands. You can probably also simply delete the file and it will get re-downloaded.

Taken 1:1 from the reForge page:

git remote add reForge https://github.com/Panchovix/stable-diffusion-webui-reForge
git branch Panchovix/main
git checkout Panchovix/main
git fetch reForge
git branch -u reForge/main
git pull

Replace instances of main with dev_upstream if you want to be on unstable instead. You will know you did things correctly if scheduler types are separate from your sampler and it has some new ones, in addition to the bloat tabs beneath your seed selection.
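For clarity, applying that substitution to the same block gives the following (the first remote add line is unchanged and only needs to be run once):

git remote add reForge https://github.com/Panchovix/stable-diffusion-webui-reForge
git branch Panchovix/dev_upstream
git checkout Panchovix/dev_upstream
git fetch reForge
git branch -u reForge/dev_upstream
git pull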

In the case of errors on running the webUI

First of all, don't worry, it will happen. Due to how the dependencies are structured, and because a lot of the webUI is in constant active development, it's difficult to pinpoint every single issue. Luckily a lot of other people have the same issues.
First, try Google; usually you run into GitHub pages where other users complain about similar errors. Often you will be missing a specific library.

Second, check these ghetto fixes:

Installing Python 3.10.6:

Nowadays this does not appear to be a hard requirement anymore, but this version is proven to work. The error log will usually mention something related to Python versions if this is the issue. Later versions should also be fine, but I've still seen people need this specific version. If the WebUI "cannot find python", make sure you tick "Add to PATH" during the Python install.
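To quickly check what your system actually picks up, you can run these in a command prompt; if the version or path looks wrong, reinstall Python with "Add to PATH" ticked:

python --version
where python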

Deleting the venv folder:

Lets it regenerate the next time you run the WebUI. Occasionally this can fix a mismatch in dependencies, especially if you altered them after running it once before.
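If you prefer the command line over deleting the folder in Explorer, this does the same thing from inside the install folder (Windows command prompt; it deletes without asking, so make sure you're in the right place):

rmdir /s /q venv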

Updating packages:

You can manually update packages inside your venv if the webUI prompts you for something related to it. You can google how to do this, and you usually find a direct how-to if you google the specific error in the log.
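As a rough sketch of what that usually looks like on Windows, run from the install folder (the package name and version here are placeholders; take the real ones from the error message):

venv\Scripts\activate
pip install --upgrade some-package==1.2.3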

Fresh installing all dependencies:

Last resort: if you installed Python-related packages or another frontend before, you might be running outdated or mismatched libraries. These tend to roam in your %appdata% folder.

Third is the nuclear option, asking in the threads:
Hundreds of genners are eager to answer your questions NOW. Make sure you drop a cute gen once you are done as a reward.

Running SDXL (PonyXL)

1: Download PDXL, both the main file and the VAE.
Civitai is a questionably made and confusing site; the download button is here.
[screenshot of the download button]
The main model is your SDXL checkpoint. A VAE is required for the model to work properly; if you don't have one, your images will look garbled.

Drop the PonyXL checkpoint into models/Stable-diffusion and the VAE into models/VAE. Once in the webUI, make sure to select both your model and the VAE at the top. Set clip skip to 2.
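For orientation, the relevant part of the install folder then looks roughly like this (the filenames are just placeholders for whatever you downloaded):

models/Stable-diffusion/ponyDiffusionV6XL.safetensors
models/VAE/sdxl_vae.safetensors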

With a minimal prompt your generation will look something like the example below; make sure to use a 1024x1024 base resolution:
[example gen]
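If you just want something to paste in to confirm everything works, a bare-bones starting point could look like this; every value here is only an illustrative default (the score tags are the usual PonyXL quality tags, also used in the Forge Couple example further down), so tweak freely:

Prompt: score_9, score_8_up, score_7_up, score_6_up, source_furry, solo, anthro, fox, sitting, beach, detailed background
Steps: 25 | Sampler: Euler a | CFG scale: 7 | Size: 1024x1024 | Clip skip: 2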

For further PDXL prompt guidance and some template prompts you can look at my PDXL spoonfeed guide

You are now technically ready to prooompt and gen. Genning is somewhat convoluted and a constant learning process; the actual steps are confusing to grasp at first, but easy as piss to execute once you know how. PDXL prompting in general can be confusing, hence why that guide exists.

Running SD1.5 (Easyfluff/Indigofurrymix)

Most of the general steps in the SDXL section apply.
Notably, Easyfluff and Indigofurrymix (at least the version I would recommend) are vpred models. Vpred (v-prediction) is a signifier that demands an additional step: in Automatic1111's webUI this was handled through the "CFG rescale" extension.
In Forge we have a built-in option.

First download EasyFluff v10-PreRelease or IndigoFurryMix SE02-vpred.
IMPORTANT: you also need the corresponding .yaml config file in the same folder, which is models/Stable-diffusion. For Indigofurrymix you can find the config directly below the main download button. For EasyFluff you find it under the same Hugging Face link.
Next we need a different VAE: vae-ft-mse-840000-ema-pruned, dropped into models/VAE.
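As far as I know, the WebUI only picks the config up if the .yaml shares the checkpoint's base filename, so rename it if necessary. With made-up filenames, the folder would end up looking like this:

models/Stable-diffusion/easyfluff_v10.safetensors
models/Stable-diffusion/easyfluff_v10.yaml
models/VAE/vae-ft-mse-840000-ema-pruned.safetensors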

Once in the webUI, choose your model, choose the VAE, and set clip skip to 1 (for either of them).
Next, scroll down in the webUI to the "LatentModifier" dropdown; that's where our "CFG rescale" equivalent hides.
Click "LatentModifier Integrated". Remember to actually tick the checkbox to enable the module. Then scroll down until you see "Rescale Cfg Phi" and set it to 0.3 for Easyfluff.

Rescale value:

The rescale value is confusing, because Forge inverts its value compared to the rescale extension everyone used during Automatic1111. That means that if EasyFluff recommends a value of 0.7, you need to set it to 0.3 in Forge instead, which is the inverse (we only go from 0-1 here). Indigo recommends a value of 0-0.5, but I'm unable to tell whether this is using Forge or Automatic1111 as its base. The author also says you can keep it off, so start with that.
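Put as a one-liner, assuming the simple inversion described above holds:

Forge "Rescale Cfg Phi" = 1.0 - (recommended A1111 rescale value), e.g. 1.0 - 0.7 = 0.3 for EasyFluff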

EasyFluff prompting:

EasyFluff has a known tendency to crank up yellow colors. (sepia:1.2) in the negatives helps; however, with a simple prompt like this one, it obviously tints the image pretty hard. If you encounter a tendency towards yellow tones, try adding sepia or warm colors to the negatives.

Easyfluff Embeddings:

Easyfluff users like to chuck in the Furtastic negative embeds to help the model with some of its shortcomings. If you ever see 'ubbp', 'bwu', 'dfc' or 'updn', those are exactly that. Download. These are dropped into the embeddings folder in the root of your Forge install. You add them by putting them into the negative prompt, separated by commas like usual.
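A negative prompt using them could then look something like this (the sepia part is just the EasyFluff tint fix from above, everything else is up to you):

ubbp, bwu, dfc, updn, (sepia:1.2)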

Image by anon

Optional things: Read once you have genned a few images properly

Useful Extensions

Given that Forge comes bundled with a bunch of extensions already, I will only mention the ones not already included. Keep in mind most WebUI extensions run in either A1111 or Forge.

Extensions can be found in the Extensions tab of the WebUI; you can search for them, install them, and restart the WebUI directly from there.
If an extension is not found there, you can still install it via a GitHub link in the "Install from URL" tab.

Restart the UI when you install an extension:

Remember to restart the WebUI, either through the button in the extensions tab or with a hard restart, for your extensions to actually apply.

Vital:
WebUI Autocomplete Github Link
Lets you enter prompts in the prompt box and it will autocomplete tags for you based on the boorus.
Once installed, navigate to Settings/Tag Autocomplete and select your tag list.
Protip: Navigate to Settings/User Interface/User Interface and add "tac_tagfile" to your "Quicksettings list". This allows you to quickly change tag lists on the fly, useful if you also gen with other booru tags frequently.
You may also just use both at the same time by navigating back to Settings/Tag Autocomplete and setting your second csv in "Extra filename".

Recommended:
Forge Couple Github Link
Forge's solution to regional prompting. I recommend using the 'advanced' selection option and making a custom separator; in my case I named it ~sep~. Structure your prompt as usual; for a scene with 2 characters you would use 2 separators to divide your prompt into 3 regions: background/main scene, char1, char2.

score_9, score_8_up, score_7_up, score_6_up, source_furry,
detailed background,
beach,
pov, vaginal penetration, penis, doggystyle, from behind, high-angle view, duo, rear view, butt,
~sep~
solo, female, anthro, sciurid, squirrel,
~sep~
human man,

Afterwards go to the Forge Couple tab and manually drag and resize the boxes for the regions.

Wildcards Github Link
The wildcard extension allows you to randomly select prompts from a list of tokens. For example, it lets you prompt __location__ and selects one that you have specified in a locations.txt file. This is popular for all kinds of things: species, locations, artist styles.
Once installed, you can open your extension folder at extensions/stable-diffusion-webui-wildcards/wildcards and create a wildcard file there. Wildcards are called by double-underscoring the txt filename. Each line is one selection; you may also use multiple prompts in one line, and you may use the same wildcard twice in a prompt. Below is a simple example of the txt formatting.

locations.txt

jungle
kitchen
river, waterfall

Prompt: __location__

Infinite Image Browser Github Link
Does not seem to be as well known as some other extensions, but this is one of my personal favorites. It offers you a scrollable UI tab for all your generations inside the WebUI, with functionality to directly send them to txt2img, inpainting, etc. In general it greatly accelerates the speed at which I find stuff and copy/paste prompt portions, since I don't have to dig through nested folders and drop images into PNG Info nearly as much to dissect them.
I have started to keep other people's prompts as reference points in a specific folder and added it to the browser's directories, for example; it greatly helps with organization. Another folder retains promising gens that would be good candidates for a touch-up, another the finished gens I drop on the booru.

Lora Block Weights Github Link
Lets you schedule loras. More useful than it sounds: if you have a character lora that has a heavy influence on style, you could for example disable it after half the steps. Swapping loras on and off adds a bit of generation time, but it's worth it in some cases.
The syntax isn't very clear on the extension's page; most of what you would do looks like this:
<lora:LucarioSDXL:1:stop=15>
<lora:PussyUncensor:1:start=15>
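If I read the syntax right, with a hypothetical 30-step gen the first line keeps the LucarioSDXL lora active only for the first 15 steps and then drops it, while the second line only switches its lora on from step 15 onward.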

As per taste:
Lobe UI Github Link
A subjectively nice UI. I personally don't use it, but I can imagine some might like this. Sleek black, kinda similar to SD.Next and similar frontends. A bit too much pointless clicking on tiny buttons and unnecessary bloat for me. Also, the generate button isn't orange anymore. Unusable. In all honesty, it's probably not bad if you started on it.
I heard this extension is especially useful if you use the WebUI on mobile. Supposedly a much, much bigger upgrade there.

Lora

To keep it short, a Lora is something you can add on top of your given model to teach it new concepts or a specific style; basically targeted training. Examples include a lora for a character, an artist style, or a concept like 'pants on head'. You may also encounter terms like 'LyCORIS' or 'DoRA', which are functionally used in the same way. These models are dropped into models/lora, even for the other types mentioned.

Some Loras are trained on a specific checkpoint: for example, a lora might be trained specifically on PonyXL, or be more generic and run under SDXL. If something is trained on SDXL it will generally perform decently in Pony as well, but something trained on Pony will often not work well in another model like SeaArt.
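For reference, loras are called from inside the prompt with the standard <lora:filename:weight> syntax, where filename is the file in models/lora without the extension (the name below is made up; weights around 0.6-1.0 are a common starting range):

<lora:someArtistStylePDXL:0.8>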

Where to download Loras:
Civit: You can filter for your given model in the top right. There's a dedicated filter for Pony-based models these days too. Keep in mind that "most downloaded" does not imply best or well trained.
Trashcollects: The /sgd/ kitchen sink rentry for various character and artist loras. Ctrl-F is your friend. Not everything is tagged properly for SD1.5 or Pony unfortunately.
PonyNotes: Another mostly western and anime focused community run resource that has a ton of loras.

Versions:

SD1.5 Loras will not work with SDXL and vice versa. Your WebUI will usually only list loras in the Lora tab based on what type of model you have selected, but they are not always tagged properly, so pay attention.

Embeddings

Embeddings, or Textual Inversions, are a much smaller-scale solution to targeted training. Whereas a lora changes the 'layers' of the onion that is your model, an embedding instead helps your model navigate the layers it already has. These are popular to some degree depending on the model, but much less vital in the SDXL era. You can usually tell how sizable they are by taking a look at your token counter in the top right of the prompt box: for example, 'zPDXL' fills 17 tokens and 'ubbp' fills 32, so they can have a sizable footprint.

Embeddings are usually not needed for SDXL models and can actually be counterproductive due to how sensitive these models are to prompts. Embeddings, just like loras, are designed to work with a specific model. Don't use SD1.5 embeddings with SDXL either.

SD1.5 Embeddings:
FurtasticNegativeEmbeds. These are popular for EasyFluff specifically. Easyfluff struggles a bit with some anatomy, and these embeds are a collection of undesirable tags like multiple extra body parts. I recommend these.

PonyXL Embeddings:
zPDXL. A series of negative (and positive) embeddings designed for specific purposes. There's one that's a generic 'high quality' embedding, there's one for realism, etc. I don't recommend these, on the basis that the page is very unclear on what they fundamentally are and I have not felt a significant impact from them. The quality ones, for example, are meant to replace the score tags, and the XXX one 'enables NSFW?'. There's a couple of similar embeddings that attempt to reduce prompt bloat by replacing score tags with a single keyword, and they all come with similar caveats. Being able to use more or fewer score tags is actually pretty useful, so I don't see the need for this.

Upscalers

AI upscaling in Stable Diffusion is a multi-step process. By default, Stable Diffusion models are trained at a specific resolution; if you try to generate at much higher resolutions you end up with artifacts and 'hallucinations' such as multiple extra limbs.
A convenient method to generate at higher resolutions is "Hires. fix", a toggle and dropdown menu you can find under the sampler.
When you click it you get presented with a few choices; here is a safe and consistent preset to upscale with:

Upscaler: 4x_NMKD-Siax_200k
Upscale by: 1.5-2
Hires steps: 0
Denoising strength: 0.35

"4x_NMKD-Siax_200k" can be found here
Download the model, then place it in forge/models/ESRGAN
If ESRGAN does not exist, create the folder. Afterwards restart the WebUI so it shows up.

What happens during Hires. fix is a 3-step process: 1: the base image gets generated, 2: the image is run through a simple upscaling algorithm (in this case Siax), 3: img2img (so the actual AI model) passes over the gen at a low denoising value to add detail and smooth the image out.
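As a quick sanity check on the numbers (plain arithmetic, assuming a 1024x1024 SDXL base gen):

1024 x 1.5 = 1536, so "Upscale by: 1.5" gives a 1536x1536 image
1024 x 2 = 2048, so "Upscale by: 2" gives a 2048x2048 image, which needs noticeably more VRAM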

The general strategy for genning/upscaling goes something like this:
1: Generate images at the regular resolutions and aspect ratios for the model (for SDXL this would be 1024x1024, or an aspect ratio with the same total pixel count).
2: When you find a good image you would like to upscale, press the ♻ button next to the seed, then toggle Hires. fix with the values above. Then simply generate again and it will upscale the image.

We don't keep Hires. fix permanently enabled because it drastically increases generation times. The total factor you can upscale by depends heavily on available VRAM.

Troubleshooting

[troubleshooting image]
