Stable Diffusion Prompting

Unit1208

Notes

Most of this applies to any Stable Diffusion setup. I am using the Stable Horde, which behaves similarly to ComfyUI for most purposes. Your service may vary: it may not have all the features mentioned here, or it may have more or different ones. Also, while not technically Stable Diffusion, I will include a section on FLUX.

For most of this guide, I will be using a base configuration as follows:

Prompt: A brilliant forest landscape, detailed, verdant, cerulean sky  
Negative Prompt: (No Negative prompt)  
Sampler: k_euler  
Model: ICBINP - I Can't Believe It's Not Photography (New Year)
Seed: 701322108  
Steps: 20  
Guidance / cfg scale: 5  
Height: 1024px  
Width: 1024px  

CLIP skip: 1  

This produces the following image:
Base Image
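
As a concrete starting point, here is a minimal sketch of submitting that configuration to the Stable Horde with Python and `requests`. The endpoint and payload fields follow the public AI Horde API as I understand it, and `0000000000` is the anonymous API key; check the API documentation before relying on the exact schema.

```python
import requests

# Base configuration from above, expressed as an AI Horde payload (a sketch).
payload = {
    "prompt": "A brilliant forest landscape, detailed, verdant, cerulean sky",
    "models": ["ICBINP - I Can't Believe It's Not Photography (New Year)"],
    "params": {
        "sampler_name": "k_euler",
        "cfg_scale": 5,
        "steps": 20,
        "width": 1024,
        "height": 1024,
        "seed": "701322108",
        "clip_skip": 1,
    },
}

r = requests.post(
    "https://stablehorde.net/api/v2/generate/async",
    json=payload,
    headers={"apikey": "0000000000"},  # anonymous key; registered keys get priority
)
r.raise_for_status()
job_id = r.json()["id"]
# Poll /api/v2/generate/check/{job_id}, then fetch /api/v2/generate/status/{job_id}
print(job_id)
```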

Prompt

The prompt is one of the most important options when it comes to images. After all, the prompt describes exactly what you want in your image. How you prompt will depend heavily on which model and service you use, but in general the process is similar.

  1. Start with a base. This can be as simple as A forest, or more detailed if you have a more concrete idea.
  2. Gradually add details. Use plenty of adjectives, like *detailed*, *verdant*, or *cerulean* sky. These will guide the model toward what you specifically want.
  3. Once you have an image you like, upscale it and increase the step count.

Negative Prompt

The negative prompt (also called Unwanted Content/UC) controls what you do not want in your image. For example, you may want to add things like bad hands, ugly, out of focus, distorted to your NP. However, most of the time you can simply use a pre-made negative prompt like the following, which tends to work well:

(worst quality, low quality:1.4), bad anatomy, bad hands, cropped, missing fingers, missing toes, too many toes, too many fingers, missing arms, long neck, Humpbacked, deformed, disfigured, poorly drawn face, distorted face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, floating limbs, disconnected limbs, malformed hands, out of focus, long body, monochrome, symbol, text, logo, door frame, window frame, mirror frame

Of course, if you do want any of these items in your image, remove them from your negative prompt.
Additionally, a prompt weight greater than 1 in the negative prompt will more strongly discourage the model from including that item, while a weight less than 1 will discourage it more weakly.
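
On the Stable Horde specifically, the negative prompt is not a separate field; by convention (as I understand the API), it is appended to the positive prompt after a `###` separator. A small helper, assuming that convention:

```python
def horde_prompt(positive: str, negative: str = "") -> str:
    """Combine positive and negative prompts into one Horde prompt string."""
    return f"{positive} ### {negative}" if negative else positive

print(horde_prompt(
    "A brilliant forest landscape, detailed, verdant, cerulean sky",
    "(worst quality, low quality:1.4), bad anatomy, bad hands",
))
```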

Danbooru/DeepDanbooru

Danbooru (NSFW) (SFW alternative) is an anime search engine based around tags, like 1girl, black_hair, etc. These tags, when used with a model that supports them, produce images containing said tags much more easily than raw prompting does. PonyXL and its descendants are some of the most popular models that use Danbooru tags. DeepDanbooru is a model that can turn an image into its tags, which can be incredibly useful for getting the tags of a preexisting image. When prompting, Danbooru and its wiki can be incredibly useful for looking up tags and their definitions. Note that Danbooru tags use underscores instead of spaces, i.e. black_hair, not black hair. The difference can vary between models and prompts, but in general, using underscores gives a better result.

See Appendix A and PonyXL for more details on PonyXL Prompting.
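
Since the underscore convention is easy to get wrong by hand, here is a trivial sketch that normalizes a list of human-readable tags into Danbooru-style tags (the tag names are just examples):

```python
def to_danbooru_tags(tags: list[str]) -> str:
    """Lowercase, trim, and replace spaces with underscores, Danbooru-style."""
    return ", ".join(tag.strip().lower().replace(" ", "_") for tag in tags)

print(to_danbooru_tags(["1girl", "black hair", "looking at viewer"]))
# -> 1girl, black_hair, looking_at_viewer
```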

Prompt Weighting

Prompt weighting affects how much each part of the prompt influences the final image. In the Negative Prompt example, the first section used (worst quality, low quality:1.4). This means the words worst quality, low quality are weighted at 1.4x the strength of the rest of the prompt. A number higher than 1 causes the model to pay more attention to that section; a number lower than 1 causes it to pay less.

Read the documentation for your specific generator to find out which prompt weighting syntax works for you.
In general, the prompt weighting syntax is one of the following:

| Service | Positive Weighting | Negative Weighting | Notes |
| --- | --- | --- | --- |
| ComfyUI | (thing:1.2) | (thing:0.8) | N/A |
| NAI (NovelAI) | {{thing}} | [[thing]] | Each pair of brackets multiplies or divides the strength of the thing inside by 1.05 |
| A1111 (Automatic1111) | ((thing)) or (thing:1.21) | [[thing]] or (thing:0.82) | Similar to NAI, but each pair of brackets multiplies or divides by 1.1. ComfyUI weighting can also be used. |

See also Prompt Weighting table for more details (warning: lots of images, may lag)
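
The bracket syntaxes reduce to simple arithmetic: n nested emphasis brackets multiply the weight by the service's base factor n times, and de-emphasis brackets divide by it. A quick sketch of the equivalent ComfyUI-style numeric weight:

```python
def bracket_weight(base: float, depth: int, emphasis: bool = True) -> float:
    """Effective weight of `depth` nested bracket pairs with a given base factor."""
    return base ** depth if emphasis else base ** -depth

print(bracket_weight(1.05, 2))         # NAI {{thing}}   -> ~1.10
print(bracket_weight(1.10, 2))         # A1111 ((thing)) -> 1.21, i.e. (thing:1.21)
print(bracket_weight(1.10, 2, False))  # A1111 [[thing]] -> ~0.826, i.e. (thing:0.82)
```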

Model

Stable Diffusion models come in a few different architectures and versions. Depending on your service, what exactly these models are may be hidden, but they generally fall into a few different categories.

  1. “Base” Stable Diffusion
    1.5, 2.0, and 2.1 models are in this category. They tend to prompt similarly to each other, though there are some differences. The largest technical difference is that 1.5 has a native resolution of 512x512, while 2.0 and 2.1 have a native resolution of 768x768.
  2. Stable Diffusion XL
    Stable Diffusion XL has a native resolution of 1024x1024.
  3. PonyXL models.
    As the XL indicates, these are technically based on SDXL. However, they prompt very differently and can reasonably be considered different architectures.
  4. Stable Cascade
    Stable Cascade is a newer model than the others. It uses a very different architecture than most other models. It can be faster, but has fewer finetunes and LoRAs available compared to other base models.
  5. Stable Diffusion 3
    No.
  6. Flux models are a separate branch of models, but are similar enough to Stable Diffusion to merit inclusion.

XL Models

SDXL models tend to prompt in similar ways to 1.5 and 2.0 models. However, their native resolution is different: instead of 512 or 768 square, SDXL models “prefer” to generate at 1024x1024.

AlbedoBase

One of the most common SDXL models is AlbedoBase XL. It performs well in most cases, and doesn’t need much trial and error to get a good-looking image.
AlbedoBase XL

PonyXL

PonyXL Models, while still technically based on XL, can reasonably be treated as a different model. These models tend to respond very well to Danbooru tagging. They also have a few tendencies that set them apart. You MUST use CLIP Skip 2.

Score Tags

PonyXL models were trained on both Danbooru tags and a few custom ones, such as the score tags. If you’ve seen a PonyXL prompt with score_9, or something along the lines of score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, these tags control the quality of the resulting image. The creators of PonyXL have written a post on how these tags work, but the short answer is that during training, images were tagged by how good they were. The model then learned that good-looking images had score_9, ... tags attached, and vice versa. So, to get good images out of PonyXL, score tags are very important.
For an example of how CLIP Skip affects PonyXL:

| x | CLIP Skip 1 | CLIP Skip 2 |
| --- | --- | --- |
| No Score Tags | CLIP Skip 1, no score tags | CLIP Skip 2, no score tags |
| Score Tags | CLIP Skip 1, score tags | CLIP Skip 2, score tags |

Even with CLIP Skip 2 and score tags, base PonyXL still doesn’t look particularly good. For these images, I used the following parameters:

Prompt: [score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up,] A brilliant forest landscape, detailed, verdant, cerulean sky
Negative Prompt: (No Negative prompt)  
Sampler: k_euler  
Model: Pony Diffusion XL (V6)
Seed: 701322108  
Steps: 20  
Guidance / cfg scale: 5  
Height: 1024px  
Width: 1024px  

CLIP skip: 1 and 2 (See above)
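
Putting the two requirements together, here is a sketch of a PonyXL request (field names as in the Horde payload sketch earlier): score tags prefixed to the prompt, and CLIP Skip 2 in the parameters. The exact model name the Horde lists is an assumption.

```python
SCORE_TAGS = "score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up"

pony_payload = {
    "prompt": f"{SCORE_TAGS}, A brilliant forest landscape, detailed, verdant, cerulean sky",
    "models": ["Pony Diffusion XL"],  # assumption: check the Horde's model list
    "params": {
        "sampler_name": "k_euler",
        "cfg_scale": 5,
        "steps": 20,
        "width": 1024,
        "height": 1024,
        "clip_skip": 2,  # PonyXL MUST use CLIP Skip 2
    },
}
```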
Pony LoRAs

Base PonyXL requires a LoRA to actually look good, at least for most prompts and purposes. In general, the “Styles for Pony Diffusion V6 XL (Not Artists styles)” set of LoRAs is very good for getting a better image than plain Pony can create.

| x | No LoRA | LoRA |
| --- | --- | --- |
| No Score Tags | No LoRA, No score tags | LoRA, No score tags |
| Score Tags | No LoRA, Score tags | LoRA, Score tags |

For these, I used the Faux Oil Painting LoRA, from the aforementioned “Styles for Pony Diffusion V6 XL (Not Artists styles)”.

Flux

Flux models are a series of models created by Black Forest Labs (BFL). There are three types of models: pro, dev, and schnell.

| BFL name | Common name | Description |
| --- | --- | --- |
| FLUX.1[pro] | (Flux) Pro | Base, closed-weight model, only available through BFL’s API. |
| FLUX.1[dev] | (Flux) Dev | Guidance-distilled version of Flux Pro, open-weight, available for non-commercial use. |
| FLUX.1[schnell] | (Flux) Schnell | Guidance- and step-distilled version, available freely through the Apache 2.0 license. |

Flux Schnell

This guide will focus almost entirely on Flux Schnell, as it is the only model of the three currently available on the Stable Horde. Flux Schnell, being both guidance- and step-distilled, has some unique setting requirements that can make it difficult to get right: Schnell requires 4-8 steps, guidance must be exactly 1, and Karras must be turned off. Schnell also does not support negative prompts, and will ignore them.

Additionally, Flux prompts slightly differently from other models. It does better with natural-language prompting than with the short “tagging” style common in SD 1.5, 2.1, and XL, and it requires fewer quality descriptors. For instance, instead of red car, detailed, best quality, high quality, hyperrealistic, DOF, sports car, it may be better to prompt Flux with A hyperrealistic red sports car.
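
Those constraints, expressed as a Horde-style payload (a sketch; field names as in the earlier examples, and the exact model name is an assumption):

```python
flux_payload = {
    # Natural-language prompt; no tag soup, no negative prompt (it would be ignored)
    "prompt": "A hyperrealistic red sports car",
    "models": ["Flux.1-Schnell fp8 (Compact)"],  # assumption: check the Horde's model list
    "params": {
        "sampler_name": "k_euler",
        "steps": 6,       # Schnell wants 4-8 steps
        "cfg_scale": 1,   # guidance must be exactly 1
        "karras": False,  # Karras must be off
        "width": 1024,
        "height": 1024,
    },
}
```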

Text with Flux

Generating text with Stable Diffusion models has almost always resulted in illegible messes, though it has slowly gotten better. Flux models are some of the best at generating text, but they do require some prompt engineering. Imagine you wanted to generate an image of a CRT with the word FLUX displayed on it in an abandoned house. As FLUX is good at natural language, simply prompting with this description (A CRT with the word FLUX displayed on it in an abandoned house) gives good results.
CRT Flux

With very little “tagging”, FLUX can accurately give the text that is desired while also following the prompt.

CLIP Skip

CLIP Skip is not the easiest parameter to describe.
In general, a higher CLIP Skip tends to make images more generic. As a metaphor: at CLIP Skip 1, The red Honda Civic would be interpreted as The red Honda Civic. At CLIP Skip 2, it might be interpreted as The red Honda, and at 3, it might be the red car. This isn’t exactly what happens, and it isn’t entirely accurate, but it should give a better idea than the actual definition:

CLIP Skip controls how many layers of the CLIP text model are skipped, which affects the embedding of the prompt, which in turn conditions the image model.

In general, whatever the lowest available setting is corresponds to no layers skipped, so adjust for that. In this case, the lowest is 1, which means that 2 corresponds to one layer being skipped.

As the CLIP model has 12 layers, CLIP Skip ranges from 0-11 or 1-12, depending on the system used.

| CLIP Skip | Result |
| --- | --- |
| 1 | CLIP skip 1 |
| 2 | CLIP skip 2 |
| 3 | CLIP skip 3 |
| 4 | CLIP skip 4 |
| 5 | CLIP skip 5 |

Higher values tend to follow this trend, with fewer coherent details.

However, CLIP Skip has special meaning for PonyXL models. As mentioned, PonyXL Models MUST be used with CLIP Skip 2. PonyXL Models were trained with CLIP Skip 2, and so only understand CLIP Skip 2 (or greater). This is an extremely common mistake, and also extremely easy to fix.
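
If you run models locally, `diffusers` exposes the same knob: recent versions accept a `clip_skip` argument on the pipeline call, counted as layers skipped (so `clip_skip=1` roughly corresponds to “CLIP Skip 2” in A1111-style UIs). A minimal sketch; the model ID and `clip_skip` support in your installed version are assumptions to verify.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "A brilliant forest landscape, detailed, verdant, cerulean sky",
    clip_skip=1,  # layers skipped; roughly "CLIP Skip 2" in UI terms
).images[0]
image.save("forest_clip_skip_2.png")
```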

Steps

The step count controls how many denoising steps the model runs for. In general, the higher the step count, the better the image; however, this rapidly hits diminishing returns. The difference between 10 steps and 20 can be the difference between an incoherent and a coherent image, while the difference between 60 and 70, or even 60 and 120, can be negligible. In general, a step count between 20 and 35 is a good tradeoff between quality and speed. For final generations, 40-60 may be used, but it depends on the image.

| Steps | Result |
| --- | --- |
| 1 | 1 Steps |
| 2 | 2 Steps |
| 4 | 4 Steps |
| 5 | 5 Steps |
| 6 | 6 Steps |
| 7 | 7 Steps |
| 8 | 8 Steps |
| 9 | 9 Steps |
| 10 | 10 Steps |
| 15 | 15 Steps |
| 20 | 20 Steps |
| 30 | 30 Steps |
| 40 | 40 Steps |
| 50 | 50 Steps |
| 60 | 60 Steps |
| 80 | 80 Steps |
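
A step sweep like the one above is easy to reproduce locally: fix the seed and vary only the step count. A sketch with `diffusers` (the model ID is an assumption):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "A brilliant forest landscape, detailed, verdant, cerulean sky"
for steps in (1, 2, 4, 10, 20, 40, 80):
    # Re-seed each run so only the step count changes between images
    generator = torch.Generator("cuda").manual_seed(701322108)
    image = pipe(prompt, num_inference_steps=steps, generator=generator).images[0]
    image.save(f"forest_{steps:03d}_steps.png")
```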

Scheduler/Sampler

The scheduler (also called the sampler) controls how much each step affects the image. In most cases, this doesn’t matter much.
k_euler is one of the most popular, and works well in general.
While there isn’t a real difference between most schedulers, a couple deviate. Schedulers with “a” or “ancestral” in the name are non-deterministic: for the same seed and configuration, they will give different results. They also never converge, so adding more steps will not improve image quality past a certain point. Schedulers with a ‘2’ effectively use twice as many steps, which can give better results at the cost of speed; more accurately, they are 2nd-order schedulers. DPM (Diffusion Probabilistic Model) schedulers are designed specifically for diffusion models and can give better results, but in many cases are nearly identical. k_dpm_fast is often noisy and incomplete; most of the time this is not what you want, but it can be useful as a glitch effect.

k_heun and k_lms schedulers tend to look very similar to Euler, but can have sharper edges.
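
In `diffusers`, swapping samplers means replacing the pipeline's scheduler object while keeping its existing config; the class names below are rough equivalents of the k_* names (a sketch, reusing `pipe` and `prompt` from the step-count example above).

```python
from diffusers import (
    EulerDiscreteScheduler,           # ~ k_euler
    EulerAncestralDiscreteScheduler,  # ~ k_euler_a (non-deterministic, never converges)
    DPMSolverMultistepScheduler,      # ~ the k_dpmpp_2m family
)

pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe(prompt, num_inference_steps=20).images[0]  # same call, different sampler
```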

More Information

| Name | Image |
| --- | --- |
| k_euler (Euler) | k_euler |
| k_euler_a (Euler Ancestral) | k_euler_a |
| k_heun (Heun) | k_heun |
| k_lms (LMS) | k_lms |
| k_dpm_2_a (DPM 2nd order Ancestral) | k_dpm_2_a |
| k_dpm_2 (DPM 2nd order) | k_dpm_2 |
| k_dpm_adaptive (DPM Adaptive) | k_dpm_adaptive |
| k_dpm_fast (DPM Fast) | k_dpm_fast |
| k_dpmpp_2m (DPM++ 2nd order multistep) | k_dpmpp_2m |
| k_dpmpp_2sa (DPM++ 2nd order single-step ancestral) | k_dpmpp_2sa |
| k_dpmpp_sde (DPM++ SDE) | k_dpmpp_sde |

In the DPM++ names, “2m” denotes a 2nd-order multistep solver, “2s a” a 2nd-order single-step ancestral solver, and “sde” a stochastic-differential-equation formulation of the sampling process.

LoRAs

LoRAs are fine-tunes of models that do not require the overhead associated with a full model. In addition, multiple LoRAs can be stacked atop each other; however, do not add too many, as the resulting image tends to devolve into colorful blobs. LoRAs also have an associated base model that they work with: SDXL LoRAs will only really work on SDXL models, SD1.5 LoRAs will only really work with SD1.5 models, etc. Since PonyXL models are technically based on SDXL, LoRAs from one will occasionally work on the other. In most cases, however, it is better to treat PonyXL as its own baseline model for most purposes, LoRAs included.

LoRAs are generally used to style a model in a certain way, or to add knowledge of a specific character to a model.
As an example, the Faux Oil Painting LoRA that was used in Pony LoRAs is a style LoRA. This Skadi from Death Must Die is a character LoRA. While there are exceptions, LoRAs generally fall into one of those two categories.

LoRAs also sometimes have “Trigger Words”, which actually activate the LoRA. Sometimes the effect can be more easily controlled using trigger words, but this varies widely. When in doubt, if a LoRA’s page suggests trigger words, use them.
For the following tests, the Logo.Redmond (SDXL) LoRA and the AlbedoBase XL model were used. Logo.Redmond has “logo, logoredmaf” as trigger words.

| x | LoRA | No LoRA |
| --- | --- | --- |
| Trigger words | Lora and Trigger | No Lora and Trigger |
| No Trigger Words | Lora and No Trigger | No Lora and No Trigger |

Without the LoRA, the “logoredmaf” token doesn’t mean anything to SDXL, so it latches onto the literal “red” inside “logoredmaf”. Also, when the LoRA is used without trigger words, the image is still affected, but the intended effect is not applied.
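
A sketch of the same experiment in `diffusers`: `load_lora_weights` attaches the LoRA, and the LoRA strength is passed as a scale. The file path is hypothetical, and whether the scale goes through `cross_attention_kwargs` depends on your diffusers version.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/LogoRedmond.safetensors")  # hypothetical local file

image = pipe(
    "logo, logoredmaf, minimalist fox logo",  # trigger words first
    cross_attention_kwargs={"scale": 0.8},    # LoRA strength; keep it moderate
).images[0]
image.save("logo.png")
```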

CFG Scale / Guidance

CFG Scale, also called “Guidance”, controls how much the prompt affects the image. Lower values increase the amount of creativity, while higher values make the image follow the prompt more closely. In general, this should range from about 3-8; higher or lower values can be used, but 5 is a good default. Extremely high CFG values will cause an incoherent image, while extremely low CFG values often follow just the most basic parts of the prompt, like forest in this case.

| CFG Scale | Result |
| --- | --- |
| 1 | CFG 1 |
| 2 | CFG 2 |
| 3 | CFG 3 |
| 4 | CFG 4 |
| 5 | CFG 5 |
| 6 | CFG 6 |
| 8 | CFG 8 |
| 10 | CFG 10 |
| 12 | CFG 12 |
| 14 | CFG 14 |
| 16 | CFG 16 |
| 18 | CFG 18 |
| 20 | CFG 20 |
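
A CFG sweep like the one above only needs `cfg_scale` changed between otherwise identical requests; a sketch reusing `payload` from the first Horde example:

```python
import copy

cfg_sweep = []
for cfg in (1, 3, 5, 8, 12, 20):
    p = copy.deepcopy(payload)  # `payload` from the base-configuration sketch
    p["params"]["cfg_scale"] = cfg
    cfg_sweep.append(p)
# Submit each payload to /api/v2/generate/async as shown earlier.
```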

Appendices

A: PonyXL prompt development

For this, I started with this image, using its tags and tweaking them slightly until I got a result that I liked.

Prompt: 1girl, black_hair, bow, dress, hair_bow, highres, long_hair, long_sleeves, looking_at_viewer, open_mouth, puffy_sleeves, smile, solo, very_long_hair, violet_eyes, wavy_hair, white_background, white_bow, white_dress
Sampler: k_euler
Model: Pony Diffusion XL (V6)
Seed: 701322108
Steps: 20
Guidance / cfg scale: 5
Height: 1024px
Width: 1024px
Karras: true
Hi-res fix: false
CLIP skip: 2
tiled: false

| Step | Action | Image |
| --- | --- | --- |
| 1 | Original Prompt | Original Prompt |
| 2 | Added Summer Days LoRA | Added LoRA |
| 3 | Added Score Tags | Added Score Tags |
| 4 | Added Negative Prompt | Negative Prompt |

The final configuration was as follows:

Prompt: score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, 1girl, black_hair, bow, dress, hair_bow, highres, long_hair, long_sleeves, looking_at_viewer, open_mouth, puffy_sleeves, smile, solo, very_long_hair, violet_eyes, wavy_hair, white_background, white_bow, white_dress
Negative prompt: (worst quality, low quality:1.4), bad anatomy, bad hands, cropped, missing fingers, missing toes, too many toes, too many fingers, missing arms, long neck, Humpbacked, deformed, disfigured, poorly drawn face, distorted face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, floating limbs, disconnected limbs, malformed hands, out of focus, long body, monochrome, symbol, text, logo, door frame, window frame, mirror frame

Sampler: k_euler
Model: Pony Diffusion XL (V6)
Seed: 701322108
Steps: 20
Guidance / cfg scale: 5

Height: 1024px
Width: 1024px
Karras: true
Hi-res fix: false
CLIP skip: 2
tiled: false

And the Summer Days LoRA.

If your image results in a blob like the following, here are some tips to help prevent it.

PonyXL CLIP Skip 1

Pony CLIP 1
These blobs of color are often caused by using PonyXL (or any of its derivatives) with CLIP Skip 1. Remember, ALWAYS USE CLIP SKIP 2 WITH PONY MODELS.

CFG Scale too high

CFG Scale 20

This is caused by a CFG Scale that is too high. Try lowering it. Images may also look “baked”, with extremely saturated colors.
Pony CFG 20 Pony CFG 30

Using LoRAs with extremely high or extremely low weights or strengths (More than 1.5 or less than -0.25) can cause images to be “baked” in a similar way.
Pony Lora Extreme Strength
