Bex's Stable Diffusion Tips and Tricks; and General Usage Guide for Illustrious Models


  1. Introduction
  2. Installing Stable Diffusion Locally:
    1. System Requirements
    2. Choosing a Frontend
    3. Installing Stable Diffusion WebUI reForge
    4. Stable Diffusion XL Models
    5. What to do once you got it installed
      1. What to do first; Command Line Flags
      2. User Interface; Basic Setup
      3. User Interface; txt2img
      4. UI Settings
      5. Extensions
      6. We're ready?
  3. Prompting 101: Basic Principles, Structure, and Why Illustrious is Amazing
    1. Why Illustrious?
    2. "Where do I get the tags???"
      1. Brackets and Underscores
    3. Basic Prompting: Tag Types and your First Prompt
      1. Step 1: People Tags
      2. Step 2: Character Tags
      3. Step 3: Action Tags
      4. Step 4: Appearance Tags
      5. Step 5: Background Tags
      6. Step 6: Image Composition and Style Tags
      7. Step 7: Artist Tags
      8. Step X: Quality Tags
      9. Result
  4. Advanced Section: LoRAs and Artist Tags, Comprehensive Tagging, Advanced Syntax, img2img and More
    1. Preamble
    2. Prompt Order
    3. Quality tags; Do we need them?
      1. Quality tags; Compilation
    4. Negative Prompting: How and When
    5. Tags; Evaluating if You Should Use a Specific Tag
      1. Tags; How Including Redundant Tags Ruins Gens
      2. Tags; Tag Bleeding, and How To Avoid It
      3. Tags; How Tags Impact Seemingly Unrelated Things
      4. Tags; Rating Tags and How to Avoid (or Embrace) Horny
      5. Tags; How Models Won't Do Anything Unless You Ask Them
      6. Tags; Conclusion
    6. Prompt; CLIPs, and Why You Should Follow The Prompt Order
    7. Syntax; Introduction
      1. Syntax; Separators
      2. Syntax; Prompt's Weight Manipulation
      3. Syntax; BREAK is love, BREAK is life
        1. Multiple (Defined) Character Gens without Extensions
      4. Syntax; Prompt Merging, Delaying Prompts and Keeping the Quality with Styles
      5. Syntax; Prompt Addition
    8. LoRAs
    9. Generation Parameters; Introduction
      1. Generation Parameters; Align Your Steps, and Full-Quality Gens in Just 12 Steps
      2. Generation Parameters; Extra-Quality Gens in 60 Steps
      3. Generation Parameters; LCM, and Draft Gens in Just 5 Steps
      4. Generation Parameters; IPNDM_V, My New Favourite Sampler
      5. Generation Parameters; Hires
      6. Generation Parameters; Refiner
    10. Prompting; Iterating on Your Gens
      1. Prompting; Prompt Complexity
  5. Conclusion
  6. Special Thanks
  7. To-Do List
  8. Update History

Introduction

Hi there, it's Bex. You may know me from Statuo's Aegis City Discord or Chubcord. While both are botmaking/chat-botting oriented, image gen is a huge thing in the hobby and is fun in general. This Rentry is a place for me to organise my experience, some tricks, and technical stuff that's useful to simply look up. I also intend it to be a useful resource for people inexperienced with Stable Diffusion and/or Illustrious-based models, as well as people who simply want to learn more. You may be surprised by how much of this stuff is unintuitive, barely documented, and hard to find and understand.

While this initially started as a comprehensive guide to everything related to Stable Diffusion and Illustrious, I realised that if I actually stuck to that and did all the research and investigation, all the time in the world wouldn't be enough, and I do want to eventually finish this thing. So let's get it straight: while I will be touching on a lot of stuff, this is not a comprehensive guide. I will not be explaining every single little thing. This is for people who are interested in Stable Diffusion and want to get more from their gens, so I expect you, dear reader, to have agency of your own. I'll be saying it many times throughout the guide, but: try doing things yourself, experiment, do the opposite of what I say if you want to, fail again and again to get the gen of your dreams; be creative.

Anyway, let's get to it. If you have any questions, advice or corrections, feel free to contact me on Discord, @bextoper. I'm in no way an expert in Stable Diffusion or image gen, but there's a lot of stuff I've personally struggled with that took me months to learn and overcome, and I think it's valuable to share some of it.


Installing Stable Diffusion Locally:

Of course, before generating anything, we need something to generate it with. While there are numerous sites and services that run generation on their servers, they leave you with much less customisation, convenience, and sometimes speed, and, of course, they are paid. I'm no expert on these services; I've only ever used Civitai, just to try Early Access checkpoints. There are a few that people on Aegiscord keep mentioning and using; here are two I remember: Pixai.art and Tensor.art. There are also NovelAI, Midjourney and others that are good, but they're not Stable Diffusion and I'm not knowledgeable about them at all. These services have little to no customisation or setup required, so there's no point in talking about them specifically.
The best option available is, of course, local Stable Diffusion. There is an option to run it from something like Google Colab, but these Colabs come and go as Google bans them, so I can't provide any links. There should be numerous guides about them anyway, just Google it. The purpose of this section of the guide is to explain how to get Stable Diffusion running on your hardware, so let's get to choosing an app for that.


System Requirements

First, let's talk about the hardware you need to run SD locally. For PC users, you have to own an NVIDIA GPU with at least 8 GB of VRAM (for AMD GPU users, see SD.Next below). SDXL models use about 6 GB of VRAM by themselves, and counting fluctuations from the ongoing generation, moving models around, and external VRAM usage from Windows or, say, your browser, total VRAM usage easily reaches 7.5-8 GB on my RTX 3070 8 GB. It is possible to run SDXL with 6 GB of VRAM (a lot of frontends have optimisations for that), but be ready for slow generation speed, longer model moving times and CUDA Out of Memory errors. It is technically possible to run SD on just RAM, but it will be so extremely slow it's not worth it at all (though later in the guide I will talk about one method that can make it just barely viable). For the two Mac users out there: SD runs on any Apple Silicon chip; the main issue will be the speed of Unified Memory. While it's usually much faster than conventional RAM, it's still far from VRAM speeds, so while it's more than possible and pretty simple to install and run, do not expect high generation speed.


Choosing a Frontend

Let's talk about frontends: they are essentially environments for running Stable Diffusion. They differ in performance, compatibility, quality and customisation, all of which are extremely important. Let's go over some of the most popular ones:
(First, a quick explanation: IT/s, or Iterations per Second, is a metric that shows the number of generation steps completed in a second. It depends on the resolution of the image, so all the numbers I'll show are from an 832x1216 generation, which has roughly the same pixel count as 1024x1024.)
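To put these numbers in perspective before comparing frontends, here is a quick back-of-the-envelope sketch. The IT/s figures are the ones I quote for each frontend below; real runs also spend extra time on model loading, VAE decode, ADetailer, Hires and so on, so treat it as a rough lower bound:

```python
# Rough sampling time for one image: steps divided by iteration speed.
def sampling_seconds(steps: int, it_per_s: float) -> float:
    return steps / it_per_s

print(sampling_seconds(28, 2.0))   # ~14 s at the ~2 IT/s of Forge/reForge/Comfy
print(sampling_seconds(28, 0.18))  # ~155 s at AUTO1111's ~0.15-0.20 IT/s
```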

  • AUTO1111. The oldest and best-known frontend, the good old stable-diffusion-webui. I used it as my first frontend years ago, during the great days of Stable Diffusion 1.0. Sadly, it is extremely outdated in terms of performance: it is literally 10x slower than some of the other options I'll mention. I mention it here mostly for people who remember how good it was, because it's no longer the best option available; only use it if you know exactly why you need it. As a somewhat subjective performance metric, AUTO1111 gives me just 0.15-0.20 IT/s, which is about 5-6 s/IT. There are forks of it available, for example,
  • ForgeUI. A huge fork of AUTO1111 that offers big improvements in the models it supports and, most importantly, in performance. It retains the incredible interface of AUTO1111 while providing more built-in features. From my testing, it's the second best performing frontend, giving me somewhere from 2 IT/s to 2.3 IT/s. It also has some downsides: it doesn't support any AUTO1111 extensions, updates for it can be very rare and, most importantly, it showed a noticeable generation quality drop relative to AUTO1111 and the other frontends I'll mention. As a very broad number, just to give you some idea, I'd say that Forge shows about 10-15% quality loss compared to AUTO1111 and ComfyUI. While ForgeUI is a solid choice, I have other frontends to suggest.
  • reForgeUI. A fork of ForgeUI. It's extremely similar to ForgeUI but has some advantages over it. First, it fully supports most of AUTO1111's extensions. Second, it shows much less quality loss, which I'd estimate at 0% to 5% compared to AUTO1111 and ComfyUI, while retaining most of the performance; for me, it's somewhere between 1.9 IT/s and 2.1 IT/s. Third, it's updated constantly, ensuring the most stability and support for new models. This is the frontend I personally use as my main, and I highly recommend it to you. I'll be talking specifically about it throughout the guide.

Important Update: On 13th of April 2025, reForge ceased development. This means that it will no longer receive any updates, so new foundational models may not work on it. So far, everything still works, so I won't be changing this section right now, but I'll keep this guide updated in case anything breaks. I may update this guide to have a ForgeUI installation instead, but so far, I don't see any reason to switch from reForge. In case you want to switch to an actively updated frontend, I suggest ForgeUI (as it's extremely similar to reForge) or Comfy (if you're willing to spend time with it; read the paragraph below). Thanks to @Bandit on AegisCord for bringing it to my attention.

  • ComfyUI. Arguably the best, most powerful, most customisable, supported and versatile frontend available. While it lets you do absolutely everything with your generations and receives support for new image and video generation models first, it's extremely complex, difficult and convoluted. Essentially, you're building every single function that you need yourself, from basic txt2img and img2img to Inpainting, Hires and others. There are community made workflows available, so you don't have to figure out and build them yourself, but you'd still have to understand them, switch around between workflows to get different functionality and figure out how to even use them. For all of that, it offers the best quality and performance out of all frontends, giving me about 2 IT/s at full, beautiful quality. If you're willing to spend your time understanding everything Comfy offers, tinkering with nodes, building features and such, it is the best option available, but for a regular user, it might just scare you out of doing anything image-gen or local related ever again. Just know that it exists, and if you feel ready, give it a shot.
  • SD.Next. More of an honorable mention: this is the only frontend I know of that works with AMD GPUs on Windows and supports AMD ROCm. Things like ComfyUI work with AMD GPUs too, but they require Linux.

Installing Stable Diffusion WebUI reForge

Just follow the instructions on the GitHub page. One notable thing: make sure you have one version of Python from 3.7 to 3.12 installed, and that Python 3.13 is NOT installed; reForge will NOT run with Python 3.13 on your PC. You can install Python from the Microsoft Store, just search for Python 3.10, for example. To check whether you have Python 3.13 installed, check your Applications page in Settings and uninstall it if needed.
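If you'd rather check from a terminal than dig through Settings, here's a tiny sketch; run it with whichever python command you expect reForge to pick up, and note the version bounds are simply the ones stated above:

```python
import sys

# reForge needs Python 3.7-3.12; 3.13 will not work.
print(sys.version)
assert (3, 7) <= sys.version_info[:2] <= (3, 12), "Unsupported Python version for reForge"
```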


Stable Diffusion XL Models

Let's talk about the most important thing: the models we're going to use. First, you should get your models from civitai.com; it has the biggest library of models to download. To search for them, go to "Models", then "Filters", check "Checkpoints", then check "Illustrious" below. Most of these models (especially the more popular ones) will be Illustrious 0.1, but there are some newer models based on Illustrious 1.0 and 1.1 that are slightly different (and, in my opinion, worse). To install a model, just put the .safetensors file into models/stable-diffusion. Here are some models I recommend:

  • Amanatsu 1.1. My current main model that I use to generate almost everything. It's based on Hassaku and made by the same people, and it's incredible. It has a style I adore and it shows some of the best prompt adherence I've seen. Great character knowledge and detail + quite good backgrounds. Works well with Artists, LoRAs and styles.
  • Hassaku XL. The model Amanatsu is based on. It has a more neutral style while keeping Amanatsu's strengths. A solid model. I suggest using version 2.1; there are arguments to be made that v2.1fix is better, or that v1.2 is better, and, well, they are in some aspects, but I consider them to be just slightly different from each other.
  • WAI NSFW Illustrious XL. A model of the WAI family, great and versatile. I consider it to be about the same as Hassaku quality-wise, just a different style. I suggest using v12, I didn't like v13 that much.
  • Nova Orange. A model I found somewhat recently. While it suffers from not-so-great prompt adherence, it has maybe the best quality I've seen, in both detail and backgrounds.

The list is not exhaustive. If you want to see other models and what they have to offer, I did quite a few model comparisons on Aegiscord. Their file sizes are enormous and I can't just link them with Catbox, so go to the #image-spam channel of Aegiscord to find my thread called Bex's Model Comparisons. Link.


What to do once you got it installed

What to do first; Command Line Flags

At this point I assume you have reForge installed. Once it's running, you'll have the 127.0.0.1:7860 link opened in your browser. Now, close Stable Diffusion and go to the Stable-Diffusion-WebUI-reForge folder. There, find a file called start.webui.bat and open it in Notepad. Find the line that has COMMANDLINE_ARGS and add the following to it: --cuda-stream --pin-shared-memory. These arguments speed up generation and model moving times significantly without a serious (or really even noticeable) impact on quality. If you want to access your reForge instance from other devices on your local network, also add --listen. Note that --listen completely blocks any actions with extensions: remove it, add the extensions you want, and put it back. Get yourself a model, put it into models/stable-diffusion, and launch start.webui.bat again.
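For reference, the edited line should end up looking something like this. This assumes the usual set COMMANDLINE_ARGS= layout of AUTO1111-style launch scripts; your file may be organised slightly differently:

```
rem inside start.webui.bat (or whichever launcher defines COMMANDLINE_ARGS)
set COMMANDLINE_ARGS=--cuda-stream --pin-shared-memory

rem with LAN access enabled (remember: --listen blocks extension management):
rem set COMMANDLINE_ARGS=--cuda-stream --pin-shared-memory --listen
```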


User Interface; Basic Setup

Let's start from top to bottom, left to right. First, the top line of buttons:

  • Stable Diffusion Checkpoint: your current model + you can choose to load another model. Use the Refresh button if you add new models to models/stable-diffusion while SD is running.
  • SD VAE: lets you choose which VAE to use. In 99% of cases you want to leave it at Automatic and never touch it again.
  • CLIP Skip: in my understanding, it's the number of last CLIP layers SD skips during prompt processing. With SDXL, you almost always want to keep it at 2 and never change it again.
  • txt2img: a tab for creating images from text. We'll be focusing on it throughout the guide.
  • img2img: a tab that uses a reference picture for generation in addition to text. Will explain separately later. Maybe will add a section about it some other time.
  • Extras: a tab where extra functions live. Most of the time there's nothing to do there.
  • PNG Info: a tab where you can read the metadata of your own and others' generations and immediately send it to txt2img and other tabs.
  • Checkpoint Merger: used for merging different models together. I won't be talking about it.
  • Train: A very scary section with questionable usefulness. I won't be talking about it at all.
  • Settings: Self-explanatory
  • Extensions: a place where you can turn on/off and download extensions.
    Now, the second row of top buttons, specific to txt2img:
  • Generation: our main tab for txt2img generation.
  • Textual Inversion: a tab where all your downloaded embeddings live. I don't plan on covering them here as I don't use them, so just a few words: from my understanding, you use them to apply Quality tags and the like to a generation without actually typing them in as tags. While the idea is cool, I prefer having full control over my generation, and these embeddings do God knows what. If they work for you, great; I prefer not to use them at all.
  • Hypernetworks: a very scary tab I have zero idea about. I think it's for training models or something? I won't be talking about it at all here.
  • Checkpoints: a tab that shows all of your models. You can assign preview images to them by clicking the Edit Metadata button --> Replace Preview; the preview will be set to the image currently selected in txt2img. To delete the preview, go to models/stable-diffusion and remove the checkpoint_name.png file.
  • LoRAs: a tab that shows all your LoRAs. You can assign images to them the same way as with Checkpoints.

User Interface; txt2img

Finally, let's talk about the txt2img controls and how to set them up so you can get to genning quickly. I'll explain most of them separately and in more detail later; this is just to get started.

  • Prompt: the field where you input your Positive Prompt. At the top right you can see a 0/75 indicator: this is the number of tokens in your prompt out of a single CLIP's capacity. I'll talk about CLIPs in more detail later, but in most cases you should aim not to exceed 75 tokens, unless you know exactly what you're doing and how. This is not a hard rule, and if you go above 75, another CLIP will be created, but your prompt is at its most effective when you keep it within a single CLIP. There are a lot of exceptions and cases where you'd want multiple CLIPs, but I'll talk about that later.
  • Negative Prompt: the field where you input your Negative Prompt. In most cases you should only have your negative Quality tags here. A rule of thumb: unless you have a specific reason to add a tag, don't. Negative prompting is very powerful, and being careless can degrade your gens a lot. For example, if you have monochrome images popping up frequently, you may add monochrome to the negatives; tags should have a reason to be there. I'll touch on them in more detail later.
  • The Arrow Button: inputs your previous generation parameters into the UI. It also works on SD restart. It's great if you refreshed the page.
  • The Trash Can Button: Clears both Prompt fields. Can be buggy, I don't recommend using it.
  • Notebook Button: Moves prompt from enabled Styles to Prompt fields and disables the Style.
  • Blue Spiral Button (usually hidden): It appears if you refresh the page in the middle of the ongoing generation. Allows you to get the prompt, previous settings and images from this gen.
  • Styles Field and the Pen Button: an option to have a set combination of tags added to your prompt. You can set Styles up by pressing the Pen button; you can see an example below. By default, the tags are added like this: 1girl, standing, ...your prompt..., <style>. You can make the Style prompt insert itself at any position of the prompt by using {prompt}, for example, {prompt}, masterpiece, good quality, amazing quality,.
  • Sampling Method: This is where you choose your Sampler. While I won't go into what they do exactly, I can recommend using DPM++ 2M, and, if for some reason it doesn't work on some specific model, Euler a. I'll touch on some other Samplers later.
  • Scheduler Type: It's a secondary setting for Samplers. With DPM++ 2M you should use Karras, and with Euler a you should use Normal. There are other useful Schedulers I'll mention much later.
  • Sampling Steps: how many generation steps are done for a single image. I recommend having it from 26 to 34. I use 28, 30 is also very good. There are methods to drastically reduce the amount of steps needed (thus decreasing the generation time) with Hyper LoRAs, LCM or AYS, see further ahead.
  • Width and Height: options to set the resolution of your gen. SDXL is trained on 1024x1024 images and finetuned on a number of other resolutions. Here is a list of good SDXL resolutions. I personally use 832x1216 for Portrait or 1216x832 for Landscape, as they're very close to a regular 2:3 aspect ratio (see the quick pixel-count check after this list). Note that there is no way to natively generate 16:9 images; do not use resolutions like 1366x768 and such, they will result in artifacts. Refer to the link above to see what aspect ratios are supported.
  • Batch Count: how many images are generated sequentially, one after another. I recommend using it exclusively unless you have a GPU with enough VRAM to support generating multiple images simultaneously.
  • Batch Size: how many images are generated at the same time. Extremely VRAM-intensive and not much faster than generating images one by one; I do not recommend using it. Leave it at 1.
  • CFG Scale: a very tricky setting. Basically, the higher the CFG, the more the model will adhere to your prompt. With Illustrious models, you get coherent generations in the range from CFG 3 to 7. I recommend using 5, and keep in mind the option to lower it to 4 or 3.5 as your prompt gets more complex.
  • Seed: a number that defines your generation. By reusing the exact same prompt and exact same seed from another generation, you will get the (almost) exact same image. -1 means random (or press the Dice button). The Recycle button automatically enters the seed from the previous generation; it's very useful if you want to test LoRAs or models, or Hires/stylize an already genned image.
  • Other menus below: these are all the different extensions you have installed. I will talk about some of them separately.
  • Gallery buttons: You have a set of useful different buttons under the image. They are explained if you hover over them, so I won't be explaining them. Just know that they're there, and that they're useful.
  • Metadata: under the gallery buttons there is a block of text. This is the metadata of the current image; it contains your prompt, generation parameters, extension parameters, etc. It's useful if you or someone else wants to replicate the exact image you got.
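As the quick pixel-count check promised above: the recommended SDXL resolutions all land around the same roughly 1 MP budget as 1024x1024, which is why 832x1216 behaves like the native training resolution while still giving a close-to-2:3 frame. A tiny sketch of the arithmetic (plain Python, nothing model-specific):

```python
# Pixel count (in megapixels) and aspect ratio for resolutions mentioned above.
for w, h in [(1024, 1024), (832, 1216), (1216, 832)]:
    print(f"{w}x{h}: {w * h / 1e6:.2f} MP, aspect ratio {w / h:.3f}")

# 1024x1024: 1.05 MP, 1.000
# 832x1216:  1.01 MP, 0.684 (close to 2:3 = 0.667)
# 1216x832:  1.01 MP, 1.462 (close to 3:2 = 1.5)
```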

UI Settings

While there are a ton of settings, I want to tell you about some of the most important ones. I'll be referring to them by their names in the list on the left:

  • Defaults: Allows you to save all your current parameters and settings as a default, so that SD will be using them on launch. Press View Changes to see what exactly it'll save, and Apply to save them. Very useful to not have to setup basic things like CFG, Samplers and Resolution each time you launch SD.
  • Live Previews: this section is responsible for Live Previews of images that are still generating. While it's extremely useful, it takes a bit of performance away, so if that's an issue for you, turn it off. You can also set how often Live Previews render; I use 7, for example, because I always use 28 generation steps, so I get a preview every quarter of the generation. You can also change the Preview Method; I use TAESD, as it has almost no quality degradation from being a preview and I didn't notice much of a performance loss.
  • Infotext: allows you to disable writing generation metadata into the image. While I'm all for image gen being an open-source, completely collaborative hobby where we all learn from each other to improve, people should have an option to hide their generation parameters and prompts from others. Feel free to do so.
  • Paths for Saving: allows you to change the folder to where SD saves your generations.
    These are the most notable settings. Feel free to explore the rest yourself; I could've easily missed something important.

Extensions

While base reForge is more than enough for most things, there are some very useful extensions you can download. First, how to install them: go to the Extensions tab, then press the Available tab under it and press the big blue Load From button. It will give you a list of all the available extensions. Once you've installed one, go to the Installed tab and press Apply and Restart UI.


Note that if you're using --listen, reForge will not allow you to install, enable and disable extensions. Remove the arg, do what you need with the extensions and put it back. Now, some cool extensions:

  • Booru tag autocompletion prompting: it functions as an autocomplete for Booru tags, containing, I think, literally all of them. Very convenient and useful even if you've already memorised them. You can press Tab to insert the top result and the arrow keys to move between results. On pressing Tab, it also auto-formats (brackets) into \(SD-recognisable brackets\) and auto-removes _underscores_. Very cool.
  • ADetailer: an essential extension for almost everyone. What it does: once the image finishes generating, it automatically detects any faces and re-generates them at full resolution, resulting in crisp and detailed faces in any situation. It doesn't require any setup; you just install it, enable it, and voila, character faces now look beautiful.
  • Regional Prompter: I never figured it out, but a few people on Aegiscord mention it from time to time and show impressive results with it, so I'll mention this extension too. What it does: it separates the image area into X regions and generates characters separately for each one. It's extremely useful for multiple-character gens, but it's completely unintuitive and I didn't have enough patience to learn it; Illustrious is capable of generating 2 or 3 characters by itself with some tricky prompting I'll talk about in detail later. If I ever learn this extension, maybe I'll expand this section or make a new one.

We're ready?

I think that's all the technical stuff you need to know to get started with image generation in reForge. Let's get to the deeds.


Prompting 101: Basic Principles, Structure, and Why Illustrious is Amazing

All gens start from a prompt. The Positive Prompt field is arguably the place where you're going to spend the most time. It may look deceptively simple; you just write what you want to see in the image, right? Well, yes, but no. This is where we have to understand how image generation models read and process your prompts. I promise it's not as technical as it sounds. I like to think there are two main ways models understand what you want from them (or, more correctly, two different types of captioning used for the Training Data models are built upon):

Natural Language: The name pretty much explains it all; you just type what you want to see verbatim. For example, a prompt for a model that uses Natural Language may look like this: a girl stands in a flower field and looks at the mountains, the wind lifts her hair. She wears a backpack. her hair is blonde and long, etc.... It's good in the sense that you can just describe the image in detail; it's especially good when it comes to actions. But, in my personal opinion, the fatal flaw of this approach is that you cannot know the most optimal way to prompt. When there's literally an infinite number of ways to describe something, how do I do it at least decently? The question may seem silly, but it's actually incredibly frustrating when you get dozens of shitty gens: you know the model is capable of doing great stuff, but when you look at the prompts of those great images, you see that they can easily be 100+ words long. Unless you spend countless hours trying and failing, you won't even get a somewhat good looking image. That's why I prefer the other way to prompt. For example, Midjourney and Pony-based Stable Diffusion XL models use this method, while stuff like NovelAI's image gen supports it partially.

Tag-Based Prompting: This means there is a specific set of tags the model understands and uses to generate the image. For example, the prompt above turns into 1girl, looking ahead, blonde hair, long hair, backpack, floating hair, mountain, scenery, landscape, flower field, from behind. It may look less coherent, but unlike with Natural Language, this time I know for sure how to word it, and that the model will understand what I want from it. It opens up other opportunities like Artist Tags, Character Tags and such, but we'll talk about that later. Tag-based prompts are simple to build, easy to understand, and very effective. Not to say the approach doesn't have its own drawbacks, but we'll talk about those later too. Almost any image generation model supports it to some capacity (for the simple reason that these tags are, in fact, words), but Illustrious-XL v0.1 and NoobAI based models work exclusively with tags, while NovelAI prefers them but still works with Natural Language. For the sake of this guide (and the things I actually know myself), we'll be talking only about Tag-Based Prompting.


Why Illustrious?

There are quite a few image generation options out there. For the sake of this guide, I won't touch on NovelAI, Midjourney, DALL-E, or Stable Diffusion architectures besides SDXL; I simply don't know much about them. When it comes to SDXL, there are three main base models that other people build upon: Pony Diffusion XL v6, IllustriousXL v0.1 and NoobAI. The main reason I love Illustrious is that it's incredibly easy to prompt: you just "build" the image with tags, and usually it just works. From my personal experience, Illustrious also has much higher image quality than Pony-based models; Illustrious-based models just excel at things like small details and backgrounds, as well as the more important stuff like prompt adherence, image composition, stylization, and character adherence. Like literally anything out there, it takes quite a bit of learning, trial and error, research, and even more learning, but we'll go through the most important aspects of generating images in this guide. Let's get to it, I guess.

A small addendum about NoobAI. While it's technically different from Illustrious, it's prompted the same way. As long as the NoobAI model you're downloading is eps-pred (sometimes also called e-pred), you won't see any difference from Illustrious models, at least in the technicalities. You should generally avoid v-pred models; they suffer from contrast and color issues, and may work badly with LoRAs.


"Where do I get the tags???"

Okay, actually, before we start, this is maybe the most important thing I need to address: the place where you'll get information about tags. Illustrious models use Training Data obtained from Danbooru, so they're captioned in Booru tags. Danbooru, bless them, has a page that lists and explains almost every tag on the site: the Danbooru Tag Groups Wiki. It is big and a bit complex, and will take a while to get used to, but as I walk through the process of making a prompt further down, I will be linking the different Tag Group pages that list all the tags I'll be using.


Brackets and Underscores

While we're here, let me talk about underscores and brackets. On the Danbooru site, to search for tags that consist of multiple words, you have to use underscores, like looking_at_viewer. In Stable Diffusion, you do not use them, so the tag above turns into a simple looking at viewer. Brackets are a bit trickier; take the tag Danbooru recognises as 2000s_(style). In Stable Diffusion, brackets are used exclusively for weight manipulation (which we will talk about later), so it will not recognise the tag as written. To fix this, every bracket must be escaped with a \, so the tag above will look like 2000s \(style\) in your Prompt field.
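If you ever want to batch-convert tags copied straight from Danbooru, both rules are easy to automate. A minimal sketch in plain Python (the function name is just for illustration):

```python
def booru_to_prompt(tag: str) -> str:
    """Underscores become spaces; brackets get escaped so SD doesn't
    read them as weight-manipulation syntax."""
    return tag.replace("_", " ").replace("(", r"\(").replace(")", r"\)")

print(booru_to_prompt("looking_at_viewer"))      # looking at viewer
print(booru_to_prompt("2000s_(style)"))          # 2000s \(style\)
print(booru_to_prompt("makima_(chainsaw_man)"))  # makima \(chainsaw man\)
```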


Basic Prompting: Tag Types and your First Prompt

Step 1: People Tags

Let's start at the very beginning of the prompting process. For the sake of simplicity and, to be fair, the needs of most people getting into image gen, we'll be doing character gens. Maybe I'll talk about scenery gens some other time. The very first thing your prompt must have is something I call People Tags. These are essential and must be at the very beginning of your prompt. They are self-explanatory, and while there are quite a few of them, you won't ever need most of them. Here are the People Tags that will be enough for 100% of your gens:
1boy, 2boys, 3boys, 1girl, 2girls, 3girls, multiple boys, multiple girls, no people
More Group Tags
Let's start making our prompt. So far it's:

Positive Prompt: 1girl,

Step 2: Character Tags

Now, let's think about how we want our 1girl to look. Actually, it's not strictly necessary to define the character at all: the model will come up with something on its own. That can be useful when the exact look of the character is not important to you, for example when you're just testing styles or generating scenery, or in cases where you want to keep your Prompt Complexity low (we'll talk more about that later). Most of the time, of course, you want to define your character. The simplest way possible is by just using a Character Tag: an existing character from some media. There are multiple lists of characters from different media on the Tag Groups Wiki under "Copyrights, artists, projects and media", but the easiest way to find them is to just search for the character or media you want. If you're making a completely original character, you should skip this step entirely. For example, I'll pick a character whose tag is called makima_(chainsaw_man). In the prompt, it will look like makima \(chainsaw man\), so now we have:
Positive Prompt: 1girl, makima \(chainsaw man\),

Really, you should always check Booru for a tag you're going to use. It takes a few seconds, but it spares you the trouble of using a wrong tag. Also make sure to check what a tag actually means: solo focus, for example, requires the image to be a multiple-character image; it will mess up solo gens.

Step 3: Action Tags

Now, let's make our prompt more alive. The next part of making a prompt is defining the character's actions and position; I call these Action Tags. They are mostly up to your creativity: you think of what you want your character to do, look up the tags and enter them. You can find almost every Action Tag on the Posture Tags Wiki, but it's not exhaustive; feel free to try different variations of words in Search to see if a tag not documented on the Postures Wiki exists. For example, there are Holding Tags, Looking At tags and more on the Eye Tags Wiki, a lot of actions on the Verbs and Gerunds Wiki, also 'On' Tags and probably much more. Anyway, let's get back to our prompt. I'll be using looking at viewer, sitting, crossed legs, on chair, head tilt,. That makes our prompt:

Positive Prompt: 1girl, makima \(chainsaw man\), looking at viewer, sitting, crossed legs, on chair, head tilt,

Step 4: Appearance Tags

This is where we define the way our character looks: Eye Tags, Attire Tags, Body Tags, Face Tags, etc. Again, the easiest way to find them is to just use Search. Want animal ears? You just search for it and find the correct name of the tag. I guess the most important thing here is: the less, the better, especially if you're using a Character Tag. If you want to get an existing character 1:1 in their original attire, you can skip this step entirely 90% of the time, just like I'll do here. I have some gens with detailed appearance tags below, so don't worry. We still have:

Positive Prompt: 1girl, makima \(chainsaw man\), looking at viewer, sitting, crossed legs, on chair, head tilt,

Step 5: Background Tags

The Location Tags Wiki is your best friend here. Most of the time you'll want to specify separately whether it's indoors or outdoors. For simple backgrounds, make sure that your negative prompt doesn't have simple background in it; here's the Backgrounds Wiki. For this prompt, I'll use indoors, office.

Positive Prompt: 1girl, makima \(chainsaw man\), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office

Step 6: Image Composition and Style Tags

It's one of the most important Tag Categories. Most Composition Tags are explained in Image Composition Wiki; specifically in View Angle, Focus Tags and Framing of the Body. Style tags are used fairly rarely, but you can find them in Style Parodies Wiki. Please note that Composition tags are essential. Without them, your gen might suffer a huge loss in quality. In my prompt, I'll use cowboy shot.

Positive Prompt: 1girl, makima \(chainsaw man\), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office, cowboy shot,

Step 7: Artist Tags

This is maybe the hardest part of the prompt, and I'll explain it separately later. I won't be using Artist Tags here.
Okay, I never actually made that section, so a couple of words about Artist Tags. First, you can see all the different artists here. In general, an artist should have at least 100 images for their Artist Tag to work well; I recommend aiming for artists with 500+ images. Check the LoRAs section; Artist Tags are similar in how you should use them.

Step X: Quality Tags

This is something you'll most likely set up once and never change again. I recommend the following Quality Tags:
Positive: {prompt},masterpiece,best quality,amazing quality
Negative: {prompt},bad quality,worst quality,worst detail,sketch,censor,
Depending on what you get, you may also want to negative signature, watermark, monochrome, l0l1, child, censor, multiple views, etc., but it's best to start expanding your negatives only when you get something you don't want in the gen.

Result

We finally finished going through the prompting process, and in the end we've got:
Positive Prompt: 1girl, makima \(chainsaw man\), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office, cowboy shot, masterpiece,best quality,amazing quality
Negative Prompt: bad quality,worst quality,worst detail,sketch,censor
Metadata: 1girl, makima (chainsaw man), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office, cowboy shot, masterpiece,best quality,amazing quality, Negative prompt: bad quality,worst quality,worst detail,sketch,censor
Steps: 28, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 5, Seed: 3473795472, Size: 832x1216, Model hash: 0842c2a1d8, Model: Amanatsu_v11, Denoising strength: 0.39, Clip skip: 2, ADetailer model: face_yolov8n.pt, Hires CFG Scale: 5, Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B,


Advanced Section: LoRAs and Artist Tags, Comprehensive Tagging, Advanced Syntax, img2img and More

This is the part I'm dreading. I have so much to write about and even more stuff I feel like I'm forgetting. This part can be messy and not extremely detailed, as I'm explaining my experience and "feel" for how certain things work or impact your generation. I can be very wrong on some aspects and completely misunderstand others, so if you see anything you disagree with or that doesn't make sense, tell me. This part is loosely sorted; refer to the Table of Contents to find the stuff you want. Here we go.


Preamble

So, let's talk about the scope of what I'm going to cover here. While some things are universal to SD and Illustrious, I don't think I'll be able to completely avoid stuff specific to, say, reForge or some extensions/functions I rely on personally. Some things may not apply to your specific frontend or, especially, model. Illustrious models are generally similar to each other, but they can still have their specifics. To give you some ground to follow along, here are the exact parameters I'll be using to talk about everything in this section. I will touch on some of them in more detail later on. Assume the following parameters unless stated otherwise:

  • Model: Amanatsu v1.1
  • Sampler: DPM++ 2M Karras
  • Steps: 28
  • CFG: 5
  • Assume that ADetailer is always on
  • Hires: R-ESRGAN 4x+ Anime6B. Hires Steps: 22. Denoising Strength: 0.39. Upscale By: 1.6
  • Quality Tags: Positive: None, Negative: bad quality,worst quality,worst detail

Prompt Order

Let's start with the basics again. Ideally, your prompt order should be: <Quality Tags?>, <People Tags>, <Character Tags?>, <Action Tags>, <Appearance Tags>, <Background Tags>, <Composition Tags>, <Style/Artist Tags?>, <Quality Tags?>. It's not a strict requirement, but it has its own huge advantages. Besides keeping things organised, we're positioning more important stuff closer to the beginning, and the closer a tag is to the beginning of the prompt, the harder the model will adhere to it. "Adhere" here doesn't mean quite the same thing as with, say, LLMs; here, it's more like "what the model will consider more important". Very roughly speaking, we want the model to focus on a detailed and correct character first, then assign specifics to them, and only then start figuring out the background and style. Of course, models do all of that simultaneously, like positioning the character correctly according to the Composition Tags, but there's still an order from More Important to Less Important. Another huge reason is preventing concepts from splitting into different CLIPs when you don't intend them to (we'll talk about that when we get to Syntax). Since we're here, let's touch on Quality Tags.


Quality tags; Do we need them?

Quality tags have been a staple of image genning since the earliest days. Adding masterpiece to prompts is almost a reflex at this point, but it's, first, a bit different with Illustrious, and, second, it has its own drawbacks. Let's start with a bit of philosophy. SD 1.0, then SD 1.5, and now SDXL have always had a collection of images in the Training Data representing good quality, best quality, masterpiece, bad quality, worst quality, etc. Even the newest checkpoints inherit some of them; that's why these tags work at all, because Danbooru (where Illustrious gets almost all of its training data) does not have its own quality tags. A lot of the Training Data associated with these quality tags is outdated and, well, does not fit the anime/drawing style we're all trying to get here; that's the first reason I argue for abandoning Positive Quality tags completely.

The second reason: when we use these Positive Quality tags, we're needlessly increasing the complexity of the prompt and the generation with, well, Training Data of unknown quality and stylistic match with what we're trying to get, plus these tags take up space that could've been used for stuff that actually improves our image stylistically or with details/actions/etc. Okay, you get it, I don't like Positive Quality tags. What about Negative Quality tags? That's a completely separate case. When we negative worst quality, bad quality, we're not missing out on anything; it's a separate CLIP that doesn't increase the Prompt Complexity, and it doesn't slim down the Training Data we do want: it just uses shitty images to show the model how NOT to do things. Even so, there is stuff you should be aware of.

First, a lot of the recommended Quality Tags you may find on different models' pages are, in fact, unnecessary and may make your experience much worse. Take, for example, sketch, which is almost unanimously suggested as a Negative tag on almost all models. At first you may think, "well, I don't want badly drawn images used as data for my beautiful gens", and it's a rational thought; but if we take a look at these sketch images, we see that they're not "shitty", they just have a specific look and style to them that can be the exact thing you're looking for. Keep in mind that this sketch training data will be just a fraction of the overall data used for making your gen; the rest comes from all the other tags, like 1girl, outdoors, character tags, style tags, etc. You can think of it like this: the overall look and "style" of your generation tries to reach a median of all the different styles pulled in by all the regular tags in your image, and style tags like sketch just move it one way or another a little; they do not define the overwhelming look of the image, if that makes sense.
TL;DR: be thoughtful of what tags you're using for your image, even if everybody else is using them. Think of why exactly you want this or that tag, experiment and see what gives you the best result.


Quality tags; Compilation

There are a few Quality Tags combinations I can suggest:

  • Positive: None. Negative: worst quality, worst detail, bad quality, simple background, sketch, censor
    This one is my main Quality Prompt. It's lightweight and doesn't limit your options too much, but here are a few things you should know. One, simple background in the negatives improves the quality of indoors and outdoors scenes but completely prevents you from using "simple backgrounds" like white background, grey background, etc. It's pretty safe for most purposes, but keep it in mind if you decide to generate simpler backgrounds. Two, sketch in the negatives gives images more of a "2.5D" look. It's great for when you want a more refined and clean style, but it shuts off some cool styles completely; see the previous section for a more in-depth explanation. Three, censor. While you never want a censor in your NSFW gens, it's completely redundant in SFW gens, and I'm not a fan of redundancy. I usually take it out when genning SFW and put it back for NSFW gens. You can just keep it there all the time and it won't impact your SFW gens much, but just know that it's there. To fight watermarks, add signature, watermark to your negatives (but avoid it if you can; you'll be cutting off a huge chunk of training data with them in the Negatives).
  • Positive: ,masterpiece,best quality,amazing quality. Negative: bad quality,worst quality,worst detail,sketch,censor,
    A solid Quality Prompt that I use for all my model comparisons, just to represent the regular SD users who do use Positive Quality tags. The only issue I have with it is the Positive Quality prompt, which I already covered in the previous section. Overall, a completely solid prompt.
  • Positive: ,masterpiece, best quality, amazing quality, very aesthetic, absurdres, newest,. Negative: ,lowres, (worst quality, bad quality:1.2), bad anatomy, sketch, jpeg artifacts, signature, watermark, old, oldest, censored, l0li, bar_censor, (pregnant), chibi, simple background, conjoined, futanari, (yes I had to censor one tag, hi Rentry pls don't ban).
    I used to use this prompt a lot when I was still on NTR Mix, but now I see it as a bit overwhelming. It gives the image a pretty nice 2.5D look but can have a pretty big negative impact on detail and background quality. In my opinion, a lot of tags here are redundant and unnecessary in most cases. This prompt tries to be a catch-all, but I appreciate the reverse approach much more.

Negative Prompting: How and When

Since we're on the topic of Negative prompting, let's dive a bit deeper. Positive prompting is somewhat straightforward: you just prompt for what you want to see. It doesn't work exactly like that with Negatives. You can't always go "Hm, I don't want to see this, let's Negative it"; or at least, you shouldn't. Negative-ing is extremely powerful; it basically purges all training data associated with a particular tag, which can result in "collateral damage" to training data you would want to keep. Instead, you should do your best to find a solution based on Positive prompts; to a degree, of course. If you can solve it with a tag or two, great, there's no need to add Negative prompts. If it requires more effort, or, as often happens, there's no Positive prompt that can solve the problem, then yes, adding negatives is worth it. They're not something to be scared of; a tag or two or five generally won't result in quality loss, but it depends. As will come up often going forward, you should always evaluate how many images a particular tag has on Booru: you should avoid negative-ing a tag that has 100k+ images, to a degree.

As a first example, let's imagine that we're generating a picture of a girl with green eyes. Even with ADetailer, models may want to turn the whole eye into a sea of green color, without pupils. There's no tag for black pupils, so there's nothing left to do but negative no pupils, and it indeed solves the issue completely.

As an opposite example, let's imagine generating an image of 1girl. It happens pretty often that when you generate an image of a single person, a copy of them appears somewhere in the background, especially if the background is complex. You may try to solve this by negative-ing 2girls, 3girls, multiple girls, but it just won't work. Instead, you should positive solo; that's it, the issue is completely gone. The same applies to nudity tags: it's easier and much more "gentle" to use topless, no panties, no bra, etc. instead of negative-ing shirt, panties, bra. On the same topic:


Tags; Evaluating if You Should Use a Specific Tag

Not all tags are created equal. While there are some reliable tags that always work, like 1girl, indoors or blonde hair, others may result in messy images or not work at all. Why does that happen? The reason is that some tags are simply less populated than others. For example, a tag called grabbing own ankles does exist, but it only has 23 images on Danbooru, which means that using it is practically meaningless. More populated tags that count a few hundred images may work, but they may look messy, be impossible to influence or just not work at all, overwhelmed by other, more populated tags. A safe area for general tags starts at 1k+ images, and they work close to perfectly at a population of 2k-3k images. It's a bit more difficult with characters and styles: they need at least a 2k population to even have a chance of working. Characters with around 2k images will often have wrong clothes or hair/eye colors, and will be completely messed up if you try to generate two characters in the same gen, for example. I'll touch on multiple-character gens and how you can circumvent some of these issues later. Anyway, the moral is: always keep in mind how populated the tag you're adding is, and try to avoid tags with <1k images entirely (besides artist tags; they start working from 100+ images and get close to perfect at 500+).

Another thing is overtrained tags. Tags with 100k+ results can completely overwhelm others, even if they technically allow for another action. For example, let's take 1girl, arm up, arm behind back. Technically, these tags allow each other, but arm up is 5x more populated than arm behind back, so arm behind back may be ignored or inconsistent. You can avoid that by manipulating weights, like (arm up:0.4); more about that later.
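If you check tag populations a lot, Danbooru exposes post counts through its public JSON API, so you can script the lookup instead of opening the site every time. A rough sketch, assuming the /tags.json endpoint and its post_count field (and the requests library):

```python
from typing import Optional

import requests

def tag_post_count(tag: str) -> Optional[int]:
    """Return how many Danbooru posts a tag has, or None if it doesn't exist.
    Tags use underscores here, exactly as on the site."""
    resp = requests.get(
        "https://danbooru.donmai.us/tags.json",
        params={"search[name]": tag},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return data[0]["post_count"] if data else None

print(tag_post_count("grabbing_own_ankles"))  # a couple dozen posts: won't really work
print(tag_post_count("arm_up"))               # six figures: can overwhelm less populated tags
```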


Tags; How Including Redundant Tags Ruins Gens

Let's start with an example. We have a prompt: 1girl, 1boy, hug, sobbing, crying, streaming tears, head on chest, arms around waist, dress, medium breasts, covered nipples, green eyes, barefoot,pov. We're aiming for a POV image from the 1boy's perspective where he's hugging a crying girl, but no matter what we do, we get either a non-POV image from somewhere off to the side, the guy hugging the girl with outstretched arms (so no head on chest), or even a second boy appearing in the image; to put it simply, the result is completely different from what we prompted. What could be the issue? At first glance, the prompt is completely fine. Before digging deeper, let's talk philosophy. Why do we even prompt for stuff? So that we can see it, of course. That's more true than ever for the AI model: if you prompted for something, the model will try its hardest to include it. It pulls up all the training data corresponding to the things you prompted and uses it for the generation. Now let's get back to our prompt. What we're actually expecting is an image of a girl in the POV character's arms, burying her head in his chest, which means it would be shot from above and/or involve height difference/size difference, and also means that we would be unable to see that she has covered nipples, green eyes or streaming tears, that she's barefoot, or that she has medium breasts. When we prompt for something, the model will include it, going so far as to disregard other parts of the prompt. If something should not be visible in the image, do not prompt for it.


Tags; Tag Bleeding, and How To Avoid It

This is a fairly simple section, but it would be wrong not to include it. Let's start with an example. We're prompting for 1girl, smirk, standing, against wall, green hair, ... ,. Chances are, we're going to get not only green hair, but also green eyes, a green dress, some green accessory, or something green in the background. And it's not just hair, really, but any mention of any color. The solution is fairly simple: we just specify the concepts that got affected by the bleeding. If along with green hair we also got green eyes, just prompt for blue eyes. Simply having multiple colors in the prompt lowers the chance of Color Tags bleeding by itself. Sadly, color bleeding is the simplest kind of Tag Bleeding. I cannot possibly account for every case of Tag Bleeding, but, in general, simply specifying the part that got affected by the bleeding should be enough. I'll touch on it more when I get to Syntax, CLIPs and BREAKs.


Tags; How Tags Impact Seemingly Unrelated Things

As always, an example. We have a prompt: 1girl, lying, on bed, presenting, seductive smile, underwear, large breasts, ... ,. Here, we want to focus on the large breasts tag. The gen will look like an adult (in anime terms, at least) woman. If we swap this tag for small breasts or flat chest, we'll notice that the character suddenly starts looking much younger than before. This is an example of how tags affect seemingly unrelated things, in this case, age. The reason is pretty obvious: a lot of images that have the small breasts and flat chest tags are also l0l1 and child images. The same goes for school uniform and petite, for example. In this specific case, we can solve it by adding l0l1, child to the Negatives, or by adding mature female or stuff like milf if that's what you're going for (here, the Negative and Positive solutions are both viable and used for completely separate things). This is just one example of this behavior. Another example is Character Tags: some media have very distinct art styles that will be brought into the gen; characters from stuff like Steins;Gate or OMORI will bring some of their style with them, and there's little to nothing you can do about it. The moral is: when you're thinking about what tags to use, also keep in mind what other aspects they can bring into your gen.


Tags; Rating Tags and How to Avoid (or Embrace) Horny

Rating Tags are extremely powerful instruments that... you will rarely use, really. You can see their exact definitions on the Rate Wiki, but in general, there are four rating tags: general, sensitive, questionable, explicit. The first means no nudity or suggestive imagery; the second, no nudity but some suggestive stuff; the third, no genital nudity (breasts are OK) and no sex; and the fourth, genital nudity and/or sex and/or gore and/or other extremely graphic shit. You can use them accordingly, but they should have a purpose, as with everything. If you're going for a completely SFW gen but the model keeps throwing lewd shit at you, you can try using general, for example, and so on. One thing you should never do is negative one of them: they're some of the most populated tags in the training data, and negative-ing one of them will have a negative impact on the gen. With NSFW, there's really no need to ever use explicit or the like, as you can just specifically prompt for what you want.


Tags; How Models Won't Do Anything Unless You Ask Them

An example. We have a prompt: 1girl, undressing, averting eyes, shy, blush, open shirt, ... ,. If we try to gen it, we quickly find out that no matter how much we regen, the model will never generate nipples and will be extremely reluctant to even show breasts, despite open shirt seemingly implying it. This is where we get into the territory of introducing concepts to the model. No matter how smart image gen models may seem, they still do not understand connections between things that are obvious to us. In the example above, to actually get a look at tiddies, we have to specify: either breasts or some sized version like large breasts, plus nipples to actually see them. Without both tags, the model would almost never actually generate the partial nudity we want. It's very similar with NSFW sex tags: you should always add stuff like nude if nude, and penis and/or pussy for sex or other sexual activities. It's pretty similar to what I explained in Redundant Tags: you prompt what you see, exactly how you want to see it; but, just like in Redundant Tags, you should prompt only for the necessary stuff. I'll explain iterating on gens a bit later.


Tags; Conclusion

What I listed is in no way exhaustive, but I hope it got the point across: you should have a very specific way of Thinking with Tags™. You should evaluate what the tag you add will bring besides its main function; how effective it will be; whether it will have a negative impact on the gen; whether the tag is necessary to get what you want; and, most importantly, you should experiment. Try stuff out, spend time tinkering with tags, fail over and over again to actually get what you want and learn something new. Try non-Booru tags, some of them do work (or so people say), combine tags, see how they interact — create. While image gen is not a real form of art, it's similar to one in how you use your creativity; just instead of drawing, you learn to combine things to get amazing results.


Prompt; CLIPs, and Why You Should Follow The Prompt Order

Let's talk about how your prompts are read by the model. I won't be technical at all, as I have about 0 idea how it actually works, but there is practical stuff to know about. If you pay some attention, you may notice an indicator at the top right of both the Positive Prompt and the Negative Prompt that says something like 0/75. This 75 is the token count that makes up a single CLIP. Roughly speaking, as you press "Generate", Stable Diffusion processes your prompt in chunks of 75 tokens. Once you exceed those 75 tokens, 75/75 transitions into 78/150 (for example), and another CLIP is created. CLIPs are processed separately and then combined. Usually, it's not something you have to worry too hard about. Even if you exceed the first CLIP, the tags that spill over into the next one still get correctly applied to the gen.
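
If you want to see how your own prompt gets chunked, here's a minimal sketch using the standard CLIP tokenizer from the transformers library. This is just my illustration of the idea, not the WebUI's actual code (the WebUI additionally pads each chunk with special tokens up to 77 internally):

```python
# Minimal sketch: count tokens and split a prompt into 75-token CLIP chunks.
# Assumes the `transformers` package is installed.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "1girl, looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office"
token_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]

# 75 usable tokens per chunk; the counter in the UI reflects the same math.
chunks = [token_ids[i:i + 75] for i in range(0, len(token_ids), 75)]
print(f"{len(token_ids)} tokens -> {len(chunks)} CLIP chunk(s)")
```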

Stuff gets more complicated once your prompt does. For example, we have a prompt 1girl, ... , holding sword BREAK katana. Here, BREAK symbolizes a transition between CLIPs. If holding sword and katana were in a single CLIP, the model wouldn't have a second thought that the sword being held is indeed a katana. When the two tags are separated into different CLIPs, the model might get the idea that holding sword and katana are unrelated to each other, and make a separate sword somewhere in the gen. While you won't (usually) get a catastrophic problem from CLIPs, it's a nice habit to follow the Prompt Order. Nothing bad will ever happen if artist or style tags are separated, same for backgrounds. By following the Prompt Order, you spare yourself from unnecessary issues.

While having an idea about CLIPs is not that important by itself, it's necessary to understand the BREAK separator; an extremely important and sometimes life-saving piece of SD syntax. We'll talk about BREAKs in detail later.


Syntax; Introduction

We're all used to separating tags by , commas, but it's not the only piece of Stable Diffusion syntax there is. In fact, Stable Diffusion is extremely versatile in how you can manipulate your whole prompt, separate tags and even change the generation on the go. Before we actually get into it, a small warning. Most of the time, you don't need advanced syntax. It will only make stuff worse and needlessly complicated. If you don't want to spend time tuning precise settings and genning dozens or even hundreds of faulty generations, you shouldn't approach this. Most things can be done with good enough prompting anyway. However, there are some use cases where syntax is extremely useful, and a few situations where it's absolutely necessary.

IMPORTANT NOTE: When it comes to practical use and examples, most of what I'm talking about from here on out is empirical and can differ wildly from model to model. Some models are simply better at following your prompts and doing complex scenes than others. I can be wrong and make mistakes; some methods I describe are inconsistent, others I've barely used in practice or literally have a single use-case. I'll try to be detailed, but there won't be a single solution for all of your problems. Keep that in mind.


Syntax; Separators

So, we all know and love the , comma. It is used to separate parts of the prompt for the tokenizer. I honestly searched a lot for the use-cases of different separators, but, well, had 0 luck. I tried them, and, well, didn't see much of an impact. There is a situation where I do use them, but I'll get to it later. For the sake of completionism, Stable Diffusion supports the following separators (besides the comma):

  • . Period. Some sites describe it as a "Hard Separator"
  • ; Semicolon. The same sites describe it as a "Hard Separator" as well???
  • ! Exclamation Mark. Some sites claim that it's there to "convey a sense of emphasis"??????
  • Newline (just pressing enter). The only thing (besides a comma) that had some impact on the gen for me. You can try to use it to fight tag bleeding.
    Honestly, the only real advice I can give is: just stick to commas, and try putting a newline if you need to fight tag bleeding and nothing else helps.

Syntax; Prompt's Weight Manipulation

Just typing a tag is not the only thing you can do. You can also manipulate how powerful a tag (or LoRA) is in relation to other tags. There are a few ways to do that:

  1. Using brackets ( ). A pair of round brackets around a part of the prompt multiplies its weight by 1.1. For example, no humans, city, cityscape, scenery, (lamppost). Here, (lamppost) has a strength of 1.1. You can stack these brackets; for example, (((lamppost))) is a weight of 1.1*1.1*1.1 or 1.1^3, which is about 1.33.
  2. Using square brackets [ ]. A pair of square brackets around a part of the prompt divides its weight by 1.1. [lamppost], for example, has a weight of about 0.91. [[[lamppost]]] has a weight of 1 / 1.1 / 1.1 / 1.1 or 1.1^-3, which is about 0.75.
  3. Using a colon : . With a colon, you can define the exact weight of a tag. For example, (lamppost:0.5) has a strength of 0.5. This is the way I recommend sticking to. LoRAs follow a similar pattern; it will look something like this: <lora:lora_Name:0.5>, where :0.5 is the strength. Be mindful of brackets that are a part of the tag itself, horror_(style) for example. With a modified weight, it will look like (horror \(style\):0.5).

When should you resort to this, and how? In my opinion, it should be used to, A: fight Tag Bleeding, and B: fight tags that are overwhelming other tags, like in the Evaluating if You Should Use a Specific Tag section. It's quite straightforward: you just lower the weight of the tag in question and see if it helps. It's mostly trial and error, and there are no fixed solutions for everything, so I leave it to you.
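
If you don't want to do the bracket math in your head, here's a tiny helper (my own, not part of the WebUI) that shows the effective weight for stacked brackets:

```python
# Tiny helper: effective tag weight from stacked brackets.
# Each pair of ( ) multiplies the weight by 1.1; each pair of [ ] divides it by 1.1.
def bracket_weight(pairs: int, square: bool = False) -> float:
    base = 1 / 1.1 if square else 1.1
    return base ** pairs

print(round(bracket_weight(3), 3))               # (((lamppost))) -> 1.331
print(round(bracket_weight(3, square=True), 3))  # [[[lamppost]]] -> 0.751
```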


Syntax; BREAK is love, BREAK is life

It's a little bit difficult to approach this section, but let's start by understanding what BREAK does. Let's have an example: no humans, building, house, dusk BREAK door, window, lamppost, bench,. Here, I use BREAK to separate two parts of the prompt into different CLIPs, so that no humans, building, house, dusk (first CLIP) and door, window, lamppost, bench (second CLIP) are processed (somewhat) separately. In general, the first CLIP is much more important than the second: the image's composition is about 70% defined by the very first CLIP. What does it achieve?

Multiple (Defined) Character Gens without Extensions

First, it's the most powerful way to fight Tag Bleeding, and second, it allows you to define different concepts/characters completely separately. For example, in a prompt like 2girls, looking at viewer, smile, side-by-side, hand on another's shoulder, red shirt, jeans, fox ears, orange hair, cowboy shot, white background BREAK black hair, sundress, pointy ears, it's the only way to get a satisfactory result. What to note:

  1. Concepts applicable to the whole image must be in the first CLIP. In this case, it's 2girls, looking at viewer, side-by-side, hand on another's shoulder, cowboy shot, white background.
  2. You must use an action that involves multiple characters. Here, it's side-by-side, hand on another's shoulder. It'll be much harder to achieve anything meaningful without something to involve all characters. (note: it is possible to define the active participant by having the action tag in the corresponding character's part of the prompt. Here, it's 9/10 times the girl with fox ears doing the hug, and if we move hand on another's shoulder to the second CLIP, it will be an elf.)
  3. Keep overprompting to an absolute minimum. Here, adding an extra animal ears to the first part of the prompt or elf to the second will ruin the gen. This is where manipulating weights comes in very useful.
  4. If you define anything appearance-wise for the first character, do the same for the other. Here, we defined that the girl with fox ears has red shirt, jeans, orange hair. It means we should also define the same things for our elf, so we add black hair, sundress to the second CLIP.
  5. Do not expect it to work perfectly. Sometimes you get 90% consistency; this particular prompt works about 50% of the time.
    That's basically it for simple 2-character gens. With 3 characters, especially if the prompt gets more complex, we get into almost esoteric territory, so I'll talk about it separately later.
    Metadata: 2girls, looking at viewer, smile, side-by-side, hand on another's shoulder, red shirt, jeans, fox ears, orange hair, cowboy shot, white background BREAK black hair, sundress, pointy ears, Negative prompt: extra ears, worst quality, bad quality
    Steps: 29, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 5, Seed: 687164227, Size: 832x1216, Model hash: 0842c2a1d8, Model: Amanatsu_v11, Clip skip: 2, ADetailer model: face_yolov8n.pt

Syntax; Prompt Merging, Delaying Prompts and Keeping the Quality with Styles

Let's discuss some built-in Stable Diffusion scripts. One of the more useful ones is the Prompt Switch script that looks like this: [A:B:x], where A is the first prompt, B is the second prompt, and x is the fraction of total steps at which the switch occurs. For example, we have [pixel art:2000's \(style\):0.2]. The translation of it would be: for the first 20% of the gen, the active prompt is pixel art. At 20%, it's replaced with 2000's \(style\) and stays like this until the end. What does it mean in practice? The overall composition of the gen is almost completely defined by the first 2-3 steps (with Euler or DPM++ 2M; it can vary based on the Sampler and Scheduler). By using [A:B:x], we not only merge two styles into one, but also choose which style does the overall image composition; we're talking little details, the background, the look of the character, the quality of the image. This is where we can do another thing.

In [A:B:x], both A and B can be completely empty. Let's just not enter any A. We get: [:pixel art:0.2]. First, some theory. When you're using a Style Tag, an Artist Tag or a LoRA, it absolutely always has a negative impact on the gen's quality. With just a single Style, or two styles at reduced weights, this impact is negligible. However, if the Artist or Style in question doesn't have enough images (<100 for Artists, <1k for Styles), or we start adding 3+ Styles/Artists/LoRAs, the gen's quality will get noticeably worse. One of the reasons is that the model will just have no idea how to build the image's composition with the given Style prompt. To spare the model from it, we can simply keep the Style prompt disabled during the first few steps of the gen. With [:pixel art:0.2], it only kicks in once the composition is already completely finished, and all it does is what it's supposed to do: apply the style to the gen. Note that 0.2 is an arbitrary number; you may want to test in the range from 0.15 to 0.5.

[A:B:x] also supports applying weights and entering multiple prompts. For example, [:(pixel art:0.5), (2000's \(style\):0.3):0.2]. You can also merge and delay LoRAs: [:<lora:loraA:1.0>:0.4] and [<lora:loraA:1.0>:<lora:loraB:0.5>:0.5].
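
To get a feel for when the switch actually happens at your step count, here's a quick back-of-the-envelope sketch. This is my own math for intuition only; the WebUI's exact rounding may differ by a step:

```python
# Rough sketch: which step [A:B:x] switches at, for a given fraction and step count.
import math

def switch_step(fraction: float, total_steps: int) -> int:
    return math.ceil(fraction * total_steps)

print(switch_step(0.2, 28))  # [pixel art:2000's (style):0.2] at 28 steps -> around step 6
print(switch_step(0.2, 12))  # the same prompt at 12 AYS steps -> around step 3
```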


Syntax; Prompt Addition

When it comes to solo gens, you can also sometimes add one tag to another. It's mostly unpredictable and there are often better ways to achieve stuff, but it's an option. For example, red hair, blue hair has a chance of generating a hair color that's between red and blue. It can also generate multicolored hair, but it's easier to just ask for multicolored hair (+ you can pick a specific type of it from here) and have a prompt like red hair, blue hair, multicolored hair.

There is another way to specifically merge tags: using the | pipe. For example, (red hair|blue hair:1.0). What the pipe does is switch from one prompt to the other each step. While it is a way to get a specific blend between A and B, it is unreliable. This is where we can return to [A:B:x]. For example, [red hair:blue hair:0.5]. This is a much more reliable and better way to achieve good merging. There are also other ways to use it, for example red hair, [:blue hair:0.2] and such, but I leave it to you.


LoRAs

LoRAs, or Low-Rank Adaptations, are a cheap and reliable way to impact the generation right on the model's technical level, basically finetuning it on the fly. I'm not the biggest LoRA enthusiast out there, so I'll be somewhat short.

First, with Illustrious, I suggest using LoRAs only for styles/detail. I tried a few pose and "character appearance" LoRAs, and, well, just using Booru tags seems like a better option both quality- and versatility-wise. It's just my opinion though.

Second, do not use more than two style LoRAs at the same time, at least without making them skip the first few steps. Without trickery, I recommend keeping the overall weight of two LoRAs below 1.1-1.3; for example, lora_A would have a weight of 0.7, and lora_B a weight of 0.4, making the total weight 1.1. These settings seem to give the most style while keeping the quality basically the same as without LoRAs. Honestly, the very same goes for Style Tags. With Style Tags + LoRAs (applied like I suggested previously), a total of three seems stable. Just be mindful of weights.

That's mostly all I have to say. LoRAs can be of various quality and made for different purposes/models, so it's just trial and error. You just apply them, tinker with weights and see if the results are satisfactory.


Generation Parameters; Introduction

Congrats on making it this far (yes, I'm running out of ideas on how to begin sections). For most intents and purposes, you can simply set up your parameters once, never change them again and be happy. I consider "good" parameters to be:

  • Sampler: DPM++ 2M Karras
  • Steps: 28
  • CLIP Skip: 2
  • CFG: 5
  • Resolution: 832x1216 (or reverse)
    And you kinda never have to change them, they are just good. But there are cases when you'd want to try something new.
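
For reference, you can also send these same defaults through the WebUI's built-in API. A minimal sketch below, assuming you launched with --api and that your WebUI version exposes the stock A1111 /sdapi/v1/txt2img endpoint; field names can differ slightly between versions, so treat this as an illustration rather than gospel:

```python
# Minimal sketch: the "good defaults" above as a txt2img API payload.
import requests

payload = {
    "prompt": "1girl, looking at viewer, cowboy shot, masterpiece, best quality",
    "negative_prompt": "worst quality, bad quality",
    "sampler_name": "DPM++ 2M",
    "scheduler": "Karras",   # on older WebUI builds, use sampler_name "DPM++ 2M Karras" and drop this field
    "steps": 28,
    "cfg_scale": 5,
    "width": 832,
    "height": 1216,
    "override_settings": {"CLIP_stop_at_last_layers": 2},  # CLIP Skip: 2
}
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(response.json().keys())  # the image comes back base64-encoded under "images"
```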

Generation Parameters; Align Your Steps, and Full-Quality Gens in Just 12 Steps

Available on most local frontends, there is a Scheduler called Align Your Steps, or AYS for short. AYS uses some ancient wizardry to make gens in 10-12 steps that look the same as 28-step Karras or Euler a gens. As for parameters, I suggest the DPM++ 2M Sampler, the regular Align Your Steps scheduler, and 10 to 12 steps. I don't think AYS is negatively impacted by CFG, so I just keep it at 5 and go lower as needed.
Metadata: 1girl, makima (chainsaw man), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office, cowboy shot, masterpiece,best quality,amazing quality
Negative prompt: bad quality,worst quality,worst detail,sketch,censor
Steps: 12, Sampler: DPM++ 2M, Schedule type: Align Your Steps, CFG scale: 5, Seed: 3757563295, Size: 832x1216, Model hash: 0842c2a1d8, Model: Amanatsu_v11, Clip skip: 2, ADetailer model: face_yolov8n.pt

Keep in mind that while AYS is great for simple gens, it can start messing up as you increase the complexity of your gen. The most noticeable issue I had is AYS merging hands of multiple characters into a single blob. Besides that, AYS doesn't have any artifacts and looks incredible. It works with LoRAs, Styles and Artists with no issue. On my hardware (RTX 3070 8 GB), it lowers the generation time from 15-20 seconds with Karras and 28 steps to 9-10 seconds with AYS.


Generation Parameters; Extra-Quality Gens in 60 Steps

There are two notable Samplers that give you some beautiful gens without Hires, sometimes even replacing ADetailer. These are DPM++ 3M SDE Exponential and IPNDM Automatic. Both require at least 40 steps; I consider 60 steps to be the sweet spot, and sometimes you may need up to 80. In my experience, IPNDM is better than DPM++ 3M SDE: IPNDM shows much less artifacting, which is understandable given the nature of SDE (SDE adds some noise each step: more variety, less stability), but I still use DPM++ 3M SDE occasionally. In some cases, these two Samplers can even replace Hires, especially given that Hires is much slower. With some gens, faces without ADetailer look even better than with it, which is especially mindblowing.
Metadata: 1girl, makima (chainsaw man), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office, cowboy shot, masterpiece,best quality,amazing quality, Negative prompt: bad quality,worst quality,worst detail,sketch,censor
Steps: 70, Sampler: IPNDM, Schedule type: Normal, CFG scale: 5, Seed: 3757563295, Size: 832x1216, Model hash: 0842c2a1d8, Model: Amanatsu_v11, Clip skip: 2, ADetailer model: face_yolov8n.pt


Generation Parameters; LCM, and Draft Gens in Just 5 Steps

Note that to use LCM, you must download its LoRA and include it in the gen at weight 1. I suggest using the LCM sampler with the SGM Uniform scheduler. Your CFG must be in the range from 1 to 1.5; I suggest the latter. LCM gives you perfectly fine gens extremely quickly, but they have significantly less detail than usual + there can be occasional artifacting.
Metadata: 1girl, makima (chainsaw man), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office, cowboy shot, masterpiece,best quality,amazing quality <lora:LCM_LORA:1>, Negative prompt: bad quality,worst quality,worst detail,sketch,censor
Steps: 5, Sampler: LCM, Schedule type: SGM Uniform, CFG scale: 1.5, Seed: 3757563297, Size: 832x1216, Model hash: 0842c2a1d8, Model: Amanatsu_v11, Clip skip: 2, ADetailer model: face_yolov8n.pt


Generation Parameters; IPNDM_V, My New Favourite Sampler

I found out about it recently, but it's just incredible. This sampler gives better quality at 40 steps than DPM++ 3M or regular IPNDM do at 80+ steps. First, make sure to use either the Karras or Exponential scheduler. I recommend sticking to 40 steps; 30 is also viable, and technically you can go all the way down to 15 steps. Do not go above 40 steps; the image will get worse or break down.
Metadata: 1girl, makima (chainsaw man), looking at viewer, sitting, crossed legs, on chair, head tilt, yellow eyes, indoors, office, cowboy shot, masterpiece,best quality,amazing quality
Negative prompt: no pupils,, worst quality, bad quality, simple background,
Steps: 40, Sampler: IPNDM_V, Schedule type: Karras, CFG scale: 5, Seed: 3757563295, Size: 832x1216, Model: Amanatsu_v11

And another; note that it uses no ADetailer:

Metadata: 2girls, looking at viewer, smile, side-by-side, hand on another's shoulder, red shirt, jeans, fox ears, orange hair, cowboy shot, white background BREAK black hair, sundress, pointy ears
Negative prompt: no pupils,, worst quality, bad quality, simple background,
Steps: 40, Sampler: IPNDM_V, Schedule type: Karras, CFG scale: 5, Seed: 54443983, Size: 832x1216, Model: Amanatsu_v11


Generation Parameters; Hires

Hires is an incredible feature to get higher resolution gens with increased detail. I suggest the following parameters:

  • Upscaler: R-ESRGAN 4x+ Anime6b OR Lanczos
  • Hires Steps: 20-22
  • Denoising Strength: 0.39-0.43
  • Upscale by: 1.5 - 1.65
    Metadata: 1girl, makima (chainsaw man), looking at viewer, sitting, crossed legs, on chair, head tilt, indoors, office, cowboy shot, masterpiece,best quality,amazing quality, Negative prompt: bad quality,worst quality,worst detail,sketch,censor
    Steps: 28, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 5, Seed: 3757563295, Size: 832x1216, Model hash: 0842c2a1d8, Model: Amanatsu_v11, Denoising strength: 0.39, Clip skip: 2, ADetailer model: face_yolov8n.pt, Hires CFG Scale: 5, Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B

Generation Parameters; Refiner

Refiner means that the first x fraction of steps is done by one model, and then it's swapped for another model. It's trial and error, so there's not that much to talk about. One thing I want to note: remember that the overall composition of the image is done by the first model, not the refiner model. You can have the first model run for 2-4 steps and then switch to another model; as a result, the image's composition is done by model A and the whole style is done by model B.
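
If you ever want to reproduce this split outside the WebUI, the diffusers library exposes the same idea via denoising_end/denoising_start. A minimal sketch of the concept below, with placeholder model paths (my illustration, not the WebUI's implementation):

```python
# Rough sketch: model A builds the composition for the first ~10% of steps,
# then model B takes over and defines the style/detail. Paths are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "path/to/model_A", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "path/to/model_B", torch_dtype=torch.float16
).to("cuda")

prompt = "1girl, looking at viewer, cowboy shot, masterpiece, best quality"

latents = base(
    prompt=prompt, num_inference_steps=28, denoising_end=0.1, output_type="latent"
).images
image = refiner(
    prompt=prompt, num_inference_steps=28, denoising_start=0.1, image=latents
).images[0]
image.save("refined.png")
```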


Prompting; Iterating on Your Gens

This part is more of a philosophical view on how you should do prompting. I like to call it Iterative Prompting. The main point is: you start by drafting the overall image, and then focus on each part of the image, following the Prompt Order. You refine each part of the prompt, generate the image, reflect on the result and keep refining the prompt until you get the image you want consistently; after that, you enable all the resource-intensive stuff like Hires or High-Steps Samplers and play with styles. Let's have a big example.

  1. I come up with an overall idea for the gen: a creepy/dangerous punk girl in a darker style. After some thinking, I land on this prompt: 1girl, looking at viewer, head tilt, raised eyebrow, smirk, holding cane, red jacket, open jacket, long hair, multicolored hair, dark, darkness, city, high contrast, and gen.
  2. I look at the gens. While high contrast definitely gives an interesting look, the overall image is too dark. smirk is also not optimal, her expression is really weird in a bad way. Besides this, her overall look is pretty close to what I wanted, so I also need to add Composition tags. Reflecting on this, I edit the prompt, 1girl, looking at viewer, head tilt, raised eyebrow, (smirk:0.5), (grin:0.5), holding cane, red jacket, open jacket, long hair, multicolored hair, dark, darkness, city, (high contrast:0.7), cowboy shot, close-up, and generate it.
  3. I'm already quite happy with the image, but I have some other ideas I want to try. I'd like to have more character focus, so I remove the background tags. I also want an even less dark image, so I lower the weight of dark. The new prompt is: 1girl, looking at viewer, head tilt, raised eyebrow, (smirk:0.5), (grin:0.5), holding cane, cane, red jacket, open jacket, long hair, multicolored hair, (dark:0.5), darkness, (high contrast:0.7), cowboy shot, close-up,. Time to gen.
  4. First, she's tilting her head too much. Second, to make her more visually interesting, I add heterochromia, then specify two hair colors and rely on Tag Bleeding to also do the eye colors. To add more to the composition, some slight dutch angle. The prompt turned into: 1girl, looking at viewer, (head tilt:0.7), (smirk:0.5), (grin:0.5), holding cane, cane, red jacket, open jacket, heterochromia, long hair, white hair, red hair, multicolored hair, (dark:0.5), darkness, (high contrast:0.7), cowboy shot, close-up, (dutch angle:0.3), and I gen it.
  5. At this point, I feel pretty happy about the result, so I start adding style and LoRAs, as well as doing some minor edits. New prompt: 1girl, looking at viewer, (head tilt:0.7), (smirk:0.5), (grin:0.5), holding cane, cane, red jacket, open jacket, heterochromia, ringed eyes, long hair, white hair, red hair, multicolored hair, (cowboy shot:0.7), close-up, (dutch angle:0.3), traditional media, <lora:illustrious_quality_modifiers_masterpieces_v1:0.7> <lora:illustriousXL_stabilizer_v1.72:0.35>
  6. Not specifying backgrounds and having simple background in the negatives really ruined the gen, so I fix it: 1girl, looking at viewer, (head tilt:0.7), (smirk:0.5), (grin:0.5), holding cane, cane, red jacket, open jacket, heterochromia, ringed eyes, long hair, white hair, red hair, multicolored hair, dark background, (cowboy shot:0.7), close-up, (dutch angle:0.3), traditional media, <lora:illustrious_quality_modifiers_masterpieces_v1:0.7> <lora:illustriousXL_stabilizer_v1.72:0.35>
  7. Now I feel completely happy. I take the seed of my favourite gen and reuse it, this time with Hires. Voila, we're done.
    Metadata: 1girl, looking at viewer, (head tilt:0.7), (smirk:0.5), (grin:0.5), holding cane, cane, red jacket, open jacket, heterochromia, ringed eyes, long hair, white hair, red hair, multicolored hair, dark background, (cowboy shot:0.7), close-up, (dutch angle:0.3), traditional media, <lora:illustrious_quality_modifiers_masterpieces_v1:0.7> <lora:illustriousXL_stabilizer_v1.72:0.35>, Negative prompt: worst quality, bad quality,
    Steps: 28, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 5, Seed: 1003245552, Size: 832x1216, Model hash: 0842c2a1d8, Model: Amanatsu_v11, Denoising strength: 0.39, Clip skip: 2, ADetailer model: face_yolov8n.pt, Hires CFG Scale: 5, Hires upscale: 1.6, Hires steps: 22, Hires upscaler: R-ESRGAN 4x+ Anime6B

From start to finish, this generation took 50 images to refine the first prompt into the last. This is the way I do all my gens, and I highly recommend following the same principle. It's extremely easy to mess up at the very beginning and spend hours trying to fix it; I've been there.


Prompting; Prompt Complexity

This is quite an ephemeral thing that I think is extremely important. I consider Prompt Complexity to be this: the number of separate concepts you require the model to understand and generate correctly. Things like Styles are not separate specific concepts, so they don't increase the complexity. Appearance tags on solo gens have no other interpretation than, well, that they're worn by this character, so they're also not an issue. Things start getting fun once you start doing complicated stuff. Asking for two separate actions by a single character relies on the model understanding how it's going to look, so it increases the complexity a lot. Defining two separate characters makes the complexity skyrocket: the model has to associate different characteristics (that are technically applicable to both characters) with separate characters. It's fine if you assign actions that require multiple characters, but stuff like black hair, pointy ears, animal ears can be assigned to either, and the model will most often just make both characters like this. As a result, Prompt Complexity is the sum of all Tag Bleeding, Redundant Tags, Overtrained and Undertrained Tags, as well as ambiguous tags. The less complex your prompt is, the more consistency and quality you get. You should always look for opportunities to make your prompt simpler: use a more specific (but still populated) tag, replace multiple tags with just one, avoid redundancy and bleeding. It's a nice habit that may save hours of work.



Conclusion

I think that's all, besides img2img and ControlNet, but I have literally 0 strength left to investigate and document them as well. Maybe some other time. That's basically months of my experience with Stable Diffusion and Illustrious: dozens of hours spent clicking letters on the keyboard, smashing "Generate", sucking ass, gens failing, remaking prompts, finally coming to understand shit, experimenting with basically everything there is, testing models, prompts, interactions between tags, shit like that. Besides my dramatic mood right now, of course I didn't talk about everything; it's impossible for one man to do, and I'm not aiming to. I really hope that this guide was at least a little bit useful, and that you learned something new. I wish you luck and all the best; be creative, make stuff, think for yourself, be happy and take care.


Special Thanks

My wholehearted gratitude goes to folks on Aegis City Discord; thanks for your support and encouragement, as well as showing me stuff and giving ideas. This guide wouldn't have existed without you. Shoutouts to:

  • @StatuoTW - for your gens, ideas, and creating this beautiful community.
  • @Siberys - for ideas, discussions on different image gen topics, and support.
  • @11yu - for huge help with Samplers and Schedulers, as well as general knowledge.
  • @Corgi - for help with Tag Bleeding and Overtraining.
  • @MadDetective - for help with tags.
  • @Skelly - for support.

I really fucking hope I didn't forget anyone. Tell me if I did. Bex out.


To-Do List

  • img2img and Inpainting
  • ControlNet, specifically Style Cloning
  • Hyper-LoRA section
  • Rewrite Installation section to feature ForgeUI instead of reForge.
  • Mention Infinite Image Browsing Extension. It's incredible.
  • Make a Hands Manifesto. Fuck you hands.
  • LatentModifier.
  • IPNDM_V section; simply the best Sampler I ever tried. Credits to @11yu
  • DEIS SGM Uniform section.
  • More Screenshots
  • Expand the section on multiple character gens. I'm absolutely sure there's a way to get 2/3 characters gens consistent, I just need to find the exact formula.
  • Get a life (will never happen).

Update History

  • 09/04/25 - v0.1. First draft
  • 10/04/25 - v0.2. Proof read + added images. Released for a preview.
  • 12/04/25 - v1.0. Full release.
  • 14/04/25 - v1.1. Added IPNDM_V section. Noted about reForge's cease of development. Rewriting the Installation guide for ForgeUI TBD. A few fixes.