Simple guide to upscaling with MultiDiffusion

In this guide, I'll walk you through a simple workflow to get more detail out of your initial gen. I won't go into technical details (you can read about them in various guides) and will get straight to the point.

Simple guide to upscaling

Because SD models were trained on 512x512 images, generating too far from that resolution can cause trouble. So first, we will make a small image in txt2img.

When you've found a seed you like, upscale it with hires fix. Alternatively, you can send it to img2img and upscale at high denoise if you know what you're doing, but in this guide I'll use hires fix.
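
(Optional) If you'd rather script this step than click through the UI, here's a minimal sketch of the same txt2img + hires fix call through the A1111 web API. It assumes you launched the web UI with the --api flag on 127.0.0.1:7860; the prompt, seed, resolution, and upscaler below are placeholders, not recommendations.

```python
# Minimal sketch: base gen near 512px + hires fix, via the A1111 web UI API.
# Assumes the web UI was started with --api and listens on 127.0.0.1:7860.
import base64
import requests

payload = {
    "prompt": "1girl, <your tags here>",   # placeholder prompt
    "negative_prompt": "",                 # keep it light for now
    "seed": 1234567890,                    # the seed you liked
    "width": 512, "height": 768,           # stay close to 512px
    "steps": 28,
    "cfg_scale": 7,
    "sampler_name": "Euler a",
    # hires fix part
    "enable_hr": True,
    "hr_scale": 2,                         # 512x768 -> 1024x1536
    "hr_upscaler": "Latent",               # pick whatever upscaler you like
    "denoising_strength": 0.5,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
# strip a possible data URL prefix before decoding
image_b64 = r.json()["images"][0].split(",", 1)[-1]
with open("hires_fix.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```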

When you're satisfied with the result, send the hires fix image to img2img.

Go to Settings, Stable Diffusion, and check this option if you haven't done it already.

Back to img2img:
Only use quality tags in your prompt. Before you yell at me that "these are just placebo": I know it's not optimal, and it differs from model to model. Feel free to play with the prompt, but stick to quality tags only.
For the negative prompt, we can get away with very little since denoise is low; embeddings are helpful, but only add the ones you trust.

These are the settings we will use. They're not optimal, more of a "just works" baseline.
The final resolution and the latent tile size should ideally divide evenly, but it's not strictly necessary.
IMO, the MultiDiffusion method yields better results than Mixture of Diffusers, but it's harder to get the right settings and gen time is slower. Feel free to experiment.

We will start with a relatively large latent tile size; smaller sizes do improve quality, but they're very troublesome. Because MultiDiffusion splits the scene into tiles, SD uses the same prompt for every tile. For example, some tiles contain no 1girl, just plain background, yet the prompt always has 1girl; guess what happens in those tiles? A new 1girl gets generated.
As a workaround you can change the tile size to a non-square one, but I won't go into details.
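
To make the "same prompt on every tile" point concrete, here's a small illustrative sketch of how an upscaled latent gets chopped into overlapping tiles. The splitting logic and the example numbers are my own simplification, not the extension's actual code:

```python
# Illustrative only: a simplified version of how a latent gets split into
# overlapping tiles. Each tile is denoised with the SAME full prompt, which is
# why "1girl" can pop up in a background-only tile.

def tile_starts(length, tile, overlap):
    """Start offsets of tiles of size `tile` covering `length`, overlapping by `overlap`."""
    stride = tile - overlap
    starts = list(range(0, max(length - tile, 0) + 1, stride))
    if starts[-1] + tile < length:   # make sure the last tile reaches the edge
        starts.append(length - tile)
    return starts

# Example: a 2048x3072 final image -> 256x384 latent (image size / 8),
# with a 96x96 latent tile and 24 overlap.
lat_w, lat_h = 2048 // 8, 3072 // 8
cols = tile_starts(lat_w, 96, 24)
rows = tile_starts(lat_h, 96, 24)
print(f"{len(cols)} x {len(rows)} = {len(cols) * len(rows)} tiles, each seeing the full prompt")
```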

Start with 20-24 overlap. Increase it if you see artifacts.
The default Tiled VAE settings work for 8GB of VRAM. Increase the tile sizes for faster speed at the cost of more VRAM.

And that's it. Feel free to play with the settings.

How to troubleshoot

Before upscaling, inpaint minor details like hands and other things that make zero sense (floating light bulbs, 3 legs, disconnected regions, etc.). Upscaling likely won't fix them.

The final image will occasionally come out overcooked, oversaturated, or overexposed. It's because of the upscaler and the encoder's fast mode. Here are a few things you can do, in this order:

  • Disable the fast encoder, enable color fix, or do both.
  • Change the upscaler.
  • Reduce the CFG scale.
  • Change the VAE.
  • Reduce the weight of the quality tags. Remove LoRAs from the prompt.
  • Use a photo editor to fix the colors.

If the scene is an abomination, that's because the tile size is too small AND the overlap is also too small. As you can guess, SD runs your prompt on every tile and combines them; if the tiles are too small and the overlap is too little, the result comes out different in every tile. Tile size and overlap settings depend on the final resolution, because a bigger scene = more tiles, and more tiles = more issues (see the rough tile-count comparison after the rule of thumb below).
Rule of thumb:

  • Bigger tile size = less quality but fewer issues, and also shorter gen time.
  • More overlap = longer gen time but fewer issues.
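
Here's that back-of-the-envelope tile-count comparison (again my own rough arithmetic, not the extension's exact tiling), at an example 2048x3072 output:

```python
import math

def tile_count(length, tile, overlap):
    """Rough number of tiles needed to cover `length` latent pixels."""
    stride = tile - overlap
    return max(1, math.ceil((length - overlap) / stride))

lat_w, lat_h = 2048 // 8, 3072 // 8          # latent size = image size / 8
for tile, overlap in [(96, 24), (96, 48), (64, 24)]:
    n = tile_count(lat_w, tile, overlap) * tile_count(lat_h, tile, overlap)
    print(f"tile {tile}, overlap {overlap}: ~{n} tiles per denoising step")
```

Smaller tiles or more overlap both push the tile count (and the gen time) up fast, which is why the settings have to scale with the final resolution.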

optimization

  • The recommended denoise strength is 0.2-0.35; more strength means more detail, but the image will also drift further from the original. You can go higher, but it's hard to get away with >0.4 at small tile sizes (these numbers show up in the API sketch after this list).
  • High CFG yields better results but can burn the image. I've only tested Euler at CFG 10-14 and DDIM at CFG 8-10.
  • You can add something that vaguely describes the scene for better results, even concrete objects. But concrete objects will add unwanted content, depending on how complex the scene is, so you'll know what to look for when that happens.
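
For reference, this is roughly what the img2img pass with those numbers looks like through the A1111 web API (started with --api). The Tiled Diffusion / Tiled VAE options would go in the payload's alwayson_scripts field; I'm not going to guess their exact argument format here, so treat this as just the plain img2img part of the workflow, with placeholder tags and resolution:

```python
# Sketch: the img2img upscale pass with the settings recommended above,
# via the A1111 web UI API. Tiled Diffusion / Tiled VAE arguments are not shown;
# they would go under "alwayson_scripts" in the same payload.
import base64
import requests

with open("hires_fix.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "masterpiece, best quality",     # quality tags only
    "negative_prompt": "lowres, bad anatomy",  # only embeddings/tags you trust
    "denoising_strength": 0.3,                 # recommended 0.2-0.35
    "cfg_scale": 10,                           # Euler 10-14 / DDIM 8-10
    "sampler_name": "Euler",
    "steps": 24,
    "width": 2048, "height": 3072,             # example final resolution
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
image_b64 = r.json()["images"][0].split(",", 1)[-1]
with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```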

technical stuff

https://rentry.org/sdupscale
https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111
