/ldg/ Wan 2.1 Install and Optimization Guide

This guide aims to implement every available optimization to maximize the speed of video output generation in Wan using the Q8 models. Achieving this involves some trade-offs in quality, but you can easily disable any of the optimizations if you prefer to prioritize quality over speed.

The included guide and workflows are tailored for GPUs with 24GB or more of VRAM, typically utilizing 21-23GB during inference. While it’s possible to use a GPU with less than 24GB, you’ll need to make adjustments: either download a model quantized below Q8 or increase the virtual_vram_gb setting to avoid exhausting your available VRAM. However, be aware that swapping more blocks will slow down generation, and using a lower quantization level will reduce the quality of your outputs.

Prerequisites

ComfyUI Portable
ComfyUI Manager
CUDA 12.6

Installation Steps

  1. Ensure that ComfyUI is updated to the very latest version. (update_comfyui.bat in ComfyUI_windows_portable\update)
  2. Download these models. If you have less than 24GB of VRAM, you could also swap out the Q8 models for Q6/Q5/Q4, though you'll see a progressively larger drop in output quality the lower you go.

Do NOT use Kijai's text encoder files with these models! You MUST use these text encoders, or it will error out before generating with:

    Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120)

  3. Open a cmd.exe prompt in ComfyUI_windows_portable\update and run the following command:

    ..\python_embeded\python.exe -s -m pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

  4. Download and run this as instructed to automatically install Triton and Sage.
  5. Open a cmd.exe prompt in ComfyUI_windows_portable\update and run the following command. It might complain or give you an error about torchaudio, either during this step or when you start Comfy. If so, ignore it; it won't affect video gen:

    ..\python_embeded\python.exe -s -m pip install --pre --upgrade torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu126 --force-reinstall
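
    If you want to double-check that the nightly build installed before moving on, you can ask the embedded Python directly from the same prompt. The exact version string will differ by date; you just want a 2.7.0.dev+cu126 build with CUDA reported as available:

    ..\python_embeded\python.exe -c "import torch; print(torch.__version__, torch.cuda.is_available())"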

  6. Go to this repo and download film_net_fp32.pt, placing it in ComfyUI\custom_nodes\comfyui-frame-interpolation\ckpts\film
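
    Note that the comfyui-frame-interpolation node isn't installed until a later step, so that folder may not exist yet. If it doesn't, you can create it first from a cmd.exe prompt in ComfyUI_windows_portable:

    mkdir ComfyUI\custom_nodes\comfyui-frame-interpolation\ckpts\film
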
  7. Download these workflows. They're based off an anon's workflows from /ldg/, with various additional features. The current settings are sourced from places like the github repos, /ldg/ and my own personal use of the model, but I don't claim it's perfect or anything. I mostly offer it as a baseline. If you do find better settings or think it could be improved with other extensions/nodes, post them on /ldg/ and mention "rentry".
  8. Edit run_nvidia_gpu.bat in ComfyUI_windows_portable and change the first line to this:

    .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --use-sage-attention --fast
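
    For reference, after the edit the whole file should look something like this (the stock portable bat is just the launch line plus a pause, so only the flags on the first line change):

    .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --use-sage-attention --fast
    pause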

  9. Run ComfyUI. Look in the cmd.exe console window and make sure pytorch version: 2.7.0.dev20250302+cu126 is shown during startup. The actual date in the version string may vary; the important thing is that you're running 2.7.0dev+cu126 and the date is 20250226 or newer. You should also see Enabled fp16 accumulation and Using sage attention.
    There's a possible bug where Comfy reports an incorrect pytorch version after you update extensions or restart. If that happens, close Comfy and start it again. This seems to happen most often if you use the "Restart" button in Comfy after updating extensions, so close it manually and start it up manually instead. If upon a second restart it still isn't 2.7.0dev, do step 5 again.
  10. Open one of the workflows. Open Manager and install Missing Custom Nodes. Then install the ComfyUI-GGUF extension.
  11. Make sure that every time you start Comfy, pytorch version reads 2.7.0dev; otherwise fp16_fast won't work and your gen times will be slower than they otherwise would be.
  12. Run your first gen. If it freezes during model loading with "Press any key to continue" in the cmd.exe window, you need to restart your computer. If you get this error when running the workflow:

    ImportError: DLL load failed while importing cuda_utils: The specified module could not be found.

    Go to C:\Users\<username>\ and open the .triton directory. Delete the cache subdirectory inside it.
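
    Alternatively, assuming Triton's cache is in that default location under your user profile, you can clear it from a cmd.exe prompt with:

    rmdir /s /q "%USERPROFILE%\.triton\cache"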

You're done!

Important Notes Before You Gen

I've set t2v to generate at 50 steps and i2v at 40 steps, which are the maximum recommended values for each. Going beyond these limits doesn’t yield any meaningful improvement. If everything is set up properly, this should take about 11 minutes on a 3090 and deliver the best optimized quality for 480p output. For faster generations, you can lower the step count or play with the teacache setting.

The initial generation time estimate you get (about 25 min on a 3090 with all optimizations enabled) is NOT accurate. Teacache kicks in at 20% of the run, and Adaptive about midway through.

When a video finishes generating, you'll get two files in their own i2v or t2v directories and subdirectories. The raw files are the 16fps outputs, while the int files are interpolated to 32fps, which gives you much smoother motion. The original raw files are there in case you want to interpolate in a different program.

If you want to use the 720p i2v model, you'll need to play with virtual_vram_gb. On a 3090, you'd set it to 12.

NEVER use the 720p i2v model at 480p resolutions and vice versa. If you use the 720p i2v model and set your res to 832x480 for example, the output you get will be much worse than simply using the 480p i2v model. You won't ever improve quality by genning 480p on the 720p model, so don't do it. The only model where you can mix resolutions is the t2v model.

Some anons have claimed that fp16 accumulation degrades quality to an unacceptable level when combined with the other optimizations. You might want to do side by side tests with --fast on and off and decide if the speed boost is worth the apparent hit.
