Chroma Inference/Generation

Prerequisites

24 GB Cards

  1. Get Chroma HD Flash and put it in the diffusion_models folder inside ComfyUI's models folder (a download sketch for all four items follows this list).
  2. Get a text encoder and place it in ComfyUI's text_encoders folder:
    a. T5 FP16 (for >=32GB RAM)
    b. T5 FP8 (for <32GB RAM)
  3. Only required for the upscale/face detailer workflow: the upscaler model 4x_NMKD-Siax_200k.pth.
  4. Only required for the upscale/face detailer workflow: [face_yolov8n-seg2_60.pt](https://huggingface.co/jags/yolov8_model_segmentation-set/resolve/main/face_yolov8n-seg2_60.pt).
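If you're on a headless box, something like the following pulls everything into a stock ComfyUI tree. This is a minimal sketch: the COMFY path and the two placeholder URLs are assumptions (substitute the real download links from the items above); the T5 and YOLO links are the ones used elsewhere on this page.

# Minimal download sketch -- COMFY and the placeholder URLs are assumptions.
COMFY=~/ComfyUI

# 1. Chroma HD Flash -> models/diffusion_models
wget -P "$COMFY/models/diffusion_models" "<chroma-hd-flash-url>"

# 2. T5 text encoder -> models/text_encoders (FP16 shown; use FP8 if <32GB RAM)
wget -P "$COMFY/models/text_encoders" https://huggingface.co/UmeAiRT/ComfyUI-Auto_installer/resolve/df511f9f086b2f12e3a81471831ccb23969d8461/t5xxl_fp16.safetensors

# 3. Upscaler -> models/upscale_models (only for the upscale/face detailer workflow)
wget -P "$COMFY/models/upscale_models" "<4x_NMKD-Siax_200k-url>"

# 4. Face segmentation model -> models/ultralytics/segm (Impact Pack convention)
wget -P "$COMFY/models/ultralytics/segm" https://huggingface.co/jags/yolov8_model_segmentation-set/resolve/main/face_yolov8n-seg2_60.pt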

12 GB Cards

Coming Soon™

Chroma Workflows

Both of these workflows load the full models, so 24 GB of VRAM is required. You can modify them to load GGUF quantizations if need be. I'll post a few lower-memory workflows soon.

Basic Chroma Workflow

Check it out here

Chroma HD Flash with Upscale and Facedetailer

Check it out here

Chroma Training

Training Environment Setup

This is assuming a cloud GPU setup like runpod.io or vast.ai. You may have to make modifications for running locally.

# Prior to running the below commands, ensure Python 3.10 or greater is installed
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts
git checkout sd3
git pull
python -m venv venv
source ./venv/bin/activate
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128
pip install -r requirements.txt
pip install prodigy-plus-schedule-free==2.0.0rc2
# get a huggingface API token: log into huggingface, go to the top right profile icon, go to access tokens, create a new one, and paste it here
export HF_TOKEN=<your token>
wget --header="Authorization: Bearer $HF_TOKEN" https://huggingface.co/lodestones/Chroma1-HD/resolve/main/Chroma1-HD.safetensors
wget --header="Authorization: Bearer $HF_TOKEN" https://huggingface.co/UmeAiRT/ComfyUI-Auto_installer/resolve/df511f9f086b2f12e3a81471831ccb23969d8461/t5xxl_fp16.safetensors
wget --header="Authorization: Bearer $HF_TOKEN" https://huggingface.co/UmeAiRT/ComfyUI-Auto_installer/resolve/df511f9f086b2f12e3a81471831ccb23969d8461/ae.safetensors
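Before moving on, it's worth a quick sanity check that torch can actually see the GPU from inside the venv (assuming it's still activated):

# Quick sanity check: confirm the CUDA build of torch installed and the GPU is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.get_device_name(0))"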
Place lora_config.toml and train.sh into your desired directory. Ensure the paths within both of these files are correct.

Data Collection and Training Settings

As of now, I just follow the same data collection techniques from the SDXL world:

  1. Collect 20-70 images of the target person.
  2. Caption them with natural language and put your trigger tag at the front. For example: ava max, an iphone selfie photograph of a woman with blonde hair...
  3. Grab the following TOML and training command for kohya.

lora_config.toml
train.sh
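For reference, the dataset side of a kohya lora_config.toml follows sd-scripts' standard dataset-config schema. The sketch below is illustrative only, not the actual file; the image_dir and values are examples:

# Illustrative kohya dataset config -- not the actual lora_config.toml,
# just the standard sd-scripts schema it follows.
[general]
caption_extension = ".txt"   # natural-language captions, trigger tag first
shuffle_caption = false
keep_tokens = 1              # keeps the trigger tag first if you enable shuffling

[[datasets]]
resolution = 1024
batch_size = 1

  [[datasets.subsets]]
  image_dir = "/workspace/img/1_avamax"  # full path to your image folder
  num_repeats = 1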

  1. Update lora_config.toml to point to the folders of your images. Just use the full path to the image folder. For example: /workspace/img/1_avamax
  2. Do a find/replace on the string person-name and replace it with whatever you want. For example: ava-max.
  3. Run train.sh (an illustrative sketch of its shape follows this list).
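The real train.sh is linked above; defer to it. As a rough illustration of its shape only, a wrapper for this setup boils down to activating the venv and launching kohya's FLUX-style network trainer against the config and the weights downloaded earlier. The script name, flags, and paths here follow sd3-branch conventions and are assumptions, not the actual file:

# Illustrative only -- use the provided train.sh. Flags and paths are assumptions.
cd /workspace/sd-scripts
source ./venv/bin/activate
accelerate launch flux_train_network.py \
  --pretrained_model_name_or_path /workspace/Chroma1-HD.safetensors \
  --t5xxl /workspace/t5xxl_fp16.safetensors \
  --ae /workspace/ae.safetensors \
  --dataset_config /workspace/lora_config.toml \
  --network_module networks.lora_flux \
  --output_dir /workspace/output \
  --output_name person-name-chromahd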

Available LoRAs

Please bear in mind that these are going to be slightly larger in filesize than SDXL LoRAs. I've experimented with smaller dimensions, but the likeness never looks as good.

| Name | Date Added | Download |
| --- | --- | --- |
| aoc-chromahd.safetensors (trigger with her full hyphenated name) | 2025-09-19 | Download |
| abby-shapiro-chromahd.safetensors | 2025-09-19 | Download |
| ariana-grande-chromahd.safetensors | 2025-09-19 | Download |
| belle-delphine-chromahd.safetensors | 2025-09-19 | Download |
| emma-watson-chromahd.safetensors | 2025-09-19 | Download |
| hayley-williams-chromahd.safetensors | 2025-09-19 | Download |
| lady-gaga-chromahd.safetensors | 2025-09-19 | Download |
| olivia-rodrigo-chromahd.safetensors | 2025-09-19 | Download |
| taylor-swift-chromahd.safetensors | 2025-09-19 | Download |