GLM-4.5-Air LoRA Finetuning Guide

Work in Progress.

I am one person, without an extensive ML or software engineering background. This guide is the ramblings of one dude on the internet and is intended as a proof of concept for using MS-Swift / Megatron to LoRA-train an MoE model, GLM-4.5-Air, on a single node at reasonable speeds (4*H200 is my personal setup).

Additionally, as a proof of concept, there are still multiple steps in the process that could be improved from a cost-efficiency perspective, although it should still be significantly more performant than HuggingFace, Axolotl, etc.

This is not a beginner guide to finetuning.

This guide assumes you're already experienced with one of the training frameworks and have the technical knowledge required to configure an environment, hyperparams, etc. MoE models are not a good place to learn finetuning, nor would I recommend MS-Swift as someone's first training wrapper. Unsloth or Axolotl with dense models have far more information available and are much easier to get started with.

Getting Started

For this guide we're using MS-Swift as our trainer. This is because it supports Megatron, which is significantly more performant (both in terms of VRAM used and time taken) for training MoE models than HuggingFace format.

The documentation for the trainer looks, frankly, terrifying. But, once you get your head around the initial setup, it honestly isn't too bad.

This guide works with the RunPod PyTorch 2.4 template with Python 3.11. If you're working from something else, you'll need to go through a small amount of dependency hell, as there are some dependencies my script below hasn't accounted for because the RunPod template comes with them by default.

Setup Script

This will install MS-Swift, Megatron and dependencies. This script was made for the PyTorch 2.4 Template on RunPod. Make modifications as necessary for your own environment. Further information can be found here: https://github.com/modelscope/ms-swift/blob/main/docs/source_en/Megatron-SWIFT/Quick-start.md

#!/bin/bash
set -e  # Exit on any error

echo "Starting environment setup..."

# Update system and install dependencies
echo "Installing system dependencies..."
apt update && apt upgrade -y
apt-get install libcudnn8-dev -y

# Install PyTorch for CUDA 12.4
echo "Installing PyTorch..."
pip install torch==2.6.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 --force-reinstall

# Clone and install ms-swift
echo "Installing ms-swift..."
cd /workspace
git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e .

# Install build dependencies
echo "Installing build dependencies..."
pip install pybind11 ninja wandb

# Install transformer_engine
echo "Installing transformer_engine..."
pip install --no-build-isolation transformer_engine[pytorch]

# Install apex
echo "Installing apex..."
cd /workspace/ms-swift
git clone https://github.com/NVIDIA/apex
cd apex
git checkout e13873debc4699d39c6861074b9a3b2a02327f92
export MAX_JOBS=$(nproc)
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./

# Install megatron-core
echo "Installing megatron-core..."
pip install git+https://github.com/NVIDIA/Megatron-LM.git@core_r0.13.0

# Clone Megatron-LM
echo "Cloning Megatron-LM..."
cd /workspace/ms-swift
git clone --branch core_r0.13.0 https://github.com/NVIDIA/Megatron-LM.git
export MEGATRON_LM_PATH='/workspace/ms-swift/Megatron-LM'
echo "export MEGATRON_LM_PATH='/workspace/ms-swift/Megatron-LM'" >> ~/.bashrc

# Install flash-attn
echo "Installing flash-attn..."
pip install flash-attn==2.7.4.post1 --no-build-isolation

echo "Environment setup complete!"

Converting the model to the Megatron format

Next is converting your model from HuggingFace format to Megatron format. There's probably a way to do this on CPU / RAM, but the config option in MS-Swift eludes me. Depending on the size of the model, you might need a lot of VRAM to convert (from memory, GLM-4.5-Air took 2*H200).

CUDA_VISIBLE_DEVICES=0,1 \
USE_HF=True \
swift export \
    --model zai-org/GLM-4.5-Air \
    --to_mcore true \
    --torch_dtype bfloat16 \
    --output_dir GLM-4.5-Air-mcore

Once the model is converted, I personally upload it to HuggingFace so I can pull the converted checkpoint back down for future runs while the environment sets itself up, rather than converting it again.
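
If you want to do the same, here's a minimal sketch using huggingface_hub (the repo name is a placeholder; this assumes you're already logged in via huggingface-cli login or HF_TOKEN):

# upload_mcore.py - push the converted mcore checkpoint to a private HF repo for later re-use
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your-username/GLM-4.5-Air-mcore", private=True, exist_ok=True)  # placeholder repo id
api.upload_folder(
    folder_path="GLM-4.5-Air-mcore",            # output_dir from the export step above
    repo_id="your-username/GLM-4.5-Air-mcore",  # placeholder repo id
    repo_type="model",
)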

Training the model

Rather than walk you through all the config options, MS-Swift has good documentation for this.

https://github.com/modelscope/ms-swift/blob/main/docs/source_en/Megatron-SWIFT/Command-line-parameters.md

An example rsLoRA config is below, with an example of the expected dataset format after it. Obviously change CUDA devices, the WandB API key, etc. as necessary. This config trains attention and the shared-expert MLP layers. Hyperparams YMMV, I'm still searching for something that feels decent. The below config uses about 90-100GB on 4*H200.

PYTORCH_CUDA_ALLOC_CONF='expandable_segments:True' \
NPROC_PER_NODE=4 \
WANDB_API_KEY=API_KEY_HERE \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
megatron sft \
    --load '/workspace/glm-4.5-air-mcore' \
    --dataset '/workspace/dataset_shuffled_nothink.jsonl' \
    --load_from_cache_file true \
    --train_type lora \
    --lora_rank 128 \
    --lora_alpha 16 \
    --target_modules linear_qkv linear_proj shared_experts.linear_fc1 shared_experts.linear_fc2 \
    --split_dataset_ratio 0.015 \
    --moe_permute_fusion true \
    --tensor_model_parallel_size 4 \
    --expert_tensor_parallel_size 1 \
    --expert_model_parallel_size 4 \
    --moe_grouped_gemm true \
    --moe_shared_expert_overlap true \
    --moe_aux_loss_coeff 1e-3 \
    --micro_batch_size 2 \
    --global_batch_size 32 \
    --recompute_granularity full \
    --recompute_method uniform \
    --recompute_num_layers 1 \
    --max_epochs 2 \
    --cross_entropy_loss_fusion true \
    --lr 1e-5 \
    --use_rslora true \
    --lr_warmup_fraction 0.05 \
    --min_lr 1e-6 \
    --save megatron_output/Iceblink-v2-SFT-4-shared-r128-a16-rslora \
    --eval_interval 20 \
    --save_interval 25 \
    --finetune true \
    --packing true \
    --max_length 10280 \
    --num_workers 8 \
    --dataset_num_proc 8 \
    --no_save_optim true \
    --no_save_rng true \
    --sequence_parallel true \
    --wandb_project Megatron-Air-SFT \
    --wandb_exp_name Megatron-Air-SFT-5-shared-r128-a16-rslora \
    --attention_backend flash
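
For reference, the --dataset file above is just a JSONL file in the standard messages format that MS-Swift accepts for SFT. As a rough illustration (the content is obviously placeholder), each line looks something like:

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Write me a short story about..."}, {"role": "assistant", "content": "Here's a short story..."}]}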

Merging the adapter and converting back to HuggingFace format

Once you've trained your model, you can use the command below to both merge the LoRA adapter into the model and convert it back into HuggingFace format. Replace the mcore_adapters path / output_dir as needed.

CUDA_VISIBLE_DEVICES=0,1 \
USE_HF=True \
swift export \
    --mcore_adapters megatron_output/Iceblink-v2-SFT-4-shared-r128-a16-rslora/v0-xxx-xxx \
    --to_hf true \
    --torch_dtype bfloat16 \
    --output_dir iceblink-adapter/Iceblink-v2-SFT-4-shared-r128-a16-rslora-hf
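
Before moving on, a quick way to sanity-check the exported folder without loading any weights onto a GPU (a minimal sketch; the path is the output_dir above, and trust_remote_code may or may not be needed depending on your transformers version):

# check_export.py - verify the exported HF folder has a loadable config and tokenizer
from transformers import AutoConfig, AutoTokenizer

path = "iceblink-adapter/Iceblink-v2-SFT-4-shared-r128-a16-rslora-hf"  # output_dir from swift export
config = AutoConfig.from_pretrained(path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
print(config.architectures, "| vocab size:", tokenizer.vocab_size)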

Re-adding the MTP layer

Once you convert the model back to HuggingFace format, you'll notice an error saying layer 46 isn't being used. This is the MTP (multi-token prediction) layer for GLM-4.5-Air. If you want to convert the model to GGUF, you'll need to re-add this layer from the original model. Below is a script to do this. It requires no GPU, so if you're renting hardware, do this on a lower-spec machine.

import glob
import json
import shutil

from safetensors import safe_open
from safetensors.torch import save_file

def attach_mtp_layer(base_model_path, merged_model_path, output_path):
    print("Extracting layer 46 from base model...")
    layer_46_weights = {}
    safetensor_files = glob.glob(f"{base_model_path}/*.safetensors")

    for file in safetensor_files:
        with safe_open(file, framework="pt") as f:
            for key in f.keys():
                if "model.layers.46." in key:
                    layer_46_weights[key] = f.get_tensor(key)

    print(f"Found {len(layer_46_weights)} layer 46 tensors")

    # Copy merged model to output
    print(f"Copying merged model to {output_path}...")
    shutil.copytree(merged_model_path, output_path, dirs_exist_ok=True)

    # Load and update the index
    index_path = f"{output_path}/model.safetensors.index.json"
    with open(index_path, 'r') as f:
        index = json.load(f)

    # Find the last shard file
    weight_map = index["weight_map"]
    last_shard = sorted(set(weight_map.values()))[-1]
    shard_path = f"{output_path}/{last_shard}"

    print(f"Adding layer 46 to {last_shard}...")

    # Load existing weights from last shard
    existing_weights = {}
    with safe_open(shard_path, framework="pt") as f:
        for key in f.keys():
            existing_weights[key] = f.get_tensor(key)

    # Add layer 46 weights
    existing_weights.update(layer_46_weights)

    # Update weight_map in index
    for key in layer_46_weights:
        weight_map[key] = last_shard

    # Save updated shard
    save_file(existing_weights, shard_path, metadata={"format": "pt"})

    # Update and save index
    with open(index_path, 'w') as f:
        json.dump(index, f, indent=2)

    # Update config to reflect MTP layer
    config_path = f"{output_path}/config.json"
    with open(config_path, 'r') as f:
        config = json.load(f)

    config['num_nextn_predict_layers'] = 1

    with open(config_path, 'w') as f:
        json.dump(config, f, indent=2)

    print("✅ MTP layer 46 successfully attached!")

if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--base", required=True, help="Base model with layer 46")
    parser.add_argument("--merged", required=True, help="Merged model missing layer 46")
    parser.add_argument("--out", required=True, help="Output path for fixed model")

    args = parser.parse_args()
    attach_mtp_layer(args.base, args.merged, args.out)

Call the script like so (note that --base needs to point at a local copy of the original model's files, since the script globs for local safetensors rather than resolving a HuggingFace repo id):

python fix_glm_mtp.py --base "zai-org/GLM-4.5-Air" --merged "your_model" --out "output_dir"
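
If you want to double-check the patch took, a quick look at the updated index (a trivial sanity check, nothing more):

# check_mtp.py - confirm the layer 46 (MTP) tensors were added to the output model's index
import json
import sys

output_dir = sys.argv[1]  # the --out directory from the script above
with open(f"{output_dir}/model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

mtp_keys = [k for k in weight_map if "model.layers.46." in k]
print(f"Found {len(mtp_keys)} layer 46 entries in the weight map")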

Once converted, you now have a finetuned GLM-4.5-Air model ready to be uploaded, tested, quanted, etc.

Pub: 10 Oct 2025 07:49 UTC

Edit: 14 Oct 2025 18:37 UTC