Generative Models (Text-Only)

These models accept text input and produce text output (e.g., chat completions). They are primarily large language models (LLMs), some with mixture-of-experts (MoE) architectures for scaling.

Example launch command:

# --model-path accepts a Hugging Face ID or a local path
python3 -m sglang.launch_server \
  --model-path meta-llama/Llama-3.2-1B-Instruct \
  --host 0.0.0.0 \
  --port 30000

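Once the server is up, it can be queried through its OpenAI-compatible API. Below is a minimal sketch, assuming the default host/port from the command above (`http://localhost:30000/v1`) and the `openai` Python client; adapt the placeholder prompt and parameters to your setup.

```python
# Chat-completion sketch for the server launched above.
# Assumes the OpenAI-compatible endpoint at http://localhost:30000/v1
# and the `openai` Python client (pip install openai).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct",  # same model the server was launched with
    messages=[{"role": "user", "content": "Give me three uses of a small language model."}],
    temperature=0.7,
    max_tokens=128,
)
print(response.choices[0].message.content)
```
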
| Model Family (Variants) | Example HuggingFace Identifier | Description |
|---|---|---|
| Llama (2, 3.x, 4 series) | meta-llama/Llama-4-Scout-17B-16E-Instruct | Meta's open LLM series, spanning 7B to 400B parameters (Llama 2, 3, and the new Llama 4) with state-of-the-art performance; SGLang provides Llama-4 model-specific optimizations. |
| Mistral (Mixtral, NeMo, Small3) | mistralai/Mistral-7B-Instruct-v0.2 | Open 7B LLM by Mistral AI with strong performance; extended into MoE ("Mixtral") and NeMo Megatron variants for larger scale. |
| Gemma (v1, v2, v3) | google/gemma-3-1b-it | Google's family of efficient multilingual models (1B–27B); Gemma 3 offers a 128K context window, and its larger (4B+) variants support vision input. |
| Qwen (2, 2.5 series, MoE) | Qwen/Qwen2.5-14B-Instruct | Alibaba's Qwen model family (7B to 72B); the Qwen2.5 series improves multilingual capability and includes base, instruct, MoE, and code-tuned variants. |
| DeepSeek (v1, v2, v3/R1) | deepseek-ai/DeepSeek-R1 | Series of advanced reasoning-optimized models (including a 671B MoE) trained with reinforcement learning; top performance on complex reasoning, math, and code tasks. SGLang provides DeepSeek v3/R1 model-specific optimizations. |
| OLMoE (Open MoE) | allenai/OLMoE-1B-7B-0924 | Allen AI's open Mixture-of-Experts model (7B total, 1B active parameters) delivering state-of-the-art results with sparse expert activation. |
| StableLM (3B, 7B) | stabilityai/stablelm-tuned-alpha-7b | StabilityAI's early open-source LLM (3B & 7B) for general text generation; a demonstration model with basic instruction-following ability. |
| Command-R (Cohere) | CohereForAI/c4ai-command-r-v01 | Cohere's open conversational LLM (Command series) optimized for long context, retrieval-augmented generation, and tool use. |
| DBRX (Databricks) | databricks/dbrx-instruct | Databricks' 132B-parameter MoE model (36B active) trained on 12T tokens; competes with GPT-3.5 quality as a fully open foundation model. |
| Grok (xAI) | xai-org/grok-1 | xAI's Grok-1 model, known for its vast size (314B parameters) and high quality; integrated in SGLang for high-performance inference. |
| ChatGLM (GLM-130B family) | THUDM/chatglm2-6b | Zhipu AI's bilingual chat model (6B) excelling at Chinese-English dialogue; fine-tuned for conversational quality and alignment. |
| InternLM 2 (7B, 20B) | internlm/internlm2-7b | Next-gen InternLM (7B and 20B) from SenseTime, offering strong reasoning and ultra-long context support (up to 200K tokens). |
| ExaONE 3 (Korean-English) | LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct | LG AI Research's Korean-English model (7.8B) trained on 8T tokens; provides high-quality bilingual understanding and generation. |
| Baichuan 2 (7B, 13B) | baichuan-inc/Baichuan2-13B-Chat | BaichuanAI's second-generation Chinese-English LLM (7B/13B) with improved performance and an open commercial license. |
| MiniCPM (v3, 4B) | openbmb/MiniCPM3-4B | OpenBMB's series of compact LLMs for edge devices; MiniCPM 3 (4B) achieves GPT-3.5-level results in text tasks. |
| XVERSE (MoE) | xverse/XVERSE-MoE-A36B | Yuanxiang's open MoE LLM (XVERSE-MoE-A36B: 255B total, 36B active) supporting ~40 languages; delivers 100B+ dense-level performance via expert routing. |
| SmolLM (135M–1.7B) | HuggingFaceTB/SmolLM-1.7B | Hugging Face's ultra-small LLM series (135M–1.7B params) offering surprisingly strong results, enabling advanced AI on mobile/edge devices. |
| GLM-4 (Multilingual 9B) | ZhipuAI/glm-4-9b-chat | Zhipu's GLM-4 series (up to 9B parameters): open multilingual models with support for 1M-token context and a multimodal variant (GLM-4V). |
| Phi (Phi-3, Phi-4 series) | microsoft/Phi-4-multimodal-instruct | Microsoft's Phi family of small models (1.3B–5.6B); Phi-4-mini is a high-accuracy text model and Phi-4-multimodal (5.6B) processes text, images, and speech in one compact model. |

Vision-Language Models (VLMs)

These models accept multi-modal inputs (e.g., images and text) and generate text output. They augment language models with visual encoders and require a specific chat template for handling vision prompts.

We need to specify `--chat-template` for VLMs because the chat template provided by the Hugging Face tokenizer only supports text. If you do not specify a vision model's `--chat-template`, the server falls back to Hugging Face's default text-only template and the images won't be passed to the model.
Example launch command:

# --model-path accepts a Hugging Face ID or a local path;
# --chat-template is required for vision models (see note above)
python3 -m sglang.launch_server \
  --model-path meta-llama/Llama-3.2-11B-Vision-Instruct \
  --chat-template llama_3_vision \
  --host 0.0.0.0 \
  --port 30000
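
With the chat template set, images can be sent through the same OpenAI-compatible chat API using `image_url` content parts. A minimal sketch, assuming the server launched above and a placeholder image URL:

```python
# Vision-chat sketch for the server launched above.
# Assumes the OpenAI-compatible endpoint at http://localhost:30000/v1;
# the image URL below is only a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",  # same model the server was launched with
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```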

| Model Family (Variants) | Example HuggingFace Identifier | Chat Template | Description |
|---|---|---|---|
| Llama 3.2 Vision (11B) | meta-llama/Llama-3.2-11B-Vision-Instruct | llama_3_vision | Vision-enabled variant of Llama 3 (11B) that accepts image inputs for visual question answering and other multimodal tasks. |
| LLaVA (v1.5 & v1.6) | e.g. liuhaotian/llava-v1.5-13b | vicuna_v1.1 | Open vision-chat models that add an image encoder to LLaMA/Vicuna (e.g. LLaMA2 13B) for following multimodal instruction prompts. |
| LLaVA-NeXT (8B, 72B) | lmms-lab/llava-next-72b | chatml-llava | Improved LLaVA models (with an 8B Llama3 version and a 72B version) offering enhanced visual instruction-following and accuracy on multimodal benchmarks. |
| LLaVA-OneVision | lmms-lab/llava-onevision-qwen2-7b-ov | chatml-llava | Enhanced LLaVA variant integrating Qwen as the backbone; supports multiple images (and even video frames) as inputs via an OpenAI Vision API-compatible format. |
| Qwen-VL (Qwen2 series) | Qwen/Qwen2.5-VL-7B-Instruct | qwen2-vl | Alibaba's vision-language extension of Qwen; for example, Qwen2.5-VL (7B and larger variants) can analyze and converse about image content. |
| Gemma 3 (Multimodal) | google/gemma-3-4b-it | gemma-it | Gemma 3's larger models (4B, 12B, 27B) accept images (each image encoded as 256 tokens) alongside text in a combined 128K-token context. |
| MiniCPM-V / MiniCPM-o | openbmb/MiniCPM-V-2_6 | minicpmv | MiniCPM-V (2.6, ~8B) supports image inputs, and MiniCPM-o adds audio/video; these multimodal LLMs are optimized for end-side deployment on mobile/edge devices. |
| DeepSeek-VL2 | deepseek-ai/deepseek-vl2 | deepseek-vl2 | Vision-language variant of DeepSeek (with a dedicated image processor), enabling advanced multimodal reasoning on image and text inputs. |
| Janus-Pro (1B, 7B) | deepseek-ai/Janus-Pro-7B | janus-pro | DeepSeek's open-source multimodal model capable of both image understanding and generation. Janus-Pro employs a decoupled architecture with separate visual encoding paths, enhancing performance in both tasks. |

Embedding Models

SGLang serves embedding models with the same efficient runtime it uses for generative models, exposed through its flexible programming interface. This streamlines embedding workloads such as retrieval and semantic search, keeping latency low and making better use of available hardware.

Embedding models are launched with `--is-embedding`; some may also require `--trust-remote-code` and/or `--chat-template`.
Example launch command:

# --model-path accepts a Hugging Face ID or a local path;
# this multimodal embedding model needs its own chat template
python3 -m sglang.launch_server \
  --model-path Alibaba-NLP/gme-Qwen2-VL-2B-Instruct \
  --is-embedding \
  --chat-template gme-qwen2-vl \
  --host 0.0.0.0 \
  --port 30000
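
After launch, text embeddings can be requested through the OpenAI-compatible `/v1/embeddings` endpoint. A minimal sketch, assuming the server launched above and the `openai` client (cross-modal image+text embedding requests may need the server's native API instead):

```python
# Text-embedding sketch for the server launched above.
# Assumes the OpenAI-compatible /v1/embeddings endpoint at http://localhost:30000/v1
# and the `openai` Python client (pip install openai).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

result = client.embeddings.create(
    model="Alibaba-NLP/gme-Qwen2-VL-2B-Instruct",  # same model the server was launched with
    input="SGLang makes embedding serving fast.",
)
vector = result.data[0].embedding
print(len(vector), vector[:5])  # dimensionality and a short preview
```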

| Model Family (Embedding) | Example HuggingFace Identifier | Chat Template | Description |
|---|---|---|---|
| Llama/Mistral based (E5EmbeddingModel) | intfloat/e5-mistral-7b-instruct | N/A | Mistral/Llama-based embedding model fine-tuned for high-quality text embeddings (top-ranked on the MTEB benchmark). |
| GTE (QwenEmbeddingModel) | Alibaba-NLP/gte-Qwen2-7B-instruct | N/A | Alibaba's general text embedding model (7B), achieving state-of-the-art multilingual performance in English and Chinese. |
| GME (MultimodalEmbedModel) | Alibaba-NLP/gme-Qwen2-VL-2B-Instruct | gme-qwen2-vl | Multimodal embedding model (2B) based on Qwen2-VL, encoding image + text into a unified vector space for cross-modal retrieval. |
| CLIP (CLIPEmbeddingModel) | openai/clip-vit-large-patch14-336 | N/A | OpenAI's CLIP model (ViT-L/14) for embedding images (and text) into a joint latent space; widely used for image similarity search. |

Reward Models

These models output a scalar reward score or classification result, often used in reinforcement learning or content moderation tasks.

Reward models are also launched with `--is-embedding`; some may require `--trust-remote-code`.
Example launch command:

# --model-path accepts a Hugging Face ID or a local path;
# --tp-size=4 shards this 72B model across 4 GPUs via tensor parallelism
python3 -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-Math-RM-72B \
  --is-embedding \
  --tp-size=4 \
  --host 0.0.0.0 \
  --port 30000
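
A reward score can then be requested from the running server. The sketch below assumes the server exposes a native `/classify` endpoint that returns the scalar score in an `embedding` field; this is an assumption about the SGLang HTTP API, so verify the endpoint and response schema against your SGLang version before relying on it.

```python
# Reward-scoring sketch for the server launched above.
# Assumption: a native /classify endpoint that returns the scalar reward in
# the response's "embedding" field; verify against your SGLang version.
import requests
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2.5-Math-RM-72B"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

conversation = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "2 + 2 = 4."},
]
# Render the conversation with the model's own chat template before scoring.
prompt = tokenizer.apply_chat_template(conversation, tokenize=False)

result = requests.post(
    "http://localhost:30000/classify",
    json={"model": model_path, "text": [prompt]},
).json()
print(result[0]["embedding"][0])  # scalar reward score (assumed response layout)
```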

| Model Family (Reward) | Example HuggingFace Identifier | Description |
|---|---|---|
| Llama (3.1 Reward / LlamaForSequenceClassification) | Skywork/Skywork-Reward-Llama-3.1-8B-v0.2 | Reward model (preference classifier) based on Llama 3.1 (8B) for scoring and ranking responses for RLHF. |
| Gemma 2 (27B Reward / Gemma2ForSequenceClassification) | Skywork/Skywork-Reward-Gemma-2-27B-v0.2 | Derived from Gemma-2 (27B), this model provides human preference scoring for RLHF and multilingual tasks. |
| InternLM 2 (Reward / InternLM2ForRewardModel) | internlm/internlm2-7b-reward | InternLM 2 (7B)-based reward model used in alignment pipelines to guide outputs toward preferred behavior. |
| Qwen2.5 (Reward - Math / Qwen2ForRewardModel) | Qwen/Qwen2.5-Math-RM-72B | A 72B math-specialized RLHF reward model from the Qwen2.5 series, tuned for evaluating and refining responses. |
| Qwen2.5 (Reward - Sequence / Qwen2ForSequenceClassification) | jason9693/Qwen2.5-1.5B-apeach | A smaller Qwen2.5 variant used for sequence classification, offering an alternative RLHF scoring mechanism. |