/lmg/ recommended models
The number of gigabytes in parentheses is the minimum amount of memory required to run the model at a reasonable quant. For smaller models this is widely considered to be at least Q4_K_M. Large models like DeepSeek remain coherent even at Q1. With more memory you'll be able to fit more context. Using a smaller quant will make the model dumber.
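As a sanity check for these numbers, the weight file alone is roughly parameter count times bits per weight. A minimal sketch, with the bits-per-weight figures as rough assumptions:

```python
# Rough size of the weights alone at a given quant; leave headroom on top of
# this for the KV cache (context) and everything else running on your system.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8  # GB

print(round(weight_gb(12, 4.8), 1))   # Nemo 12B at ~Q4_K_M (~4.8 bpw, assumed): ~7.2 GB
print(round(weight_gb(671, 1.6), 1))  # DeepSeek at ~UD-IQ1_S (~1.6 bpw, assumed): ~134 GB
```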
Ideally the entire model fits in your VRAM. You can cope by loading parts of the model into RAM instead, but it will be much slower.
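As an example of partial offloading, here's a minimal sketch using llama-cpp-python; the file name is a placeholder and other backends expose an equivalent GPU-layers setting:

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers go to VRAM;
# whatever doesn't fit stays in system RAM and runs much slower.
llm = Llama(
    model_path="Mistral-Nemo-Instruct-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=-1,  # -1 = offload every layer; lower this if you run out of VRAM
    n_ctx=16384,      # more context means a bigger KV cache, which also eats memory
)
print(llm("Hello,", max_tokens=16)["choices"][0]["text"])
```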
MoE (mixture of experts) models don't use all of their weights for each token, so they are much faster than a dense model of the same size. Because of this, loading the experts into RAM instead of VRAM is viable. Check the documentation for your preferred backend for how to set this up.
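Some back-of-the-envelope math on why expert offloading works, assuming roughly 80 GB/s for dual-channel DDR5 and ~4.8 bits per weight at Q4_K_M (both assumptions):

```python
# Each generated token only has to read the *active* parameters from memory,
# so the speed ceiling scales with active size, not total size.
def tps_ceiling(active_params_b: float, bits_per_weight: float, bandwidth_gb_s: float) -> float:
    gb_read_per_token = active_params_b * bits_per_weight / 8
    return bandwidth_gb_s / gb_read_per_token

print(round(tps_ceiling(3.3, 4.8, 80)))  # Qwen3 30B A3B (3.3B active): ~40 t/s ceiling
print(round(tps_ceiling(30, 4.8, 80)))   # 30B dense at the same quant:  ~4 t/s ceiling
```

Real-world speeds are lower, but this is why the MoE models further down stay usable even with weights in system RAM while a dense model of the same size does not.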
ERP
- Nemo (12GB) - An excellent starting point for vramlets. Uncensored.
- Mistral Small 3.2 (16GB) - A bit smarter and a bit less horny than Nemo.
- GLM-4.5 Air (50GB) - The long-awaited middle ground between Nemo and DeepSeek. Like Nemo, its pretraining doesn't seem to have been filtered at all, so it knows all kinds of things. Needs a prefill to get around refusals (see the sketch after this list). Don't go below Q2_K_XL. MoE model.
- GLM-4.5 (150GB) - Same as the above but with even more parameters.
- DeepSeek R1 0528 (200GB) - SOTA for local ERP. It's a reasoning MoE model, but you can skip the reasoning by not using the chat template or by using a prefill in SillyTavern. Even the UD-IQ1_S quant is extremely capable.
- DeepSeek V3 0324 (200GB) - Non-reasoning DeepSeek MoE model, but it has more repetition issues.
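The prefill trick mentioned above, as a minimal sketch: you end the prompt with the beginning of the model's reply, so it has to continue from your words instead of opening with a refusal or a long reasoning block. The turn markers and file name below are placeholders; use the actual chat template of whatever model you're running.

```python
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_gpu_layers=-1, n_ctx=8192)  # placeholder path

prompt = (
    "USER: Write the next scene.\n"       # placeholder turn markers, not a real chat template
    "ASSISTANT: <think>\n\n</think>\n\n"  # empty think block so R1-style models skip reasoning
    "Sure thing."                         # the model has to continue from here
)
print(llm(prompt, max_tokens=256)["choices"][0]["text"])
```

SillyTavern has the same thing built in as the "Start Reply With" field.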
Programming & General
- Kimi K2 (300GB) - Current local SOTA non-reasoning MoE model according to benchmarks.
- DeepSeek V3 0324 (200GB) - Previous local SOTA. Smaller than K2. MoE.
- DeepSeek R1 0528 (200GB) - Current local SOTA reasoning model. MoE.
- Qwen3 - Benchmaxxed models with an impressive lack of world knowledge. Good for anything STEM-related though.
- Qwen3 Coder 480B A35B Instruct (200GB) - A non-reasoning MoE model trained specifically for programming and tool calls. Native 256K context, up to 1M with YaRN. Beats Kimi K2 in programming benchmarks. Close to the performance of some reasoning models.
- Qwen3 Coder 30B A3B (24GB) - A smaller MoE coding model with 3B active parameters. Very fast: >100 t/s using Q4_K_M on a 4090. Even without a GPU you can get >10 t/s with dual-channel DDR5 RAM.
- Qwen3 235B A22B 2507 (128GB) Thinking / Non-thinking - A big generalist MoE model. Has better world knowledge than the smaller ones.
- Qwen3 32B (24GB) / 14B (12GB) - Generalist dense models.
- Qwen3 0.6B (1GB) - Honorable mention. 0.6B parameters and yet it's capable of writing coherent sentences. If you want to run an LLM on a 10-year-old phone, this will do. You probably don't want to go smaller than Q8 with this one.
- Gemma 3 27B (24GB) / 12B (12GB) - Supports vision. Better world knowledge than similarly-sized Qwens. Safetyslopped.
- Qwen2.5 Coder 32B (24GB) / 14B (12GB) - Qwen2.5 finetuned specifically for programming. Qwen3 is better at tool calls.