/lmg/ recommended models
The number of gigabytes in parentheses is the minimum amount of memory required to run the model at a reasonable quant. For smaller models this is widely considered to be at least Q4_K_M. Large models like DeepSeek remain coherent even at Q1. With more memory you'll be able to fit more context. Using a smaller quant makes the model dumber.
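As a rough rule of thumb (an approximation, not an exact figure), the weights alone take about parameter count × bits per weight / 8 bytes, and the KV cache comes on top of that and grows with context. A quick sketch, assuming Q4_K_M averages around 4.8 bits per weight:

```python
# Back-of-the-envelope GGUF size estimate. Approximate only: different tensors
# use different quant types and there is metadata overhead, so real files vary.
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"12B @ Q4_K_M: ~{approx_weight_gb(12, 4.8):.1f} GB")  # ~7.2 GB, hence the 12GB figure with context on top
print(f"24B @ Q4_K_M: ~{approx_weight_gb(24, 4.8):.1f} GB")  # ~14.4 GB, hence the 16GB figure
```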
Ideally the entire model fits in your VRAM. You can cope by loading part of the model into RAM instead, but it will be much slower.
MoE (mixture of experts) models don't use all of their weights for each token, so they are much faster than a dense model of the same size. Because of this, keeping the experts in RAM instead of VRAM is viable. If you're using llama-server from llama.cpp, it will automatically load the model in the most optimal way for your hardware; the only thing you should set yourself is the context size, which otherwise defaults to 4096.
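A minimal way to do that (the model path, context size and port below are placeholders, not recommendations):

```python
# Start llama-server, overriding only the context size and leaving
# GPU/CPU placement to the defaults described above.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "model.gguf",  # path to your GGUF file
    "-c", "32768",       # context size; the default is only 4096
    "--port", "8080",    # the HTTP API is served here
])  # blocks while the server is running
```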
ERP
- Nemo (12GB) - An excellent starting point for vramlets. Uncensored.
- Mistral Small 3.2 (16GB) - A bit smarter and a bit less horny than Nemo.
- GLM-4.5 Air (50GB) - The long-awaited middle point between Nemo and DeepSeek. Like Nemo, its pretraining data doesn't seem to have been filtered at all, so it knows all kinds of things. Needs a prefill to get around refusals (see the sketch after this list). Don't go below Q2_K_XL. MoE model.
- GLM-4.6 (150GB) / 4.7 - Same as the above but with even more parameters, and thus smarter. 4.7 has better benchmark scores, but some Anons think it's more safetyslopped.
- DeepSeek V3 (200GB) / R1 0528 / V3.1 Terminus - Some of the best models available for local ERP right now. R1 is a thinking model and Terminus is a hybrid thinking model. V3 has repetition issues in long chats, R1 is more resistant to them, and Terminus has almost none but also less variety. Even the smallest quants of DeepSeek, like UD-IQ1_S, are very good.
- Kimi K2 (300GB) - DeepSeek architecture but bigger. Similar unfiltered dataset. Some Anons prefer it over DeepSeek.
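The prefill mentioned for GLM-4.5 Air just means starting the model's reply yourself so it continues instead of refusing. A minimal sketch against llama-server's /completion endpoint; the chat-template tags are illustrative only, use whatever template ships with your GGUF:

```python
# Prefill: build the prompt yourself and end it with the opening of the
# assistant's turn, so the model continues it rather than refusing.
import json
import urllib.request

prompt = (
    "<|user|>\nWrite the next scene.\n"        # illustrative tags, not GLM's exact template
    "<|assistant|>\nSure, here is the scene:"  # the prefill
)
req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=json.dumps({"prompt": prompt, "n_predict": 512}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```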
Programming (including Claude Code, Zed, etc.)
Like most public benchmarks, programming benchmarks have found their way into the training data of most models. In my experience, if you work on anything other than webshit, bigger model = better regardless of what the benchmark scores say. Test them yourself on your own codebases.
- DeepSeek V3.1 Terminus (200GB) - The latest DeepSeek hybrid reasoning MoE model supported by llama.cpp. V3.2 is also out, and if you have the hardware you can run that one instead using vLLM.
- Qwen3 Coder 480B A35B Instruct (200GB) - A non-reasoning MoE model trained specifically for programming and tool calls.
- GLM-4.7 (150GB) - Hybrid reasoning MoE model.
- Devstral 2 2512 (80GB) - Big dense model from Mistral.
- Devstral Small 2505 (24GB) - You can run this one on a single 24GB GPU. According to Mistral it scores 68.0% on SWE-bench Verified, whereas the much bigger Devstral 2 scores 72.2%. That should tell you all you need to know about benchmarks.
- Qwen3 Coder 30B A3B (24GB) - A smaller MoE coding model with 3B active parameters. Very fast: >100 t/s using Q4_K_M on a 4090. Even without a GPU you can get >10 t/s with dual-channel DDR5 RAM.
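These tools reach the model over an HTTP API, and llama-server exposes an OpenAI-compatible one, which is also the easiest way to test a model from a script. A minimal sketch using the openai Python package pointed at a local server (default port assumed; how each editor or agent is pointed at a local endpoint varies, so check its docs):

```python
# Query a local llama-server through its OpenAI-compatible endpoint.
from openai import OpenAI

# llama-server doesn't check the API key, but the client requires one.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

resp = client.chat.completions.create(
    model="local",  # placeholder; the server uses whatever model it has loaded
    messages=[{"role": "user", "content": "Reverse a linked list in C."}],
)
print(resp.choices[0].message.content)
```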
General
- DeepSeek V3.1 Terminus (200GB)
- GLM-4.7 (150GB)
- Gemma 3 27B (24GB) / 12B (12GB) - Supports vision. Better world knowledge than similarly-sized Qwens. Safetyslopped.
- GLM 4.6V - Supports vision. Despite the name, this is a much smaller model than GLM 4.6. Like other GLM models, it can into lewd, so this is your go-to model if you want someone to send dick pics to.
- Qwen3 - Benchmaxxed models with an impressive lack of world knowledge. Good for anything STEM-related though.
- Qwen3 235B A22B 2507 (128GB) Thinking / Non-thinking - A big generalist MoE model. Has better world knowledge than the smaller ones.
- Qwen3 32B (24GB) / 14B (12GB) - Generalist dense models.
- Qwen3 0.6B (1GB) - Honorable mention. 0.6B parameters and yet it's capable of writing coherent sentences. If you want to run an LLM on a 10-year-old phone, this will do. You probably don't want to go smaller than Q8 with this one.