Ayumi's LLM Role Play & ERP Ranking Archive 2 (Results from 2023-10-04)

This ranking currently employs three metrics to rank the models: the ALC-IQ (Ayumi LLM Character IQ), the ERP Score and the ERP Variety Score. Keep in mind though: this is just an automated benchmark employing rather primitive metrics, and it can't rate the quality of the generated output. It can only cover how well a Large Language Model (LLM) seems to understand character cards (see the ALC-IQ) and whether it can be used to generate lewd responses (see the ERP Score and ERP Variety Score). The ERP benchmark is currently based on only a single character ('Ayumi') and a single fixed erotic setting, which may change in the future. A few details about the testing procedure can be found further down.

The ALC-IQ benchmark works by letting the character answer how much they agree with a statement about their personality in a role playing chat log prompt. The character has to answer the statement they were presented with by writing a number between 1 and 5 (1 - disagree, 2 - slightly disagree, 3 - neutral, 4 - slightly agree, 5 - agree). The result is then compared with the expected answer, and the deviation from it is recorded. For more details refer to the section Ayumi LLM Character IQ - ALC-IQ.
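
To make this concrete, here is a minimal sketch of how such a deviation-based score could be aggregated into a 0-100 ALC-IQ value. It assumes the per-statement deviation is normalized by the maximum possible deviation (4 on a 1-5 scale) and then averaged; the actual aggregation used by the benchmark may differ.

```python
# Minimal sketch of an ALC-IQ-style aggregation (assumed formula, for
# illustration only): normalize each deviation by the largest possible
# deviation on a 1-5 scale and average into a 0-100 score.

def alc_iq(answers, expected):
    """answers/expected: lists of integers in 1..5, one per statement."""
    assert answers and len(answers) == len(expected)
    max_dev = 4.0  # largest possible distance between two answers on a 1-5 scale
    penalties = [abs(a - e) / max_dev for a, e in zip(answers, expected)]
    return 100.0 * (1.0 - sum(penalties) / len(penalties))

# A character that mostly answers as expected scores close to 100:
print(alc_iq([5, 4, 1, 3], [5, 5, 1, 2]))  # 87.5
```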

The ERP Score is the average ratio of lewd words the model generates in a response, which is limited to 100 tokens. For more details refer to the section ERP Score.

The third and rather new metric is the ERP Variety Score. This score measures the range of different lewd words the model generated in the responses collected for the ERP Score. This means models not only need to generate responses with many lewd words, but also with many different lewd words.
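
As a rough illustration of both ERP metrics, here is a minimal sketch assuming a simple whitespace tokenization and a tiny placeholder lewd-word lexicon; the real word list, tokenization and exact normalization used by the benchmark are not reproduced here.

```python
# Rough sketch of the two ERP metrics (illustrative assumptions: whitespace
# tokenization and a placeholder lexicon instead of the real word list).

LEWD_WORDS = {"lewd_word_a", "lewd_word_b"}  # placeholder lexicon

def erp_scores(responses):
    """responses: generated texts, each limited to roughly 100 tokens."""
    ratios = []
    distinct = set()
    for text in responses:
        words = text.lower().split()
        hits = [w for w in words if w in LEWD_WORDS]
        ratios.append(100.0 * len(hits) / max(len(words), 1))
        distinct.update(hits)
    erp_score = sum(ratios) / max(len(ratios), 1)  # average lewd-word ratio per response
    erp_variety = len(distinct)                    # number of different lewd words seen
    return erp_score, erp_variety
```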

Emoji Key

ALC-IQ Emoji  Meaning
⭐🧠 Best of High ALC-IQ Class, shows excellent understanding of the character cards in a role play chat.
🧠 High ALC-IQ Class, shows excellent understanding of character cards in a role play chat.
⭐📖 Best of Good ALC-IQ Class, shows good understanding of character cards in a role play chat.
📖 Good ALC-IQ Class, still gets details of the character cards in a role play chat.
⭐🤔 Best of Lower ALC-IQ Class, has its challenges with details of the character card in a role play chat.
🤔 Lower ALC-IQ Class, certainly challenged with the character card in a role play chat.
⭐🤪 Best of Dumb ALC-IQ Class, very, very challenged to get the character card in a role play chat.
🤪 Dumb ALC-IQ Class, seems to be completely confused or has other issues getting the character card in a role play chat.

ERP Emoji  Meaning
🌶🌶 Very spicy model, capable of generating lots of lewd words.
🌶 Spicy model, capable of generating many lewd words.
👌 Likely an uncensored model, but probably generates short answers or fewer lewd words.
🧊 Very possibly censored/SFW-aligned model.
❄ The ERP word variety of this model is great; it shows a creative variety of lewd word usage.
✳ This model still shows knowledge of various lewd words, but there are better ones.
♻ This model has limited knowledge and usage of lewd words; it likely repeats the same words across regenerations.

Rank Symbol  Meaning
🥇 🥈 🥉 These medals are assigned broadly to the top ranked models, partially to give an impression of how well these might work for you and partially to signal that there is no single definitive best model.
🎓 Top ALC-IQ ranks get this one.
🏆 Top ERP ranks get this one.

3B-7B Models

2023-11-01 Benchmark Re-Run V3: I am currently running a completely new benchmark. Until I get around to updating this page, you can find the most recent results here: http://ayumi.m8geil.de/ayumi_bench_v3_results.html

Rank | ALC-IQ Rank | ERP Rank | ALC-IQ | ERP Score | ERP Var Score | Model
πŸ₯‡ 1 πŸŽ“ 1 πŸ† 3 ⭐🧠 95.33 🌢🌢 30.25 ❄ 131 πŸ₯‡πŸŽ“πŸ† Mistral Claude Chat 7B Q5_K_M
πŸ₯‡ 2 πŸŽ“ 4 πŸ† 13 ⭐🧠 91.88 🌢 21.54 ❄ 152 πŸ₯‡πŸŽ“πŸ† Mistral ClaudeLimaRP v3 7B Q5_K_M
πŸ₯‡ 3 πŸŽ“ 5 πŸ† 17 ⭐🧠 90.73 🌢🌢 24.09 ❄ 130 πŸ₯‡πŸŽ“πŸ† Mistral RP 0.1 7B Q5_K_M
πŸ₯‡ 4 πŸŽ“ 6 πŸ† 16 ⭐🧠 87.04 🌢🌢 25.56 ❄ 128 πŸ₯‡πŸŽ“πŸ† Synthia v1.3 7B Q5_K_M
πŸ₯‡ 5 πŸŽ“ 11 πŸ† 12 ⭐🧠 84.33 🌢🌢 24.13 ❄ 134 πŸ₯‡πŸŽ“πŸ† Samantha Mistral 7B Q5_K_M
πŸ₯‡ 6 πŸŽ“ 2 πŸ† 27 ⭐🧠 92.68 🌢 22.40 ❄ 130 πŸ₯‡πŸŽ“πŸ† Mistral v0.1 7B Q5_K_M
πŸ₯‡ 7 πŸŽ“ 9 πŸ† 21 ⭐🧠 84.79 🌢🌢 27.76 ✳ 120 πŸ₯‡πŸŽ“πŸ† Kuchiki 7B Q5_K_M
πŸ₯‡ 8 πŸŽ“ 7 πŸ† 26 ⭐🧠 86.75 🌢🌢 23.93 ❄ 128 πŸ₯‡πŸŽ“πŸ† PetrolLM 7B Q5_K_M
πŸ₯‡ 9 29 πŸ† 11 ⭐🧠 81.51 🌢🌢 25.50 ❄ 132 πŸ₯‡πŸ† Zaraxls 7B Q5_K_M
πŸ₯‡ 10 πŸŽ“ 14 πŸ† 31 ⭐🧠 83.47 🌢 19.64 ❄ 134 πŸ₯‡πŸŽ“πŸ† Zarafusionex 1.2 7B Q5_K_M
πŸ₯‡ 11 πŸŽ“ 13 36 ⭐🧠 83.53 🌢 22.97 ❄ 122 πŸ₯‡πŸŽ“ Hermes Limarp 7B Q5_K_M
πŸ₯‡ 12 πŸŽ“ 17 35 ⭐🧠 82.95 🌢🌢 27.63 ✳ 114 πŸ₯‡πŸŽ“ Zarablend 7B Q5_K_M
πŸ₯‡ 13 46 πŸ† 5 πŸ“– 79.55 🌢🌢 24.99 ❄ 149 πŸ₯‡πŸ† MistRP v1.1 7B Q8_0
πŸ₯‡ 14 33 πŸ† 22 🧠 81.22 🌢 21.21 ❄ 141 πŸ₯‡πŸ† Zarafusionex 7B Q5_K_M
πŸ₯‡ 15 πŸŽ“ 10 50 ⭐🧠 84.39 🌢🌢 30.30 102 πŸ₯‡πŸŽ“ Zarablend 1.1 7B Q5_K_M
πŸ₯‡ 16 32 πŸ† 24 🧠 81.28 🌢 22.55 ❄ 134 πŸ₯‡πŸ† Zarablendex VQ 7B (link broken) Q5_K_M
πŸ₯‡ 17 πŸŽ“ 3 62 ⭐🧠 92.45 🌢 19.75 ✳ 117 πŸ₯‡πŸŽ“ Kimiko Mistral 7B Q5_K_M
πŸ₯ˆ 18 52 πŸ† 4 πŸ“– 78.80 🌢🌢 26.43 ❄ 140 πŸ₯ˆπŸ† Mistral Instruct v0.1 7B Q5_K_M
πŸ₯ˆ 19 19 46 ⭐🧠 82.43 🌢 19.14 ❄ 130 πŸ₯ˆ Zarafusionex 1.1 7B Q5_K_M
πŸ₯ˆ 20 πŸŽ“ 8 67 ⭐🧠 84.91 🌢 21.01 ✳ 113 πŸ₯ˆπŸŽ“ Hermes LimaRP 7B Q5_K_M
πŸ₯ˆ 21 20 54 ⭐🧠 82.37 🌢 20.15 ✳ 120 πŸ₯ˆ Zarafusionix 7B Q5_K_M
πŸ₯ˆ 22 24 56 ⭐🧠 81.91 🌢 19.25 ✳ 120 πŸ₯ˆ Krakowiak 7B Q4_K_M
πŸ₯ˆ 23 28 55 ⭐🧠 81.74 🌢🌢 26.60 104 πŸ₯ˆ Zarablend M 7B Q5_K_M
πŸ₯ˆ 24 34 48 🧠 80.82 🌢 20.38 ❄ 122 πŸ₯ˆ Vigogne 2 7B Q5_K_M
πŸ₯ˆ 25 22 65 ⭐🧠 82.20 🌢🌢 26.39 101 πŸ₯ˆ Kuchiki 1.1 7B Q5_K_M
πŸ₯ˆ 26 44 41 🧠 79.72 🌢🌢 27.21 ✳ 111 πŸ₯ˆ Zarablend MX 7B Q5_K_M
πŸ₯ˆ 27 59 πŸ† 29 πŸ“– 77.36 🌢 20.83 ❄ 136 πŸ₯ˆπŸ† Zaramix 7B Q5_K_M
πŸ₯ˆ 28 63 πŸ† 25 πŸ“– 77.07 🌢 22.57 ❄ 133 πŸ₯ˆπŸ† LLaMA-2 Guanaco 7B Q5_1
πŸ₯ˆ 29 55 37 πŸ“– 78.23 🌢 21.98 ❄ 123 πŸ₯ˆ AstraMix 7B Q5_K_M
πŸ₯ˆ 30 πŸŽ“ 12 93 ⭐🧠 83.64 🧊 13.36 ✳ 120 πŸ₯ˆπŸŽ“ LLaMA 2 Monika V0.3B 7B Q5_1
πŸ₯ˆ 31 41 60 🧠 79.84 🌢 22.22 ✳ 112 πŸ₯ˆ Medusa 1.1 7B Q5_K_M
πŸ₯ˆ 32 40 63 🧠 80.13 🌢 22.31 ✳ 112 πŸ₯ˆ Hermes Kimiko 7B Q5_K_M
πŸ₯ˆ 33 45 66 πŸ“– 79.67 🌢 19.01 ✳ 119 πŸ₯ˆ Typly Pigeon 7B Q4_K_M
πŸ₯ˆ 34 πŸŽ“ 15 107 ⭐🧠 83.12 πŸ‘Œ 15.63 ✳ 109 πŸ₯ˆπŸŽ“ LLaMA-2 7B Q8_0
πŸ₯ˆ 35 37 83 🧠 80.24 🌢🌢 23.30 100 πŸ₯ˆ Zaraxe 7B Q5_K_M
πŸ₯ˆ 36 48 70 πŸ“– 79.32 🌢 19.22 ✳ 118 πŸ₯ˆ Nous Hermes 7B Q5_K_M
πŸ₯ˆ 37 25 98 ⭐🧠 81.80 πŸ‘Œ 16.05 ✳ 113 πŸ₯ˆ Dugong 7B Q5_1
πŸ₯ˆ 38 68 49 πŸ“– 75.63 🌢 19.27 ❄ 123 πŸ₯ˆ LLaMA-2 Coder 7B Q5_K_M
πŸ₯ˆ 39 94 πŸ† 18 πŸ€” 68.78 🌢🌢 25.34 ❄ 128 πŸ₯ˆπŸ† Hermesboros Limarp 7B Q5_K_M
πŸ₯ˆ 40 99 πŸ† 14 πŸ€” 67.40 🌢 22.80 ❄ 142 πŸ₯ˆπŸ† Vicuna 1.3 7B Q8_0
πŸ₯ˆ 41 102 πŸ† 15 πŸ€” 66.30 🌢🌢 26.00 ❄ 128 πŸ₯ˆπŸ† Airoboros GPT4 1.4.1 7B Q5_K_M
πŸ₯ˆ 42 109 πŸ† 7 πŸ€” 63.59 🌢🌢 25.59 ❄ 137 πŸ₯ˆπŸ† Samantha Mistral Instruct 7B Q5_K_M
πŸ₯‰ 43 54 73 πŸ“– 78.63 🌢 19.61 ✳ 114 πŸ₯‰ LosslessMegaCoder Mini 7B Q5_K_M
πŸ₯‰ 44 110 πŸ† 6 πŸ€” 63.48 🌢🌢 28.11 ❄ 131 πŸ₯‰πŸ† Airoboros GPT4 1.2 7B Q4_K_M
πŸ₯‰ 45 67 58 πŸ“– 75.69 🌢 21.05 ✳ 116 πŸ₯‰ Airoboros 2.1 7B Q5_K_M
πŸ₯‰ 46 30 104 ⭐🧠 81.51 πŸ‘Œ 16.64 ✳ 109 πŸ₯‰ LLaMA 2 7B Q5_1
πŸ₯‰ 47 31 103 🧠 81.34 🧊 11.74 ✳ 118 πŸ₯‰ Tsukasa Limarp 7B Q5_K_M
πŸ₯‰ 48 82 42 πŸ“– 72.24 🌢 20.90 ❄ 126 πŸ₯‰ Orca Mini v3 7B Q5_K_M
πŸ₯‰ 49 91 πŸ† 32 πŸ€” 69.30 🌢🌢 23.61 ❄ 122 πŸ₯‰πŸ† Wizard Vicuna Uncensored 7B Q5_K_M
πŸ₯‰ 50 80 53 πŸ“– 72.58 🌢 21.15 ✳ 118 πŸ₯‰ Marcoroni 7B Q5_K_M
πŸ₯‰ 51 18 129 ⭐🧠 82.78 πŸ‘Œ 13.67 99 πŸ₯‰ LLaMA-2 PeanutButter v19 R8 7B Q5_K_M
πŸ₯‰ 52 103 πŸ† 28 πŸ€” 65.73 🌢🌢 28.20 ✳ 117 πŸ₯‰πŸ† Frank Uncensored 7B Q5_K_M
πŸ₯‰ 53 111 πŸ† 20 πŸ€” 63.31 🌢 20.40 ❄ 152 πŸ₯‰πŸ† OpenBuddy OpenLLaMA v5 7B Q3_K
πŸ₯‰ 54 60 82 πŸ“– 77.25 🌢 21.40 106 πŸ₯‰ Airoboros 2.2 7B Q5_K_M
πŸ₯‰ 55 49 96 πŸ“– 79.15 πŸ‘Œ 17.32 ✳ 109 πŸ₯‰ LlongOrca 16K 7B Q5_K_M (ext. context maybe broken)
πŸ₯‰ 56 121 πŸ† 10 πŸ€” 61.52 🌢🌢 30.29 ❄ 127 πŸ₯‰πŸ† Airoboros GPT4 7B Q4_K_M
πŸ₯‰ 57 23 128 ⭐🧠 82.14 πŸ‘Œ 15.95 93 πŸ₯‰ Befenghuang Vigogne 2 Chat 7B Q5_K_S
πŸ₯‰ 58 105 πŸ† 30 πŸ€” 65.26 🌢🌢 25.24 ✳ 120 πŸ₯‰πŸ† Airoboros GPT4 1.4.1 Limarp 7B Q5_K_M
πŸ₯‰ 59 81 59 πŸ“– 72.52 🌢🌢 23.94 ✳ 109 πŸ₯‰ Spicyboros 2.2 7B Q5_K_M
60 88 51 πŸ€” 71.77 🌢 19.62 ✳ 121 Ganchengguang Yoko Japanse v0 7B Q5_K_S
61 42 109 🧠 79.72 🌢 18.09 96 Airoboros L2 2.2.1 7B Q5_K_M
62 118 πŸ† 19 πŸ€” 62.21 🌢 23.23 ❄ 132 πŸ† Guanaco 7B Q5_K_M
63 101 40 πŸ€” 66.71 🌢🌢 23.73 ✳ 118 WizardLM V1.0 Uncensored 7B Q5_K_M
64 26 130 ⭐🧠 81.80 πŸ‘Œ 14.08 96 Jindo Instruct Pre-Alpha 7B Q5_K_M
65 87 57 πŸ“– 71.83 πŸ‘Œ 16.87 ❄ 130 LLongMA-2 Storysummarizer 7B Q5_K_M (ext. context maybe broken)
66 43 117 🧠 79.72 πŸ‘Œ 17.65 95 Saiga 2 7B Q5_K
67 93 61 πŸ€” 68.95 🌢🌢 28.27 95 Xwin LM V0.1 7B Q5_K_M
68 69 90 πŸ“– 75.58 🌢 18.86 ✳ 109 MythoChizuru Mini 7B Q4_K_M
69 117 πŸ† 33 πŸ€” 62.38 🌢🌢 28.04 ✳ 115 πŸ† Airoboros GPT4 1.3 7B Q4_K_M
70 83 74 πŸ“– 72.06 🌢 20.98 ✳ 111 Saiga 7B Q5_1
71 πŸŽ“ 16 155 ⭐🧠 83.06 🧊 5.05 β™» 73 πŸŽ“ MedLLama 7B Q5_K_M
72 89 68 πŸ€” 71.66 🌢🌢 24.71 104 Luna AI LLaMA-2 Uncensored 7B Q5_K_M
73 145 πŸ† 1 πŸ€ͺ 53.80 🌢🌢 28.09 ❄ 146 πŸ† Marx 3B Q5_1
74 35 134 🧠 80.47 πŸ‘Œ 15.28 89 LLaMA-2 LoRA Assemble 7B Q5_K_M
75 21 151 ⭐🧠 82.26 🧊 5.96 β™» 76 LLaMA 2 Delphi v0.2e 7B (link broken) Q5_1
76 146 πŸ† 2 πŸ€ͺ 53.80 🌢🌢 28.09 ❄ 146 πŸ† EverythingLM 3B Q5_1
77 57 110 πŸ“– 77.94 πŸ‘Œ 13.90 ✳ 110 Beluga Limarp 7B Q5_K_M
78 47 123 πŸ“– 79.38 πŸ‘Œ 15.13 100 Kimiko 7B Q5_K_M
79 84 79 πŸ“– 72.00 πŸ‘Œ 16.93 ✳ 120 Pygmalion 7B Q5_1
80 66 102 πŸ“– 76.15 πŸ‘Œ 14.98 ✳ 112 LLaMA-2 Instruct 32K 7B Q5_K_M (ext. context maybe broken)
81 38 136 🧠 80.13 πŸ‘Œ 13.84 93 LLaMA-2 Mistral 7B Q5_K_M
82 79 88 πŸ“– 72.81 🌢🌢 25.14 93 WizardMath V1.0 7B Q5_K_M
83 86 80 πŸ“– 71.83 🌢🌢 29.93 β™» 81 Airoboros GPT4 2.0 LLaMA-2 7B Q5_K_M
84 148 πŸ† 9 πŸ€ͺ 53.63 🌢🌢 25.02 ❄ 139 πŸ† Open LLaMA Open Instruct 7B Q8_0
85 77 95 πŸ“– 73.56 🌢 18.64 107 MythoLogic Mini 7B Q5_K_M
86 73 100 πŸ“– 74.65 πŸ‘Œ 17.27 107 Pygmalion 2 7B Q5_K_M
87 27 156 ⭐🧠 81.80 🧊 6.10 β™» 68 LLaMA-2 Chat 7B Q5_1
88 122 44 πŸ€” 61.46 πŸ‘Œ 17.13 ❄ 139 Nous Yarn 128K 7B Q5_K_M (ext. context maybe broken)
89 78 97 πŸ“– 72.87 🌢 23.16 89 Luna AI 7B Q8_0
90 50 131 πŸ“– 79.03 πŸ‘Œ 16.06 89 ELYZA Jp LLaMA-2 7B Q5_K_M
91 64 115 πŸ“– 76.90 πŸ‘Œ 17.85 93 Medusa 1.3 7B Q5_K_M
92 96 78 πŸ€” 68.32 πŸ‘Œ 17.45 ✳ 118 LLaMA 7B Q8_0
93 56 126 πŸ“– 78.11 🧊 11.82 105 Tulpar Limarp 7B Q5_K_M
94 97 77 πŸ€” 68.03 🌢 19.27 ✳ 114 Pygmalion Vicuna 7B Q5_K_M
95 130 39 πŸ€ͺ 60.08 πŸ‘Œ 17.07 ❄ 142 OpenLLaMA v2 7B Q5_K_M
96 61 125 πŸ“– 77.19 πŸ‘Œ 17.68 β™» 82 ELYZA Jp LLaMA-2 Instruct 7B Q5_K_M
97 72 112 πŸ“– 74.83 πŸ‘Œ 15.80 105 StableBeluga 7B Q5_K_M
98 39 154 🧠 80.13 🧊 6.42 β™» 70 Photolens LLaMA 2 Langchain Chat 7B Q5_1
99 36 159 🧠 80.36 🧊 5.14 β™» 65 LLaMA-2 Chat Code Cherry Pop 7B Q5_K_M
100 162 πŸ† 8 πŸ€ͺ 52.07 🌢🌢 23.99 ❄ 148 πŸ† OpenLLaMA Open Instruct v2 7B Q8_0
101 126 52 πŸ€” 60.66 🌢🌢 25.23 ✳ 110 Airoboros GPT4 1.4 7B Q5_K_M
102 132 45 πŸ€ͺ 59.91 🌢 20.93 ❄ 123 CodeLLaMA 7B Q5_K_M
103 151 πŸ† 23 πŸ€ͺ 53.40 🌢🌢 23.31 ❄ 131 πŸ† Puma 3B Q5_1
104 139 38 πŸ€ͺ 57.32 πŸ‘Œ 17.20 ❄ 146 AlpacaCielo 2 8K 7B Q5_K_M (ext. context maybe broken)
105 100 85 πŸ€” 67.22 πŸ‘Œ 15.73 ✳ 120 Nous Yarn 64K 7B Q5_K_M
106 144 πŸ† 34 πŸ€ͺ 54.26 🌢 19.70 ❄ 132 πŸ† Deacon 3B Q5_0
107 75 122 πŸ“– 74.31 πŸ‘Œ 16.95 96 GOAT Community 7B Q5_1
108 107 84 πŸ€” 65.21 🌢🌢 27.30 β™» 86 Lunaboros 7B Q4_K_M
109 58 144 πŸ“– 77.48 🧊 11.91 β™» 80 LLaMA-2 32K 7B Q5_K_M (ext. context maybe broken)
110 106 87 πŸ€” 65.21 🌢🌢 26.56 β™» 88 Lunaboros LimaRP 7B Q4_K_M
111 143 43 πŸ€ͺ 54.78 🌢 21.74 ✳ 121 OpenLLaMA 7B Q5_K_M
112 98 99 πŸ€” 67.80 🌢🌢 27.62 β™» 66 Airoboros GPT4 2.0 7B Q5_K_M
113 71 133 πŸ“– 75.52 πŸ‘Œ 15.09 93 Tulpar v0 7B Q4_0
114 62 145 πŸ“– 77.13 🧊 10.80 β™» 84 Tsukasa 7B Q5_K_M
115 108 91 πŸ€” 64.86 🌢🌢 24.63 β™» 88 Chinese Alpaca 2 7B Q5_K_S
116 51 160 πŸ“– 79.03 🧊 4.15 β™» 60 MedLLaMA-2 Chat 7B Q5_K_S
117 76 132 πŸ“– 73.56 πŸ‘Œ 15.32 92 Guanaco Uncensored 7B Q5_K_M
118 92 113 πŸ€” 69.12 πŸ‘Œ 16.32 104 Metharme 7B Q5_1
119 53 161 πŸ“– 78.74 🧊 5.81 β™» 46 Trurl 2 Polish 7B Q5_1
120 65 152 πŸ“– 76.56 🧊 6.02 β™» 76 Merak v2 7B Q5_K_M
121 153 47 πŸ€ͺ 53.17 🌢 18.76 ❄ 129 Mamba GPT v4 3B Q5_1
122 133 72 πŸ€ͺ 59.10 πŸ‘Œ 16.39 ❄ 123 Hermes LLongMA 2 8K 7B Q5_1 (ext. context maybe broken)
123 136 69 πŸ€ͺ 58.41 🌢 18.47 ✳ 119 Leo Hessianai Chat 7B Q5_K_M
124 70 150 πŸ“– 75.52 🧊 9.05 β™» 72 Vicuna v1.5 16K 7B Q5_K_M (ext. context maybe broken)
125 112 101 πŸ€” 63.19 🌢🌢 25.41 β™» 69 Airoboros GPT4 m2.0 7B Q5_K_M
126 119 94 πŸ€” 62.21 🌢🌢 24.39 β™» 85 Airoboros GPT4 m2.0 LLaMA-2 7B Q5_K_M
127 135 76 πŸ€ͺ 58.76 🌢🌢 26.48 93 WizardLM Uncensored 7B Q5_K_M
128 104 114 πŸ€” 65.44 πŸ‘Œ 15.84 104 ALMA Pretrain 7B Q5_K_M
129 85 141 πŸ“– 71.95 πŸ‘Œ 14.51 β™» 75 Chinese LLaMA-2 7B Q5_K
130 131 92 πŸ€ͺ 60.02 πŸ‘Œ 17.83 ✳ 109 Vicuna CoT 7B Q5_K_M
131 74 162 πŸ“– 74.31 🧊 4.48 β™» 49 LLaMA-2 Silverlin. Verilog 7B Q4_K_M
132 95 137 πŸ€” 68.49 🧊 13.02 94 LLaMA-2 Galleon 7B Q5_K_M
133 147 75 πŸ€ͺ 53.69 πŸ‘Œ 14.68 ❄ 128 Marx V2 3B Q4_1
134 90 147 πŸ€” 71.54 🧊 8.45 β™» 78 StableBeluga Samantha V3 7B Q4_0
135 155 71 πŸ€ͺ 53.00 🌢 19.39 ✳ 116 OpenLLaMA 3B Q5_1
136 149 81 πŸ€ͺ 53.57 🧊 11.93 ❄ 130 OpenLLaMA v2 3B Q5_0
137 125 111 πŸ€” 60.89 🌢 22.61 β™» 72 MAmmoTH 7B Q5_K_M
138 165 64 πŸ€ͺ 51.15 πŸ‘Œ 17.84 ✳ 121 OpenBuddy OpenLLaMA v10 3B Q5_0
139 120 118 πŸ€” 61.69 πŸ‘Œ 17.11 97 Tulu 7B Q5_K_M
140 113 127 πŸ€” 62.96 πŸ‘Œ 13.98 98 WizardCoder Python V1.0 7B Q5_K_M
141 150 89 πŸ€ͺ 53.46 πŸ‘Œ 17.04 ✳ 117 Griffin 3B (link broken) Q4_1
142 124 124 πŸ€” 61.12 🧊 11.78 107 CodeLLaMA Instruct 7B Q5_K_M
143 140 105 πŸ€ͺ 57.09 πŸ‘Œ 17.44 103 CodeLLaMA Python 7B Q5_K_M
144 129 120 πŸ€ͺ 60.25 🧊 13.56 108 Gorilla 7B Q5_K_M
145 160 86 πŸ€ͺ 52.30 🌢 21.09 106 WizardVicuna Uncens Instr PL 3B Q5_1
146 134 121 πŸ€ͺ 58.99 🧊 11.40 ✳ 110 LLaMA-2 KO Chat 7B Q5_1
147 114 149 πŸ€” 62.79 🧊 6.70 β™» 74 Pandalyst V1.1 7B Q5_K_M
148 142 119 πŸ€ͺ 56.22 🧊 12.27 ✳ 110 Mamba GPT v2 3B Q5_1
149 116 153 πŸ€” 62.44 🧊 6.43 β™» 72 LLaMA-2 KO 7B Q5_1
150 157 106 πŸ€ͺ 52.94 πŸ‘Œ 18.05 98 Open LLaMA 7B Q5_1
151 115 158 πŸ€” 62.67 🧊 5.93 β™» 65 Based 7B Q5_K_M
152 137 135 πŸ€ͺ 58.12 🧊 10.29 103 PMC LLaMA 7B Q4_0
153 161 108 πŸ€ͺ 52.19 🧊 11.77 ✳ 115 Alpachino Baichuan Instruction 7B Q5_0
154 138 140 πŸ€ͺ 57.66 🧊 13.01 β™» 85 LMSYS Vicuna 1.5 7B Q5_1
155 141 138 πŸ€ͺ 56.85 πŸ‘Œ 15.00 β™» 88 Vicuna v1.5 7B Q5_K_M
156 128 157 πŸ€” 60.43 🧊 6.49 β™» 58 Dolphin LLaMA-2 7B Q5_K_M
157 123 163 πŸ€” 61.18 🧊 2.81 β™» 52 Scarlett 7B Q5_K_M
158 166 116 πŸ€ͺ 50.81 🧊 10.78 ✳ 114 Baichuan 7B Q5_1
159 127 165 πŸ€” 60.60 🧊 3.90 β™» 45 Tulu Uncensored TV Alpaca 7B (link broken) Q5_1
160 152 143 πŸ€ͺ 53.34 πŸ‘Œ 13.97 β™» 75 Orca Mini 3B Q5_1
161 154 142 πŸ€ͺ 53.11 🧊 13.15 β™» 78 Komt LLaMA-2 Chat 7B Q5_K_M
162 156 146 πŸ€ͺ 52.94 🧊 8.64 β™» 80 OpenLLaMA Odia 3B Q5_1
163 164 139 πŸ€ͺ 51.50 🧊 12.99 89 LLaMA Deus v3 7B Q4_0
164 158 148 πŸ€ͺ 52.88 🧊 8.69 β™» 73 Open Cabrita 3B Q5_1
165 163 164 πŸ€ͺ 51.50 🧊 1.97 β™» 47 WizardLM 7B Q5_K_M
166 159 170 πŸ€ͺ 52.42 🧊 0.00 β™» 0 LLongMA 2 7B Q5_1 (ext. context maybe broken)
167 167 167 πŸ€ͺ 47.58 🧊 1.14 β™» 10 TinyLLaMA Chat v0.2 1B Q5_K_M
168 168 168 πŸ€ͺ 47.58 🧊 0.00 β™» 0 PY007 TinyLLaMA Chat v0.2 1B Q8_0
169 171 166 πŸ€ͺ 42.28 🧊 1.64 β™» 16 ToolLLaMA 7B Q5_1
170 169 169 πŸ€ͺ 47.58 🧊 0.00 β™» 0 LongChat v1.5 32K 7B Q5_K_M (ext. context maybe broken)
171 170 171 πŸ€ͺ 47.58 🧊 0.00 β™» 0 LMSYS LongChat 1.5 32k 7B Q5_1 (ext. context maybe broken)

13B Models

2023-11-01 Benchmark Re-Run V3: I am currently running a completely new benchmark. Until I get around to updating this page, you can find the most recent results here: http://ayumi.m8geil.de/ayumi_bench_v3_results.html

Rank | ALC-IQ Rank | ERP Rank | ALC-IQ | ERP Score | ERP Var Score | Model
πŸ₯‡ 1 πŸŽ“ 2 πŸ† 13 ⭐🧠 93.20 🌢🌢 26.59 ❄ 147 πŸ₯‡πŸŽ“πŸ† MLewdBoros LRPSGPT 2Char 13B Q5_K_M
πŸ₯‡ 2 πŸŽ“ 1 πŸ† 20 ⭐🧠 93.43 🌢🌢 27.08 ❄ 140 πŸ₯‡πŸŽ“πŸ† Athena v1 13B Q5_K_M
πŸ₯‡ 3 πŸŽ“ 16 πŸ† 5 ⭐🧠 91.88 🌢🌢 27.82 ❄ 149 πŸ₯‡πŸŽ“πŸ† MLewdBoros 13B Q5_K_M
πŸ₯‡ 4 πŸŽ“ 5 πŸ† 27 ⭐🧠 92.97 🌢🌢 26.10 ❄ 141 πŸ₯‡πŸŽ“πŸ† Airoboros 2.1 13B Q5_K_M
πŸ₯‡ 5 πŸŽ“ 20 πŸ† 17 ⭐🧠 91.36 🌢🌢 29.75 ❄ 136 πŸ₯‡πŸŽ“πŸ† Pygmalion 2 SuperCOT 13B Q5_K_M
πŸ₯‡ 6 πŸŽ“ 14 πŸ† 32 ⭐🧠 92.22 🌢 25.59 ❄ 145 πŸ₯‡πŸŽ“πŸ† ReMM Mistral 13B Q5_K_M
πŸ₯‡ 7 38 πŸ† 4 ⭐🧠 89.98 🌢🌢 28.91 ❄ 145 πŸ₯‡πŸ† Slerpeno 13B Q5_K_M
πŸ₯‡ 8 πŸŽ“ 12 πŸ† 39 ⭐🧠 92.51 🌢🌢 26.47 ❄ 133 πŸ₯‡πŸŽ“πŸ† Amethyst 13B Q5_K_M
πŸ₯‡ 9 πŸŽ“ 22 πŸ† 31 ⭐🧠 91.07 🌢🌢 28.20 ❄ 133 πŸ₯‡πŸŽ“πŸ† ReMM v2 13B Q5_K_M
πŸ₯‡ 10 πŸŽ“ 13 πŸ† 43 ⭐🧠 92.51 🌢🌢 26.96 ✳ 129 πŸ₯‡πŸŽ“πŸ† Amethyst Mistral 13B Q4_K_S
πŸ₯‡ 11 πŸŽ“ 4 55 ⭐🧠 93.03 🌢 24.94 ❄ 136 πŸ₯‡πŸŽ“ MythoMix 13B Q5_K_M
πŸ₯‡ 12 32 πŸ† 25 ⭐🧠 90.26 🌢🌢 29.09 ❄ 134 πŸ₯‡πŸ† AppleSauce 13B Q5_K_M
πŸ₯‡ 13 πŸŽ“ 18 πŸ† 46 ⭐🧠 91.53 🌢🌢 26.83 ✳ 127 πŸ₯‡πŸŽ“πŸ† MythoMakiseMerged 13B Q5_K_M
πŸ₯‡ 14 45 πŸ† 16 ⭐🧠 89.52 🌢🌢 26.95 ❄ 144 πŸ₯‡πŸ† MLewd V2-1 015 13B Q4_K_S
πŸ₯‡ 15 40 πŸ† 22 ⭐🧠 89.92 🌢🌢 25.69 ❄ 156 πŸ₯‡πŸ† Spicyboros 2.2_2 13B Q5_K_M
πŸ₯‡ 16 31 πŸ† 33 ⭐🧠 90.32 🌢🌢 26.75 ❄ 136 πŸ₯‡πŸ† Airoboros Creative lmoe 13B Q5_K_M
πŸ₯‡ 17 47 πŸ† 21 🧠 89.34 🌢🌢 27.02 ❄ 139 πŸ₯‡πŸ† Athena v2 13B Q5_K_M
πŸ₯‡ 18 29 πŸ† 44 ⭐🧠 90.38 🌢🌢 28.19 ✳ 126 πŸ₯‡πŸ† ReMM v2.2 13B Q5_K_M
πŸ₯‡ 19 πŸŽ“ 27 πŸ† 48 ⭐🧠 90.44 🌢🌢 26.31 ✳ 131 πŸ₯‡πŸŽ“πŸ† OpenRP 13B Q5_K_M
πŸ₯‡ 20 28 πŸ† 47 ⭐🧠 90.44 🌢 23.88 ❄ 145 πŸ₯‡πŸ† Redmond Puffin 13B Q5_1
πŸ₯‡ 21 65 πŸ† 6 🧠 88.65 🌢🌢 28.26 ❄ 147 πŸ₯‡πŸ† MLewd v2-2 13B Q5_K_M
πŸ₯‡ 22 πŸŽ“ 17 66 ⭐🧠 91.65 🌢🌢 28.64 ✳ 119 πŸ₯‡πŸŽ“ ReMM 0.65 SLERP 13B Q5_K_M
πŸ₯‡ 23 πŸŽ“ 24 58 ⭐🧠 90.90 🌢🌢 27.80 ✳ 124 πŸ₯‡πŸŽ“ ReMM v2.1 13B Q5_K_M
πŸ₯‡ 24 πŸŽ“ 6 80 ⭐🧠 92.86 🌢🌢 26.11 ✳ 122 πŸ₯‡πŸŽ“ MythoMax Kimiko V2 13B Q5_K_M
πŸ₯‡ 25 33 πŸ† 52 ⭐🧠 90.21 🌢🌢 27.40 ✳ 125 πŸ₯‡πŸ† MLewdBoros SuperCOT 13B Q5_K_M
πŸ₯‡ 26 39 πŸ† 53 ⭐🧠 89.92 🌢🌢 29.90 ✳ 121 πŸ₯‡πŸ† BerrySauce 13B Q5_K_M
πŸ₯‡ 27 34 62 ⭐🧠 90.21 🌢 23.34 ❄ 139 πŸ₯‡ Stheno 1.3 13B Q5_K_M
πŸ₯ˆ 28 89 πŸ† 2 πŸ“– 87.33 🌢🌢 29.01 ❄ 151 πŸ₯ˆπŸ† MLewd V2-1 13B Q5_K_M
πŸ₯ˆ 29 46 56 ⭐🧠 89.46 🌢 25.36 ❄ 134 πŸ₯ˆ MLewd Chat 13B Q5_K_M
πŸ₯ˆ 30 86 πŸ† 10 πŸ“– 87.38 🌢🌢 26.75 ❄ 147 πŸ₯ˆπŸ† Unholy v1 10L 13B Q5_K_M
πŸ₯ˆ 31 30 79 ⭐🧠 90.38 🌢 23.05 ❄ 136 πŸ₯ˆ Magpie 13B Q5_K_M
πŸ₯ˆ 32 55 πŸ† 50 🧠 88.94 🌢🌢 26.38 ✳ 129 πŸ₯ˆπŸ† Pygmaltion 2 SuperCOT weighted 13B Q5_K_M
πŸ₯ˆ 33 87 πŸ† 14 πŸ“– 87.38 🌢🌢 26.75 ❄ 147 πŸ₯ˆπŸ† Unholy v1 13B Q5_K_M
πŸ₯ˆ 34 93 πŸ† 11 πŸ“– 87.10 🌢🌢 26.69 ❄ 147 πŸ₯ˆπŸ† Unholy v1 12L 13B Q5_K_M
πŸ₯ˆ 35 πŸŽ“ 19 102 ⭐🧠 91.36 🌢 25.64 ✳ 118 πŸ₯ˆπŸŽ“ ReMM v2 Kimiko v2 13B Q5_K_M
πŸ₯ˆ 36 72 πŸ† 40 πŸ“– 88.25 🌢 23.56 ❄ 157 πŸ₯ˆπŸ† ZettaPi 13B Q5_K_M
πŸ₯ˆ 37 42 76 ⭐🧠 89.86 🌢 24.61 ✳ 131 πŸ₯ˆ UndiMix v3 13B Q5_K_M
πŸ₯ˆ 38 54 67 🧠 89.00 πŸ‘Œ 22.64 ❄ 142 πŸ₯ˆ Airoboros L2 2.2.1 13B Q5_K_M
πŸ₯ˆ 39 64 57 🧠 88.65 🌢 25.53 ❄ 133 πŸ₯ˆ Teknium OpenHermes 13B Q5_K_S
πŸ₯ˆ 40 50 77 🧠 89.23 🌢🌢 26.10 ✳ 123 πŸ₯ˆ ReMM v2 Variant 13B Q5_K_M
πŸ₯ˆ 41 52 78 🧠 89.06 🌢 22.80 ❄ 137 πŸ₯ˆ Airoboros 2.2 13B Q5_K_M
πŸ₯ˆ 42 96 πŸ† 26 πŸ“– 86.87 🌢🌢 26.75 ❄ 138 πŸ₯ˆπŸ† ReMM 13B Q5_K_M
πŸ₯ˆ 43 98 πŸ† 24 πŸ“– 86.69 🌢 25.45 ❄ 157 πŸ₯ˆπŸ† MLewd V2-1 050 13B Q4_K_S
πŸ₯ˆ 44 35 100 ⭐🧠 90.21 🌢 25.09 ✳ 121 πŸ₯ˆ Chronos Beluga 13B Q5_K_M
πŸ₯ˆ 45 113 πŸ† 8 πŸ“– 86.00 🌢🌢 26.33 ❄ 163 πŸ₯ˆπŸ† Stheno Inverted 1.2 13B Q5_K_M
πŸ₯ˆ 46 101 πŸ† 23 πŸ“– 86.64 🌢🌢 27.03 ❄ 138 πŸ₯ˆπŸ† MLewd v2 13B Q5_K_M
πŸ₯ˆ 47 πŸŽ“ 3 141 ⭐🧠 93.20 🌢 25.64 109 πŸ₯ˆπŸŽ“ MythoMaxKurisu 13B Q5_K_M
πŸ₯ˆ 48 60 75 🧠 88.71 🌢 23.25 ❄ 136 πŸ₯ˆ Spicyboros 2.2 13B Q4_K_M
πŸ₯ˆ 49 58 82 🧠 88.82 🌢🌢 26.30 ✳ 120 πŸ₯ˆ Chronolima Airo Grad 13B Q5_K_M
πŸ₯ˆ 50 56 89 🧠 88.94 🌢 24.99 ✳ 124 πŸ₯ˆ UndiMix v4 13B Q5_K_M
πŸ₯ˆ 51 πŸŽ“ 9 146 ⭐🧠 92.57 🌢 24.20 111 πŸ₯ˆπŸŽ“ Huginn v1.2 13B Q5_K_M
πŸ₯ˆ 52 πŸŽ“ 15 142 ⭐🧠 92.17 πŸ‘Œ 18.04 ✳ 129 πŸ₯ˆπŸŽ“ Huginn 13B Q5_K_M
πŸ₯ˆ 53 πŸŽ“ 10 148 ⭐🧠 92.57 🌢 24.20 111 πŸ₯ˆπŸŽ“ ReMM SLERP 13B Q5_K_M
πŸ₯ˆ 54 124 πŸ† 12 πŸ“– 85.54 🌢🌢 26.24 ❄ 156 πŸ₯ˆπŸ† Holomax 13B Q5_K_M
πŸ₯ˆ 55 104 πŸ† 38 πŸ“– 86.58 🌢 25.55 ❄ 139 πŸ₯ˆπŸ† ReMM Lion 13B Q5_K_M
πŸ₯ˆ 56 94 πŸ† 51 πŸ“– 87.04 🌢 24.89 ❄ 137 πŸ₯ˆπŸ† LLaMA-2 Chat Uncensored 13B Q5_1
πŸ₯ˆ 57 πŸŽ“ 11 151 ⭐🧠 92.57 🌢 24.20 111 πŸ₯ˆπŸŽ“ MythoMax 13B Q5_K_M
πŸ₯ˆ 58 102 πŸ† 45 πŸ“– 86.64 🌢 25.29 ❄ 138 πŸ₯ˆπŸ† Chronos Hermes 2 13B Q5_K_M
πŸ₯ˆ 59 71 85 πŸ“– 88.31 🌢🌢 29.46 113 πŸ₯ˆ Blind Test Janus 13B Q5_1
πŸ₯ˆ 60 πŸŽ“ 23 143 ⭐🧠 91.01 πŸ‘Œ 22.52 ✳ 119 πŸ₯ˆπŸŽ“ Emerhyst 13B Q5_K_M
πŸ₯ˆ 61 81 74 πŸ“– 87.56 🌢🌢 28.50 ✳ 117 πŸ₯ˆ Pygmalion 2 SuperCOT2 13B Q5_K_M
πŸ₯ˆ 62 36 129 ⭐🧠 90.09 🌢🌢 27.53 101 πŸ₯ˆ OpenRP SuperCOT 13B Q5_K_M
πŸ₯ˆ 63 57 108 🧠 88.94 πŸ‘Œ 22.06 ❄ 133 πŸ₯ˆ Orca Mini v3 13B Q5_K_M
πŸ₯ˆ 64 146 πŸ† 7 πŸ€” 84.22 🌢🌢 29.48 ❄ 140 πŸ₯ˆπŸ† OpenAssistant LLaMA-2 8k Orca 13B Q5_K_M (ext. context maybe broken)
πŸ₯ˆ 65 πŸŽ“ 7 179 ⭐🧠 92.86 🌢 23.29 105 πŸ₯ˆπŸŽ“ MythoMax Kimiko Mix 13B Q5_K_M
πŸ₯ˆ 66 49 131 🧠 89.29 🌢 24.63 114 πŸ₯ˆ Airolima Chronos Grad 13B Q5_K_M
πŸ₯ˆ 67 135 πŸ† 28 πŸ“– 84.85 🌢🌢 26.33 ❄ 139 πŸ₯ˆπŸ† qCammel L2 13B Q5_K_M
πŸ₯‰ 68 151 πŸ† 9 πŸ€” 83.76 🌢🌢 27.86 ❄ 142 πŸ₯‰πŸ† Athena v3 13B Q5_K_M
πŸ₯‰ 69 62 117 🧠 88.65 πŸ‘Œ 17.56 ❄ 136 πŸ₯‰ Stheno Chat 13B Q5_K_M
πŸ₯‰ 70 59 122 🧠 88.71 🌢🌢 26.05 111 πŸ₯‰ Unholy v1.1 13B Q5_K_M
πŸ₯‰ 71 48 136 🧠 89.34 πŸ‘Œ 21.03 ✳ 125 πŸ₯‰ StableBeluga 13B Q5_K_M
πŸ₯‰ 72 109 69 πŸ“– 86.12 🌢🌢 25.86 ✳ 126 πŸ₯‰ Airoboros GPT4 1.4.1 13B Q5_K_M
πŸ₯‰ 73 91 92 πŸ“– 87.15 🌢 24.94 ✳ 124 πŸ₯‰ Mistral PetroLimaRP v3 12B Q5_K_M
πŸ₯‰ 74 168 πŸ† 1 πŸ€” 80.82 🌢🌢 28.11 ❄ 164 πŸ₯‰πŸ† Legerdemain 13B Q5_K_M
πŸ₯‰ 75 83 106 πŸ“– 87.44 πŸ‘Œ 21.62 ❄ 134 πŸ₯‰ Pygmalion 2 13B Q5_K_M
πŸ₯‰ 76 πŸŽ“ 25 176 ⭐🧠 90.67 🧊 14.06 ✳ 125 πŸ₯‰πŸŽ“ Inkbot 4k 13B Q4_K_M
πŸ₯‰ 77 144 πŸ† 36 πŸ€” 84.27 🌢🌢 25.85 ❄ 138 πŸ₯‰πŸ† Stheno Inverted 13B Q5_K_M
πŸ₯‰ 78 122 63 πŸ“– 85.60 🌢 25.19 ✳ 131 πŸ₯‰ MegaMix S1 13B Q5_K_M
πŸ₯‰ 79 130 πŸ† 54 πŸ“– 85.14 🌢🌢 25.70 ❄ 133 πŸ₯‰πŸ† ReMM PIPPA 13B Q5_K_M
πŸ₯‰ 80 125 60 πŸ“– 85.48 🌢🌢 26.64 ✳ 125 πŸ₯‰ ReMM v1 LRPSGPT 2Char 13B Q5_K_M
πŸ₯‰ 81 πŸŽ“ 21 185 ⭐🧠 91.07 πŸ‘Œ 16.73 ✳ 117 πŸ₯‰πŸŽ“ LlongOrca 16K 13B Q5_K_M (ext. context maybe broken)
πŸ₯‰ 82 66 132 🧠 88.65 πŸ‘Œ 22.06 ✳ 124 πŸ₯‰ Kimiko V2 13B Q5_K_M
πŸ₯‰ 83 153 πŸ† 29 πŸ€” 83.24 🌢🌢 28.87 ❄ 133 πŸ₯‰πŸ† ReMM S Kimiko v2 13B Q5_K_M
πŸ₯‰ 84 79 118 πŸ“– 87.90 πŸ‘Œ 22.34 ✳ 126 πŸ₯‰ Kimiko 13B Q5_K_M
πŸ₯‰ 85 120 72 πŸ“– 85.77 🌢 24.29 ❄ 132 πŸ₯‰ GradientPutri MegaMix S1 13B Q5_K_S
πŸ₯‰ 86 51 155 🧠 89.23 🌢 23.27 114 πŸ₯‰ Vigogne 2 13B Q5_K_M
πŸ₯‰ 87 76 126 πŸ“– 88.02 🌢🌢 30.14 β™» 94 πŸ₯‰ Airochronos 13B Q5_K_M
πŸ₯‰ 88 171 πŸ† 15 πŸ€” 80.36 🌢🌢 26.06 ❄ 157 πŸ₯‰πŸ† Huginn v3 13B Q5_K_M
πŸ₯‰ 89 103 99 πŸ“– 86.58 🌢 24.99 ✳ 122 πŸ₯‰ Saiga 2 13B Q5_K
πŸ₯‰ 90 74 134 πŸ“– 88.13 🌢 25.43 111 πŸ₯‰ MythoLogic 13B Q5_K_M
πŸ₯‰ 91 172 πŸ† 18 πŸ€” 80.36 🌢🌢 26.06 ❄ 157 πŸ₯‰πŸ† Huginn v4 13B Q5_K_M
πŸ₯‰ 92 119 83 πŸ“– 85.83 🌢 25.14 ✳ 125 πŸ₯‰ Mythalion 13B Q5_K_M
πŸ₯‰ 93 173 πŸ† 19 πŸ€” 80.36 🌢🌢 26.06 ❄ 157 πŸ₯‰πŸ† Huginn v4.5 13B Q5_K_M
πŸ₯‰ 94 44 175 ⭐🧠 89.57 πŸ‘Œ 19.85 ✳ 118 πŸ₯‰ Redmond Puffin v1.3 13B Q5_K_M
95 189 πŸ† 3 πŸ€” 76.96 🌢🌢 29.33 ❄ 146 πŸ† Airoboros 2.1 YaRN 64K 13B Q5_K_M
96 134 71 πŸ“– 84.85 🌢 23.97 ❄ 135 Guanaco Uncensored 13B Q5_K_M
97 115 95 πŸ“– 86.00 🌢 23.87 ✳ 126 Firefly v1.2 13B Q5_K_M
98 121 90 πŸ“– 85.71 🌢🌢 26.04 ✳ 120 Fireflx v1.2 13B Q5_K_M
99 82 137 πŸ“– 87.56 πŸ‘Œ 21.44 ✳ 124 Chronos Hermes v2 13B Q5_K_M
100 169 πŸ† 34 πŸ€” 80.70 🌢🌢 25.78 ❄ 142 πŸ† MLewd v1 13B Q5_K_M
101 141 68 πŸ€” 84.62 🌢 25.33 ✳ 131 Camel Platypus 2 13B Q5_K_M
102 63 163 🧠 88.65 πŸ‘Œ 22.40 115 MXLewdMini 13B Q5_K_M
103 77 149 πŸ“– 88.02 🌢🌢 31.72 β™» 78 Airoboros GPT4 2.0 13B Q5_K_M
104 99 123 πŸ“– 86.69 πŸ‘Œ 21.37 ✳ 127 h2oGPT 13B (link broken) Q5_K_M
105 163 πŸ† 49 πŸ€” 81.51 🌢 23.18 ❄ 150 πŸ† Huginn v1.3 13B Q5_K_M
106 177 πŸ† 35 πŸ€” 79.78 🌢🌢 26.07 ❄ 136 πŸ† MegaMix T1 13B Q5_K_M
107 69 165 🧠 88.42 πŸ‘Œ 20.60 ✳ 119 Stheno 1.8 13B Q5_K_M
108 176 πŸ† 37 πŸ€” 79.84 🌢🌢 26.15 ❄ 136 πŸ† MLewd v1-7 TRY2 13B Q5_K_M
109 67 170 🧠 88.59 πŸ‘Œ 18.80 ✳ 120 Stable Platypus 2 13B Q5_K_M
110 108 121 πŸ“– 86.41 πŸ‘Œ 19.82 ❄ 132 Chronos 2 13B Q5_K_M
111 184 πŸ† 30 πŸ€” 78.00 🌢🌢 28.05 ❄ 135 πŸ† AlpacaCielo 13B Q5_K_M
112 175 πŸ† 41 πŸ€” 79.95 🌢 24.02 ❄ 150 πŸ† LLongMA-2 Storysummarizer 13B Q5_K_M (ext. context maybe broken)
113 75 162 πŸ“– 88.13 🌢 24.39 108 Chronoboros Grad 13B Q5_K_M
114 80 157 πŸ“– 87.62 🌢🌢 31.26 β™» 73 Airoboros GPT4 2.0 LLaMA-2 13B Q5_K_M
115 68 172 🧠 88.48 πŸ‘Œ 18.52 ✳ 120 UndiMix v2 13B Q5_K_M
116 145 81 πŸ€” 84.22 🌢 23.08 ❄ 136 Platypus 2 13B Q5_K_M
117 πŸŽ“ 8 246 ⭐🧠 92.80 🧊 12.13 β™» 81 πŸŽ“ LLaMA-2 Ensemble v6 13B Q5_K_M
118 110 124 πŸ“– 86.12 🌢🌢 32.04 β™» 92 Thorns 13B Q5_K_M
119 129 104 πŸ“– 85.20 🌢 22.98 ✳ 129 StableBeluga Instruct PL Lora 13B Q5_1
120 170 65 πŸ€” 80.53 🌢 23.01 ❄ 141 Gywy Chinese v1 13B Q5_1
121 133 111 πŸ“– 84.97 🌢🌢 26.09 114 Hermes Kimiko 13B Q5_K_M
122 111 138 πŸ“– 86.06 πŸ‘Œ 22.40 ✳ 121 Chronohermes Grad 13B Q5_K_M
123 178 59 πŸ€” 79.26 🌢 25.63 ❄ 132 MLewd 13B Q5_K_M
124 70 192 πŸ“– 88.31 🧊 13.44 ✳ 118 LLaMA-2 Chat AYT 13B Q5_K_M
125 157 93 πŸ€” 82.95 🌢🌢 29.00 112 Crestfall FrankenMon 13B Q5_K_M
126 164 86 πŸ€” 81.22 🌢 24.78 ✳ 125 MegaMix A1 13B Q5_K_M
127 53 222 🧠 89.06 🧊 14.21 102 TerraMix 16K 13B Q5_K_M (ext. context maybe broken)
128 181 73 πŸ€” 78.69 🌢🌢 28.48 ✳ 117 Frank Uncensored 13B Q5_K_M
129 43 243 ⭐🧠 89.69 🧊 11.99 β™» 89 WizardLM 1.2 PL 13B Q5_1
130 155 110 πŸ€” 83.06 🌢 23.71 ✳ 124 Frankensteins Monster 13B Q4_K_S
131 πŸŽ“ 26 265 ⭐🧠 90.61 🧊 5.88 β™» 70 πŸŽ“ PuddleJumper 13B Q5_K_M
132 167 96 πŸ€” 80.93 🌢🌢 26.54 116 OniiChat Hermes Limarp 13B Q5_K_M
133 97 181 πŸ“– 86.75 πŸ‘Œ 19.23 115 LLaMA-2 Mistral 13B Q5_K_M
134 37 253 ⭐🧠 90.09 🧊 8.02 β™» 78 WizardLM v1.2 13B Q4_0
135 126 147 πŸ“– 85.37 🌢 23.61 113 Nous Hermes 13B Q5_K_M
136 78 208 πŸ“– 87.90 🧊 15.16 109 UndiMix v1 13B Q5_K_M
137 128 150 πŸ“– 85.20 πŸ‘Œ 22.53 ✳ 117 Nous Hermes LLaMA-2 13B Q5_K_M
138 174 98 πŸ€” 80.18 🌢 24.93 ✳ 123 Stheno 13B Q5_K_M
139 149 128 πŸ€” 83.93 🌢 22.69 ✳ 122 LLaMA-2 Guanaco 13B Q4_1
140 132 156 πŸ“– 85.02 πŸ‘Œ 19.86 ✳ 122 EverythingLM V3 16K 13B Q5_K_M (ext. context maybe broken)
141 41 267 ⭐🧠 89.92 🧊 4.50 β™» 60 Speechless LLaMA-2 13B Q5_K_M
142 88 211 πŸ“– 87.33 🧊 16.72 105 UltraLM v2.0 13B Q5_K_M
143 84 216 πŸ“– 87.44 🧊 13.63 106 Spring Dragon 13B Q5_K_M
144 216 61 πŸ€ͺ 70.97 πŸ‘Œ 22.40 ❄ 153 Nous Yarn 128K 13B Q5_K_M (ext. context maybe broken)
145 73 234 πŸ“– 88.25 🧊 12.34 β™» 94 LLaMA-2 LoRA Assemble 13B Q5_K_M
146 199 84 πŸ€” 74.83 πŸ‘Œ 19.55 ❄ 148 Dans RetroRodeo 13B Q5_K_M
147 197 88 πŸ€” 75.75 🌢 24.38 ✳ 126 Nous Hermes Writer 13B Q4_K_S
148 185 105 πŸ€” 77.76 🌢 23.59 ✳ 125 WizardMath V1.0 13B Q5_K_M
149 188 103 πŸ€” 77.13 πŸ‘Œ 22.62 ✳ 131 Nous Yarn 64K 13B Q5_K_M
150 222 64 πŸ€ͺ 68.43 🌢 23.52 ❄ 138 Chronos Hermes SuperHOT 8K 13B Q5_1 (ext. context maybe broken)
151 85 230 πŸ“– 87.38 🧊 11.67 102 Marcoroni 13B Q5_K_M
152 143 161 πŸ€” 84.33 🌢 24.71 108 Hermes LimaRP 13B Q4_K_M
153 105 207 πŸ“– 86.58 πŸ‘Œ 16.93 106 Mythical Destroyer V2 13B (link broken) Q5_K_M
154 158 144 πŸ€” 82.72 🌢 23.49 115 Chronorctypus Limarobormes 13B Q5_K_M
155 112 203 πŸ“– 86.06 πŸ‘Œ 18.32 109 OpenChat v3.2 13B Q5_K_M
156 127 186 πŸ“– 85.31 🧊 14.83 ✳ 119 OpenOrcaxOpenChat Preview2 13B Q5_1
157 150 160 πŸ€” 83.87 πŸ‘Œ 22.34 ✳ 117 Synthia 13B Q5_K_M
158 61 269 🧠 88.71 🧊 4.34 β™» 46 Iubaris V3 13B Q5_K_M
159 117 202 πŸ“– 85.89 πŸ‘Œ 17.65 110 LosslessMegaCoder Mini 13B Q5_K_M
160 95 231 πŸ“– 86.92 🧊 12.34 β™» 99 LLaMA-2 Chat Limarp v2 13B Q5_K_M
161 191 116 πŸ€” 76.27 🌢 25.12 ✳ 117 Manticore SuperHOT 8K 13B Q5_K_M (ext. context maybe broken)
162 123 198 πŸ“– 85.60 🧊 16.16 112 OpenBuddy LLaMA-2 v11.1 13B Q5_K_M
163 183 127 πŸ€” 78.17 🌢🌢 30.71 β™» 93 Airoboros GPT4 m2.0 13B Q5_K_M
164 182 135 πŸ€” 78.51 πŸ‘Œ 17.65 ❄ 132 Holodeck 1 13B Q5_K
165 154 169 πŸ€” 83.12 πŸ‘Œ 21.42 116 ALMA Pretrain 13B Q5_K_M
166 260 πŸ† 42 πŸ€ͺ 61.52 🌢 24.76 ❄ 143 πŸ† Hermes LLongMA 2 8K 13B Q5_1 (ext. context maybe broken)
167 100 239 πŸ“– 86.64 🧊 12.88 β™» 91 OpenOrca STX 13B Q5_K_M
168 92 251 πŸ“– 87.15 🧊 7.45 β™» 86 Samantha 1.11 13B Q5_K_M
169 207 114 πŸ€” 72.12 πŸ‘Œ 19.75 ❄ 136 Vicuna 1.3 PL 13B Q5_1
170 136 200 πŸ“– 84.74 πŸ‘Œ 19.43 107 CalliopeDS 13B Q5_K_M
171 210 115 πŸ€ͺ 71.89 πŸ‘Œ 19.80 ❄ 135 MAmmoTH 13B Q5_K_M
172 90 260 πŸ“– 87.33 🧊 7.97 β™» 74 Speechless Hermes Orca Plat WizLM 13B Q5_K_M
173 137 204 πŸ“– 84.74 πŸ‘Œ 16.80 109 LLaMA-2 Ensemble v5 13B Q5_K_M
174 235 87 πŸ€ͺ 66.24 🌢 24.15 ✳ 127 LLaMA SuperCOT 13B Q5_K_M
175 165 173 πŸ€” 81.16 🌢 22.88 111 Stheno 1.2 13B Q5_K_M
176 252 70 πŸ€ͺ 63.82 πŸ‘Œ 22.56 ❄ 142 Chronos Hermes 13B Q5_K_M
177 195 140 πŸ€” 75.92 🌢🌢 28.25 β™» 93 Airoboros GPT4 m2.0 LLaMA-2 13B Q5_K_M
178 159 184 πŸ€” 82.60 πŸ‘Œ 17.85 116 Dans QuestionableCocktail 2 13B Q4_1
179 234 94 πŸ€ͺ 66.42 🌢🌢 28.52 113 Airoboros GPT4 1.3 13B Q5_1
180 131 219 πŸ“– 85.08 πŸ‘Œ 18.80 β™» 90 Tsukasa Limarp 16K 13B Q5_K_M (ext. context maybe broken)
181 106 249 πŸ“– 86.52 🧊 9.47 β™» 83 Mythical Destroyer 13B Q5_K_M
182 107 250 πŸ“– 86.46 🧊 8.65 β™» 86 Athena-tmp 13B Q5_K_M
183 139 212 πŸ€” 84.62 πŸ‘Œ 16.80 103 OpenOrca Platypus 2 13B Q5_K_M
184 244 91 πŸ€ͺ 64.63 🌢🌢 26.10 ✳ 119 MythoBoros 13B Q5_K_M
185 160 195 πŸ€” 82.32 🧊 14.65 114 OpenOrcaxOpenChat 2 LangChain Chat 13B Q5_1
186 118 247 πŸ“– 85.83 🧊 11.03 β™» 84 ChatAYT Lora Assamble Marcoroni 13B Q5_K_M
187 161 199 πŸ€” 82.20 πŸ‘Œ 18.20 110 Vicuna v1.5 16K 13B Q5_K_M (ext. context maybe broken)
188 180 177 πŸ€” 78.86 πŸ‘Œ 21.18 113 YuLan Chat 2 13B Q5_K_M
189 116 254 πŸ“– 86.00 🧊 8.08 β™» 78 LLaMA-2 Chinese Chat 13B Q5_1
190 114 257 πŸ“– 86.00 🧊 6.74 β™» 78 LLaMA-2 13B Q5_K_M
191 142 225 πŸ€” 84.56 🧊 12.67 103 LLaMA-2 LangChain Chat 13B Q5_K_S
192 156 209 πŸ€” 83.01 πŸ‘Œ 17.96 104 Sentdex WSB GPT 13B Q5_K_M
193 202 159 πŸ€” 72.81 🌢 23.57 111 Manticore 13B Q5_K_M
194 242 112 πŸ€ͺ 65.15 🌢 23.25 ✳ 125 Wizard Vicuna Uncensored SuperHOT 8k 13B Q5_K_S (ext. context maybe broken)
195 138 237 πŸ“– 84.74 🧊 8.75 β™» 98 LLaMA-2 Chat 13B Q5_1
196 248 107 πŸ€ͺ 64.23 πŸ‘Œ 22.36 ✳ 131 MyhtoLogic 13B Q5_K_M
197 247 109 πŸ€ͺ 64.29 🌢 22.80 ✳ 127 Guanaco 13B Q5_K_M
198 254 101 πŸ€ͺ 63.54 πŸ‘Œ 20.82 ❄ 136 Chronos 13B Q5_K_M
199 179 193 πŸ€” 79.03 🌢 23.06 β™» 94 Dans MythsteryModel 13B Q5_K_M
200 212 154 πŸ€ͺ 71.49 🌢 24.43 110 JanniesBasedLigma 13B Q5_K_M
201 213 153 πŸ€ͺ 71.43 πŸ‘Œ 21.15 ✳ 121 Tsukasa Limarp 13B Q5_K_M
202 204 164 πŸ€” 72.47 πŸ‘Œ 19.03 ✳ 122 CodeLLaMA Oasst SFT V10 13B Q5_K_M
203 261 97 πŸ€ͺ 60.77 🌢 23.57 ✳ 127 OpenLLaMA 13B Q5_K_M
204 243 119 πŸ€ͺ 64.75 πŸ‘Œ 21.20 ✳ 129 Chronos WizardLM UC SCOT ST 13B Q5_K_M
205 140 245 πŸ€” 84.62 🧊 8.72 β™» 90 Luban 13B Q5_K_M
206 245 120 πŸ€ͺ 64.63 🌢 23.07 ✳ 123 OpenBuddy OpenLLaMA v7 13B Q4_K
207 230 139 πŸ€ͺ 67.17 🌢 23.79 114 WizardLM V1.0 Uncensored 13B Q5_K_M
208 238 133 πŸ€ͺ 65.78 🌢 24.05 115 Chimera 13B Q5_K_M
209 186 197 πŸ€” 77.59 πŸ‘Œ 18.79 110 Barcenas 13B Q5_K_M
210 224 152 πŸ€ͺ 68.15 πŸ‘Œ 21.75 ✳ 120 Chronos SuperHOT 8K 13B Q5_K_M (ext. context maybe broken)
211 152 241 πŸ€” 83.35 🧊 11.90 β™» 92 Trurl 2 Polish 13B Q5_1
212 206 178 πŸ€” 72.29 πŸ‘Œ 20.37 114 CAMEL Combined Data 13B Q5_K_M
213 218 167 πŸ€ͺ 69.99 🌢 23.13 111 Minotaur 13B Q5_K_M
214 201 189 πŸ€” 73.10 πŸ‘Œ 19.90 110 Tulu 13B Q5_K_M
215 265 113 πŸ€ͺ 57.89 🌢🌢 26.16 113 Petra Instruct 13B Q5_K_M
216 166 232 πŸ€” 80.99 🧊 14.00 β™» 95 Trurl 2 Polish Instruct 13B Q5_1
217 147 256 πŸ€” 84.10 🧊 6.80 β™» 78 Codeup Alpha 13B Q5_K_M
218 253 130 πŸ€ͺ 63.77 πŸ‘Œ 22.02 ✳ 125 Alpacino SuperCOT 13B Q4_0
219 223 168 πŸ€ͺ 68.20 🌢 24.15 107 Hypermantis 13B Q5_K_M
220 225 166 πŸ€ͺ 68.03 πŸ‘Œ 21.14 ✳ 119 MedAlpaca 13B Q5_1
221 208 187 πŸ€ͺ 72.00 πŸ‘Œ 22.22 107 Heegyu LIMA2 13B Q5_1
222 148 259 πŸ€” 84.10 🧊 6.80 β™» 78 h2oGPT Chat 13B (link broken) Q5_K_M
223 236 158 πŸ€ͺ 66.24 πŸ‘Œ 22.27 ✳ 118 Dans PersonalityEngine 13B Q5_1
224 264 125 πŸ€ͺ 59.39 πŸ‘Œ 22.47 ✳ 124 Nous-Hermes 13B Q4_0
225 198 205 πŸ€” 75.06 πŸ‘Œ 20.50 β™» 96 Vicuna 1.5 13B Q5_0
226 220 183 πŸ€ͺ 68.61 πŸ‘Œ 22.27 109 WizardMega 13B Q5_K_M
227 194 215 πŸ€” 75.92 🧊 16.27 101 Chinese Alpaca 2 13B Q5_K
228 231 171 πŸ€ͺ 66.71 πŸ‘Œ 21.16 ✳ 117 OpenBuddy LLaMA-2 v8.1 13B Q3_K
229 162 255 πŸ€” 81.68 🧊 6.74 β™» 79 CodeUp LLaMA-2 Chat 13B Q4_K_M
230 232 174 πŸ€ͺ 66.59 🌢 24.15 105 HyperMantis 13B Q5_K_M
231 196 218 πŸ€” 75.75 πŸ‘Œ 19.48 β™» 89 WizardLM 1.0 Uncensored 13B Q5_K_M
232 200 214 πŸ€” 74.71 πŸ‘Œ 19.62 β™» 94 LLaMA-2 Instruct Uncensored 13B Q5_0
233 258 145 πŸ€ͺ 62.90 🌢 23.79 113 Carl 13B Q5_K_M
234 221 190 πŸ€ͺ 68.49 πŸ‘Œ 18.52 113 LLaMA 13B Q5_K_M
235 190 228 πŸ€” 76.61 🧊 14.21 β™» 98 Manticore Chat Pyg 13B Q5_K_M
236 217 196 πŸ€ͺ 70.56 🧊 15.06 113 Chinese LLaMA-2 13B Q5_K
237 192 229 πŸ€” 76.15 🧊 14.96 β™» 94 Manticore Chat Pyg SuperHOT 8K 13B Q5_K_M (ext. context maybe broken)
238 187 236 πŸ€” 77.59 🧊 16.13 β™» 82 Vicuna v1.5 13B Q5_K_M
239 227 194 πŸ€ͺ 67.68 πŸ‘Œ 20.98 107 CAMEL Role Playing Data 13B Q5_K_M
240 215 210 πŸ€ͺ 71.08 πŸ‘Œ 21.44 β™» 91 BlueMethod 13B Q5_K_M
241 209 220 πŸ€ͺ 72.00 🧊 14.58 101 OpenBuddy Atom v9 13B Q5_K
242 239 188 πŸ€ͺ 65.67 πŸ‘Œ 22.30 104 Ouroboros 13B Q5_K_M
243 193 244 πŸ€” 75.98 🧊 12.95 β™» 81 LoKuS 13B Q5_K_M
244 211 226 πŸ€ͺ 71.54 🧊 12.34 103 CodeLLaMA Instruct 13B Q5_K_M
245 251 182 πŸ€ͺ 63.94 πŸ‘Œ 21.87 111 Saiga 13B Q5_1
246 203 242 πŸ€” 72.58 🧊 14.85 β™» 78 Metharme 13B Q5_1
247 219 223 πŸ€ͺ 69.53 🧊 14.42 β™» 100 Pandalyst V1.0 13B Q5_K_M
248 255 180 πŸ€ͺ 63.36 🌢 23.35 103 WizardLM Uncensored 13B Q5_K_M
249 228 213 πŸ€ͺ 67.57 πŸ‘Œ 17.74 101 WizardLM V1.1 13B Q5_K_M
250 250 191 πŸ€ͺ 63.94 🧊 16.40 114 CodeLLaMA Python 13B Q5_K_M
251 229 217 πŸ€ͺ 67.40 πŸ‘Œ 18.80 β™» 91 Asclepius 13B Q5_K_M
252 205 248 πŸ€” 72.41 🧊 11.35 β™» 82 Manticore Chat Pyg Guanaco 13B Q4_K_M
253 214 238 πŸ€ͺ 71.26 🧊 14.02 β™» 91 Vicuna 1.3 German 13B Q5_K_M
254 246 206 πŸ€ͺ 64.46 🧊 14.41 111 CodeLLaMA 13B Q5_K_M
255 237 221 πŸ€ͺ 66.24 🧊 15.54 β™» 96 Vicuna 1.3 13B Q5_1
256 259 201 πŸ€ͺ 62.62 πŸ‘Œ 21.82 β™» 96 Wizard Vicuna Uncensored 13B Q5_K_M
257 240 224 πŸ€ͺ 65.38 🧊 15.67 β™» 94 WizardLM 1.0 13B Q5_K_M
258 241 233 πŸ€ͺ 65.21 🧊 16.20 β™» 88 Based 13B Q5_K_M
259 226 252 πŸ€ͺ 67.86 🧊 10.49 β™» 73 Nexus Raven 13B Q5_K_M
260 249 240 πŸ€ͺ 64.00 🧊 14.32 β™» 86 WizardLM WizardCoder Python V1.0 13B Q4_K_S
261 263 227 πŸ€ͺ 59.56 🧊 14.46 β™» 95 Wizard Vicuna 13B Q5_K_M
262 233 263 πŸ€ͺ 66.42 🧊 8.22 β™» 47 Dolphin LLaMA 13B Q5_K_M
263 262 235 πŸ€ͺ 60.37 🧊 14.37 β™» 90 Vicuna CoT 13B Q5_K_M
264 257 266 πŸ€ͺ 63.31 🧊 4.42 β™» 71 Scarlett 13B Q5_K_M
265 256 268 πŸ€ͺ 63.36 🧊 6.06 β™» 38 Pygmalion 13B Q5_1
266 266 258 πŸ€ͺ 57.14 🧊 10.84 β™» 50 Taiwan LLaMA V1.0 13B Q5_K_M
267 268 262 πŸ€ͺ 56.57 🧊 9.46 β™» 44 Taiwan LLaMA v1.0 13B Q5_K_M
268 267 264 πŸ€ͺ 56.91 🧊 7.65 β™» 60 BigTranslate 13B Q4_K_M
269 270 261 πŸ€ͺ 53.46 🧊 8.80 β™» 60 Komt LLaMA-2 13B Q5_K_M
270 269 271 πŸ€ͺ 53.92 🧊 1.27 β™» 11 LMSYS Vicuna 1.5 16k 13B Q5_1 (ext. context maybe broken)
271 274 270 πŸ€ͺ 50.12 🧊 1.79 β™» 45 Stable Vicuna 13B Q5_K_M
272 271 274 πŸ€ͺ 52.42 🧊 0.00 β™» 0 EverythingLM V2 16K 13B Q4_K_S (ext. context maybe broken)
273 275 272 πŸ€ͺ 47.70 🧊 0.62 β™» 7 Chatxu (L2?) 13B Q4_0
274 272 276 πŸ€ͺ 52.42 🧊 0.00 β™» 0 LLongMA 2 13B Q5_1 (ext. context maybe broken)
275 273 275 πŸ€ͺ 50.81 🧊 0.00 β™» 0 EverythingLM 16K 13B Q5_K_M (ext. context maybe broken)
276 276 273 πŸ€ͺ 47.58 🧊 0.00 β™» 0 Dans CreepingSenseOfDoom 13B Q5_K_M

20B to 34B Models

2023-11-01 Benchmark Re-Run V3: I am currently running a completely new benchmark. Until I get around to updating this page, you can find the most recent results here: http://ayumi.m8geil.de/ayumi_bench_v3_results.html

Rank | ALC-IQ Rank | ERP Rank | ALC-IQ | ERP Score | ERP Var Score | Model
πŸ₯‡ 1 πŸŽ“ 1 πŸ† 4 ⭐🧠 92.74 🌢🌢 30.23 ❄ 144 πŸ₯‡πŸŽ“πŸ† MLewd ReMM Chat 20B Q5_K_M
πŸ₯‡ 2 πŸŽ“ 5 πŸ† 3 ⭐🧠 91.53 🌢🌢 29.62 ❄ 148 πŸ₯‡πŸŽ“πŸ† MLewd ReMM Chat Inverted 20B Q5_K_M
πŸ₯‡ 3 πŸŽ“ 4 13 ⭐🧠 91.65 🌢🌢 27.81 ✳ 132 πŸ₯‡πŸŽ“ MXLewd 20B Q5_K_M
πŸ₯‡ 4 8 πŸ† 10 ⭐🧠 90.44 🌢 25.27 ❄ 148 πŸ₯‡πŸ† Emerhyst 20B Q5_K_M
πŸ₯‡ 5 πŸŽ“ 3 19 ⭐🧠 92.17 🌢🌢 27.77 ✳ 127 πŸ₯‡πŸŽ“ Airoboros 2.1 33B Q4_K_M
πŸ₯‡ 6 18 πŸ† 6 πŸ“– 88.54 🌢🌢 32.89 ❄ 136 πŸ₯‡πŸ† MM ReMM 20B Q5_K_M
πŸ₯ˆ 7 13 15 🧠 89.57 🌢 24.24 ❄ 146 πŸ₯ˆ Huginn 5 Prototype 19B Q4_K_S
πŸ₯ˆ 8 9 25 ⭐🧠 90.32 🌢 24.50 ✳ 134 πŸ₯ˆ Airoboros GPT4 1.4 33B Q4_K_M
πŸ₯ˆ 9 21 πŸ† 11 πŸ“– 88.02 🌢🌢 27.55 ❄ 141 πŸ₯ˆπŸ† Enterredaas 33B Q4_1
πŸ₯ˆ 10 16 17 🧠 88.71 🌢🌢 27.40 ✳ 132 πŸ₯ˆ Airochronos 33B Q5_K_M
πŸ₯ˆ 11 22 14 πŸ“– 85.94 🌢 24.43 ❄ 146 πŸ₯ˆ LLaMA-2 BlockTri Frankenstein 22B Q4_K_M
πŸ₯ˆ 12 24 πŸ† 12 πŸ“– 85.43 🌢 25.92 ❄ 142 πŸ₯ˆπŸ† Lazarus 30B Q4_K_M
πŸ₯ˆ 13 14 26 🧠 89.17 🌢 23.94 ✳ 134 πŸ₯ˆ LLaMA SuperCOT 30B Q4_K_M
πŸ₯ˆ 14 23 16 πŸ“– 85.77 🌢 25.68 ❄ 139 πŸ₯ˆ Chronoboros 33B Q5_K_M
πŸ₯ˆ 15 πŸŽ“ 2 44 ⭐🧠 92.57 πŸ‘Œ 22.62 115 πŸ₯ˆπŸŽ“ SuperPlatty 30B Q4_K_M
πŸ₯‰ 16 40 πŸ† 2 πŸ€” 82.55 🌢🌢 35.79 ❄ 153 πŸ₯‰πŸ† COTHuginn 4.5 19B Q5_K_M
πŸ₯‰ 17 7 42 ⭐🧠 90.73 πŸ‘Œ 22.27 121 πŸ₯‰ Platypus 2 70B Q2_K
πŸ₯‰ 18 38 πŸ† 5 πŸ€” 82.83 🌢🌢 27.42 ❄ 147 πŸ₯‰πŸ† LLaMA 2 Ari03 28B (link broken) Q5_1
πŸ₯‰ 19 17 33 🧠 88.71 🌢 26.86 117 πŸ₯‰ Airoboros GPT4 2.0 33B Q5_K_M
πŸ₯‰ 20 12 39 🧠 89.75 πŸ‘Œ 22.10 ✳ 122 πŸ₯‰ GPlatty 30B Q4_K_M
πŸ₯‰ 21 29 21 πŸ“– 84.62 🌢🌢 28.12 ✳ 123 πŸ₯‰ Saiga 30B Q5_1
22 11 46 ⭐🧠 89.92 🌢 25.07 β™» 105 Airoboros GPT4 m2.0 33B Q5_K_M
23 33 20 πŸ“– 83.47 🌢 26.87 ✳ 130 Fin LLaMA 33B Q4_K_M
24 28 27 πŸ“– 84.62 🌢🌢 28.68 115 CAMEL Combined Data 33B Q4_K_M
25 26 31 πŸ“– 84.85 🌢🌢 27.18 117 Vigogne Instruct 33B Q4_K_M
26 27 32 πŸ“– 84.79 πŸ‘Œ 23.34 ✳ 128 LLaMA-2 Frankensteined 22B Q4_K_M
27 πŸŽ“ 6 58 ⭐🧠 90.84 🧊 17.17 β™» 96 πŸŽ“ Platypus 30B Q4_K_M
28 35 24 πŸ€” 83.12 πŸ‘Œ 22.96 ❄ 143 Guanaco 33B Q4_K_M
29 10 54 ⭐🧠 90.09 🧊 18.98 β™» 106 LLaMA 30B Q5_K_M
30 15 49 🧠 89.00 πŸ‘Œ 22.41 114 VicUnlocked LoRA 30B Q4_K_M
31 41 18 πŸ€” 82.55 🌢🌢 31.32 ✳ 122 Carl 33B Q4_K_M
32 54 πŸ† 7 πŸ€ͺ 75.81 🌢 25.84 ❄ 156 πŸ† Bacchus (L2*) 22B Q4_0
33 60 πŸ† 1 πŸ€ͺ 73.44 🌢🌢 37.23 ❄ 166 πŸ† MythoMax 33B Q4_K_M
34 42 23 πŸ€” 82.14 🌢🌢 29.65 119 Frank Uncensored 33B Q5_K_M
35 25 45 πŸ“– 85.14 πŸ‘Œ 23.68 111 Lazarus Instruct PL 30B Q4_1
36 34 37 πŸ€” 83.35 🌢 26.54 β™» 109 WizardLM Uncensored 30B Q5_K_M
37 47 22 πŸ€” 79.49 πŸ‘Œ 22.11 ❄ 147 Spicyboros C 2.2 34B Q4_K_M
38 59 πŸ† 9 πŸ€ͺ 73.79 🌢🌢 29.12 ❄ 136 πŸ† Wizard Vicuna LLaMA-2 22B Q4_K_M
39 39 36 πŸ€” 82.72 🌢 25.67 116 Vicuna v1.3 33B Q4_K_M
40 19 60 πŸ“– 88.48 🧊 9.54 β™» 77 Upstage LLaMA Instruct 30B Q5_K_M
41 45 29 πŸ€” 80.07 πŸ‘Œ 22.99 ✳ 134 CodeLLaMA 34B Q4_K_M
42 63 πŸ† 8 πŸ€ͺ 72.47 🌢🌢 27.72 ❄ 142 πŸ† Daydreamer v3 22B Q5_K_M
43 20 61 πŸ“– 88.13 🧊 11.63 β™» 71 Hippogriff 30B Q4_K_M
44 32 47 πŸ“– 83.87 πŸ‘Œ 23.12 111 Tulu 30B Q5_K_M
45 30 51 πŸ“– 84.27 🧊 18.75 112 Dans PersonalityEngine 30B Q4_1
46 50 28 πŸ€ͺ 78.92 πŸ‘Œ 21.98 ❄ 141 Huginn Prototype 22B Q4_K_M
47 43 38 πŸ€” 81.16 🌢 25.16 114 WizardLM V1.0 Uncensored 33B Q4_K_M
48 31 55 πŸ“– 83.99 🧊 18.92 β™» 86 Based 30B Q5_K_M
49 52 30 πŸ€ͺ 78.34 🌢 24.40 ✳ 128 Wizard Vicuna Uncensored 30B Q5_K_M
50 49 35 πŸ€” 79.15 πŸ‘Œ 21.51 ✳ 131 CodeLLaMA Python 34B Q4_K_M
51 44 41 πŸ€” 80.18 πŸ‘Œ 20.68 ✳ 126 Chronos 33B Q5_K_M
52 36 52 πŸ€” 83.06 🧊 19.81 β™» 106 Epsilon 30B Q4_K_M
53 37 59 πŸ€” 83.06 🧊 8.47 β™» 101 MindFlay 22B Q4_0
54 56 40 πŸ€ͺ 74.48 πŸ‘Œ 20.11 ✳ 126 Airoboros C 2.1 34B Q5_K_M
55 46 53 πŸ€” 79.55 🧊 18.91 β™» 110 Airoboros C 2.2 34B Q4_K_M
56 62 34 πŸ€ͺ 73.16 🌢 25.26 119 LLaMA 2 DayDreamer V1 22B Q5_K_M
57 57 43 πŸ€ͺ 74.48 πŸ‘Œ 20.11 ✳ 126 Airoboros C 2.1b 34B Q5_K_M
58 53 50 πŸ€ͺ 76.04 🧊 17.46 121 CodeLLaMA Instruct 34B Q4_K_M
59 48 56 πŸ€” 79.38 🧊 13.38 β™» 103 Synthia v1.2 34B Q4_K_M
60 61 48 πŸ€ͺ 73.21 πŸ‘Œ 20.97 117 Phind CodeLLaMA v1 34B Q4_K_S
61 55 57 πŸ€ͺ 74.83 🧊 14.03 β™» 102 Airobors C 2.1 34B Q4_K_M
62 51 62 πŸ€ͺ 78.63 🧊 5.38 β™» 70 Scarlett 33B Q4_K_M
63 58 65 πŸ€ͺ 74.19 🧊 4.61 β™» 48 Samantha 1.11 CodeLLaMA 34B Q4_K_M
64 64 63 πŸ€ͺ 60.08 🧊 5.86 β™» 69 BrainToast 20B Q5_K_M
65 66 64 πŸ€ͺ 51.15 🧊 2.59 β™» 56 WizardLM 30B Q4_K_M
66 65 66 πŸ€ͺ 52.42 🧊 0.00 β™» 0 Airoboros GPT4 1.4 SuperHOT 8K 33B Q4_K_M (ext. context maybe broken)

About Extended Context (8K, 16K, 32K)

As you may have noticed, there are a few models currently (2023-08-09) that have a bad ALC-IQ and even worse ERP-Score. A few of these models are:

  • LLaMA-2 32K 7B
  • LMSYS LongChat 1.5 32k 7B
  • LLongMA 2 7B
  • Hermes LLongMA 2 8K (L2) 7B

And a few others. The reason for this is simple: the GGML file format is a mess, and even after the new GGUF file format arrived, people sometimes fail to properly quantize the context-extended models into a GGUF file. The benchmark sometimes does not have proper results for these models because:

  • The GGUF file creator messed up somehow (for instance: converted a GGML file to GGUF without the proper rope scaling settings).
  • For GGML files:
    • Special settings are required in llama.cpp to enable compatibility with these models: --rope-freq-base and --rope-freq-scale. These need to be set to the right magic values corresponding to the model at hand (a usage sketch follows below this list).
    • Determining these magic RoPE values would not be hard if they were properly documented, but only a few pages on Hugging Face that provide GGML file quantizations document them. TheBloke really tries hard, but sometimes even the original model uploaders don't provide any information about the right values.
    • And most importantly: it would require me to carry metadata out of band along with each file. I don't have the time to figure out the right values, and I believe most users won't ever bother either.
    • There are also other important options not mentioned yet that are crucial for some GGML files to work properly:
      • --gqa (grouped-query attention factor) is one of these; it needs to be set to the magic value 8 for LLaMA 2 70B to work.
      • --rms-norm-eps is an epsilon value used during inference. This value differs between LLaMA 1 (1e-6) and LLaMA 2 (1e-5), and it makes a difference in how well either model works. The original default of 1e-6 was recently replaced by 5e-6, which is halfway between the two values and supposedly should work fine. But in my own tests I saw quite some variance in the performance of the quantized GGML models, somewhat contradicting what was stated on llama.cpp. I decided not to dig further, because there is still too much sampling randomness involved in the ALC-IQ (beta), which I will eventually fix.
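
To make the options above concrete, here is a hedged sketch (not taken from the benchmark scripts) of how such a context-extended GGML model might be launched with llama.cpp. The binary path, model file name and the concrete numeric values are placeholders and must be taken from the model card of the model in question.

```python
# Illustration only: passing the llama.cpp options discussed above for a
# context-extended GGML model. All values below are placeholders; the right
# rope/GQA/eps settings depend on the specific model.
import subprocess

cmd = [
    "./main",                              # llama.cpp main binary (path is an assumption)
    "-m", "model.ggmlv3.q5_K_M.bin",       # hypothetical GGML model file
    "--rope-freq-base", "10000",           # model-specific "magic" value
    "--rope-freq-scale", "0.25",           # e.g. 0.25 for a 4x extended context (assumed)
    "--gqa", "8",                          # required value 8 for LLaMA 2 70B (per the note above)
    "--rms-norm-eps", "1e-5",              # 1e-5 for LLaMA 2, 1e-6 for LLaMA 1
]
subprocess.run(cmd, check=True)
```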

Ranking Changelog

Size Rank Model
3B-7B 1 / 171 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Mistral Claude Chat 7B Q5_K_M
3B-7B 2 / 171 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„) Mistral ClaudeLimaRP v3 7B Q5_K_M
3B-7B 3 / 171 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Mistral RP 0.1 7B Q5_K_M
3B-7B 4 / 171 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Synthia v1.3 7B Q5_K_M
3B-7B 5 / 171 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Samantha Mistral 7B Q5_K_M
3B-7B 6 / 171 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„) Mistral v0.1 7B Q5_K_M
3B-7B 8 / 171 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) PetrolLM 7B Q5_K_M
3B-7B 13 / 171 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) MistRP v1.1 7B Q8_0
3B-7B 17 / 171 πŸ₯‡πŸŽ“(⭐🧠) Kimiko Mistral 7B Q5_K_M
3B-7B 18 / 171 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) Mistral Instruct v0.1 7B Q5_K_M
3B-7B 42 / 171 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) Samantha Mistral Instruct 7B Q5_K_M
3B-7B 81 / 171 LLaMA-2 Mistral 7B Q5_K_M
3B-7B 91 / 171 Medusa 1.3 7B Q5_K_M
3B-7B 106 / 171 πŸ†(❄) Deacon 3B Q5_0
3B-7B 123 / 171 Leo Hessianai Chat 7B Q5_K_M
3B-7B 147 / 171 Pandalyst V1.1 7B Q5_K_M
13B 6 / 276 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„) ReMM Mistral 13B Q5_K_M
13B 8 / 276 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Amethyst 13B Q5_K_M
13B 10 / 276 πŸ₯‡πŸŽ“πŸ†(⭐🧠🌢🌢) Amethyst Mistral 13B Q4_K_S
13B 13 / 276 πŸ₯‡πŸŽ“πŸ†(⭐🧠🌢🌢) MythoMakiseMerged 13B Q5_K_M
13B 60 / 276 πŸ₯ˆπŸŽ“(⭐🧠) Emerhyst 13B Q5_K_M
13B 68 / 276 πŸ₯‰πŸ†(β„πŸŒΆπŸŒΆ) Athena v3 13B Q5_K_M
13B 73 / 276 πŸ₯‰ Mistral PetroLimaRP v3 12B Q5_K_M
13B 78 / 276 πŸ₯‰ MegaMix S1 13B Q5_K_M
13B 85 / 276 πŸ₯‰(❄) GradientPutri MegaMix S1 13B Q5_K_S
13B 106 / 276 πŸ†(β„πŸŒΆπŸŒΆ) MegaMix T1 13B Q5_K_M
13B 107 / 276 Stheno 1.8 13B Q5_K_M
13B 126 / 276 MegaMix A1 13B Q5_K_M
13B 133 / 276 LLaMA-2 Mistral 13B Q5_K_M
13B 142 / 276 UltraLM v2.0 13B Q5_K_M
13B 199 / 276 Dans MythsteryModel 13B Q5_K_M
13B 247 / 276 Pandalyst V1.0 13B Q5_K_M
13B 259 / 276 Nexus Raven 13B Q5_K_M
20B-34B 4 / 66 πŸ₯‡πŸ†(β­πŸ§ β„) Emerhyst 20B Q5_K_M
Size Rank Model
3B-7B 34 / 155 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) Wizard Vicuna Uncensored 7B Q5_K_M
3B-7B 36 / 155 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) Airoboros GPT4 1.4.1 7B Q5_K_M
3B-7B 42 / 155 πŸ₯‰πŸ†(🌢🌢) Frank Uncensored 7B Q5_K_M
3B-7B 49 / 155 πŸ₯‰πŸ†(🌢🌢) WizardLM V1.0 Uncensored 7B Q5_K_M
3B-7B 52 / 155 πŸ₯‰ Airoboros L2 2.2.1 7B Q5_K_M
3B-7B 53 / 155 πŸ†(β„πŸŒΆπŸŒΆ) Guanaco 7B Q5_K_M
3B-7B 66 / 155 (🌢🌢) Xwin LM V0.1 7B Q5_K_M
3B-7B 112 / 155 ALMA Pretrain 7B Q5_K_M
3B-7B 113 / 155 (🌢🌢) WizardLM Uncensored 7B Q5_K_M
3B-7B 115 / 155 Vicuna CoT 7B Q5_K_M
3B-7B 123 / 155 Tulu 7B Q5_K_M
3B-7B 126 / 155 MAmmoTH 7B Q5_K_M
3B-7B 129 / 155 Gorilla 7B Q5_K_M
3B-7B 134 / 155 Based 7B Q5_K_M
3B-7B 149 / 155 WizardLM 7B Q5_K_M
3B-7B 151 / 155 TinyLLaMA Chat v0.2 1B Q5_K_M
13B 13 / 259 πŸ₯‡πŸŽ“πŸ†(⭐🧠🌢🌢) ReMM v2.2 13B Q5_K_M
13B 16 / 259 πŸ₯‡πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Athena v2 13B Q5_K_M
13B 33 / 259 πŸ₯ˆπŸ†(❄) ZettaPi 13B Q5_K_M
13B 34 / 259 πŸ₯ˆ(❄) Airoboros L2 2.2.1 13B Q5_K_M
13B 65 / 259 πŸ₯‰(❄) Stheno Chat 13B Q5_K_M
13B 66 / 259 πŸ₯‰(🌢🌢) Airoboros GPT4 1.4.1 13B Q5_K_M
13B 68 / 259 πŸ₯‰πŸŽ“(⭐🧠) Inkbot 4k 13B Q4_K_M
13B 93 / 259 MXLewdMini 13B Q5_K_M
13B 115 / 259 (🌢🌢) Frank Uncensored 13B Q5_K_M
13B 127 / 259 EverythingLM V3 16K 13B Q5_K_M
13B 134 / 259 (❄) Dans RetroRodeo 13B Q5_K_M
13B 150 / 259 ALMA Pretrain 13B Q5_K_M
13B 157 / 259 LLaMA SuperCOT 13B Q5_K_M
13B 158 / 259 (❄) MAmmoTH 13B Q5_K_M
13B 163 / 259 (❄) Chronos Hermes 13B Q5_K_M
13B 170 / 259 (🌢🌢) MythoBoros 13B Q5_K_M
13B 179 / 259 Guanaco 13B Q5_K_M
13B 181 / 259 Manticore 13B Q5_K_M
13B 183 / 259 MyhtoLogic 13B Q5_K_M
13B 185 / 259 Chronos WizardLM UC SCOT ST 13B Q5_K_M
13B 186 / 259 (❄) Chronos 13B Q5_K_M
13B 191 / 259 WizardLM V1.0 Uncensored 13B Q5_K_M
13B 193 / 259 Chimera 13B Q5_K_M
13B 197 / 259 CAMEL Combined Data 13B Q5_K_M
13B 198 / 259 Minotaur 13B Q5_K_M
13B 201 / 259 Tulu 13B Q5_K_M
13B 204 / 259 Hypermantis 13B Q5_K_M
13B 212 / 259 WizardMega 13B Q5_K_M
13B 215 / 259 Manticore Chat Pyg 13B Q5_K_M
13B 224 / 259 CAMEL Role Playing Data 13B Q5_K_M
13B 225 / 259 BlueMethod 13B Q5_K_M
13B 227 / 259 Ouroboros 13B Q5_K_M
13B 231 / 259 WizardLM V1.1 13B Q5_K_M
13B 233 / 259 WizardLM Uncensored 13B Q5_K_M
13B 240 / 259 WizardLM 1.0 13B Q5_K_M
13B 241 / 259 Wizard Vicuna Uncensored 13B Q5_K_M
13B 242 / 259 Based 13B Q5_K_M
13B 245 / 259 Wizard Vicuna 13B Q5_K_M
13B 246 / 259 Vicuna CoT 13B Q5_K_M
13B 254 / 259 Stable Vicuna 13B Q5_K_M
20B-34B 1 / 65 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) MLewd ReMM Chat 20B Q5_K_M
20B-34B 2 / 65 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) MLewd ReMM Chat Inverted 20B Q5_K_M
20B-34B 3 / 65 πŸ₯‡πŸŽ“πŸ†(⭐🧠🌢🌢) MXLewd 20B Q5_K_M
20B-34B 5 / 65 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) MM ReMM 20B Q5_K_M
20B-34B 11 / 65 πŸ₯ˆπŸ†(❄) Lazarus 30B Q4_K_M
20B-34B 12 / 65 πŸ₯ˆ LLaMA SuperCOT 30B Q4_K_M
20B-34B 14 / 65 πŸ₯ˆπŸŽ“(⭐🧠) SuperPlatty 30B Q4_K_M
20B-34B 19 / 65 πŸ₯‰(⭐🧠) GPlatty 30B Q4_K_M
20B-34B 22 / 65 Fin LLaMA 33B Q4_K_M
20B-34B 24 / 65 (🌢🌢) CAMEL Combined Data 33B Q4_K_M
20B-34B 26 / 65 (❄) Guanaco 33B Q4_K_M
20B-34B 28 / 65 πŸŽ“(⭐🧠) Platypus 30B Q4_K_M
20B-34B 29 / 65 VicUnlocked LoRA 30B Q4_K_M
20B-34B 33 / 65 (🌢🌢) Frank Uncensored 33B Q5_K_M
20B-34B 36 / 65 WizardLM Uncensored 30B Q5_K_M
20B-34B 40 / 65 Upstage LLaMA Instruct 30B Q5_K_M
20B-34B 42 / 65 Hippogriff 30B Q4_K_M
20B-34B 43 / 65 Tulu 30B Q5_K_M
20B-34B 46 / 65 WizardLM V1.0 Uncensored 33B Q4_K_M
20B-34B 47 / 65 Based 30B Q5_K_M
20B-34B 48 / 65 Wizard Vicuna Uncensored 30B Q5_K_M
20B-34B 51 / 65 Epsilon 30B Q4_K_M
20B-34B 63 / 65 BrainToast 20B Q5_K_M
20B-34B 64 / 65 WizardLM 30B Q4_K_M
Size Rank Model
3B-7B 16 / 143 πŸ₯ˆπŸŽ“(⭐🧠🌢🌢) Kuchiki 1.1 7B Q5_K_M
3B-7B 51 / 143 Saiga 2 7B Q5_K
3B-7B 117 / 143 WizardCoder Python V1.0 7B Q5_K_M
3B-7B 140 / 143 PY007 TinyLLaMA Chat v0.2 1B Q8_0
13B 23 / 230 πŸ₯‡πŸŽ“(β­πŸ§ β„) Magpie 13B Q5_K_M
13B 25 / 230 πŸ₯ˆ(β­πŸ§ β„) MLewd Chat 13B Q5_K_M
13B 26 / 230 πŸ₯ˆπŸ†(🌢🌢) Pygmaltion 2 SuperCOT weighted 13B Q5_K_M
13B 74 / 230 πŸ₯‰ Saiga 2 13B Q5_K
13B 140 / 230 OpenOrca STX 13B Q5_K_M
13B 143 / 230 CalliopeDS 13B Q5_K_M
13B 158 / 230 ChatAYT Lora Assamble Marcoroni 13B Q5_K_M
13B 223 / 230 Taiwan LLaMA v1.0 13B Q5_K_M
20B-34B 1 / 47 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) MLewd ReMM Chat 20B Q5_K_M
20B-34B 2 / 47 πŸ₯‡πŸŽ“(⭐🧠🌢🌢) MLewd ReMM Chat Inverted 20B Q5_K_M
20B-34B 17 / 47 Vigogne Instruct 33B Q4_K_M
20B-34B 27 / 47 Vicuna v1.3 33B Q4_K_M
20B-34B 37 / 47 Airoboros C 2.2 34B Q4_K_M
20B-34B 42 / 47 Synthia v1.2 34B Q4_K_M
  • 2023-09-15 V33
Size Rank Model
3B-7B 1 / 140 πŸ₯‡πŸŽ“πŸ†(⭐🧠🌢🌢) Kuchiki 7B Q5_K_M
3B-7B 26 / 140 πŸ₯ˆ(❄) LLaMA-2 Coder 7B Q5_K_M
3B-7B 53 / 140 LLaMA-2 LoRA Assemble 7B Q5_K_M
3B-7B 134 / 140 OpenLLaMA Odia 3B Q5_1
13B 1 / 225 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) MLewdBoros LRPSGPT 2Char 13B Q5_K_M
13B 20 / 225 πŸ₯‡πŸ†(⭐🧠🌢🌢) BerrySauce 13B Q5_K_M
13B 47 / 225 πŸ₯ˆ(❄) MLewd Chat 13B Q5_K_M
13B 48 / 225 πŸ₯ˆ(🌢🌢) Pygmalion 2 SuperCOT2 13B Q5_K_M
13B 62 / 225 πŸ₯‰(🌢🌢) ReMM v1 LRPSGPT 2Char 13B Q5_K_M
13B 100 / 225 LLaMA-2 Chat AYT 13B Q5_K_M
13B 116 / 225 LLaMA-2 LoRA Assemble 13B Q5_K_M
13B 225 / 225 Dans CreepingSenseOfDoom 13B Q5_K_M
20B-34B 20 / 41 (❄) Spicyboros C 2.2 34B Q4_K_M
Size Rank Model
3B-7B 40 / 137 πŸ₯‰ Airoboros 2.2 7B Q5_K_M
3B-7B 108 / 137 LLaMA-2 Silverlin. Verilog 7B Q4_K_M
13B 12 / 217 πŸ₯‡πŸŽ“πŸ†(⭐🧠🌢🌢) OpenRP 13B Q5_K_M
13B 18 / 217 πŸ₯‡πŸ†(⭐🧠🌢🌢) MLewdBoros SuperCOT 13B Q5_K_M
13B 23 / 217 πŸ₯ˆπŸŽ“(⭐🧠) ReMM v2 Kimiko v2 13B Q5_K_M
13B 32 / 217 πŸ₯ˆ(❄) Airoboros 2.2 13B Q5_K_M
13B 37 / 217 πŸ₯ˆ UndiMix v4 13B Q5_K_M
13B 47 / 217 πŸ₯ˆ(⭐🧠🌢🌢) OpenRP SuperCOT 13B Q5_K_M
13B 50 / 217 πŸ₯ˆ(🌢🌢) Unholy v1.1 13B Q5_K_M
20B-34B 26 / 41 Spicyboros C 2.2 34B Q4_K_M
Size Rank Model
3B-7B 35 / 135 πŸ₯‰ Marcoroni 7B Q5_K_M
3B-7B 104 / 135 Chinese LLaMA-2 7B Q5_K
13B 4 / 210 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Pygmalion 2 SuperCOT 13B Q5_K_M
13B 7 / 210 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) AppleSauce 13B Q5_K_M
13B 14 / 210 πŸ₯‡πŸŽ“(⭐🧠🌢🌢) ReMM v2.1 13B Q5_K_M
13B 19 / 210 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) Unholy v1 10L 13B Q5_K_M
13B 20 / 210 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) Unholy v1 13B Q5_K_M
13B 21 / 210 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) Unholy v1 12L 13B Q5_K_M
13B 35 / 210 πŸ₯ˆπŸŽ“(⭐🧠) Huginn v1.2 13B Q5_K_M
13B 55 / 210 πŸ₯‰πŸŽ“(⭐🧠) LlongOrca 16K 13B Q5_K_M
13B 62 / 210 πŸ₯‰πŸ†(β„πŸŒΆπŸŒΆ) Huginn v3 13B Q5_K_M
13B 84 / 210 πŸŽ“(⭐🧠) LLaMA-2 Ensemble v6 13B Q5_K_M
13B 105 / 210 Marcoroni 13B Q5_K_M
13B 125 / 210 LLaMA-2 Ensemble v5 13B Q5_K_M
13B 132 / 210 OpenOrca Platypus 2 13B Q5_K_M
13B 154 / 210 JanniesBasedLigma 13B Q5_K_M
13B 155 / 210 Barcenas 13B Q5_K_M
13B 157 / 210 Tsukasa Limarp 13B Q5_K_M
13B 174 / 210 Chinese Alpaca 2 13B Q5_K
13B 183 / 210 Chinese LLaMA-2 13B Q5_K
Size Rank Model
3B-7B 20 / 134 πŸ₯ˆ Medusa 1.1 7B Q5_K_M
3B-7B 30 / 134 πŸ₯ˆ LosslessMegaCoder Mini 7B Q5_K_M
3B-7B 37 / 134 πŸ₯‰πŸŽ“(⭐🧠) LLaMA-2 PeanutButter v19 R8 7B Q5_K_M
3B-7B 38 / 134 πŸ₯‰(⭐🧠) Befenghuang Vigogne 2 Chat 7B Q5_K_S
3B-7B 41 / 134 πŸ₯‰(❄) Ganchengguang Yoko Japanse v0 7B Q5_K_S
3B-7B 42 / 134 πŸ₯‰ LlongOrca 16K 7B Q5_K_M
3B-7B 45 / 134 πŸ₯‰(🌢🌢) Spicyboros 2.2 7B Q5_K_M
3B-7B 62 / 134 (🌢🌢) Airoboros GPT4 2.0 LLaMA-2 7B Q5_K_M
3B-7B 93 / 134 (🌢🌢) Chinese Alpaca 2 7B Q5_K_S
3B-7B 97 / 134 Guanaco Uncensored 7B Q5_K_M
3B-7B 98 / 134 (❄) Mamba GPT v4 3B Q5_1
3B-7B 102 / 134 (🌢🌢) Airoboros GPT4 m2.0 LLaMA-2 7B Q5_K_M
13B 2 / 195 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) MLewdBoros 13B Q5_K_M
13B 5 / 195 πŸ₯‡πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Spicyboros 2.2_2 13B Q5_K_M
13B 6 / 195 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) ReMM v2 13B Q5_K_M
13B 12 / 195 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) MLewd v2-2 13B Q5_K_M
13B 14 / 195 πŸ₯‡πŸŽ“(⭐🧠🌢🌢) ReMM 0.65 SLERP 13B Q5_K_M
13B 15 / 195 πŸ₯‡πŸŽ“(β­πŸ§ β„) Stheno 1.3 13B Q5_K_M
13B 18 / 195 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) Teknium OpenHermes 13B Q5_K_S
13B 19 / 195 πŸ₯‡(⭐🧠🌢🌢) ReMM v2 Variant 13B Q5_K_M
13B 23 / 195 πŸ₯ˆ(β­πŸ§ β„) Spicyboros 2.2 13B Q4_K_M
13B 24 / 195 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) Stheno Inverted 1.2 13B Q5_K_M
13B 30 / 195 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) Holomax 13B Q5_K_M
13B 57 / 195 πŸ₯‰(❄) Guanaco Uncensored 13B Q5_K_M
13B 60 / 195 πŸ₯‰ Chronos Hermes v2 13B Q5_K_M
13B 64 / 195 πŸ₯‰πŸ†(β„πŸŒΆπŸŒΆ) Airoboros 2.1 YaRN 64K 13B Q5_K_M
13B 72 / 195 (🌢🌢) Airoboros GPT4 2.0 LLaMA-2 13B Q5_K_M
13B 91 / 195 Nous Hermes LLaMA-2 13B Q5_K_M
13B 125 / 195 Stheno 1.2 13B Q5_K_M
13B 128 / 195 (🌢🌢) Airoboros GPT4 m2.0 LLaMA-2 13B Q5_K_M
13B 180 / 195 Based 13B Q5_K_M
13B 187 / 195 Taiwan LLaMA v1.0 13B Q5_K_M
20B-34B 9 / 40 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) COTHuginn 4.5 19B Q5_K_M
20B-34B 20 / 40 πŸ†(β„πŸŒΆπŸŒΆ) MythoMax 33B Q4_K_M
20B-34B 28 / 40 Based 30B Q4_K_M
  • 2023-09-08 V29
    • The ERP Scores (ERP Score and ERP Variety Score) were completely reworked: the count of lewd words is now taken relative to the total number of words in a response, and the ERP Score is now the average of these ratios instead of the median. The ERP Variety Score was added, which tries to capture the creative erotic lewd word knowledge of a model. The ERP Rank is computed by slightly biasing towards the new ERP Variety Score.
    • Separate ranks for the ALC-IQ and the ERP Scores were introduced. The resulting model rank is now determined by a weighted sum of the ALC-IQ Rank and the ERP Rank, slightly biased towards the ALC-IQ Rank.
    • GGUF results now replace the GGML results of a model. Please note that this can sometimes result in a model gaining or losing ranks in the table. This is sadly just the nature of floating point quantization; it just shows how similar these models and fine tunes are at the core and how sensitive this benchmark is.
    • New symbols were added to signal good ALC-IQ ranks (🎓) and good ERP ranks (🏆). The medals (🥇, 🥈 and 🥉) are assigned to multiple ranks, because this ranking can't ultimately tell you which model is actually the best for you. That is not just because there are many known flaws in this benchmark, but also because a large part of your role play experience will depend on your expectations, the character card, the prompt annotations and the sampler settings you use.
    • And the following models were added on top of that to the table:
    • Benchmark Results as CSV - Timestamp 20230908_203426
      Size Rank Model
      3B-7B 5 / 123 πŸ₯‡πŸŽ“πŸ†(⭐🧠🌢🌢) Zarablend 7B Q5_K_M
      3B-7B 9 / 123 πŸ₯‡πŸŽ“(β­πŸ§ β„) Zarafusionex 1.1 7B Q5_K_M
      3B-7B 10 / 123 πŸ₯‡πŸŽ“(⭐🧠) Hermes LimaRP 7B Q5_K_M
      3B-7B 12 / 123 πŸ₯‡(⭐🧠) Krakowiak 7B Q4_K_M
      3B-7B 17 / 123 πŸ₯ˆ(🌢🌢) Zarablend MX 7B Q5_K_M
      3B-7B 21 / 123 πŸ₯ˆ Typly Pigeon 7B Q4_K_M
      3B-7B 46 / 123 Kimiko 7B Q5_K_M
      3B-7B 51 / 123 (🌢🌢) Luna AI LLaMA-2 Uncensored 7B Q5_K_M
      3B-7B 58 / 123 Pygmalion 2 7B Q5_K_M
      3B-7B 71 / 123 StableBeluga 7B Q5_K_M
      13B 3 / 177 πŸ₯‡πŸŽ“πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) Slerpeno 13B Q5_K_M
      13B 4 / 177 πŸ₯‡πŸ†(β­πŸ§ β„πŸŒΆπŸŒΆ) MLewd V2-1 015 13B Q4_K_S
      13B 10 / 177 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) MLewd V2-1 13B Q5_K_M
      13B 11 / 177 πŸ₯‡πŸŽ“(β­πŸ§ β„) UndiMix v3 13B Q5_K_M
      13B 13 / 177 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) MLewd V2-1 050 13B Q4_K_S
      13B 15 / 177 πŸ₯‡πŸ†(β„πŸŒΆπŸŒΆ) MLewd v2 13B Q5_K_M
      13B 20 / 177 πŸ₯ˆπŸ†(β„πŸŒΆπŸŒΆ) ReMM Lion 13B Q5_K_M
      13B 30 / 177 πŸ₯ˆ(⭐🧠) StableBeluga 13B Q5_K_M
      13B 32 / 177 πŸ₯ˆ(❄) Pygmalion 2 13B Q5_K_M
      13B 38 / 177 πŸ₯ˆ(🌢🌢) Mythalion 13B Q5_K_M
      13B 41 / 177 πŸ₯ˆ(🌢🌢) Fireflx v1.2 13B Q5_K_M
      13B 45 / 177 πŸ₯‰πŸ†(β„πŸŒΆπŸŒΆ) ReMM S Kimiko v2 13B Q5_K_M
      13B 60 / 177 (🌢🌢) Thorns 13B Q5_K_M
      13B 70 / 177 (⭐🧠) TerraMix 16K 13B Q5_K_M
      13B 120 / 177 YuLan Chat 2 13B Q5_K_M
      20B-34B 2 / 37 πŸ₯‡πŸ†(β­πŸ§ β„) Huginn 5 Prototype 19B Q4_K_S
      20B-34B 28 / 37 Airoboros C 2.1 34B Q5_K_M
  • 2023-09-05 V28
    • Changes: Removed the (L2) marker.
    • There are still GGML results in my benchmark; I will keep them for now until GGML is eventually phased out completely.
    • Marking broken links in the table with "(link broken)"
Size Rank IQ/ERP GGML Model
3B-7B 25 / 125 🧠 / πŸ‘Œ Tsukasa Limarp 7B (gguf) Q5_K_M
3B-7B 26 / 125 🧠 / πŸ‘Œ ELYZA Jp LLaMA-2 7B (gguf) Q5_K_M
3B-7B 27 / 125 ⭐🧠 / 🧊 MedLLama 7B (gguf) Q5_K_M
3B-7B 28 / 125 ⭐🧠 / 🧊 LLaMA-2 7B (gguf) Q5_K_M
3B-7B 54 / 125 β­πŸ“– / πŸ‘Œ ELYZA Jp LLaMA-2 Instruct 7B (gguf) Q5_K_M
3B-7B 57 / 125 πŸ“– / πŸ‘Œ LLaMA-2 Galleon 7B (gguf) Q5_K_M
3B-7B 60 / 125 πŸ“– / 🧊 Tsukasa 7B (gguf) Q5_K_M
3B-7B 62 / 125 πŸ“– / 🧊 Vicuna v1.5 16K 7B (gguf) Q5_K_M
3B-7B 101 / 125 ⭐πŸ€ͺ / 🌢 Vicuna v1.5 7B (gguf) Q5_K_M
13B 2 / 170 ⭐🧠 / 🌢🌢 MythoMix 13B (gguf) Q5_K_M
13B 6 / 170 ⭐🧠 / 🌢🌢 MythoMax 13B (gguf) Q5_K_M
13B 7 / 170 ⭐🧠 / 🌢🌢 ReMM SLERP 13B (gguf) Q5_K_M
13B 14 / 170 🧠 / 🌢🌢 MythoLogic 13B (gguf) Q5_K_M
13B 37 / 170 🧠 / 🧊 WizardLM v1.2 13B (gguf) Q4_0
13B 38 / 170 🧠 / 🧊 Speechless LLaMA-2 13B (gguf) Q5_K_M
13B 42 / 170 🧠 / 🧊 Speechless Hermes Orca Plat WizLM 13B (gguf) Q5_K_M
13B 48 / 170 πŸ“– / 🌢🌢 ReMM PIPPA 13B (gguf) Q5_K_M
13B 68 / 170 πŸ“– / πŸ‘Œ OpenBuddy LLaMA-2 v11.1 13B (gguf) Q5_K_M
13B 71 / 170 πŸ“– / πŸ‘Œ Tsukasa Limarp 16K 13B (gguf) Q5_K_M
13B 78 / 170 β­πŸ“– / 🧊 LLaMA-2 13B (gguf) Q5_K_M
13B 95 / 170 πŸ€” / 🌢🌢 MLewd v1-7 TRY2 13B (gguf) Q5_K_M
13B 97 / 170 πŸ€” / 🌢🌢 MLewd 13B (gguf) Q5_K_M
13B 101 / 170 β­πŸ€” / 🌢 Vicuna v1.5 16K 13B (gguf) Q5_K_M
13B 109 / 170 πŸ€” / πŸ‘Œ Vicuna v1.5 13B (gguf) Q5_K_M
13B 145 / 170 ⭐πŸ€ͺ / πŸ‘Œ Asclepius 13B (gguf) Q5_K_M
13B 157 / 170 πŸ€ͺ / 🧊 WizardLM WizardCoder Python V1.0 13B (gguf) Q4_K_S
Size Rank IQ/ERP GGML Model
3B-7B 29 / 114 🧠 / 🧊 MedLLaMA-2 Chat 7B (GGUF) Q5_K_S
3B-7B 30 / 114 β­πŸ“– / 🌢🌢 AstraMix (L2) 7B (GGUF) Q5_K_M
3B-7B 69 / 114 πŸ€” / 🌢 OpenLLaMA v2 7B (GGUF) Q5_K_M
3B-7B 74 / 114 β­πŸ€” / πŸ‘Œ Nous Yarn 64K (L2) 7B (GGUF) Q5_K_M
3B-7B 76 / 114 πŸ€” / πŸ‘Œ Nous Yarn 128K (L2) 7B (GGUF) Q5_K_M
3B-7B 86 / 114 ⭐πŸ€ͺ / 🌢🌢 OpenLLaMA 7B (GGUF) Q5_K_M
3B-7B 99 / 114 πŸ€ͺ / πŸ‘Œ OpenLLaMA 3B (GGUF) Q5_1
13B 9 / 156 🧠 / 🌢🌢 UndiMix v2 (L2) 13B (GGUF) Q5_K_M
13B 11 / 156 🧠 / 🌢🌢 UndiMix v1 (L2) 13B (GGUF) Q5_K_M
13B 12 / 156 🧠 / 🌢🌢 ReMM (L2) 13B (GGUF) Q5_K_M
13B 39 / 156 🧠 / 🧊 LLaMA-2 Chat Limarp v2 13B (GGUF) Q5_K_M
13B 45 / 156 πŸ“– / 🌢🌢 Stheno Inverted (L2) 13B (GGUF) Q5_K_M
13B 65 / 156 πŸ“– / πŸ‘Œ LLaMA-2 LangChain Chat 13B (GGUF) Q5_K_S
13B 67 / 156 πŸ“– / πŸ‘Œ Sentdex WSB GPT 13B (GGUF) Q5_K_M
13B 82 / 156 β­πŸ€” / 🌢🌢 MLewd v1 (L2) 13B (GGUF) Q5_K_M
13B 86 / 156 β­πŸ€” / 🌢🌢 Stheno (L2) 13B (GGUF) Q5_K_M
13B 95 / 156 πŸ€” / 🌢 Nous Yarn 64K (L2) 13B (GGUF) Q5_K_M
13B 99 / 156 πŸ€” / 🌢 Nous Yarn 128K (L2) 13B (GGUF) Q5_K_M
13B 114 / 156 πŸ€” / 🧊 LoKuS 13B (GGUF) Q5_K_M
13B 140 / 156 πŸ€ͺ / πŸ‘Œ OpenLLaMA 13B (GGUF) Q5_K_M
13B 151 / 156 πŸ€ͺ / 🧊 EverythingLM V2 16K 13B (GGUF) Q4_K_S
20B-33B 4 / 35 ⭐🧠 / 🌢 Airoboros 2.1 33B (GGUF) Q4_K_M
  • 2023-08-31 V26
Size Rank IQ/ERP GGML Model
3B-7B 46 / 107 πŸ“– / πŸ‘Œ LLaMA-2 Instruct 32K 7B (GGUF) Q5_K_M
13B 1 / 142 ⭐🧠 / 🌢🌢 Athena v1 (L2) 13B (GGUF) Q5_K_M
13B 5 / 142 ⭐🧠 / 🌢🌢 MythoMax Kimiko V2 (L2) 13B (GGUF) Q5_K_M
13B 17 / 142 🧠 / 🌢 Kimiko V2 (L2) 13B (GGUF) Q5_K_M
13B 62 / 142 πŸ“– / πŸ‘Œ OpenOrca Platypus 2 (L2) 13B (GGUF) Q4_K_M
13B 67 / 142 πŸ“– / 🧊 Luban (L2) 13B (GGUF) Q5_K_M
13B 94 / 142 πŸ€” / πŸ‘Œ CodeLLaMA Oasst SFT V10 13B (GGUF) Q5_K_M
20B-33B 32 / 34 ⭐πŸ€ͺ / 🧊 Airoboros C 2.1b (L2) 34B (GGUF) Q5_K_M
  • 2023-08-31 V25
Size Rank IQ/ERP GGML Model
3B-7B 6 / 106 🧠 / 🌢🌢 Zarafusionex 1.1 (L2) 7B (GGUF) Q5_K_M
3B-7B 34 / 106 πŸ“– / 🌢 Airoboros 2.1 (L2) 7B (GGUF) Q5_K_M
13B 1 / 136 ⭐🧠 / 🌢🌢 Airoboros 2.1 (L2) 13B (GGUF) Q5_K_M
13B 44 / 136 β­πŸ“– / 🌢 Mythical Destroyer V2 (L2) 13B (GGUF) Q5_K_M
13B 72 / 136 β­πŸ€” / 🌢🌢 Huginn v4.5 (L2) 13B (GGUF) Q5_K_M
13B 73 / 136 β­πŸ€” / 🌢🌢 Huginn v4 (L2) 13B (GGUF) Q5_K_M
  • 2023-08-30 V24
Size Rank IQ/ERP GGML Model
3B-7B 98 / 104 πŸ€ͺ / 🧊 Open Cabrita 3B (GGUF) Q5_1
3B-7B 8 / 104 🧠 / 🌢🌢 Zaraxls (L2) 7B (GGUF) Q5_K_M
3B-7B 13 / 104 ⭐🧠 / 🌢 Zarafusionex 1.2 (L2) 7B (GGUF) Q5_K_M
3B-7B 31 / 104 β­πŸ“– / 🌢 Tulpar Limarp (L2) 7B (GGUF) Q5_K_M
3B-7B 44 / 104 πŸ“– / πŸ‘Œ Tulpar v0 (L2) 7B (GGUF) Q4_0
3B-7B 50 / 104 β­πŸ“– / 🧊 LLaMA-2 32K 7B (GGUF) Q5_K_M
3B-7B 66 / 104 πŸ€” / 🌢 LLaMA-2 KO Chat 7B (GGUF) Q5_1
13B 2 / 132 ⭐🧠 / 🌢🌢 MythoMax Kimiko Mix (L2) 13B (GGUF) Q5_K_M
13B 33 / 132 🧠 / 🧊 Samantha 1.11 (L2) 13B (GGUF) Q5_K_M
13B 40 / 132 πŸ“– / 🌢🌢 Nous Hermes (L2) 13B (GGUF) Q5_K_M
13B 51 / 132 β­πŸ“– / πŸ‘Œ Mythical Destroyer (L2) 13B (GGUF) Q5_K_M
13B 59 / 132 β­πŸ“– / 🧊 Athena-tmp (L2) 13B (GGUF) Q5_K_M
13B 66 / 132 πŸ“– / 🧊 LLaMA-2 Chat 13B (GGUF) Q3_K_S
13B 128 / 132 πŸ€ͺ / 🧊 Vicuna v1.5 16K 13B (GGUF) Q5_K_M
20B-33B 20 / 33 πŸ€” / 🌢🌢 Huginn Prototype 22B (GGUF) Q4_K_M
20B-33B 32 / 33 πŸ€ͺ / 🧊 Samantha 1.11 CodeLLaMA (L2) 34B (GGUF) Q4_K_M
  • 2023-08-28 V23
Size Rank IQ/ERP GGML Model
3B-7B 89 / 97 πŸ€ͺ / 🧊 Orca Mini 3B (GGUF) Q4_0
3B-7B 2 / 97 ⭐🧠 / 🌢🌢 Zarablend 1.1 (L2) 7B (GGUF) Q5_K_M
3B-7B 59 / 97 πŸ€” / 🌢 CodeLLaMA (L2) 7B (GGUF) Q5_K_M
3B-7B 64 / 97 πŸ€” / πŸ‘Œ CodeLLaMA Instruct (L2) 7B (GGUF) Q5_K_M
3B-7B 81 / 97 ⭐πŸ€ͺ / πŸ‘Œ CodeLLaMA Python (L2) 7B (GGUF) Q5_K_M
13B 3 / 126 ⭐🧠 / 🌢🌢 Airoboros Creative lmoe 13B (GGUF) Q5_K_M
13B 44 / 126 πŸ“– / 🌢 Nous Hermes (L2) 13B (GGUF) Q5_K_S
13B 80 / 126 πŸ€” / πŸ‘Œ WizardLM 1.0 Uncensored (L2) 13B (GGUF) Q5_K_M
13B 94 / 126 πŸ€” / 🧊 CodeLLaMA Instruct (L2) 13B (GGUF) Q5_K_M
13B 112 / 126 πŸ€ͺ / πŸ‘Œ CodeLLaMA Python (L2) 13B (GGUF) Q5_K_M
13B 114 / 126 πŸ€ͺ / 🧊 CodeLLaMA (L2) 13B (GGUF) Q5_K_M
20B-33B 20 / 31 πŸ€” / 🌢 CodeLLaMA (L2) 34B (GGUF) Q4_K_M
20B-33B 22 / 31 πŸ€” / πŸ‘Œ CodeLLaMA Python (L2) 34B (GGUF) Q4_K_M
20B-33B 27 / 31 πŸ€ͺ / πŸ‘Œ Phind CodeLLaMA v1 (L2) 34B (GGUF) Q4_K_S
20B-33B 29 / 31 ⭐πŸ€ͺ / 🧊 CodeLLaMA Instruct (L2) 34B (GGUF) Q4_K_M
20B-33B 30 / 31 πŸ€ͺ / 🧊 Airoboros C 2.1 (L2) 34B (GGUF) Q4_K_M
  • 2023-08-26 V22
Size Rank IQ/ERP GGML Model
3B-7B 78 / 92 ⭐πŸ€ͺ / πŸ‘Œ Marx V2 3B (GGUF) Q4_1
3B-7B 2 / 92 ⭐🧠 / 🌢🌢 Zarafusionex 1.1 (L2) 7B Q5_K_M
3B-7B 12 / 92 🧠 / 🌢 Zaraxe (L2) 7B Q5_K_M
3B-7B 15 / 92 ⭐🧠 / πŸ‘Œ LLaMA 2 Monika V0.3B (L2) 7B Q5_1
13B 8 / 120 ⭐🧠 / 🌢 MythoMaxKurisu (L2) 13B Q5_K_M
13B 26 / 120 ⭐🧠 / 🧊 PuddleJumper (L2) 13B (GGUF) Q5_K_M
13B 28 / 120 🧠 / 🧊 Iubaris V3 (L2) 13B Q5_K_M
20B-33B 15 / 26 πŸ€” / 🌢🌢 LLaMA 2 Ari03 (L2) 28B Q5_1
  • 2023-08-22 V21
Size Rank IQ/ERP GGML Model
3B-7B 71 / 88 ⭐πŸ€ͺ / 🌢 Griffin (GGUF) 3B Q4_1
3B-7B 72 / 88 πŸ€ͺ / 🌢 Puma 3B Q5_1
3B-7B 75 / 88 ⭐πŸ€ͺ / πŸ‘Œ OpenLLaMA v2 (GGUF) 3B Q5_0
3B-7B 5 / 88 🧠 / 🌢🌢 Zarablend M (L2) 7B Q5_K_M
3B-7B 6 / 88 🧠 / 🌢🌢 Zarablendex VQ (L2) 7B Q5_K_M
3B-7B 8 / 88 🧠 / 🌢🌢 Zarablend MX (L2) 7B Q5_K_M
3B-7B 87 / 88 πŸ€ͺ / 🧊 LongChat v1.5 32K 7B Q5_K_M
13B 43 / 117 πŸ“– / 🌢 Synthia (L2) 13B Q5_K_M
13B 45 / 117 πŸ“– / 🌢 Chronorctypus Limarobormes (L2) 13B Q5_K_M
13B 115 / 117 πŸ€ͺ / 🧊 LlongOrca 16K 13B Q5_K_M
Size Rank IQ/ERP GGML Model
3B-7B 63 / 82 ⭐πŸ€ͺ / 🌢🌢 Marx 3B Q5_1
3B-7B 71 / 82 πŸ€ͺ / πŸ‘Œ Griffin 3B Q4_1
3B-7B 4 / 82 ⭐🧠 / 🌢🌢 Zarafusionix (L2) 7B Q5_K_M
3B-7B 5 / 82 🧠 / 🌢🌢 Zarafusionex (L2) 7B Q5_K_M
3B-7B 17 / 82 ⭐🧠 / 🧊 LLaMA 2 Delphi v0.2e 7B Q5_1
13B 57 / 114 πŸ“– / 🧊 Trurl 2 Polish Instruct 13B Q5_1
  • 2023-08-17 V19
Rank IQ/ERP GGML Model
47 / 215 🧠 / πŸ‘Œ LosslessMegaCoder Mini (L2) 13B Q5_K_M
56 / 215 β­πŸ“– / 🌢🌢 Zarablend (L2) 7B Q5_K_M
62 / 215 πŸ“– / 🌢🌢 Carl 33B Q4_K_M
80 / 215 πŸ“– / 🌢 Zaramix (L2) 7B Q5_K_M
93 / 215 πŸ“– / πŸ‘Œ Chinese LLaMA-2 7B Q5_1
97 / 215 β­πŸ“– / 🧊 Trurl 2 Polish (L2) 13B Q5_1
105 / 215 πŸ“– / 🧊 Trurl 2 Polish (L2) 7B Q5_1
106 / 215 πŸ“– / 🧊 Scarlett 33B Q4_K_M
112 / 215 β­πŸ€” / 🌢🌢 Daydreamer v3 22B Q5_K_M
169 / 215 ⭐πŸ€ͺ / 🌢 Carl 13B Q5_K_M
177 / 215 πŸ€ͺ / 🌢 EverythingLM 3B Q5_1
184 / 215 πŸ€ͺ / πŸ‘Œ Komt LLaMA-2 Chat (L2) 7B Q5_K_M
189 / 215 ⭐πŸ€ͺ / 🧊 Scarlett 13B Q5_K_M
192 / 215 πŸ€ͺ / 🧊 Scarlett 7B Q5_K_M
203 / 215 πŸ€ͺ / 🧊 Komt LLaMA-2 (L2) 13B Q5_K_M
  • 2023-08-15 V18
Rank IQ/ERP GGML Model
6 / 200 🧠 / 🌢🌢 Airochronos 33B Q5_K_M
33 / 200 🧠 / 🌢 h2oGPT (L2) 13B Q5_K_M
62 / 200 πŸ“– / 🌢🌢 Chronos 33B Q5_K_M
67 / 200 πŸ“– / 🌢🌢 WizardMath V1.0 (L2) 13B Q5_K_M
80 / 200 πŸ“– / πŸ‘Œ OpenOrcaxOpenChat 2 LangChain Chat 13B Q5_1
90 / 200 β­πŸ“– / 🧊 Codeup Alpha (L2) 13B Q5_K_M
91 / 200 β­πŸ“– / 🧊 h2oGPT Chat (L2) 13B Q5_K_M
101 / 200 β­πŸ€” / 🌢🌢 Bacchus (L2*) 22B Q4_0
114 / 200 β­πŸ€” / 🌢 LLaMA 2 DayDreamer V1 22B Q5_K_M
133 / 200 β­πŸ€” / πŸ‘Œ WizardMath V1.0 7B Q5_K_M
179 / 200 πŸ€ͺ / 🧊 Tulu Uncensored TV Alpaca (L2) 7B Q5_1
184 / 200 πŸ€ͺ / 🧊 Taiwan LLaMA V1.0 (L2) 13B Q5_K_M
194 / 200 πŸ€ͺ / 🧊 LlongOrca 16K 7B Q5_K_M
196 / 200 πŸ€ͺ / 🧊 EverythingLM 16K (L2) 13B Q5_K_M
  • 2023-08-14 V17
Rank IQ/ERP GGML Model
13 / 188 🧠 / 🌢🌢 Holomax (L2) 13B Q5_K_M
15 / 188 ⭐🧠 / 🌢 Platypus 2 (L2) 70B Q2_K
47 / 188 🧠 / 🧊 OpenOrca Platypus 2 (L2) 13B Q5_K_M
55 / 188 πŸ“– / 🌢🌢 Kuchiki (L2) 7B Q5_K_M
56 / 188 πŸ“– / 🌢🌢 Huginn v1.3 (L2) 13B Q5_K_M
119 / 188 β­πŸ€” / πŸ‘Œ MythoChizuru Mini (L2) 7B Q4_K_M
185 / 188 πŸ€ͺ / 🧊 Chatxu (L2?) 13B Q4_0
  • 2023-08-12 V16
Rank IQ/ERP GGML Model
27 / 181 🧠 / 🌢 Blind Test Janus 13B Q5_1
61 / 181 πŸ“– / 🌢🌢 Manticore SuperHOT 8K 13B Q5_K_M
90 / 181 πŸ“– / 🧊 Manticore Chat Pyg 13B Q5_K_M
91 / 181 πŸ“– / 🧊 Manticore Chat Pyg SuperHOT 8K 13B Q5_K_M
104 / 181 πŸ€” / 🌢 LLongMA-2 Storysummarizer 7B Q5_K_M
114 / 181 β­πŸ€” / πŸ‘Œ Manticore 13B Q5_K_M
115 / 181 β­πŸ€” / πŸ‘Œ LLaMA-2 Instruct Uncensored 13B Q5_0
120 / 181 πŸ€” / πŸ‘Œ Heegyu LIMA2 13B Q5_1
126 / 181 πŸ€” / πŸ‘Œ Pygmalion Vicuna 7B Q5_K_M
130 / 181 β­πŸ€” / 🧊 Manticore Chat Pyg Guanaco 13B Q4_K_M
132 / 181 πŸ€” / 🧊 StableBeluga Samantha V3 7B Q4_0
  • 2023-08-11 V15
Rank IQ/ERP GGML Model
1 / 170 ⭐🧠 / 🌢🌢 MythoMax (L2) 13B Q5_K_M
8 / 170 🧠 / 🌢🌢 LLaMA-2 Chat Uncensored 13B Q5_1
31 / 170 🧠 / πŸ‘Œ Orca Mini v3 (L2) 13B Q5_K_M
33 / 170 🧠 / πŸ‘Œ Stable Platypus 2 (L2) 13B Q5_K_M
40 / 170 🧠 / 🧊 Enterredaas 33B Q4_1
42 / 170 🧠 / 🧊 Spring Dragon 13B Q5_K_M
46 / 170 β­πŸ“– / 🌢🌢 Camel Platypus 2 (L2) 13B Q5_K_M
55 / 170 πŸ“– / 🌢🌢 LLongMA-2 Storysummarizer 13B Q5_K_M
64 / 170 πŸ“– / 🌢 Epsilon 30B Q4_0
68 / 170 β­πŸ“– / πŸ‘Œ Platypus 2 (L2) 13B Q5_K_M
84 / 170 πŸ“– / 🧊 Photolens LLaMA 2 Langchain Chat (L2) 7B Q5_1
99 / 170 β­πŸ€” / 🌢 Orca Mini v3 (L2) 7B Q5_K_M
122 / 170 β­πŸ€” / 🧊 Merak v2 (L2) 7B Q5_K_M
138 / 170 πŸ€ͺ / 🌢 Petra Instruct 13B Q5_K_M
140 / 170 πŸ€ͺ / 🌢 Alpachino Baichuan Instruction 7B Q5_0
146 / 170 πŸ€ͺ / πŸ‘Œ AlpacaCielo 2 8K (L2) 7B Q5_K_M
147 / 170 πŸ€ͺ / πŸ‘Œ OpenBuddy OpenLLaMA v10 3B Q5_0
152 / 170 πŸ€ͺ / 🧊 Dolphin LLaMA-2 (L2) 7B Q5_K_M
164 / 170 πŸ€ͺ / 🧊 LLongMA 2 13B Q5_1
166 / 170 πŸ€ͺ / 🧊 WizardVicuna Uncens Instr PL 3B Q5_1
  • 2023-08-10 V14
Rank IQ/ERP GGML Model
6 / 151 🧠 / 🌢🌢 Huginn v1.2 13B Q5_K_M
51 / 151 πŸ“– / 🌢🌢 Holodeck 1 (L2) 13B Q5_K
57 / 151 πŸ“– / 🌢 Dans QuestionableCocktail 2 (L2) 13B Q4_1
60 / 151 πŸ§ πŸ“– / πŸ‘Œ Dans PersonalityEngine 30B Q4_1
106 / 151 πŸ€” / πŸ‘Œ Dans PersonalityEngine 13B Q5_1
  • 2023-08-09 V13
    • Added highlight symbols to point out the really good models of an ALC-IQ class.
Rank IQ/ERP GGML Model
23 / 146 🧠 / 🌢 Firefly v1.2 (L2) 13B Q5_K_M
36 / 146 🧠 / 🧊 Spring Dragon (L2) 13B Q5_K_M
137 / 146 πŸ€ͺ / 🧊 Vicuna v1.5 16K 13B Q5_K_M
  • 2023-08-09 V12
    • Important change: Only one entry per model. Only the highest quantization is listed; lower quantizations are no longer listed, so that each model occupies only one place in the ranking. For best results, always choose the bigger quantization. It did not make sense to choose a Q4_0 over a Q5_1, or a Q4_K_M over a Q5_K_M, just because it let out one more lewd word in the ERP score.
    • Important change: The "spices" are grouped now too, and models are still ordered by their ALC-IQ within their "spice class".
    • New models tested and added:
Rank IQ/ERP GGML Model
1 / 143 🧠 / 🌢🌢 MythoMix (L2) 13B Q5_K_M
8 / 143 🧠 / 🌢🌢 LLaMA-2 BlockTri Frankenstein 22B Q4_K_M
11 / 143 🧠 / 🌢 Huginn 13B Q5_K_M
18 / 143 🧠 / 🌢 LLaMA SuperCOT 30B Q4_K_M
38 / 143 πŸ“– / 🌢🌢 Hermes LimaRP 13B Q4_K_M
42 / 143 πŸ“– / 🌢🌢 Crestfall FrankenMon (L2) 13B Q5_K_M
49 / 143 πŸ“– / 🌢🌢 Nous Hermes Writer (L2) 13B Q4_K_S
52 / 143 πŸ“– / 🌢 Frankensteins Monster 13B Q4_K_S
62 / 143 πŸ“– / πŸ‘Œ LLaMA-2 Guanaco 7B Q5_1
65 / 143 πŸ“– / 🧊 LLaMA-2 7B Q8_0
89 / 143 πŸ€” / πŸ‘Œ Luna AI (L2) 7B Q8_0
93 / 143 πŸ€” / πŸ‘Œ BlueMethod 13B Q5_1
94 / 143 πŸ€” / πŸ‘Œ Vicuna 1.3 German 13B Q5_K_M
96 / 143 πŸ€” / πŸ‘Œ LLaMA 13B Q5_K_M
107 / 143 πŸ€” / 🧊 Dolphin LLaMA 13B Q5_K_M
111 / 143 πŸ€ͺ / 🌢🌢 Airoboros GPT4 1.3 7B Q4_K_M
122 / 143 πŸ€ͺ / πŸ‘Œ Guanaco 7B Q4_K_M
129 / 143 πŸ€ͺ / 🧊 Based 7B Q4_K_M
138 / 143 πŸ€ͺ / 🧊 Airoboros GPT4 1.4 SuperHOT 8K 33B Q4_K_M
139 / 143 πŸ€ͺ / 🧊 LLongMA 2 7B Q5_1
142 / 143 πŸ€ͺ / 🧊 LLaMA-2 32K 7B Q5_1
143 / 143 πŸ€ͺ / 🧊 ToolLLaMA 7B Q5_1
  • 2023-08-06 V11
Rank IQ/ERP GGML Model
21 / 154 🧠 / 🌢 Redmond Puffing v1.3 (L2) 13B Q5_K_M
39 / 154 🧠 / 🧊 LLaMA-2 Chinese Chat 13B Q5_1
149 / 154 πŸ€ͺ / 🧊 LLaMA-2 KO 7B Q5_1
137 / 154 πŸ€ͺ / πŸ‘Œ LLaMA-2 KO Chat 7B Q5_1
109 / 154 πŸ€” / 🧊 OpenBuddy Atom v9 13B Q5_K
70 / 154 πŸ“– / 🧊 Beluga Limarp 7B Q5_K_M
47 / 154 πŸ“– / 🌢🌢 OniiChat Hermes Limarp (L2) 13B Q5_K_M
11 / 154 🧠 / 🌢 Redmond Puffin (L2) 13B Q5_1
  • 2023-08-05 V10
Rank IQ/ERP GGML Model
12 / 146 🧠 / 🌢 Lazarus Instruct PL 30B Q4_1
1 / 146 🧠 / 🌢🌢 Chronos Beluga (L2) 13B Q5_K_M
88 / 146 πŸ€” / 🌢 MedAlpaca 13B Q5_1
42 / 146 πŸ“– / 🌢🌢 AlpacaCielo (L2) 13B Q4_K_M
43 / 146 πŸ“– / 🌢🌢 AlpacaCielo (L2) 13B Q5_K_M
85 / 146 πŸ€” / 🌢🌢 Wizard Vicuna Uncensored SuperHOT 8k 13B Q5_K_S
121 / 146 πŸ€ͺ / 🌢 Wizard Vicuna Uncensored SuperHOT 8k 13B Q2_K
101 / 146 πŸ€” / πŸ‘Œ Vicuna 1.3 13B Q5_1
119 / 146 πŸ€ͺ / 🌢 LLaMA SuperCOT 13B Q4_0
129 / 146 πŸ€ͺ / πŸ‘Œ WizardLM Uncensored 7B Q5_1
110 / 146 πŸ€ͺ / 🌢🌢 Chronos WizardLM UC SCOT ST 13B Q4_0
135 / 146 πŸ€ͺ / 🧊 Wizard Vicuna Uncensored 13B Q5_1
109 / 146 πŸ€” / 🧊 Pygmalion 13B Q4_0
127 / 146 πŸ€ͺ / 🌢 Alpacino SuperCOT 13B Q4_0
97 / 146 πŸ€” / πŸ‘Œ LLaMA 7B Q4_0
80 / 146 πŸ€” / 🌢🌢 Vicuna 1.3 7B Q8_0
125 / 146 πŸ€ͺ / 🌢 Open LLaMA Open Instruct 7B Q8_0
137 / 146 πŸ€ͺ / 🧊 LLaMA Deus v3 7B Q4_0
140 / 146 πŸ€ͺ / 🧊 PMC LLaMA 7B Q4_0
144 / 146 πŸ€ͺ / 🧊 Based 7B Q4_0
61 / 146 πŸ“– / πŸ‘Œ Vigogne 2 (L2) 7B Q5_K_M
28 / 146 🧠 / πŸ‘Œ Chronohermes Grad (L2) 13B Q5_K_M
21 / 146 🧠 / 🌢 Chronoboros Grad (L2) 13B Q5_K_M
63 / 146 πŸ“– / 🧊 Dugong (L2) 7B Q5_1
44 / 146 πŸ“– / 🌢🌢 qCammel L2 13B Q5_K_M
38 / 146 πŸ“– / 🌢🌢 Legerdemain (L2) 13B Q5_K_M
31 / 146 🧠 / πŸ‘Œ StableBeluga Instruct PL Lora 13B Q5_1
14 / 146 🧠 / 🌢 Chronolima Airo Grad (L2) 13B Q5_K_M
25 / 146 🧠 / πŸ‘Œ Airolima Chronos Grad (L2) 13B Q5_K_M
  • 2023-08-04 V9
Rank IQ/ERP GGML Model
37 / 117 πŸ“– / 🌢🌢 Gywy Chinese v1 LLaMA-2 13B Q5_1
108 / 117 πŸ€ͺ / πŸ‘Œ Baichuan 7B Q5_1
28 / 117 🧠 / πŸ‘Œ OpenOrcaxOpenChat Preview2 LLaMA-2 13B Q5_1
1 / 117 🧠 / 🌢🌢 Chronos Beluga LLaMA-2 13B Q4_1
54 / 117 πŸ“– / 🧊 Jindo Instruct Pre-Alpha LLaMA-2 7B Q5_K_M
13 / 117 🧠 / 🌢 MythoLogic LLaMA-2 13B Q4_K_M
4 / 117 🧠 / 🌢🌢 MythoLogic LLaMA-2 13B Q5_K_M
2 / 117 🧠 / 🌢🌢 Airochronos 33B Q4_K_M
33 / 117 πŸ“– / 🌢🌢 Chronos 33B Q4_K_M
24 / 117 🧠 / πŸ‘Œ Airochronos LLaMA-2 13B Q4_K_M
18 / 117 🧠 / 🌢 Airochronos LLaMA-2 13B Q5_K_M
  • 2023-08-04 V8
Rank IQ/ERP GGML Model
35 / 106 πŸ“– / 🌢 Hermes Kimiko LLaMA-2 7B Q5_K_M
8 / 106 🧠 / 🌢🌢 Chronoboros 33B Q5_K_M
3 / 106 🧠 / 🌢🌢 Chronos Hermes 2 LLaMA-2 13B Q5_K_M
  • 2023-08-03 V7
Rank IQ/ERP GGML Model
81 / 103 πŸ€ͺ / 🌢🌢 OpenBuddy OpenLLaMA v5 7B Q3_K
1 / 103 🧠 / 🌢🌢 OpenAssistant LLaMA-2 8k Orca 13B Q5_K_M
101 / 103 πŸ€ͺ / 🧊 BigTranslate 13B Q4_K_M
27 / 103 πŸ“– / 🌢🌢 Wizard Vicuna LLaMA-2 22B Q4_K_M
102 / 103 πŸ€ͺ / 🧊 LMSYS Vicuna 1.5 LLaMA-2 16k 13B Q5_1
31 / 103 πŸ“– / 🌢 Vicuna 1.5 LLaMA-2 13B Q5_0
49 / 103 πŸ“– / 🧊 CodeUp LLaMA-2 Chat 13B Q4_K_M
5 / 103 🧠 / 🌢🌢 LLaMA-2 Chat Uncensored 13B Q4_0
34 / 103 πŸ“– / πŸ‘Œ Vicuna 1.3 PL 13B Q5_1
26 / 103 🧠 / 🧊 WizardLM 1.2 PL 13B Q5_1
84 / 103 πŸ€ͺ / 🌢🌢 Hermes LLongMA 2 8K LLaMA-2 13B Q5_1
95 / 103 πŸ€ͺ / 🧊 Hermes LLongMA 2 8K LLaMA-2 7B Q5_1
96 / 103 πŸ€ͺ / 🧊 LMSYS Vicuna 1.5 LLaMA-2 7B Q5_1
103 / 103 πŸ€ͺ / 🧊 LMSYS LongChat 1.5 32k 7B Q5_1
  • 2023-08-03 V6
Rank IQ/ERP GGML Model
10 / 98 🧠 / 🌢 Chronos 2 LLaMA-2 13B Q4_K_M
2 / 98 🧠 / 🌢🌢 Chronos 2 LLaMA-2 13B Q5_K_M
19 / 98 🧠 / πŸ‘Œ LLaMA 30B Q5_K_M
23 / 98 🧠 / 🧊 LLaMA 30B Q4_K_M
71 / 98 πŸ€” / 🧊 LLaMA 13B Q5_K_M
37 / 98 πŸ“– / πŸ‘Œ LLaMA 13B Q4_K_M
79 / 98 πŸ€ͺ / 🌢🌢 Chronos 13B Q5_K_M
77 / 98 πŸ€ͺ / 🌢🌢 Chronos 13B Q4_K_M
53 / 98 πŸ€” / 🌢🌢 Chronos SuperHOT 8K 13B Q5_K_M
54 / 98 πŸ€” / 🌢🌢 Chronos SuperHOT 8K 13B Q4_K_M
51 / 98 πŸ€” / 🌢🌢 Chronos Hermes SuperHOT 8K 13B Q5_1
55 / 98 πŸ€” / 🌢🌢 Chronos Hermes SuperHOT 8K 13B Q4_1

Technical Details of the ALC-IQ and ERP Benchmark

In this section I share some of the technical details about this benchmark. I also want to document the possible flaws of the results in this ranking.

If you have better ideas on how to rate or rank models for suitability in a role play context, I urge you to:

  • Try your ideas out. Download an inference engine, e.g. llama.cpp, oobabooga's text-generation-webui or kobold.cpp.
  • Write a few scripts in your preferred scripting language.
  • Run your models through your benchmark.
  • And publish your results, even if you just dump them in some paste bin or here on http://rentry.co / http://rentry.org

I will gladly link any other benchmark!

Alternative benchmarks or rankings:

If you want to base your work on this, feel free to cite this as:

@misc{weirdconstruct2023-ayumi-llm-role-play-alc-iq-erp-ranking,
  title         = {Ayumi LLM Role Play \& ERP Ranking},
  author        = {Weird Constructor},
  year          = {2023},
  note          = {Accessed on 03.08.2023},
  howpublished  = {\url{https://rentry.co/ayumi_erp_rating}},
}

Ayumi LLM Character IQ - ALC-IQ

The benchmark I recently finished is the new ALC-IQ. With some inspiration from @gj on TheBloke's Discord, I developed a personality test framework based upon llama.cpp. In combination with the newly added BNF grammar based sampling mechanism, I built my own inference frontend around the core API of llama.cpp. The result can be found on my GitHub: a fork of llama.cpp with the prompt runner tool.

The ALC-IQ is actually a collection of personality tests for multiple character cards. It's not just Ayumi anymore, but basically "Ayumi and Friends".
The prompt for the ALC-IQ consists of a setting where a specific character has to rate how much they agree with a statement about their personality. For this, they rate the statement by writing down one of 5 number choices:

  • 1 = disagree
  • 2 = slightly disagree
  • 3 = neutral
  • 4 = slightly agree
  • 5 = agree

To limit the sampling of the next token after the prompt, a BNF grammar is specified, which selects only the tokens for the numbers 1, 2, 3, 4 or 5.
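To illustrate conceptually what this constraint does, here is a small Python sketch (not llama.cpp's actual implementation; real grammar-based sampling operates on token IDs and still applies the other samplers): all logits except those of the tokens "1" to "5" are ignored, and one of the remaining tokens is sampled.

```python
import math
import random

# Conceptual sketch of what the grammar root ::= [12345] achieves:
# only the five answer tokens can ever be picked.
def pick_answer(logits_by_token):
    allowed = {t: l for t, l in logits_by_token.items() if t in {"1", "2", "3", "4", "5"}}
    # Softmax over the allowed tokens only.
    max_logit = max(allowed.values())
    weights = {t: math.exp(l - max_logit) for t, l in allowed.items()}
    r = random.uniform(0.0, sum(weights.values()))
    for token, weight in weights.items():
        r -= weight
        if r <= 0.0:
            return token
    return token  # floating point edge case fallback

# Example: the model prefers "5" (agree); the "yes" token is filtered out.
print(pick_answer({"5": 8.0, "4": 5.0, "3": 1.0, "2": -1.0, "1": -2.0, "yes": 9.0}))
```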

Here you can find an example of the ALC-IQ prompt.

The answers are generated and processed as follows:

  • Each character is asked up to 40 questions.
  • Each question results in a new prompt, which is processed; the resulting logits vector is then evaluated like this:
    • The BNF root ::= [12345] limits the selection to only the tokens with the numbers between 1 and 5.
    • 7 seeds are used for sampling
    • The Tail Free Sampling algorithm is used, with a z=0.9 (--tfs 0.9)
    • Temperature is set to 0.2 (--temp 0.2)
    • Top-P is set to 0.95 (--top-p 0.95)
    • Repetition penalty and Top-K are deactivated (--repeat-last-n 0 --top-k 0 --repeat-penalty 1.0)
  • This yields 7 answers between 1 and 5.
  • The evaluation then calculates the differences of the answers with their respective expected answer.
  • The difference, which can be between 0.0 and 4.0, is then normalized to the range 0.0 to 1.0 (i.e. divided by 4).
  • Then all differences are summed up and the average is calculated, called diff_average.
  • The resulting average is then inverted and scaled up to 100: alc_iq = 100.0 * (1.0 - diff_average) (see the code sketch after this list).
  • The result alc_iq is then what you find here as the ALC-IQ in the ranking table.
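As a rough illustration of the scoring above, here is a minimal Python sketch. It assumes the expected answers and the 7 sampled answers are plain integers between 1 and 5; whether the real evaluation averages per question first or over all answers at once is one detail this sketch simply guesses at.

```python
# Minimal sketch of the ALC-IQ scoring described above (one plausible reading,
# not the author's exact evaluation code).
def alc_iq(questions):
    """questions: list of (expected_answer, [sampled answers]) with values 1..5."""
    diffs = []
    for expected, answers in questions:
        for answer in answers:
            # The difference lies between 0.0 and 4.0; normalize it to 0.0..1.0.
            diffs.append(abs(answer - expected) / 4.0)
    diff_average = sum(diffs) / len(diffs)
    # Invert and scale up to 100.
    return 100.0 * (1.0 - diff_average)

# Example: two statements, 7 sampled answers each, mostly matching expectations.
print(alc_iq([(5, [5, 5, 4, 5, 5, 3, 5]),
              (1, [1, 2, 1, 1, 1, 1, 2])]))  # roughly 91
```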

The ranking table is then sorted by ALC-IQ and split up into quantiles by ALC-IQ.
Each quantile is then sorted by its ERP Class, and the resulting table is numbered, which yields the actual Rank of the model. The ERP Class is derived from the quantiles of the global ERP Score. A small sketch of this ordering step follows below.
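The following is a minimal sketch of that ordering, under the assumption that the quantiles are simply equally sized buckets and that a lower ERP class number means a spicier model; the real quantile boundaries and class encoding may differ.

```python
# Minimal sketch (assumed quantile boundaries and ERP class encoding) of the
# final ordering: sort by ALC-IQ, split into quantiles, sort each quantile by
# ERP class, then number the rows to get the Rank.
def rank_models(models, n_quantiles=4):
    """models: list of dicts with "name", "alc_iq" and "erp_class" keys."""
    by_iq = sorted(models, key=lambda m: m["alc_iq"], reverse=True)
    bucket_size = max(1, len(by_iq) // n_quantiles)
    buckets = [by_iq[i:i + bucket_size] for i in range(0, len(by_iq), bucket_size)]
    ranked = []
    for bucket in buckets:
        # Assumption: a lower erp_class number means a better (spicier) ERP class.
        ranked.extend(sorted(bucket, key=lambda m: m["erp_class"]))
    return [(rank + 1, m["name"]) for rank, m in enumerate(ranked)]
```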

This processing at the end is done to determine which model can interpret the character cards well while still being able to produce lewd output.

Known Flaws of the ALC-IQ

The ALC-IQ is still prone to problems:

  • The results still have some degree of randomness in them; weaker models can sometimes pick the right answer by accident. I try to counteract this by adding more questions in the future.
  • Bad questions in the benchmark can lead to a model not knowing which answer to pick, introducing even more randomness in the results.
  • The ALC-IQ does not reflect how well the LLM can stay in character in a longer conversation.
  • The ALC-IQ does not determine any creative writing abilities of the LLM.
  • The ALC-IQ covers intelligence only in one specific and narrow scenario, and not across a range of possible role play chat situations.
  • The ALC-IQ is usually tested only with a rather short prompt, rarely exceeding 1024 tokens; it does not cover the whole 2048-token context of LLaMA 1 or the 4096 of LLaMA 2, let alone the extended contexts of 8k, 16k, ...

Despite all that, I think the ALC-IQ is a big improvement over the old ranking which purely relied on the ERP score. The runtime of the benchmark is within reason for the hardware that is available to me, which is also an important factor for running and providing these benchmark results.

ERP Score and ERP Variety Score

The most important part of the ERP Score is the prompt. The prompt contains the description of Ayumi (see below), from which I removed some of the example messages. The setting described in the prompt basically says that you and Ayumi are in a relationship and are going to have some quality time together. The LLM's task is then to describe Ayumi's next move.

Ayumi's response is then split up into words, which are compared against a list of lewd/naughty words.

  • For inference llama.cpp is used, for which I built an extra tool to generate responses for multiple prompts and seeds without having to reload the model: https://github.com/WeirdConstructor/llama.cpp/tree/prompt_runner/examples/prompt_runner
  • The following sampler settings are used:
    • The max length of the response is limited to 100 tokens. (-n 100)
    • Context size 2048
    • Repeat penalty is set to 1.1 and the last 64 tokens are penalized. (--repeat-last-n 64 --repeat-penalty 1.1)
    • Top-K and Top-P are disabled (--top-k 0 --top-p 1.0)
    • Tail Free Sampling is used with z=0.95: (--tfs 0.95)
    • The temperature is set to 0.9 (--temp 0.9)
    • Some layers are offloaded to the GPU, which sometimes changes the results slightly because of floating point rounding differences
  • 3 prompt formats are tested (vanilla/raw, alpaca and vicuna 1.1 - see also https://rentry.co/llm_rp_prompts)
  • 22 pre-picked seeds are tested for each prompt format.
  • The resulting 66 responses are then analyzed for the number of lewd words and also checked with a very basic regex-based algorithm for non-consent.
  • The individual ERP score of a response is the number of lewd words in relation to the word count of the response. Responses shorter than 10 words are assigned a score of 0. The ERP score is then: erp_score := 100 * (lewd_word_count / word_count) - the word count includes the lewd words.
  • For each prompt format the average of the 22 ERP scores is calculated. This results in 3 ERP scores, one for each prompt format.
  • Then the average of the 3 prompt format scores is calculated, which results in the ERP Score (a code sketch of this follows the list).
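The following Python sketch shows the gist of that calculation. The lewd word list and the non-consent pattern are placeholders (the real word list and regexes are not reproduced here), and the splitting into words is simplified.

```python
import re

# Placeholder word list and non-consent pattern; the real lists are much larger
# and are not reproduced here.
LEWD_WORDS = {"example_lewd_word"}
NON_CONSENT_RE = re.compile(r"\b(stop it|let go)\b", re.IGNORECASE)  # illustrative only

def response_erp_score(response):
    words = re.findall(r"[a-z']+", response.lower())
    # Too-short responses and detected non-consent disqualify the response.
    if len(words) < 10 or NON_CONSENT_RE.search(response):
        return 0.0
    lewd_count = sum(1 for word in words if word in LEWD_WORDS)
    return 100.0 * lewd_count / len(words)

def erp_score(responses_by_format):
    """responses_by_format: dict mapping prompt format name -> list of 22 responses."""
    format_scores = []
    for responses in responses_by_format.values():
        scores = [response_erp_score(r) for r in responses]
        format_scores.append(sum(scores) / len(scores))
    # The final ERP Score is the average over the prompt formats.
    return sum(format_scores) / len(format_scores)
```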

This means the ERP Score is the average ratio of lewd words to total words in the responses (which are limited to 100 tokens). An ERP Score of 20.0 means that 20% of the words in a response were lewd. An ERP Score of 0.0 means that there were either no lewd words, the response was too short, or non-consent was detected (which immediately disqualifies the response and sets it to 0.0).

The ERP Variety Score is computed by further analyzing the 66 responses generated for the ERP Score and recording how many different lewd words appear across all of them. It tries to capture the variety of lewd words the model is capable of generating, and with it the creativity of the model in erotic scenarios - how many different lewd words it knows of and knows how to use. This is an important part of the ERP Rank now.
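A matching sketch for the ERP Variety Score, again with a placeholder word list: it simply counts how many distinct lewd words show up anywhere across the 66 responses.

```python
import re

LEWD_WORDS = {"example_lewd_word"}  # placeholder, same idea as in the sketch above

def erp_variety_score(all_responses):
    """all_responses: the 66 responses collected for the ERP Score."""
    seen = set()
    for response in all_responses:
        for word in re.findall(r"[a-z']+", response.lower()):
            if word in LEWD_WORDS:
                seen.add(word)
    # The score is the number of *different* lewd words the model produced.
    return len(seen)
```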

Known Flaws of the ERP Score and ERP Variety Score

The ERP Score and ERP Variety Score analysis is very rudimentary and of course biased by the selection of which words are considered "lewd".
The following things are not reflected by the ERP score:

  • The ERP score does not reflect if the text response was coherent in context with the conversation/situation.
  • The ERP score does not reflect if the response was in character.
  • The ERP score does not reflect how nicely written the response is.
  • The ERP score does not reflect how creative the response is.
  • The ERP score does not reflect how well the LLM might go from a normal conversation into a more erotic context.
  • The ERP score does not detect how erotic the response is if lewd words are not used.
  • The ERP score is limited to the 3 prompt formats described above.

Further about the ERP Variety Score:

  • All above mentioned flaws from the ERP score still apply.
  • As already stated, the ERP Variety Score is obviously biased by the known lewd words from my list, which might be incomplete.
  • The ERP Variety Score is still just a rather blunt number applied to a textual response.
  • The ERP Variety Score number can only be evaluated in comparison with the other models. There is no known best number for this, but still, the higher the better.

I (weicon) accept these flaws because:

  • The ERP score can still detect if a model is censored (aka aligned).
  • My private hardware is limited, which means there is only a limited number of responses I can reasonably generate.
  • I want to test as many GGUF/GGML models as possible.

Motivation - Pygmalion 13B / Metharme 13B

Since Pygmalion 13B and Metharme 13B were released, people noticed that these models were noticeably harder to use for ERP. Pygmalion 13B at the time (May 2023) could not be convinced to return any lewd text. So my idea was to have some quantifiable results regarding how well a model may or may not be usable for ERP.

Pub: 04 Nov 2023 14:29 UTC
Edit: 04 Nov 2023 14:44 UTC