Ayumi's LLM Role Play & ERP Ranking Archive 2 (Results from 2023-10-04)
This ranking currently employs three important metrics to rank the models: the ALC-IQ (Ayumi LLM Character IQ), the ERP Score and the ERP Variety Score. Keep in mind though: this is just an automated benchmark employing rather primitive metrics, and it cannot rate the quality of the generated output. It can only cover how well a Large Language Model (LLM) seems to understand character cards (see the ALC-IQ) and, secondly, whether it can be used to generate lewd responses (see the ERP Score and ERP Variety Score). The ERP benchmark is currently based only on a single character ('Ayumi') and a single fixed erotic setting, which may eventually change. A few details about the testing procedure can be found further down.
The ALC-IQ benchmark works by letting the character state how much they agree with a statement about their personality in a role playing chat log prompt. The character has to answer the presented statement with a number between 1 and 5 (1 - disagree, 2 - slightly disagree, 3 - neutral, 4 - slightly agree, 5 - agree). The answer is then compared with the expected answer, and the deviation from it is recorded. For more details refer to the section Ayumi LLM Character IQ - ALC-IQ.
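As a rough illustration, here is a minimal sketch of how such a deviation-based score could be computed, assuming a simple linear penalty and a 0-100 scale; the example answers, expected answers and normalization are illustrative assumptions, not the exact procedure used for this ranking.

```python
# Hypothetical sketch of an ALC-IQ style score: the model answers each
# personality statement with 1..5, and deviation from the expected answer
# is penalized. The 0..100 normalization is an assumption for illustration.
def alc_iq(model_answers, expected_answers):
    """model_answers / expected_answers: lists of ints in the range 1..5."""
    max_dev = 4  # largest possible deviation on a 1..5 scale
    total = 0.0
    for got, want in zip(model_answers, expected_answers):
        deviation = abs(got - want)
        total += 1.0 - deviation / max_dev  # 1.0 = perfect, 0.0 = worst
    return 100.0 * total / len(expected_answers)

# Example: three statements, the model misses the last one slightly.
print(alc_iq([5, 1, 3], [5, 1, 4]))  # -> 91.66...
```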
The ERP Score is the average ratio of lewd words the model generates in a response, which is limited to 100 tokens. For more details refer to the section ERP Score.
The third and rather new metric is the ERP Variety Score. It measures the range of lewd words the model generated in the responses collected for the ERP Score. This means a model not only needs to generate responses with many lewd words, but also with many different lewd words.
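As a rough sketch of how the ERP Score and the ERP Variety Score relate to each other, the following assumes a placeholder lewd word list, whitespace tokenization and a simple percentage scaling; the real benchmark uses its own word list and rules.

```python
# Hypothetical sketch of ERP Score and ERP Variety Score.
# LEWD_WORDS is a placeholder; the real benchmark uses its own word list.
LEWD_WORDS = {"example_lewd_word_a", "example_lewd_word_b"}

def erp_scores(responses):
    """responses: list of generated responses (each capped at ~100 tokens)."""
    ratios = []
    distinct = set()
    for text in responses:
        words = text.lower().split()
        hits = [w for w in words if w in LEWD_WORDS]
        ratios.append(100.0 * len(hits) / max(len(words), 1))
        distinct.update(hits)
    erp_score = sum(ratios) / max(len(ratios), 1)  # average lewd-word ratio
    erp_variety = len(distinct)                    # number of different lewd words seen
    return erp_score, erp_variety
```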
Emoji Key
ALC-IQ Emoji | Meaning | ERP Emoji | Meaning |
---|---|---|---|
βπ§ | Best of High ALC-IQ Class, shows excellent understanding of the character cards in a role play chat. | πΆπΆ | Very spicy model, capable of generating lots of lewd words |
π§ | High ALC-IQ Class, shows excellent understanding of character cards in a role play chat. | πΆ | Spicy model, capable of generating many lewd words |
βπ | Best of Good ALC-IQ Class, shows good understanding of character cards in a role play chat. | π | Likely an uncensored model, but it probably generates short answers or fewer lewd words |
π | Good ALC-IQ Class, still gets details of the character cards in a role play chat. | π§ | Very possibly a censored/SFW-aligned model |
βπ€ | Best of Lower ALC-IQ Class, has its challenges with details of the character card in a role play chat. | ||
π€ | Lower ALC-IQ Class, certainly challenged with the character card in a role play chat. | β | The ERP word variety of this model is great; it shows creative variety in lewd word usage. |
βπ€ͺ | Best of Dumb ALC-IQ Class, very very challenged to get the character card in a role play chat. | β³ | This model still shows knowledge of various lewd words, but there are better ones. |
π€ͺ | Dumb ALC-IQ Class, seems to be completely confused or has other issues getting the character card in a role play chat. | β» | This model has limitations in knowledge and usage of lewd words. It likely repeatedly uses the same words across regenerations. |
Rank Symbol | Meaning |
---|---|
π₯ π₯ π₯ | These medals are assigned broadly to the top ranked models. This is partially to give an impression of how well these might work for you and partially to signal that there is no single definitive best model. |
π | Top ALC-IQ ranks get this one. |
π | Top ERP ranks get this one. |
3B-7B Models
2023-11-01 Benchmark Re-Run V3: I am currently running a completely new benchmark. Until I get around to updating this page, you can find the most recent results here: http://ayumi.m8geil.de/ayumi_bench_v3_results.html
- See Ranking Changelog to see which GGUF/GGML Models were added
- Benchmark Results as CSV - Timestamp 20231004_193917
Rank | ALC-IQ Rank | ERP Rank | ALC-IQ | ERP Score | ERP Var Score | Model |
---|---|---|---|---|---|---|
π₯ 1 | π 1 | π 3 | βπ§ 95.33 | πΆπΆ 30.25 | β 131 | π₯ππ Mistral Claude Chat 7B Q5_K_M |
π₯ 2 | π 4 | π 13 | βπ§ 91.88 | πΆ 21.54 | β 152 | π₯ππ Mistral ClaudeLimaRP v3 7B Q5_K_M |
π₯ 3 | π 5 | π 17 | βπ§ 90.73 | πΆπΆ 24.09 | β 130 | π₯ππ Mistral RP 0.1 7B Q5_K_M |
π₯ 4 | π 6 | π 16 | βπ§ 87.04 | πΆπΆ 25.56 | β 128 | π₯ππ Synthia v1.3 7B Q5_K_M |
π₯ 5 | π 11 | π 12 | βπ§ 84.33 | πΆπΆ 24.13 | β 134 | π₯ππ Samantha Mistral 7B Q5_K_M |
π₯ 6 | π 2 | π 27 | βπ§ 92.68 | πΆ 22.40 | β 130 | π₯ππ Mistral v0.1 7B Q5_K_M |
π₯ 7 | π 9 | π 21 | βπ§ 84.79 | πΆπΆ 27.76 | β³ 120 | π₯ππ Kuchiki 7B Q5_K_M |
π₯ 8 | π 7 | π 26 | βπ§ 86.75 | πΆπΆ 23.93 | β 128 | π₯ππ PetrolLM 7B Q5_K_M |
π₯ 9 | 29 | π 11 | βπ§ 81.51 | πΆπΆ 25.50 | β 132 | π₯π Zaraxls 7B Q5_K_M |
π₯ 10 | π 14 | π 31 | βπ§ 83.47 | πΆ 19.64 | β 134 | π₯ππ Zarafusionex 1.2 7B Q5_K_M |
π₯ 11 | π 13 | 36 | βπ§ 83.53 | πΆ 22.97 | β 122 | π₯π Hermes Limarp 7B Q5_K_M |
π₯ 12 | π 17 | 35 | βπ§ 82.95 | πΆπΆ 27.63 | β³ 114 | π₯π Zarablend 7B Q5_K_M |
π₯ 13 | 46 | π 5 | π 79.55 | πΆπΆ 24.99 | β 149 | π₯π MistRP v1.1 7B Q8_0 |
π₯ 14 | 33 | π 22 | π§ 81.22 | πΆ 21.21 | β 141 | π₯π Zarafusionex 7B Q5_K_M |
π₯ 15 | π 10 | 50 | βπ§ 84.39 | πΆπΆ 30.30 | 102 | π₯π Zarablend 1.1 7B Q5_K_M |
π₯ 16 | 32 | π 24 | π§ 81.28 | πΆ 22.55 | β 134 | π₯π Zarablendex VQ 7B (link broken) Q5_K_M |
π₯ 17 | π 3 | 62 | βπ§ 92.45 | πΆ 19.75 | β³ 117 | π₯π Kimiko Mistral 7B Q5_K_M |
π₯ 18 | 52 | π 4 | π 78.80 | πΆπΆ 26.43 | β 140 | π₯π Mistral Instruct v0.1 7B Q5_K_M |
π₯ 19 | 19 | 46 | βπ§ 82.43 | πΆ 19.14 | β 130 | π₯ Zarafusionex 1.1 7B Q5_K_M |
π₯ 20 | π 8 | 67 | βπ§ 84.91 | πΆ 21.01 | β³ 113 | π₯π Hermes LimaRP 7B Q5_K_M |
π₯ 21 | 20 | 54 | βπ§ 82.37 | πΆ 20.15 | β³ 120 | π₯ Zarafusionix 7B Q5_K_M |
π₯ 22 | 24 | 56 | βπ§ 81.91 | πΆ 19.25 | β³ 120 | π₯ Krakowiak 7B Q4_K_M |
π₯ 23 | 28 | 55 | βπ§ 81.74 | πΆπΆ 26.60 | 104 | π₯ Zarablend M 7B Q5_K_M |
π₯ 24 | 34 | 48 | π§ 80.82 | πΆ 20.38 | β 122 | π₯ Vigogne 2 7B Q5_K_M |
π₯ 25 | 22 | 65 | βπ§ 82.20 | πΆπΆ 26.39 | 101 | π₯ Kuchiki 1.1 7B Q5_K_M |
π₯ 26 | 44 | 41 | π§ 79.72 | πΆπΆ 27.21 | β³ 111 | π₯ Zarablend MX 7B Q5_K_M |
π₯ 27 | 59 | π 29 | π 77.36 | πΆ 20.83 | β 136 | π₯π Zaramix 7B Q5_K_M |
π₯ 28 | 63 | π 25 | π 77.07 | πΆ 22.57 | β 133 | π₯π LLaMA-2 Guanaco 7B Q5_1 |
π₯ 29 | 55 | 37 | π 78.23 | πΆ 21.98 | β 123 | π₯ AstraMix 7B Q5_K_M |
π₯ 30 | π 12 | 93 | βπ§ 83.64 | π§ 13.36 | β³ 120 | π₯π LLaMA 2 Monika V0.3B 7B Q5_1 |
π₯ 31 | 41 | 60 | π§ 79.84 | πΆ 22.22 | β³ 112 | π₯ Medusa 1.1 7B Q5_K_M |
π₯ 32 | 40 | 63 | π§ 80.13 | πΆ 22.31 | β³ 112 | π₯ Hermes Kimiko 7B Q5_K_M |
π₯ 33 | 45 | 66 | π 79.67 | πΆ 19.01 | β³ 119 | π₯ Typly Pigeon 7B Q4_K_M |
π₯ 34 | π 15 | 107 | βπ§ 83.12 | π 15.63 | β³ 109 | π₯π LLaMA-2 7B Q8_0 |
π₯ 35 | 37 | 83 | π§ 80.24 | πΆπΆ 23.30 | 100 | π₯ Zaraxe 7B Q5_K_M |
π₯ 36 | 48 | 70 | π 79.32 | πΆ 19.22 | β³ 118 | π₯ Nous Hermes 7B Q5_K_M |
π₯ 37 | 25 | 98 | βπ§ 81.80 | π 16.05 | β³ 113 | π₯ Dugong 7B Q5_1 |
π₯ 38 | 68 | 49 | π 75.63 | πΆ 19.27 | β 123 | π₯ LLaMA-2 Coder 7B Q5_K_M |
π₯ 39 | 94 | π 18 | π€ 68.78 | πΆπΆ 25.34 | β 128 | π₯π Hermesboros Limarp 7B Q5_K_M |
π₯ 40 | 99 | π 14 | π€ 67.40 | πΆ 22.80 | β 142 | π₯π Vicuna 1.3 7B Q8_0 |
π₯ 41 | 102 | π 15 | π€ 66.30 | πΆπΆ 26.00 | β 128 | π₯π Airoboros GPT4 1.4.1 7B Q5_K_M |
π₯ 42 | 109 | π 7 | π€ 63.59 | πΆπΆ 25.59 | β 137 | π₯π Samantha Mistral Instruct 7B Q5_K_M |
π₯ 43 | 54 | 73 | π 78.63 | πΆ 19.61 | β³ 114 | π₯ LosslessMegaCoder Mini 7B Q5_K_M |
π₯ 44 | 110 | π 6 | π€ 63.48 | πΆπΆ 28.11 | β 131 | π₯π Airoboros GPT4 1.2 7B Q4_K_M |
π₯ 45 | 67 | 58 | π 75.69 | πΆ 21.05 | β³ 116 | π₯ Airoboros 2.1 7B Q5_K_M |
π₯ 46 | 30 | 104 | βπ§ 81.51 | π 16.64 | β³ 109 | π₯ LLaMA 2 7B Q5_1 |
π₯ 47 | 31 | 103 | π§ 81.34 | π§ 11.74 | β³ 118 | π₯ Tsukasa Limarp 7B Q5_K_M |
π₯ 48 | 82 | 42 | π 72.24 | πΆ 20.90 | β 126 | π₯ Orca Mini v3 7B Q5_K_M |
π₯ 49 | 91 | π 32 | π€ 69.30 | πΆπΆ 23.61 | β 122 | π₯π Wizard Vicuna Uncensored 7B Q5_K_M |
π₯ 50 | 80 | 53 | π 72.58 | πΆ 21.15 | β³ 118 | π₯ Marcoroni 7B Q5_K_M |
π₯ 51 | 18 | 129 | βπ§ 82.78 | π 13.67 | 99 | π₯ LLaMA-2 PeanutButter v19 R8 7B Q5_K_M |
π₯ 52 | 103 | π 28 | π€ 65.73 | πΆπΆ 28.20 | β³ 117 | π₯π Frank Uncensored 7B Q5_K_M |
π₯ 53 | 111 | π 20 | π€ 63.31 | πΆ 20.40 | β 152 | π₯π OpenBuddy OpenLLaMA v5 7B Q3_K |
π₯ 54 | 60 | 82 | π 77.25 | πΆ 21.40 | 106 | π₯ Airoboros 2.2 7B Q5_K_M |
π₯ 55 | 49 | 96 | π 79.15 | π 17.32 | β³ 109 | π₯ LlongOrca 16K 7B Q5_K_M (ext. context maybe broken) |
π₯ 56 | 121 | π 10 | π€ 61.52 | πΆπΆ 30.29 | β 127 | π₯π Airoboros GPT4 7B Q4_K_M |
π₯ 57 | 23 | 128 | βπ§ 82.14 | π 15.95 | 93 | π₯ Befenghuang Vigogne 2 Chat 7B Q5_K_S |
π₯ 58 | 105 | π 30 | π€ 65.26 | πΆπΆ 25.24 | β³ 120 | π₯π Airoboros GPT4 1.4.1 Limarp 7B Q5_K_M |
π₯ 59 | 81 | 59 | π 72.52 | πΆπΆ 23.94 | β³ 109 | π₯ Spicyboros 2.2 7B Q5_K_M |
60 | 88 | 51 | π€ 71.77 | πΆ 19.62 | β³ 121 | Ganchengguang Yoko Japanse v0 7B Q5_K_S |
61 | 42 | 109 | π§ 79.72 | πΆ 18.09 | 96 | Airoboros L2 2.2.1 7B Q5_K_M |
62 | 118 | π 19 | π€ 62.21 | πΆ 23.23 | β 132 | π Guanaco 7B Q5_K_M |
63 | 101 | 40 | π€ 66.71 | πΆπΆ 23.73 | β³ 118 | WizardLM V1.0 Uncensored 7B Q5_K_M |
64 | 26 | 130 | βπ§ 81.80 | π 14.08 | 96 | Jindo Instruct Pre-Alpha 7B Q5_K_M |
65 | 87 | 57 | π 71.83 | π 16.87 | β 130 | LLongMA-2 Storysummarizer 7B Q5_K_M (ext. context maybe broken) |
66 | 43 | 117 | π§ 79.72 | π 17.65 | 95 | Saiga 2 7B Q5_K |
67 | 93 | 61 | π€ 68.95 | πΆπΆ 28.27 | 95 | Xwin LM V0.1 7B Q5_K_M |
68 | 69 | 90 | π 75.58 | πΆ 18.86 | β³ 109 | MythoChizuru Mini 7B Q4_K_M |
69 | 117 | π 33 | π€ 62.38 | πΆπΆ 28.04 | β³ 115 | π Airoboros GPT4 1.3 7B Q4_K_M |
70 | 83 | 74 | π 72.06 | πΆ 20.98 | β³ 111 | Saiga 7B Q5_1 |
71 | π 16 | 155 | βπ§ 83.06 | π§ 5.05 | β» 73 | π MedLLama 7B Q5_K_M |
72 | 89 | 68 | π€ 71.66 | πΆπΆ 24.71 | 104 | Luna AI LLaMA-2 Uncensored 7B Q5_K_M |
73 | 145 | π 1 | π€ͺ 53.80 | πΆπΆ 28.09 | β 146 | π Marx 3B Q5_1 |
74 | 35 | 134 | π§ 80.47 | π 15.28 | 89 | LLaMA-2 LoRA Assemble 7B Q5_K_M |
75 | 21 | 151 | βπ§ 82.26 | π§ 5.96 | β» 76 | LLaMA 2 Delphi v0.2e 7B (link broken) Q5_1 |
76 | 146 | π 2 | π€ͺ 53.80 | πΆπΆ 28.09 | β 146 | π EverythingLM 3B Q5_1 |
77 | 57 | 110 | π 77.94 | π 13.90 | β³ 110 | Beluga Limarp 7B Q5_K_M |
78 | 47 | 123 | π 79.38 | π 15.13 | 100 | Kimiko 7B Q5_K_M |
79 | 84 | 79 | π 72.00 | π 16.93 | β³ 120 | Pygmalion 7B Q5_1 |
80 | 66 | 102 | π 76.15 | π 14.98 | β³ 112 | LLaMA-2 Instruct 32K 7B Q5_K_M (ext. context maybe broken) |
81 | 38 | 136 | π§ 80.13 | π 13.84 | 93 | LLaMA-2 Mistral 7B Q5_K_M |
82 | 79 | 88 | π 72.81 | πΆπΆ 25.14 | 93 | WizardMath V1.0 7B Q5_K_M |
83 | 86 | 80 | π 71.83 | πΆπΆ 29.93 | β» 81 | Airoboros GPT4 2.0 LLaMA-2 7B Q5_K_M |
84 | 148 | π 9 | π€ͺ 53.63 | πΆπΆ 25.02 | β 139 | π Open LLaMA Open Instruct 7B Q8_0 |
85 | 77 | 95 | π 73.56 | πΆ 18.64 | 107 | MythoLogic Mini 7B Q5_K_M |
86 | 73 | 100 | π 74.65 | π 17.27 | 107 | Pygmalion 2 7B Q5_K_M |
87 | 27 | 156 | βπ§ 81.80 | π§ 6.10 | β» 68 | LLaMA-2 Chat 7B Q5_1 |
88 | 122 | 44 | π€ 61.46 | π 17.13 | β 139 | Nous Yarn 128K 7B Q5_K_M (ext. context maybe broken) |
89 | 78 | 97 | π 72.87 | πΆ 23.16 | 89 | Luna AI 7B Q8_0 |
90 | 50 | 131 | π 79.03 | π 16.06 | 89 | ELYZA Jp LLaMA-2 7B Q5_K_M |
91 | 64 | 115 | π 76.90 | π 17.85 | 93 | Medusa 1.3 7B Q5_K_M |
92 | 96 | 78 | π€ 68.32 | π 17.45 | β³ 118 | LLaMA 7B Q8_0 |
93 | 56 | 126 | π 78.11 | π§ 11.82 | 105 | Tulpar Limarp 7B Q5_K_M |
94 | 97 | 77 | π€ 68.03 | πΆ 19.27 | β³ 114 | Pygmalion Vicuna 7B Q5_K_M |
95 | 130 | 39 | π€ͺ 60.08 | π 17.07 | β 142 | OpenLLaMA v2 7B Q5_K_M |
96 | 61 | 125 | π 77.19 | π 17.68 | β» 82 | ELYZA Jp LLaMA-2 Instruct 7B Q5_K_M |
97 | 72 | 112 | π 74.83 | π 15.80 | 105 | StableBeluga 7B Q5_K_M |
98 | 39 | 154 | π§ 80.13 | π§ 6.42 | β» 70 | Photolens LLaMA 2 Langchain Chat 7B Q5_1 |
99 | 36 | 159 | π§ 80.36 | π§ 5.14 | β» 65 | LLaMA-2 Chat Code Cherry Pop 7B Q5_K_M |
100 | 162 | π 8 | π€ͺ 52.07 | πΆπΆ 23.99 | β 148 | π OpenLLaMA Open Instruct v2 7B Q8_0 |
101 | 126 | 52 | π€ 60.66 | πΆπΆ 25.23 | β³ 110 | Airoboros GPT4 1.4 7B Q5_K_M |
102 | 132 | 45 | π€ͺ 59.91 | πΆ 20.93 | β 123 | CodeLLaMA 7B Q5_K_M |
103 | 151 | π 23 | π€ͺ 53.40 | πΆπΆ 23.31 | β 131 | π Puma 3B Q5_1 |
104 | 139 | 38 | π€ͺ 57.32 | π 17.20 | β 146 | AlpacaCielo 2 8K 7B Q5_K_M (ext. context maybe broken) |
105 | 100 | 85 | π€ 67.22 | π 15.73 | β³ 120 | Nous Yarn 64K 7B Q5_K_M |
106 | 144 | π 34 | π€ͺ 54.26 | πΆ 19.70 | β 132 | π Deacon 3B Q5_0 |
107 | 75 | 122 | π 74.31 | π 16.95 | 96 | GOAT Community 7B Q5_1 |
108 | 107 | 84 | π€ 65.21 | πΆπΆ 27.30 | β» 86 | Lunaboros 7B Q4_K_M |
109 | 58 | 144 | π 77.48 | π§ 11.91 | β» 80 | LLaMA-2 32K 7B Q5_K_M (ext. context maybe broken) |
110 | 106 | 87 | π€ 65.21 | πΆπΆ 26.56 | β» 88 | Lunaboros LimaRP 7B Q4_K_M |
111 | 143 | 43 | π€ͺ 54.78 | πΆ 21.74 | β³ 121 | OpenLLaMA 7B Q5_K_M |
112 | 98 | 99 | π€ 67.80 | πΆπΆ 27.62 | β» 66 | Airoboros GPT4 2.0 7B Q5_K_M |
113 | 71 | 133 | π 75.52 | π 15.09 | 93 | Tulpar v0 7B Q4_0 |
114 | 62 | 145 | π 77.13 | π§ 10.80 | β» 84 | Tsukasa 7B Q5_K_M |
115 | 108 | 91 | π€ 64.86 | πΆπΆ 24.63 | β» 88 | Chinese Alpaca 2 7B Q5_K_S |
116 | 51 | 160 | π 79.03 | π§ 4.15 | β» 60 | MedLLaMA-2 Chat 7B Q5_K_S |
117 | 76 | 132 | π 73.56 | π 15.32 | 92 | Guanaco Uncensored 7B Q5_K_M |
118 | 92 | 113 | π€ 69.12 | π 16.32 | 104 | Metharme 7B Q5_1 |
119 | 53 | 161 | π 78.74 | π§ 5.81 | β» 46 | Trurl 2 Polish 7B Q5_1 |
120 | 65 | 152 | π 76.56 | π§ 6.02 | β» 76 | Merak v2 7B Q5_K_M |
121 | 153 | 47 | π€ͺ 53.17 | πΆ 18.76 | β 129 | Mamba GPT v4 3B Q5_1 |
122 | 133 | 72 | π€ͺ 59.10 | π 16.39 | β 123 | Hermes LLongMA 2 8K 7B Q5_1 (ext. context maybe broken) |
123 | 136 | 69 | π€ͺ 58.41 | πΆ 18.47 | β³ 119 | Leo Hessianai Chat 7B Q5_K_M |
124 | 70 | 150 | π 75.52 | π§ 9.05 | β» 72 | Vicuna v1.5 16K 7B Q5_K_M (ext. context maybe broken) |
125 | 112 | 101 | π€ 63.19 | πΆπΆ 25.41 | β» 69 | Airoboros GPT4 m2.0 7B Q5_K_M |
126 | 119 | 94 | π€ 62.21 | πΆπΆ 24.39 | β» 85 | Airoboros GPT4 m2.0 LLaMA-2 7B Q5_K_M |
127 | 135 | 76 | π€ͺ 58.76 | πΆπΆ 26.48 | 93 | WizardLM Uncensored 7B Q5_K_M |
128 | 104 | 114 | π€ 65.44 | π 15.84 | 104 | ALMA Pretrain 7B Q5_K_M |
129 | 85 | 141 | π 71.95 | π 14.51 | β» 75 | Chinese LLaMA-2 7B Q5_K |
130 | 131 | 92 | π€ͺ 60.02 | π 17.83 | β³ 109 | Vicuna CoT 7B Q5_K_M |
131 | 74 | 162 | π 74.31 | π§ 4.48 | β» 49 | LLaMA-2 Silverlin. Verilog 7B Q4_K_M |
132 | 95 | 137 | π€ 68.49 | π§ 13.02 | 94 | LLaMA-2 Galleon 7B Q5_K_M |
133 | 147 | 75 | π€ͺ 53.69 | π 14.68 | β 128 | Marx V2 3B Q4_1 |
134 | 90 | 147 | π€ 71.54 | π§ 8.45 | β» 78 | StableBeluga Samantha V3 7B Q4_0 |
135 | 155 | 71 | π€ͺ 53.00 | πΆ 19.39 | β³ 116 | OpenLLaMA 3B Q5_1 |
136 | 149 | 81 | π€ͺ 53.57 | π§ 11.93 | β 130 | OpenLLaMA v2 3B Q5_0 |
137 | 125 | 111 | π€ 60.89 | πΆ 22.61 | β» 72 | MAmmoTH 7B Q5_K_M |
138 | 165 | 64 | π€ͺ 51.15 | π 17.84 | β³ 121 | OpenBuddy OpenLLaMA v10 3B Q5_0 |
139 | 120 | 118 | π€ 61.69 | π 17.11 | 97 | Tulu 7B Q5_K_M |
140 | 113 | 127 | π€ 62.96 | π 13.98 | 98 | WizardCoder Python V1.0 7B Q5_K_M |
141 | 150 | 89 | π€ͺ 53.46 | π 17.04 | β³ 117 | Griffin 3B (link broken) Q4_1 |
142 | 124 | 124 | π€ 61.12 | π§ 11.78 | 107 | CodeLLaMA Instruct 7B Q5_K_M |
143 | 140 | 105 | π€ͺ 57.09 | π 17.44 | 103 | CodeLLaMA Python 7B Q5_K_M |
144 | 129 | 120 | π€ͺ 60.25 | π§ 13.56 | 108 | Gorilla 7B Q5_K_M |
145 | 160 | 86 | π€ͺ 52.30 | πΆ 21.09 | 106 | WizardVicuna Uncens Instr PL 3B Q5_1 |
146 | 134 | 121 | π€ͺ 58.99 | π§ 11.40 | β³ 110 | LLaMA-2 KO Chat 7B Q5_1 |
147 | 114 | 149 | π€ 62.79 | π§ 6.70 | β» 74 | Pandalyst V1.1 7B Q5_K_M |
148 | 142 | 119 | π€ͺ 56.22 | π§ 12.27 | β³ 110 | Mamba GPT v2 3B Q5_1 |
149 | 116 | 153 | π€ 62.44 | π§ 6.43 | β» 72 | LLaMA-2 KO 7B Q5_1 |
150 | 157 | 106 | π€ͺ 52.94 | π 18.05 | 98 | Open LLaMA 7B Q5_1 |
151 | 115 | 158 | π€ 62.67 | π§ 5.93 | β» 65 | Based 7B Q5_K_M |
152 | 137 | 135 | π€ͺ 58.12 | π§ 10.29 | 103 | PMC LLaMA 7B Q4_0 |
153 | 161 | 108 | π€ͺ 52.19 | π§ 11.77 | β³ 115 | Alpachino Baichuan Instruction 7B Q5_0 |
154 | 138 | 140 | π€ͺ 57.66 | π§ 13.01 | β» 85 | LMSYS Vicuna 1.5 7B Q5_1 |
155 | 141 | 138 | π€ͺ 56.85 | π 15.00 | β» 88 | Vicuna v1.5 7B Q5_K_M |
156 | 128 | 157 | π€ 60.43 | π§ 6.49 | β» 58 | Dolphin LLaMA-2 7B Q5_K_M |
157 | 123 | 163 | π€ 61.18 | π§ 2.81 | β» 52 | Scarlett 7B Q5_K_M |
158 | 166 | 116 | π€ͺ 50.81 | π§ 10.78 | β³ 114 | Baichuan 7B Q5_1 |
159 | 127 | 165 | π€ 60.60 | π§ 3.90 | β» 45 | Tulu Uncensored TV Alpaca 7B (link broken) Q5_1 |
160 | 152 | 143 | π€ͺ 53.34 | π 13.97 | β» 75 | Orca Mini 3B Q5_1 |
161 | 154 | 142 | π€ͺ 53.11 | π§ 13.15 | β» 78 | Komt LLaMA-2 Chat 7B Q5_K_M |
162 | 156 | 146 | π€ͺ 52.94 | π§ 8.64 | β» 80 | OpenLLaMA Odia 3B Q5_1 |
163 | 164 | 139 | π€ͺ 51.50 | π§ 12.99 | 89 | LLaMA Deus v3 7B Q4_0 |
164 | 158 | 148 | π€ͺ 52.88 | π§ 8.69 | β» 73 | Open Cabrita 3B Q5_1 |
165 | 163 | 164 | π€ͺ 51.50 | π§ 1.97 | β» 47 | WizardLM 7B Q5_K_M |
166 | 159 | 170 | π€ͺ 52.42 | π§ 0.00 | β» 0 | LLongMA 2 7B Q5_1 (ext. context maybe broken) |
167 | 167 | 167 | π€ͺ 47.58 | π§ 1.14 | β» 10 | TinyLLaMA Chat v0.2 1B Q5_K_M |
168 | 168 | 168 | π€ͺ 47.58 | π§ 0.00 | β» 0 | PY007 TinyLLaMA Chat v0.2 1B Q8_0 |
169 | 171 | 166 | π€ͺ 42.28 | π§ 1.64 | β» 16 | ToolLLaMA 7B Q5_1 |
170 | 169 | 169 | π€ͺ 47.58 | π§ 0.00 | β» 0 | LongChat v1.5 32K 7B Q5_K_M (ext. context maybe broken) |
171 | 170 | 171 | π€ͺ 47.58 | π§ 0.00 | β» 0 | LMSYS LongChat 1.5 32k 7B Q5_1 (ext. context maybe broken) |
13B Models
2023-11-01 Benchmark Re-Run V3: I am currently running a completely new benchmark. Until I get around to updating this page, you can find the most recent results here: http://ayumi.m8geil.de/ayumi_bench_v3_results.html
- See Ranking Changelog to see which GGUF/GGML Models were added
- Benchmark Results as CSV - Timestamp 20231004_193917
Rank | ALC-IQ Rank | ERP Rank | ALC-IQ | ERP Score | ERP Var Score | Model |
---|---|---|---|---|---|---|
π₯ 1 | π 2 | π 13 | βπ§ 93.20 | πΆπΆ 26.59 | β 147 | π₯ππ MLewdBoros LRPSGPT 2Char 13B Q5_K_M |
π₯ 2 | π 1 | π 20 | βπ§ 93.43 | πΆπΆ 27.08 | β 140 | π₯ππ Athena v1 13B Q5_K_M |
π₯ 3 | π 16 | π 5 | βπ§ 91.88 | πΆπΆ 27.82 | β 149 | π₯ππ MLewdBoros 13B Q5_K_M |
π₯ 4 | π 5 | π 27 | βπ§ 92.97 | πΆπΆ 26.10 | β 141 | π₯ππ Airoboros 2.1 13B Q5_K_M |
π₯ 5 | π 20 | π 17 | βπ§ 91.36 | πΆπΆ 29.75 | β 136 | π₯ππ Pygmalion 2 SuperCOT 13B Q5_K_M |
π₯ 6 | π 14 | π 32 | βπ§ 92.22 | πΆ 25.59 | β 145 | π₯ππ ReMM Mistral 13B Q5_K_M |
π₯ 7 | 38 | π 4 | βπ§ 89.98 | πΆπΆ 28.91 | β 145 | π₯π Slerpeno 13B Q5_K_M |
π₯ 8 | π 12 | π 39 | βπ§ 92.51 | πΆπΆ 26.47 | β 133 | π₯ππ Amethyst 13B Q5_K_M |
π₯ 9 | π 22 | π 31 | βπ§ 91.07 | πΆπΆ 28.20 | β 133 | π₯ππ ReMM v2 13B Q5_K_M |
π₯ 10 | π 13 | π 43 | βπ§ 92.51 | πΆπΆ 26.96 | β³ 129 | π₯ππ Amethyst Mistral 13B Q4_K_S |
π₯ 11 | π 4 | 55 | βπ§ 93.03 | πΆ 24.94 | β 136 | π₯π MythoMix 13B Q5_K_M |
π₯ 12 | 32 | π 25 | βπ§ 90.26 | πΆπΆ 29.09 | β 134 | π₯π AppleSauce 13B Q5_K_M |
π₯ 13 | π 18 | π 46 | βπ§ 91.53 | πΆπΆ 26.83 | β³ 127 | π₯ππ MythoMakiseMerged 13B Q5_K_M |
π₯ 14 | 45 | π 16 | βπ§ 89.52 | πΆπΆ 26.95 | β 144 | π₯π MLewd V2-1 015 13B Q4_K_S |
π₯ 15 | 40 | π 22 | βπ§ 89.92 | πΆπΆ 25.69 | β 156 | π₯π Spicyboros 2.2_2 13B Q5_K_M |
π₯ 16 | 31 | π 33 | βπ§ 90.32 | πΆπΆ 26.75 | β 136 | π₯π Airoboros Creative lmoe 13B Q5_K_M |
π₯ 17 | 47 | π 21 | π§ 89.34 | πΆπΆ 27.02 | β 139 | π₯π Athena v2 13B Q5_K_M |
π₯ 18 | 29 | π 44 | βπ§ 90.38 | πΆπΆ 28.19 | β³ 126 | π₯π ReMM v2.2 13B Q5_K_M |
π₯ 19 | π 27 | π 48 | βπ§ 90.44 | πΆπΆ 26.31 | β³ 131 | π₯ππ OpenRP 13B Q5_K_M |
π₯ 20 | 28 | π 47 | βπ§ 90.44 | πΆ 23.88 | β 145 | π₯π Redmond Puffin 13B Q5_1 |
π₯ 21 | 65 | π 6 | π§ 88.65 | πΆπΆ 28.26 | β 147 | π₯π MLewd v2-2 13B Q5_K_M |
π₯ 22 | π 17 | 66 | βπ§ 91.65 | πΆπΆ 28.64 | β³ 119 | π₯π ReMM 0.65 SLERP 13B Q5_K_M |
π₯ 23 | π 24 | 58 | βπ§ 90.90 | πΆπΆ 27.80 | β³ 124 | π₯π ReMM v2.1 13B Q5_K_M |
π₯ 24 | π 6 | 80 | βπ§ 92.86 | πΆπΆ 26.11 | β³ 122 | π₯π MythoMax Kimiko V2 13B Q5_K_M |
π₯ 25 | 33 | π 52 | βπ§ 90.21 | πΆπΆ 27.40 | β³ 125 | π₯π MLewdBoros SuperCOT 13B Q5_K_M |
π₯ 26 | 39 | π 53 | βπ§ 89.92 | πΆπΆ 29.90 | β³ 121 | π₯π BerrySauce 13B Q5_K_M |
π₯ 27 | 34 | 62 | βπ§ 90.21 | πΆ 23.34 | β 139 | π₯ Stheno 1.3 13B Q5_K_M |
π₯ 28 | 89 | π 2 | π 87.33 | πΆπΆ 29.01 | β 151 | π₯π MLewd V2-1 13B Q5_K_M |
π₯ 29 | 46 | 56 | βπ§ 89.46 | πΆ 25.36 | β 134 | π₯ MLewd Chat 13B Q5_K_M |
π₯ 30 | 86 | π 10 | π 87.38 | πΆπΆ 26.75 | β 147 | π₯π Unholy v1 10L 13B Q5_K_M |
π₯ 31 | 30 | 79 | βπ§ 90.38 | πΆ 23.05 | β 136 | π₯ Magpie 13B Q5_K_M |
π₯ 32 | 55 | π 50 | π§ 88.94 | πΆπΆ 26.38 | β³ 129 | π₯π Pygmaltion 2 SuperCOT weighted 13B Q5_K_M |
π₯ 33 | 87 | π 14 | π 87.38 | πΆπΆ 26.75 | β 147 | π₯π Unholy v1 13B Q5_K_M |
π₯ 34 | 93 | π 11 | π 87.10 | πΆπΆ 26.69 | β 147 | π₯π Unholy v1 12L 13B Q5_K_M |
π₯ 35 | π 19 | 102 | βπ§ 91.36 | πΆ 25.64 | β³ 118 | π₯π ReMM v2 Kimiko v2 13B Q5_K_M |
π₯ 36 | 72 | π 40 | π 88.25 | πΆ 23.56 | β 157 | π₯π ZettaPi 13B Q5_K_M |
π₯ 37 | 42 | 76 | βπ§ 89.86 | πΆ 24.61 | β³ 131 | π₯ UndiMix v3 13B Q5_K_M |
π₯ 38 | 54 | 67 | π§ 89.00 | π 22.64 | β 142 | π₯ Airoboros L2 2.2.1 13B Q5_K_M |
π₯ 39 | 64 | 57 | π§ 88.65 | πΆ 25.53 | β 133 | π₯ Teknium OpenHermes 13B Q5_K_S |
π₯ 40 | 50 | 77 | π§ 89.23 | πΆπΆ 26.10 | β³ 123 | π₯ ReMM v2 Variant 13B Q5_K_M |
π₯ 41 | 52 | 78 | π§ 89.06 | πΆ 22.80 | β 137 | π₯ Airoboros 2.2 13B Q5_K_M |
π₯ 42 | 96 | π 26 | π 86.87 | πΆπΆ 26.75 | β 138 | π₯π ReMM 13B Q5_K_M |
π₯ 43 | 98 | π 24 | π 86.69 | πΆ 25.45 | β 157 | π₯π MLewd V2-1 050 13B Q4_K_S |
π₯ 44 | 35 | 100 | βπ§ 90.21 | πΆ 25.09 | β³ 121 | π₯ Chronos Beluga 13B Q5_K_M |
π₯ 45 | 113 | π 8 | π 86.00 | πΆπΆ 26.33 | β 163 | π₯π Stheno Inverted 1.2 13B Q5_K_M |
π₯ 46 | 101 | π 23 | π 86.64 | πΆπΆ 27.03 | β 138 | π₯π MLewd v2 13B Q5_K_M |
π₯ 47 | π 3 | 141 | βπ§ 93.20 | πΆ 25.64 | 109 | π₯π MythoMaxKurisu 13B Q5_K_M |
π₯ 48 | 60 | 75 | π§ 88.71 | πΆ 23.25 | β 136 | π₯ Spicyboros 2.2 13B Q4_K_M |
π₯ 49 | 58 | 82 | π§ 88.82 | πΆπΆ 26.30 | β³ 120 | π₯ Chronolima Airo Grad 13B Q5_K_M |
π₯ 50 | 56 | 89 | π§ 88.94 | πΆ 24.99 | β³ 124 | π₯ UndiMix v4 13B Q5_K_M |
π₯ 51 | π 9 | 146 | βπ§ 92.57 | πΆ 24.20 | 111 | π₯π Huginn v1.2 13B Q5_K_M |
π₯ 52 | π 15 | 142 | βπ§ 92.17 | π 18.04 | β³ 129 | π₯π Huginn 13B Q5_K_M |
π₯ 53 | π 10 | 148 | βπ§ 92.57 | πΆ 24.20 | 111 | π₯π ReMM SLERP 13B Q5_K_M |
π₯ 54 | 124 | π 12 | π 85.54 | πΆπΆ 26.24 | β 156 | π₯π Holomax 13B Q5_K_M |
π₯ 55 | 104 | π 38 | π 86.58 | πΆ 25.55 | β 139 | π₯π ReMM Lion 13B Q5_K_M |
π₯ 56 | 94 | π 51 | π 87.04 | πΆ 24.89 | β 137 | π₯π LLaMA-2 Chat Uncensored 13B Q5_1 |
π₯ 57 | π 11 | 151 | βπ§ 92.57 | πΆ 24.20 | 111 | π₯π MythoMax 13B Q5_K_M |
π₯ 58 | 102 | π 45 | π 86.64 | πΆ 25.29 | β 138 | π₯π Chronos Hermes 2 13B Q5_K_M |
π₯ 59 | 71 | 85 | π 88.31 | πΆπΆ 29.46 | 113 | π₯ Blind Test Janus 13B Q5_1 |
π₯ 60 | π 23 | 143 | βπ§ 91.01 | π 22.52 | β³ 119 | π₯π Emerhyst 13B Q5_K_M |
π₯ 61 | 81 | 74 | π 87.56 | πΆπΆ 28.50 | β³ 117 | π₯ Pygmalion 2 SuperCOT2 13B Q5_K_M |
π₯ 62 | 36 | 129 | βπ§ 90.09 | πΆπΆ 27.53 | 101 | π₯ OpenRP SuperCOT 13B Q5_K_M |
π₯ 63 | 57 | 108 | π§ 88.94 | π 22.06 | β 133 | π₯ Orca Mini v3 13B Q5_K_M |
π₯ 64 | 146 | π 7 | π€ 84.22 | πΆπΆ 29.48 | β 140 | π₯π OpenAssistant LLaMA-2 8k Orca 13B Q5_K_M (ext. context maybe broken) |
π₯ 65 | π 7 | 179 | βπ§ 92.86 | πΆ 23.29 | 105 | π₯π MythoMax Kimiko Mix 13B Q5_K_M |
π₯ 66 | 49 | 131 | π§ 89.29 | πΆ 24.63 | 114 | π₯ Airolima Chronos Grad 13B Q5_K_M |
π₯ 67 | 135 | π 28 | π 84.85 | πΆπΆ 26.33 | β 139 | π₯π qCammel L2 13B Q5_K_M |
π₯ 68 | 151 | π 9 | π€ 83.76 | πΆπΆ 27.86 | β 142 | π₯π Athena v3 13B Q5_K_M |
π₯ 69 | 62 | 117 | π§ 88.65 | π 17.56 | β 136 | π₯ Stheno Chat 13B Q5_K_M |
π₯ 70 | 59 | 122 | π§ 88.71 | πΆπΆ 26.05 | 111 | π₯ Unholy v1.1 13B Q5_K_M |
π₯ 71 | 48 | 136 | π§ 89.34 | π 21.03 | β³ 125 | π₯ StableBeluga 13B Q5_K_M |
π₯ 72 | 109 | 69 | π 86.12 | πΆπΆ 25.86 | β³ 126 | π₯ Airoboros GPT4 1.4.1 13B Q5_K_M |
π₯ 73 | 91 | 92 | π 87.15 | πΆ 24.94 | β³ 124 | π₯ Mistral PetroLimaRP v3 12B Q5_K_M |
π₯ 74 | 168 | π 1 | π€ 80.82 | πΆπΆ 28.11 | β 164 | π₯π Legerdemain 13B Q5_K_M |
π₯ 75 | 83 | 106 | π 87.44 | π 21.62 | β 134 | π₯ Pygmalion 2 13B Q5_K_M |
π₯ 76 | π 25 | 176 | βπ§ 90.67 | π§ 14.06 | β³ 125 | π₯π Inkbot 4k 13B Q4_K_M |
π₯ 77 | 144 | π 36 | π€ 84.27 | πΆπΆ 25.85 | β 138 | π₯π Stheno Inverted 13B Q5_K_M |
π₯ 78 | 122 | 63 | π 85.60 | πΆ 25.19 | β³ 131 | π₯ MegaMix S1 13B Q5_K_M |
π₯ 79 | 130 | π 54 | π 85.14 | πΆπΆ 25.70 | β 133 | π₯π ReMM PIPPA 13B Q5_K_M |
π₯ 80 | 125 | 60 | π 85.48 | πΆπΆ 26.64 | β³ 125 | π₯ ReMM v1 LRPSGPT 2Char 13B Q5_K_M |
π₯ 81 | π 21 | 185 | βπ§ 91.07 | π 16.73 | β³ 117 | π₯π LlongOrca 16K 13B Q5_K_M (ext. context maybe broken) |
π₯ 82 | 66 | 132 | π§ 88.65 | π 22.06 | β³ 124 | π₯ Kimiko V2 13B Q5_K_M |
π₯ 83 | 153 | π 29 | π€ 83.24 | πΆπΆ 28.87 | β 133 | π₯π ReMM S Kimiko v2 13B Q5_K_M |
π₯ 84 | 79 | 118 | π 87.90 | π 22.34 | β³ 126 | π₯ Kimiko 13B Q5_K_M |
π₯ 85 | 120 | 72 | π 85.77 | πΆ 24.29 | β 132 | π₯ GradientPutri MegaMix S1 13B Q5_K_S |
π₯ 86 | 51 | 155 | π§ 89.23 | πΆ 23.27 | 114 | π₯ Vigogne 2 13B Q5_K_M |
π₯ 87 | 76 | 126 | π 88.02 | πΆπΆ 30.14 | β» 94 | π₯ Airochronos 13B Q5_K_M |
π₯ 88 | 171 | π 15 | π€ 80.36 | πΆπΆ 26.06 | β 157 | π₯π Huginn v3 13B Q5_K_M |
π₯ 89 | 103 | 99 | π 86.58 | πΆ 24.99 | β³ 122 | π₯ Saiga 2 13B Q5_K |
π₯ 90 | 74 | 134 | π 88.13 | πΆ 25.43 | 111 | π₯ MythoLogic 13B Q5_K_M |
π₯ 91 | 172 | π 18 | π€ 80.36 | πΆπΆ 26.06 | β 157 | π₯π Huginn v4 13B Q5_K_M |
π₯ 92 | 119 | 83 | π 85.83 | πΆ 25.14 | β³ 125 | π₯ Mythalion 13B Q5_K_M |
π₯ 93 | 173 | π 19 | π€ 80.36 | πΆπΆ 26.06 | β 157 | π₯π Huginn v4.5 13B Q5_K_M |
π₯ 94 | 44 | 175 | βπ§ 89.57 | π 19.85 | β³ 118 | π₯ Redmond Puffin v1.3 13B Q5_K_M |
95 | 189 | π 3 | π€ 76.96 | πΆπΆ 29.33 | β 146 | π Airoboros 2.1 YaRN 64K 13B Q5_K_M |
96 | 134 | 71 | π 84.85 | πΆ 23.97 | β 135 | Guanaco Uncensored 13B Q5_K_M |
97 | 115 | 95 | π 86.00 | πΆ 23.87 | β³ 126 | Firefly v1.2 13B Q5_K_M |
98 | 121 | 90 | π 85.71 | πΆπΆ 26.04 | β³ 120 | Fireflx v1.2 13B Q5_K_M |
99 | 82 | 137 | π 87.56 | π 21.44 | β³ 124 | Chronos Hermes v2 13B Q5_K_M |
100 | 169 | π 34 | π€ 80.70 | πΆπΆ 25.78 | β 142 | π MLewd v1 13B Q5_K_M |
101 | 141 | 68 | π€ 84.62 | πΆ 25.33 | β³ 131 | Camel Platypus 2 13B Q5_K_M |
102 | 63 | 163 | π§ 88.65 | π 22.40 | 115 | MXLewdMini 13B Q5_K_M |
103 | 77 | 149 | π 88.02 | πΆπΆ 31.72 | β» 78 | Airoboros GPT4 2.0 13B Q5_K_M |
104 | 99 | 123 | π 86.69 | π 21.37 | β³ 127 | h2oGPT 13B (link broken) Q5_K_M |
105 | 163 | π 49 | π€ 81.51 | πΆ 23.18 | β 150 | π Huginn v1.3 13B Q5_K_M |
106 | 177 | π 35 | π€ 79.78 | πΆπΆ 26.07 | β 136 | π MegaMix T1 13B Q5_K_M |
107 | 69 | 165 | π§ 88.42 | π 20.60 | β³ 119 | Stheno 1.8 13B Q5_K_M |
108 | 176 | π 37 | π€ 79.84 | πΆπΆ 26.15 | β 136 | π MLewd v1-7 TRY2 13B Q5_K_M |
109 | 67 | 170 | π§ 88.59 | π 18.80 | β³ 120 | Stable Platypus 2 13B Q5_K_M |
110 | 108 | 121 | π 86.41 | π 19.82 | β 132 | Chronos 2 13B Q5_K_M |
111 | 184 | π 30 | π€ 78.00 | πΆπΆ 28.05 | β 135 | π AlpacaCielo 13B Q5_K_M |
112 | 175 | π 41 | π€ 79.95 | πΆ 24.02 | β 150 | π LLongMA-2 Storysummarizer 13B Q5_K_M (ext. context maybe broken) |
113 | 75 | 162 | π 88.13 | πΆ 24.39 | 108 | Chronoboros Grad 13B Q5_K_M |
114 | 80 | 157 | π 87.62 | πΆπΆ 31.26 | β» 73 | Airoboros GPT4 2.0 LLaMA-2 13B Q5_K_M |
115 | 68 | 172 | π§ 88.48 | π 18.52 | β³ 120 | UndiMix v2 13B Q5_K_M |
116 | 145 | 81 | π€ 84.22 | πΆ 23.08 | β 136 | Platypus 2 13B Q5_K_M |
117 | π 8 | 246 | βπ§ 92.80 | π§ 12.13 | β» 81 | π LLaMA-2 Ensemble v6 13B Q5_K_M |
118 | 110 | 124 | π 86.12 | πΆπΆ 32.04 | β» 92 | Thorns 13B Q5_K_M |
119 | 129 | 104 | π 85.20 | πΆ 22.98 | β³ 129 | StableBeluga Instruct PL Lora 13B Q5_1 |
120 | 170 | 65 | π€ 80.53 | πΆ 23.01 | β 141 | Gywy Chinese v1 13B Q5_1 |
121 | 133 | 111 | π 84.97 | πΆπΆ 26.09 | 114 | Hermes Kimiko 13B Q5_K_M |
122 | 111 | 138 | π 86.06 | π 22.40 | β³ 121 | Chronohermes Grad 13B Q5_K_M |
123 | 178 | 59 | π€ 79.26 | πΆ 25.63 | β 132 | MLewd 13B Q5_K_M |
124 | 70 | 192 | π 88.31 | π§ 13.44 | β³ 118 | LLaMA-2 Chat AYT 13B Q5_K_M |
125 | 157 | 93 | π€ 82.95 | πΆπΆ 29.00 | 112 | Crestfall FrankenMon 13B Q5_K_M |
126 | 164 | 86 | π€ 81.22 | πΆ 24.78 | β³ 125 | MegaMix A1 13B Q5_K_M |
127 | 53 | 222 | π§ 89.06 | π§ 14.21 | 102 | TerraMix 16K 13B Q5_K_M (ext. context maybe broken) |
128 | 181 | 73 | π€ 78.69 | πΆπΆ 28.48 | β³ 117 | Frank Uncensored 13B Q5_K_M |
129 | 43 | 243 | βπ§ 89.69 | π§ 11.99 | β» 89 | WizardLM 1.2 PL 13B Q5_1 |
130 | 155 | 110 | π€ 83.06 | πΆ 23.71 | β³ 124 | Frankensteins Monster 13B Q4_K_S |
131 | π 26 | 265 | βπ§ 90.61 | π§ 5.88 | β» 70 | π PuddleJumper 13B Q5_K_M |
132 | 167 | 96 | π€ 80.93 | πΆπΆ 26.54 | 116 | OniiChat Hermes Limarp 13B Q5_K_M |
133 | 97 | 181 | π 86.75 | π 19.23 | 115 | LLaMA-2 Mistral 13B Q5_K_M |
134 | 37 | 253 | βπ§ 90.09 | π§ 8.02 | β» 78 | WizardLM v1.2 13B Q4_0 |
135 | 126 | 147 | π 85.37 | πΆ 23.61 | 113 | Nous Hermes 13B Q5_K_M |
136 | 78 | 208 | π 87.90 | π§ 15.16 | 109 | UndiMix v1 13B Q5_K_M |
137 | 128 | 150 | π 85.20 | π 22.53 | β³ 117 | Nous Hermes LLaMA-2 13B Q5_K_M |
138 | 174 | 98 | π€ 80.18 | πΆ 24.93 | β³ 123 | Stheno 13B Q5_K_M |
139 | 149 | 128 | π€ 83.93 | πΆ 22.69 | β³ 122 | LLaMA-2 Guanaco 13B Q4_1 |
140 | 132 | 156 | π 85.02 | π 19.86 | β³ 122 | EverythingLM V3 16K 13B Q5_K_M (ext. context maybe broken) |
141 | 41 | 267 | βπ§ 89.92 | π§ 4.50 | β» 60 | Speechless LLaMA-2 13B Q5_K_M |
142 | 88 | 211 | π 87.33 | π§ 16.72 | 105 | UltraLM v2.0 13B Q5_K_M |
143 | 84 | 216 | π 87.44 | π§ 13.63 | 106 | Spring Dragon 13B Q5_K_M |
144 | 216 | 61 | π€ͺ 70.97 | π 22.40 | β 153 | Nous Yarn 128K 13B Q5_K_M (ext. context maybe broken) |
145 | 73 | 234 | π 88.25 | π§ 12.34 | β» 94 | LLaMA-2 LoRA Assemble 13B Q5_K_M |
146 | 199 | 84 | π€ 74.83 | π 19.55 | β 148 | Dans RetroRodeo 13B Q5_K_M |
147 | 197 | 88 | π€ 75.75 | πΆ 24.38 | β³ 126 | Nous Hermes Writer 13B Q4_K_S |
148 | 185 | 105 | π€ 77.76 | πΆ 23.59 | β³ 125 | WizardMath V1.0 13B Q5_K_M |
149 | 188 | 103 | π€ 77.13 | π 22.62 | β³ 131 | Nous Yarn 64K 13B Q5_K_M |
150 | 222 | 64 | π€ͺ 68.43 | πΆ 23.52 | β 138 | Chronos Hermes SuperHOT 8K 13B Q5_1 (ext. context maybe broken) |
151 | 85 | 230 | π 87.38 | π§ 11.67 | 102 | Marcoroni 13B Q5_K_M |
152 | 143 | 161 | π€ 84.33 | πΆ 24.71 | 108 | Hermes LimaRP 13B Q4_K_M |
153 | 105 | 207 | π 86.58 | π 16.93 | 106 | Mythical Destroyer V2 13B (link broken) Q5_K_M |
154 | 158 | 144 | π€ 82.72 | πΆ 23.49 | 115 | Chronorctypus Limarobormes 13B Q5_K_M |
155 | 112 | 203 | π 86.06 | π 18.32 | 109 | OpenChat v3.2 13B Q5_K_M |
156 | 127 | 186 | π 85.31 | π§ 14.83 | β³ 119 | OpenOrcaxOpenChat Preview2 13B Q5_1 |
157 | 150 | 160 | π€ 83.87 | π 22.34 | β³ 117 | Synthia 13B Q5_K_M |
158 | 61 | 269 | π§ 88.71 | π§ 4.34 | β» 46 | Iubaris V3 13B Q5_K_M |
159 | 117 | 202 | π 85.89 | π 17.65 | 110 | LosslessMegaCoder Mini 13B Q5_K_M |
160 | 95 | 231 | π 86.92 | π§ 12.34 | β» 99 | LLaMA-2 Chat Limarp v2 13B Q5_K_M |
161 | 191 | 116 | π€ 76.27 | πΆ 25.12 | β³ 117 | Manticore SuperHOT 8K 13B Q5_K_M (ext. context maybe broken) |
162 | 123 | 198 | π 85.60 | π§ 16.16 | 112 | OpenBuddy LLaMA-2 v11.1 13B Q5_K_M |
163 | 183 | 127 | π€ 78.17 | πΆπΆ 30.71 | β» 93 | Airoboros GPT4 m2.0 13B Q5_K_M |
164 | 182 | 135 | π€ 78.51 | π 17.65 | β 132 | Holodeck 1 13B Q5_K |
165 | 154 | 169 | π€ 83.12 | π 21.42 | 116 | ALMA Pretrain 13B Q5_K_M |
166 | 260 | π 42 | π€ͺ 61.52 | πΆ 24.76 | β 143 | π Hermes LLongMA 2 8K 13B Q5_1 (ext. context maybe broken) |
167 | 100 | 239 | π 86.64 | π§ 12.88 | β» 91 | OpenOrca STX 13B Q5_K_M |
168 | 92 | 251 | π 87.15 | π§ 7.45 | β» 86 | Samantha 1.11 13B Q5_K_M |
169 | 207 | 114 | π€ 72.12 | π 19.75 | β 136 | Vicuna 1.3 PL 13B Q5_1 |
170 | 136 | 200 | π 84.74 | π 19.43 | 107 | CalliopeDS 13B Q5_K_M |
171 | 210 | 115 | π€ͺ 71.89 | π 19.80 | β 135 | MAmmoTH 13B Q5_K_M |
172 | 90 | 260 | π 87.33 | π§ 7.97 | β» 74 | Speechless Hermes Orca Plat WizLM 13B Q5_K_M |
173 | 137 | 204 | π 84.74 | π 16.80 | 109 | LLaMA-2 Ensemble v5 13B Q5_K_M |
174 | 235 | 87 | π€ͺ 66.24 | πΆ 24.15 | β³ 127 | LLaMA SuperCOT 13B Q5_K_M |
175 | 165 | 173 | π€ 81.16 | πΆ 22.88 | 111 | Stheno 1.2 13B Q5_K_M |
176 | 252 | 70 | π€ͺ 63.82 | π 22.56 | β 142 | Chronos Hermes 13B Q5_K_M |
177 | 195 | 140 | π€ 75.92 | πΆπΆ 28.25 | β» 93 | Airoboros GPT4 m2.0 LLaMA-2 13B Q5_K_M |
178 | 159 | 184 | π€ 82.60 | π 17.85 | 116 | Dans QuestionableCocktail 2 13B Q4_1 |
179 | 234 | 94 | π€ͺ 66.42 | πΆπΆ 28.52 | 113 | Airoboros GPT4 1.3 13B Q5_1 |
180 | 131 | 219 | π 85.08 | π 18.80 | β» 90 | Tsukasa Limarp 16K 13B Q5_K_M (ext. context maybe broken) |
181 | 106 | 249 | π 86.52 | π§ 9.47 | β» 83 | Mythical Destroyer 13B Q5_K_M |
182 | 107 | 250 | π 86.46 | π§ 8.65 | β» 86 | Athena-tmp 13B Q5_K_M |
183 | 139 | 212 | π€ 84.62 | π 16.80 | 103 | OpenOrca Platypus 2 13B Q5_K_M |
184 | 244 | 91 | π€ͺ 64.63 | πΆπΆ 26.10 | β³ 119 | MythoBoros 13B Q5_K_M |
185 | 160 | 195 | π€ 82.32 | π§ 14.65 | 114 | OpenOrcaxOpenChat 2 LangChain Chat 13B Q5_1 |
186 | 118 | 247 | π 85.83 | π§ 11.03 | β» 84 | ChatAYT Lora Assamble Marcoroni 13B Q5_K_M |
187 | 161 | 199 | π€ 82.20 | π 18.20 | 110 | Vicuna v1.5 16K 13B Q5_K_M (ext. context maybe broken) |
188 | 180 | 177 | π€ 78.86 | π 21.18 | 113 | YuLan Chat 2 13B Q5_K_M |
189 | 116 | 254 | π 86.00 | π§ 8.08 | β» 78 | LLaMA-2 Chinese Chat 13B Q5_1 |
190 | 114 | 257 | π 86.00 | π§ 6.74 | β» 78 | LLaMA-2 13B Q5_K_M |
191 | 142 | 225 | π€ 84.56 | π§ 12.67 | 103 | LLaMA-2 LangChain Chat 13B Q5_K_S |
192 | 156 | 209 | π€ 83.01 | π 17.96 | 104 | Sentdex WSB GPT 13B Q5_K_M |
193 | 202 | 159 | π€ 72.81 | πΆ 23.57 | 111 | Manticore 13B Q5_K_M |
194 | 242 | 112 | π€ͺ 65.15 | πΆ 23.25 | β³ 125 | Wizard Vicuna Uncensored SuperHOT 8k 13B Q5_K_S (ext. context maybe broken) |
195 | 138 | 237 | π 84.74 | π§ 8.75 | β» 98 | LLaMA-2 Chat 13B Q5_1 |
196 | 248 | 107 | π€ͺ 64.23 | π 22.36 | β³ 131 | MyhtoLogic 13B Q5_K_M |
197 | 247 | 109 | π€ͺ 64.29 | πΆ 22.80 | β³ 127 | Guanaco 13B Q5_K_M |
198 | 254 | 101 | π€ͺ 63.54 | π 20.82 | β 136 | Chronos 13B Q5_K_M |
199 | 179 | 193 | π€ 79.03 | πΆ 23.06 | β» 94 | Dans MythsteryModel 13B Q5_K_M |
200 | 212 | 154 | π€ͺ 71.49 | πΆ 24.43 | 110 | JanniesBasedLigma 13B Q5_K_M |
201 | 213 | 153 | π€ͺ 71.43 | π 21.15 | β³ 121 | Tsukasa Limarp 13B Q5_K_M |
202 | 204 | 164 | π€ 72.47 | π 19.03 | β³ 122 | CodeLLaMA Oasst SFT V10 13B Q5_K_M |
203 | 261 | 97 | π€ͺ 60.77 | πΆ 23.57 | β³ 127 | OpenLLaMA 13B Q5_K_M |
204 | 243 | 119 | π€ͺ 64.75 | π 21.20 | β³ 129 | Chronos WizardLM UC SCOT ST 13B Q5_K_M |
205 | 140 | 245 | π€ 84.62 | π§ 8.72 | β» 90 | Luban 13B Q5_K_M |
206 | 245 | 120 | π€ͺ 64.63 | πΆ 23.07 | β³ 123 | OpenBuddy OpenLLaMA v7 13B Q4_K |
207 | 230 | 139 | π€ͺ 67.17 | πΆ 23.79 | 114 | WizardLM V1.0 Uncensored 13B Q5_K_M |
208 | 238 | 133 | π€ͺ 65.78 | πΆ 24.05 | 115 | Chimera 13B Q5_K_M |
209 | 186 | 197 | π€ 77.59 | π 18.79 | 110 | Barcenas 13B Q5_K_M |
210 | 224 | 152 | π€ͺ 68.15 | π 21.75 | β³ 120 | Chronos SuperHOT 8K 13B Q5_K_M (ext. context maybe broken) |
211 | 152 | 241 | π€ 83.35 | π§ 11.90 | β» 92 | Trurl 2 Polish 13B Q5_1 |
212 | 206 | 178 | π€ 72.29 | π 20.37 | 114 | CAMEL Combined Data 13B Q5_K_M |
213 | 218 | 167 | π€ͺ 69.99 | πΆ 23.13 | 111 | Minotaur 13B Q5_K_M |
214 | 201 | 189 | π€ 73.10 | π 19.90 | 110 | Tulu 13B Q5_K_M |
215 | 265 | 113 | π€ͺ 57.89 | πΆπΆ 26.16 | 113 | Petra Instruct 13B Q5_K_M |
216 | 166 | 232 | π€ 80.99 | π§ 14.00 | β» 95 | Trurl 2 Polish Instruct 13B Q5_1 |
217 | 147 | 256 | π€ 84.10 | π§ 6.80 | β» 78 | Codeup Alpha 13B Q5_K_M |
218 | 253 | 130 | π€ͺ 63.77 | π 22.02 | β³ 125 | Alpacino SuperCOT 13B Q4_0 |
219 | 223 | 168 | π€ͺ 68.20 | πΆ 24.15 | 107 | Hypermantis 13B Q5_K_M |
220 | 225 | 166 | π€ͺ 68.03 | π 21.14 | β³ 119 | MedAlpaca 13B Q5_1 |
221 | 208 | 187 | π€ͺ 72.00 | π 22.22 | 107 | Heegyu LIMA2 13B Q5_1 |
222 | 148 | 259 | π€ 84.10 | π§ 6.80 | β» 78 | h2oGPT Chat 13B (link broken) Q5_K_M |
223 | 236 | 158 | π€ͺ 66.24 | π 22.27 | β³ 118 | Dans PersonalityEngine 13B Q5_1 |
224 | 264 | 125 | π€ͺ 59.39 | π 22.47 | β³ 124 | Nous-Hermes 13B Q4_0 |
225 | 198 | 205 | π€ 75.06 | π 20.50 | β» 96 | Vicuna 1.5 13B Q5_0 |
226 | 220 | 183 | π€ͺ 68.61 | π 22.27 | 109 | WizardMega 13B Q5_K_M |
227 | 194 | 215 | π€ 75.92 | π§ 16.27 | 101 | Chinese Alpaca 2 13B Q5_K |
228 | 231 | 171 | π€ͺ 66.71 | π 21.16 | β³ 117 | OpenBuddy LLaMA-2 v8.1 13B Q3_K |
229 | 162 | 255 | π€ 81.68 | π§ 6.74 | β» 79 | CodeUp LLaMA-2 Chat 13B Q4_K_M |
230 | 232 | 174 | π€ͺ 66.59 | πΆ 24.15 | 105 | HyperMantis 13B Q5_K_M |
231 | 196 | 218 | π€ 75.75 | π 19.48 | β» 89 | WizardLM 1.0 Uncensored 13B Q5_K_M |
232 | 200 | 214 | π€ 74.71 | π 19.62 | β» 94 | LLaMA-2 Instruct Uncensored 13B Q5_0 |
233 | 258 | 145 | π€ͺ 62.90 | πΆ 23.79 | 113 | Carl 13B Q5_K_M |
234 | 221 | 190 | π€ͺ 68.49 | π 18.52 | 113 | LLaMA 13B Q5_K_M |
235 | 190 | 228 | π€ 76.61 | π§ 14.21 | β» 98 | Manticore Chat Pyg 13B Q5_K_M |
236 | 217 | 196 | π€ͺ 70.56 | π§ 15.06 | 113 | Chinese LLaMA-2 13B Q5_K |
237 | 192 | 229 | π€ 76.15 | π§ 14.96 | β» 94 | Manticore Chat Pyg SuperHOT 8K 13B Q5_K_M (ext. context maybe broken) |
238 | 187 | 236 | π€ 77.59 | π§ 16.13 | β» 82 | Vicuna v1.5 13B Q5_K_M |
239 | 227 | 194 | π€ͺ 67.68 | π 20.98 | 107 | CAMEL Role Playing Data 13B Q5_K_M |
240 | 215 | 210 | π€ͺ 71.08 | π 21.44 | β» 91 | BlueMethod 13B Q5_K_M |
241 | 209 | 220 | π€ͺ 72.00 | π§ 14.58 | 101 | OpenBuddy Atom v9 13B Q5_K |
242 | 239 | 188 | π€ͺ 65.67 | π 22.30 | 104 | Ouroboros 13B Q5_K_M |
243 | 193 | 244 | π€ 75.98 | π§ 12.95 | β» 81 | LoKuS 13B Q5_K_M |
244 | 211 | 226 | π€ͺ 71.54 | π§ 12.34 | 103 | CodeLLaMA Instruct 13B Q5_K_M |
245 | 251 | 182 | π€ͺ 63.94 | π 21.87 | 111 | Saiga 13B Q5_1 |
246 | 203 | 242 | π€ 72.58 | π§ 14.85 | β» 78 | Metharme 13B Q5_1 |
247 | 219 | 223 | π€ͺ 69.53 | π§ 14.42 | β» 100 | Pandalyst V1.0 13B Q5_K_M |
248 | 255 | 180 | π€ͺ 63.36 | πΆ 23.35 | 103 | WizardLM Uncensored 13B Q5_K_M |
249 | 228 | 213 | π€ͺ 67.57 | π 17.74 | 101 | WizardLM V1.1 13B Q5_K_M |
250 | 250 | 191 | π€ͺ 63.94 | π§ 16.40 | 114 | CodeLLaMA Python 13B Q5_K_M |
251 | 229 | 217 | π€ͺ 67.40 | π 18.80 | β» 91 | Asclepius 13B Q5_K_M |
252 | 205 | 248 | π€ 72.41 | π§ 11.35 | β» 82 | Manticore Chat Pyg Guanaco 13B Q4_K_M |
253 | 214 | 238 | π€ͺ 71.26 | π§ 14.02 | β» 91 | Vicuna 1.3 German 13B Q5_K_M |
254 | 246 | 206 | π€ͺ 64.46 | π§ 14.41 | 111 | CodeLLaMA 13B Q5_K_M |
255 | 237 | 221 | π€ͺ 66.24 | π§ 15.54 | β» 96 | Vicuna 1.3 13B Q5_1 |
256 | 259 | 201 | π€ͺ 62.62 | π 21.82 | β» 96 | Wizard Vicuna Uncensored 13B Q5_K_M |
257 | 240 | 224 | π€ͺ 65.38 | π§ 15.67 | β» 94 | WizardLM 1.0 13B Q5_K_M |
258 | 241 | 233 | π€ͺ 65.21 | π§ 16.20 | β» 88 | Based 13B Q5_K_M |
259 | 226 | 252 | π€ͺ 67.86 | π§ 10.49 | β» 73 | Nexus Raven 13B Q5_K_M |
260 | 249 | 240 | π€ͺ 64.00 | π§ 14.32 | β» 86 | WizardLM WizardCoder Python V1.0 13B Q4_K_S |
261 | 263 | 227 | π€ͺ 59.56 | π§ 14.46 | β» 95 | Wizard Vicuna 13B Q5_K_M |
262 | 233 | 263 | π€ͺ 66.42 | π§ 8.22 | β» 47 | Dolphin LLaMA 13B Q5_K_M |
263 | 262 | 235 | π€ͺ 60.37 | π§ 14.37 | β» 90 | Vicuna CoT 13B Q5_K_M |
264 | 257 | 266 | π€ͺ 63.31 | π§ 4.42 | β» 71 | Scarlett 13B Q5_K_M |
265 | 256 | 268 | π€ͺ 63.36 | π§ 6.06 | β» 38 | Pygmalion 13B Q5_1 |
266 | 266 | 258 | π€ͺ 57.14 | π§ 10.84 | β» 50 | Taiwan LLaMA V1.0 13B Q5_K_M |
267 | 268 | 262 | π€ͺ 56.57 | π§ 9.46 | β» 44 | Taiwan LLaMA v1.0 13B Q5_K_M |
268 | 267 | 264 | π€ͺ 56.91 | π§ 7.65 | β» 60 | BigTranslate 13B Q4_K_M |
269 | 270 | 261 | π€ͺ 53.46 | π§ 8.80 | β» 60 | Komt LLaMA-2 13B Q5_K_M |
270 | 269 | 271 | π€ͺ 53.92 | π§ 1.27 | β» 11 | LMSYS Vicuna 1.5 16k 13B Q5_1 (ext. context maybe broken) |
271 | 274 | 270 | π€ͺ 50.12 | π§ 1.79 | β» 45 | Stable Vicuna 13B Q5_K_M |
272 | 271 | 274 | π€ͺ 52.42 | π§ 0.00 | β» 0 | EverythingLM V2 16K 13B Q4_K_S (ext. context maybe broken) |
273 | 275 | 272 | π€ͺ 47.70 | π§ 0.62 | β» 7 | Chatxu (L2?) 13B Q4_0 |
274 | 272 | 276 | π€ͺ 52.42 | π§ 0.00 | β» 0 | LLongMA 2 13B Q5_1 (ext. context maybe broken) |
275 | 273 | 275 | π€ͺ 50.81 | π§ 0.00 | β» 0 | EverythingLM 16K 13B Q5_K_M (ext. context maybe broken) |
276 | 276 | 273 | π€ͺ 47.58 | π§ 0.00 | β» 0 | Dans CreepingSenseOfDoom 13B Q5_K_M |
20B to 33B Models
2023-11-01 Benchmark Re-Run V3: I am currently running a completely new benchmark. Until I get around to updating this page, you can find the most recent results here: http://ayumi.m8geil.de/ayumi_bench_v3_results.html
- See Ranking Changelog to see which GGUF/GGML Models were added
- Benchmark Results as CSV - Timestamp 20231004_193917
Rank | ALC-IQ Rank | ERP Rank | ALC-IQ | ERP Score | ERP Var Score | Model |
---|---|---|---|---|---|---|
π₯ 1 | π 1 | π 4 | βπ§ 92.74 | πΆπΆ 30.23 | β 144 | π₯ππ MLewd ReMM Chat 20B Q5_K_M |
π₯ 2 | π 5 | π 3 | βπ§ 91.53 | πΆπΆ 29.62 | β 148 | π₯ππ MLewd ReMM Chat Inverted 20B Q5_K_M |
π₯ 3 | π 4 | 13 | βπ§ 91.65 | πΆπΆ 27.81 | β³ 132 | π₯π MXLewd 20B Q5_K_M |
π₯ 4 | 8 | π 10 | βπ§ 90.44 | πΆ 25.27 | β 148 | π₯π Emerhyst 20B Q5_K_M |
π₯ 5 | π 3 | 19 | βπ§ 92.17 | πΆπΆ 27.77 | β³ 127 | π₯π Airoboros 2.1 33B Q4_K_M |
π₯ 6 | 18 | π 6 | π 88.54 | πΆπΆ 32.89 | β 136 | π₯π MM ReMM 20B Q5_K_M |
π₯ 7 | 13 | 15 | π§ 89.57 | πΆ 24.24 | β 146 | π₯ Huginn 5 Prototype 19B Q4_K_S |
π₯ 8 | 9 | 25 | βπ§ 90.32 | πΆ 24.50 | β³ 134 | π₯ Airoboros GPT4 1.4 33B Q4_K_M |
π₯ 9 | 21 | π 11 | π 88.02 | πΆπΆ 27.55 | β 141 | π₯π Enterredaas 33B Q4_1 |
π₯ 10 | 16 | 17 | π§ 88.71 | πΆπΆ 27.40 | β³ 132 | π₯ Airochronos 33B Q5_K_M |
π₯ 11 | 22 | 14 | π 85.94 | πΆ 24.43 | β 146 | π₯ LLaMA-2 BlockTri Frankenstein 22B Q4_K_M |
π₯ 12 | 24 | π 12 | π 85.43 | πΆ 25.92 | β 142 | π₯π Lazarus 30B Q4_K_M |
π₯ 13 | 14 | 26 | π§ 89.17 | πΆ 23.94 | β³ 134 | π₯ LLaMA SuperCOT 30B Q4_K_M |
π₯ 14 | 23 | 16 | π 85.77 | πΆ 25.68 | β 139 | π₯ Chronoboros 33B Q5_K_M |
π₯ 15 | π 2 | 44 | βπ§ 92.57 | π 22.62 | 115 | π₯π SuperPlatty 30B Q4_K_M |
π₯ 16 | 40 | π 2 | π€ 82.55 | πΆπΆ 35.79 | β 153 | π₯π COTHuginn 4.5 19B Q5_K_M |
π₯ 17 | 7 | 42 | βπ§ 90.73 | π 22.27 | 121 | π₯ Platypus 2 70B Q2_K |
π₯ 18 | 38 | π 5 | π€ 82.83 | πΆπΆ 27.42 | β 147 | π₯π LLaMA 2 Ari03 28B (link broken) Q5_1 |
π₯ 19 | 17 | 33 | π§ 88.71 | πΆ 26.86 | 117 | π₯ Airoboros GPT4 2.0 33B Q5_K_M |
π₯ 20 | 12 | 39 | π§ 89.75 | π 22.10 | β³ 122 | π₯ GPlatty 30B Q4_K_M |
π₯ 21 | 29 | 21 | π 84.62 | πΆπΆ 28.12 | β³ 123 | π₯ Saiga 30B Q5_1 |
22 | 11 | 46 | βπ§ 89.92 | πΆ 25.07 | β» 105 | Airoboros GPT4 m2.0 33B Q5_K_M |
23 | 33 | 20 | π 83.47 | πΆ 26.87 | β³ 130 | Fin LLaMA 33B Q4_K_M |
24 | 28 | 27 | π 84.62 | πΆπΆ 28.68 | 115 | CAMEL Combined Data 33B Q4_K_M |
25 | 26 | 31 | π 84.85 | πΆπΆ 27.18 | 117 | Vigogne Instruct 33B Q4_K_M |
26 | 27 | 32 | π 84.79 | π 23.34 | β³ 128 | LLaMA-2 Frankensteined 22B Q4_K_M |
27 | π 6 | 58 | βπ§ 90.84 | π§ 17.17 | β» 96 | π Platypus 30B Q4_K_M |
28 | 35 | 24 | π€ 83.12 | π 22.96 | β 143 | Guanaco 33B Q4_K_M |
29 | 10 | 54 | βπ§ 90.09 | π§ 18.98 | β» 106 | LLaMA 30B Q5_K_M |
30 | 15 | 49 | π§ 89.00 | π 22.41 | 114 | VicUnlocked LoRA 30B Q4_K_M |
31 | 41 | 18 | π€ 82.55 | πΆπΆ 31.32 | β³ 122 | Carl 33B Q4_K_M |
32 | 54 | π 7 | π€ͺ 75.81 | πΆ 25.84 | β 156 | π Bacchus (L2*) 22B Q4_0 |
33 | 60 | π 1 | π€ͺ 73.44 | πΆπΆ 37.23 | β 166 | π MythoMax 33B Q4_K_M |
34 | 42 | 23 | π€ 82.14 | πΆπΆ 29.65 | 119 | Frank Uncensored 33B Q5_K_M |
35 | 25 | 45 | π 85.14 | π 23.68 | 111 | Lazarus Instruct PL 30B Q4_1 |
36 | 34 | 37 | π€ 83.35 | πΆ 26.54 | β» 109 | WizardLM Uncensored 30B Q5_K_M |
37 | 47 | 22 | π€ 79.49 | π 22.11 | β 147 | Spicyboros C 2.2 34B Q4_K_M |
38 | 59 | π 9 | π€ͺ 73.79 | πΆπΆ 29.12 | β 136 | π Wizard Vicuna LLaMA-2 22B Q4_K_M |
39 | 39 | 36 | π€ 82.72 | πΆ 25.67 | 116 | Vicuna v1.3 33B Q4_K_M |
40 | 19 | 60 | π 88.48 | π§ 9.54 | β» 77 | Upstage LLaMA Instruct 30B Q5_K_M |
41 | 45 | 29 | π€ 80.07 | π 22.99 | β³ 134 | CodeLLaMA 34B Q4_K_M |
42 | 63 | π 8 | π€ͺ 72.47 | πΆπΆ 27.72 | β 142 | π Daydreamer v3 22B Q5_K_M |
43 | 20 | 61 | π 88.13 | π§ 11.63 | β» 71 | Hippogriff 30B Q4_K_M |
44 | 32 | 47 | π 83.87 | π 23.12 | 111 | Tulu 30B Q5_K_M |
45 | 30 | 51 | π 84.27 | π§ 18.75 | 112 | Dans PersonalityEngine 30B Q4_1 |
46 | 50 | 28 | π€ͺ 78.92 | π 21.98 | β 141 | Huginn Prototype 22B Q4_K_M |
47 | 43 | 38 | π€ 81.16 | πΆ 25.16 | 114 | WizardLM V1.0 Uncensored 33B Q4_K_M |
48 | 31 | 55 | π 83.99 | π§ 18.92 | β» 86 | Based 30B Q5_K_M |
49 | 52 | 30 | π€ͺ 78.34 | πΆ 24.40 | β³ 128 | Wizard Vicuna Uncensored 30B Q5_K_M |
50 | 49 | 35 | π€ 79.15 | π 21.51 | β³ 131 | CodeLLaMA Python 34B Q4_K_M |
51 | 44 | 41 | π€ 80.18 | π 20.68 | β³ 126 | Chronos 33B Q5_K_M |
52 | 36 | 52 | π€ 83.06 | π§ 19.81 | β» 106 | Epsilon 30B Q4_K_M |
53 | 37 | 59 | π€ 83.06 | π§ 8.47 | β» 101 | MindFlay 22B Q4_0 |
54 | 56 | 40 | π€ͺ 74.48 | π 20.11 | β³ 126 | Airoboros C 2.1 34B Q5_K_M |
55 | 46 | 53 | π€ 79.55 | π§ 18.91 | β» 110 | Airoboros C 2.2 34B Q4_K_M |
56 | 62 | 34 | π€ͺ 73.16 | πΆ 25.26 | 119 | LLaMA 2 DayDreamer V1 22B Q5_K_M |
57 | 57 | 43 | π€ͺ 74.48 | π 20.11 | β³ 126 | Airoboros C 2.1b 34B Q5_K_M |
58 | 53 | 50 | π€ͺ 76.04 | π§ 17.46 | 121 | CodeLLaMA Instruct 34B Q4_K_M |
59 | 48 | 56 | π€ 79.38 | π§ 13.38 | β» 103 | Synthia v1.2 34B Q4_K_M |
60 | 61 | 48 | π€ͺ 73.21 | π 20.97 | 117 | Phind CodeLLaMA v1 34B Q4_K_S |
61 | 55 | 57 | π€ͺ 74.83 | π§ 14.03 | β» 102 | Airobors C 2.1 34B Q4_K_M |
62 | 51 | 62 | π€ͺ 78.63 | π§ 5.38 | β» 70 | Scarlett 33B Q4_K_M |
63 | 58 | 65 | π€ͺ 74.19 | π§ 4.61 | β» 48 | Samantha 1.11 CodeLLaMA 34B Q4_K_M |
64 | 64 | 63 | π€ͺ 60.08 | π§ 5.86 | β» 69 | BrainToast 20B Q5_K_M |
65 | 66 | 64 | π€ͺ 51.15 | π§ 2.59 | β» 56 | WizardLM 30B Q4_K_M |
66 | 65 | 66 | π€ͺ 52.42 | π§ 0.00 | β» 0 | Airoboros GPT4 1.4 SuperHOT 8K 33B Q4_K_M (ext. context maybe broken) |
About Extended Context (8K, 16K, 32K)
As you may have noticed, there are currently (2023-08-09) a few models that have a bad ALC-IQ and an even worse ERP Score. A few of these models are:
- LLaMA-2 32K 7B
- LMSYS LongChat 1.5 32k 7B
- LLongMA 2 7B
- Hermes LLongMA 2 8K (L2) 7B
And a few others. The reason for this is simple: the GGML file format is a mess. And even after the new GGUF file format arrived, people sometimes fail to properly quantize the context-extended models into a GGUF file. The benchmark sometimes does not have proper results for these models because:
- The GGUF file creator messed up somehow (for instance: converted a GGML file to GGUF without the proper rope scaling settings).
- For GGML Files:
  - A special setting is required in llama.cpp to enable compatibility with these models, namely the `--rope-freq-base` and `--rope-freq-scale` options. These need to be set to the right magic values corresponding to the model at hand.
  - Determining these magic RoPE values would not be hard if they were properly documented. But only a few pages on Hugging Face that provide GGML file quantizations document these. TheBloke really tries hard, but sometimes even the original model uploaders don't provide any information about the right values.
  - And most importantly: it would require me to carry metadata out of band along with each file. I don't have the time to figure out the right values, and I believe most users won't ever bother either.
  - There are also other important options which are not mentioned yet, but are crucial for some GGML files to work properly: `--gqa` (grouped-query attention factor) is one of these; it must be set to the magic value `8` for LLaMA 2 70B to work. `--rms-norm-eps` is an epsilon value for inference of the models. This value differs between LLaMA 1 (`1e-6`) and LLaMA 2 (`1e-5`), and it makes a difference in how well either model works. The original default `1e-6` was recently replaced by `5e-6`, which is halfway between the two values and supposedly should work fine. But in my own tests I saw quite some variance in the performance of the quantized GGML models, which somewhat contradicted what was stated on llama.cpp. I decided not to dig further, because there is still too much sampling randomness involved in the ALC-IQ (beta), which I will eventually fix.
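To make this concrete, below is a minimal sketch of how such a context-extended GGML model would have to be launched; the model file name and the RoPE values are purely illustrative assumptions, only the option names come from llama.cpp, and the right values depend entirely on the specific model.

```python
# Hypothetical llama.cpp invocation for an extended-context GGML model.
# File name and RoPE/eps values are illustrative placeholders; only the
# option names (--rope-freq-base, --rope-freq-scale, --rms-norm-eps, --gqa)
# come from llama.cpp itself.
import subprocess

cmd = [
    "./main",
    "-m", "some-llongma-2-8k.ggmlv3.q5_1.bin",  # placeholder model file
    "--rope-freq-base", "10000",
    "--rope-freq-scale", "0.5",   # e.g. a model trained for 2x linear scaling
    "--rms-norm-eps", "1e-5",     # LLaMA 2 value; LLaMA 1 uses 1e-6
    # "--gqa", "8",               # only needed for LLaMA 2 70B
    "-n", "100",
    "-p", "Hello",
]
subprocess.run(cmd, check=True)
```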
Ranking Changelog
- 2023-11-01 Benchmark Re-Run V3
  I am currently running a completely new benchmark. Until I get around to updating this page, you can find the most recent results here: http://ayumi.m8geil.de/ayumi_bench_v3_results.html
- 2023-10-04 V36
Size | Rank | Model |
---|---|---|
3B-7B | 1 / 171 | π₯ππ(βπ§ βπΆπΆ) Mistral Claude Chat 7B Q5_K_M |
3B-7B | 2 / 171 | π₯ππ(βπ§ β) Mistral ClaudeLimaRP v3 7B Q5_K_M |
3B-7B | 3 / 171 | π₯ππ(βπ§ βπΆπΆ) Mistral RP 0.1 7B Q5_K_M |
3B-7B | 4 / 171 | π₯ππ(βπ§ βπΆπΆ) Synthia v1.3 7B Q5_K_M |
3B-7B | 5 / 171 | π₯ππ(βπ§ βπΆπΆ) Samantha Mistral 7B Q5_K_M |
3B-7B | 6 / 171 | π₯ππ(βπ§ β) Mistral v0.1 7B Q5_K_M |
3B-7B | 8 / 171 | π₯ππ(βπ§ βπΆπΆ) PetrolLM 7B Q5_K_M |
3B-7B | 13 / 171 | π₯π(βπΆπΆ) MistRP v1.1 7B Q8_0 |
3B-7B | 17 / 171 | π₯π(βπ§ ) Kimiko Mistral 7B Q5_K_M |
3B-7B | 18 / 171 | π₯π(βπΆπΆ) Mistral Instruct v0.1 7B Q5_K_M |
3B-7B | 42 / 171 | π₯π(βπΆπΆ) Samantha Mistral Instruct 7B Q5_K_M |
3B-7B | 81 / 171 | LLaMA-2 Mistral 7B Q5_K_M |
3B-7B | 91 / 171 | Medusa 1.3 7B Q5_K_M |
3B-7B | 106 / 171 | π(β) Deacon 3B Q5_0 |
3B-7B | 123 / 171 | Leo Hessianai Chat 7B Q5_K_M |
3B-7B | 147 / 171 | Pandalyst V1.1 7B Q5_K_M |
13B | 6 / 276 | π₯ππ(βπ§ β) ReMM Mistral 13B Q5_K_M |
13B | 8 / 276 | π₯ππ(βπ§ βπΆπΆ) Amethyst 13B Q5_K_M |
13B | 10 / 276 | π₯ππ(βπ§ πΆπΆ) Amethyst Mistral 13B Q4_K_S |
13B | 13 / 276 | π₯ππ(βπ§ πΆπΆ) MythoMakiseMerged 13B Q5_K_M |
13B | 60 / 276 | π₯π(βπ§ ) Emerhyst 13B Q5_K_M |
13B | 68 / 276 | π₯π(βπΆπΆ) Athena v3 13B Q5_K_M |
13B | 73 / 276 | π₯ Mistral PetroLimaRP v3 12B Q5_K_M |
13B | 78 / 276 | π₯ MegaMix S1 13B Q5_K_M |
13B | 85 / 276 | π₯(β) GradientPutri MegaMix S1 13B Q5_K_S |
13B | 106 / 276 | π(βπΆπΆ) MegaMix T1 13B Q5_K_M |
13B | 107 / 276 | Stheno 1.8 13B Q5_K_M |
13B | 126 / 276 | MegaMix A1 13B Q5_K_M |
13B | 133 / 276 | LLaMA-2 Mistral 13B Q5_K_M |
13B | 142 / 276 | UltraLM v2.0 13B Q5_K_M |
13B | 199 / 276 | Dans MythsteryModel 13B Q5_K_M |
13B | 247 / 276 | Pandalyst V1.0 13B Q5_K_M |
13B | 259 / 276 | Nexus Raven 13B Q5_K_M |
20B-34B | 4 / 66 | π₯π(βπ§ β) Emerhyst 20B Q5_K_M |
- 2023-09-25 V35
Size | Rank | Model |
---|---|---|
3B-7B | 34 / 155 | π₯π(βπΆπΆ) Wizard Vicuna Uncensored 7B Q5_K_M |
3B-7B | 36 / 155 | π₯π(βπΆπΆ) Airoboros GPT4 1.4.1 7B Q5_K_M |
3B-7B | 42 / 155 | π₯π(πΆπΆ) Frank Uncensored 7B Q5_K_M |
3B-7B | 49 / 155 | π₯π(πΆπΆ) WizardLM V1.0 Uncensored 7B Q5_K_M |
3B-7B | 52 / 155 | π₯ Airoboros L2 2.2.1 7B Q5_K_M |
3B-7B | 53 / 155 | π(βπΆπΆ) Guanaco 7B Q5_K_M |
3B-7B | 66 / 155 | (πΆπΆ) Xwin LM V0.1 7B Q5_K_M |
3B-7B | 112 / 155 | ALMA Pretrain 7B Q5_K_M |
3B-7B | 113 / 155 | (πΆπΆ) WizardLM Uncensored 7B Q5_K_M |
3B-7B | 115 / 155 | Vicuna CoT 7B Q5_K_M |
3B-7B | 123 / 155 | Tulu 7B Q5_K_M |
3B-7B | 126 / 155 | MAmmoTH 7B Q5_K_M |
3B-7B | 129 / 155 | Gorilla 7B Q5_K_M |
3B-7B | 134 / 155 | Based 7B Q5_K_M |
3B-7B | 149 / 155 | WizardLM 7B Q5_K_M |
3B-7B | 151 / 155 | TinyLLaMA Chat v0.2 1B Q5_K_M |
13B | 13 / 259 | π₯ππ(βπ§ πΆπΆ) ReMM v2.2 13B Q5_K_M |
13B | 16 / 259 | π₯π(βπ§ βπΆπΆ) Athena v2 13B Q5_K_M |
13B | 33 / 259 | π₯π(β) ZettaPi 13B Q5_K_M |
13B | 34 / 259 | π₯(β) Airoboros L2 2.2.1 13B Q5_K_M |
13B | 65 / 259 | π₯(β) Stheno Chat 13B Q5_K_M |
13B | 66 / 259 | π₯(πΆπΆ) Airoboros GPT4 1.4.1 13B Q5_K_M |
13B | 68 / 259 | π₯π(βπ§ ) Inkbot 4k 13B Q4_K_M |
13B | 93 / 259 | MXLewdMini 13B Q5_K_M |
13B | 115 / 259 | (πΆπΆ) Frank Uncensored 13B Q5_K_M |
13B | 127 / 259 | EverythingLM V3 16K 13B Q5_K_M |
13B | 134 / 259 | (β) Dans RetroRodeo 13B Q5_K_M |
13B | 150 / 259 | ALMA Pretrain 13B Q5_K_M |
13B | 157 / 259 | LLaMA SuperCOT 13B Q5_K_M |
13B | 158 / 259 | (β) MAmmoTH 13B Q5_K_M |
13B | 163 / 259 | (β) Chronos Hermes 13B Q5_K_M |
13B | 170 / 259 | (πΆπΆ) MythoBoros 13B Q5_K_M |
13B | 179 / 259 | Guanaco 13B Q5_K_M |
13B | 181 / 259 | Manticore 13B Q5_K_M |
13B | 183 / 259 | MyhtoLogic 13B Q5_K_M |
13B | 185 / 259 | Chronos WizardLM UC SCOT ST 13B Q5_K_M |
13B | 186 / 259 | (β) Chronos 13B Q5_K_M |
13B | 191 / 259 | WizardLM V1.0 Uncensored 13B Q5_K_M |
13B | 193 / 259 | Chimera 13B Q5_K_M |
13B | 197 / 259 | CAMEL Combined Data 13B Q5_K_M |
13B | 198 / 259 | Minotaur 13B Q5_K_M |
13B | 201 / 259 | Tulu 13B Q5_K_M |
13B | 204 / 259 | Hypermantis 13B Q5_K_M |
13B | 212 / 259 | WizardMega 13B Q5_K_M |
13B | 215 / 259 | Manticore Chat Pyg 13B Q5_K_M |
13B | 224 / 259 | CAMEL Role Playing Data 13B Q5_K_M |
13B | 225 / 259 | BlueMethod 13B Q5_K_M |
13B | 227 / 259 | Ouroboros 13B Q5_K_M |
13B | 231 / 259 | WizardLM V1.1 13B Q5_K_M |
13B | 233 / 259 | WizardLM Uncensored 13B Q5_K_M |
13B | 240 / 259 | WizardLM 1.0 13B Q5_K_M |
13B | 241 / 259 | Wizard Vicuna Uncensored 13B Q5_K_M |
13B | 242 / 259 | Based 13B Q5_K_M |
13B | 245 / 259 | Wizard Vicuna 13B Q5_K_M |
13B | 246 / 259 | Vicuna CoT 13B Q5_K_M |
13B | 254 / 259 | Stable Vicuna 13B Q5_K_M |
20B-34B | 1 / 65 | π₯ππ(βπ§ βπΆπΆ) MLewd ReMM Chat 20B Q5_K_M |
20B-34B | 2 / 65 | π₯ππ(βπ§ βπΆπΆ) MLewd ReMM Chat Inverted 20B Q5_K_M |
20B-34B | 3 / 65 | π₯ππ(βπ§ πΆπΆ) MXLewd 20B Q5_K_M |
20B-34B | 5 / 65 | π₯π(βπΆπΆ) MM ReMM 20B Q5_K_M |
20B-34B | 11 / 65 | π₯π(β) Lazarus 30B Q4_K_M |
20B-34B | 12 / 65 | π₯ LLaMA SuperCOT 30B Q4_K_M |
20B-34B | 14 / 65 | π₯π(βπ§ ) SuperPlatty 30B Q4_K_M |
20B-34B | 19 / 65 | π₯(βπ§ ) GPlatty 30B Q4_K_M |
20B-34B | 22 / 65 | Fin LLaMA 33B Q4_K_M |
20B-34B | 24 / 65 | (πΆπΆ) CAMEL Combined Data 33B Q4_K_M |
20B-34B | 26 / 65 | (β) Guanaco 33B Q4_K_M |
20B-34B | 28 / 65 | π(βπ§ ) Platypus 30B Q4_K_M |
20B-34B | 29 / 65 | VicUnlocked LoRA 30B Q4_K_M |
20B-34B | 33 / 65 | (πΆπΆ) Frank Uncensored 33B Q5_K_M |
20B-34B | 36 / 65 | WizardLM Uncensored 30B Q5_K_M |
20B-34B | 40 / 65 | Upstage LLaMA Instruct 30B Q5_K_M |
20B-34B | 42 / 65 | Hippogriff 30B Q4_K_M |
20B-34B | 43 / 65 | Tulu 30B Q5_K_M |
20B-34B | 46 / 65 | WizardLM V1.0 Uncensored 33B Q4_K_M |
20B-34B | 47 / 65 | Based 30B Q5_K_M |
20B-34B | 48 / 65 | Wizard Vicuna Uncensored 30B Q5_K_M |
20B-34B | 51 / 65 | Epsilon 30B Q4_K_M |
20B-34B | 63 / 65 | BrainToast 20B Q5_K_M |
20B-34B | 64 / 65 | WizardLM 30B Q4_K_M |
- 2023-09-18 V34
Size | Rank | Model |
---|---|---|
3B-7B | 16 / 143 | π₯π(βπ§ πΆπΆ) Kuchiki 1.1 7B Q5_K_M |
3B-7B | 51 / 143 | Saiga 2 7B Q5_K |
3B-7B | 117 / 143 | WizardCoder Python V1.0 7B Q5_K_M |
3B-7B | 140 / 143 | PY007 TinyLLaMA Chat v0.2 1B Q8_0 |
13B | 23 / 230 | π₯π(βπ§ β) Magpie 13B Q5_K_M |
13B | 25 / 230 | π₯(βπ§ β) MLewd Chat 13B Q5_K_M |
13B | 26 / 230 | π₯π(πΆπΆ) Pygmaltion 2 SuperCOT weighted 13B Q5_K_M |
13B | 74 / 230 | π₯ Saiga 2 13B Q5_K |
13B | 140 / 230 | OpenOrca STX 13B Q5_K_M |
13B | 143 / 230 | CalliopeDS 13B Q5_K_M |
13B | 158 / 230 | ChatAYT Lora Assamble Marcoroni 13B Q5_K_M |
13B | 223 / 230 | Taiwan LLaMA v1.0 13B Q5_K_M |
20B-34B | 1 / 47 | π₯ππ(βπ§ βπΆπΆ) MLewd ReMM Chat 20B Q5_K_M |
20B-34B | 2 / 47 | π₯π(βπ§ πΆπΆ) MLewd ReMM Chat Inverted 20B Q5_K_M |
20B-34B | 17 / 47 | Vigogne Instruct 33B Q4_K_M |
20B-34B | 27 / 47 | Vicuna v1.3 33B Q4_K_M |
20B-34B | 37 / 47 | Airoboros C 2.2 34B Q4_K_M |
20B-34B | 42 / 47 | Synthia v1.2 34B Q4_K_M |
- 2023-09-15 V33
Size | Rank | Model |
---|---|---|
3B-7B | 1 / 140 | π₯ππ(βπ§ πΆπΆ) Kuchiki 7B Q5_K_M |
3B-7B | 26 / 140 | π₯(β) LLaMA-2 Coder 7B Q5_K_M |
3B-7B | 53 / 140 | LLaMA-2 LoRA Assemble 7B Q5_K_M |
3B-7B | 134 / 140 | OpenLLaMA Odia 3B Q5_1 |
13B | 1 / 225 | π₯ππ(βπ§ βπΆπΆ) MLewdBoros LRPSGPT 2Char 13B Q5_K_M |
13B | 20 / 225 | π₯π(βπ§ πΆπΆ) BerrySauce 13B Q5_K_M |
13B | 47 / 225 | π₯(β) MLewd Chat 13B Q5_K_M |
13B | 48 / 225 | π₯(πΆπΆ) Pygmalion 2 SuperCOT2 13B Q5_K_M |
13B | 62 / 225 | π₯(πΆπΆ) ReMM v1 LRPSGPT 2Char 13B Q5_K_M |
13B | 100 / 225 | LLaMA-2 Chat AYT 13B Q5_K_M |
13B | 116 / 225 | LLaMA-2 LoRA Assemble 13B Q5_K_M |
13B | 225 / 225 | Dans CreepingSenseOfDoom 13B Q5_K_M |
20B-34B | 20 / 41 | (β) Spicyboros C 2.2 34B Q4_K_M |
- 2023-09-13 V32
Size | Rank | Model |
---|---|---|
3B-7B | 40 / 137 | π₯ Airoboros 2.2 7B Q5_K_M |
3B-7B | 108 / 137 | LLaMA-2 Silverlin. Verilog 7B Q4_K_M |
13B | 12 / 217 | π₯ππ(βπ§ πΆπΆ) OpenRP 13B Q5_K_M |
13B | 18 / 217 | π₯π(βπ§ πΆπΆ) MLewdBoros SuperCOT 13B Q5_K_M |
13B | 23 / 217 | π₯π(βπ§ ) ReMM v2 Kimiko v2 13B Q5_K_M |
13B | 32 / 217 | π₯(β) Airoboros 2.2 13B Q5_K_M |
13B | 37 / 217 | π₯ UndiMix v4 13B Q5_K_M |
13B | 47 / 217 | π₯(βπ§ πΆπΆ) OpenRP SuperCOT 13B Q5_K_M |
13B | 50 / 217 | π₯(πΆπΆ) Unholy v1.1 13B Q5_K_M |
20B-34B | 26 / 41 | Spicyboros C 2.2 34B Q4_K_M |
- 2023-09-12 V31
Size | Rank | Model |
---|---|---|
3B-7B | 35 / 135 | π₯ Marcoroni 7B Q5_K_M |
3B-7B | 104 / 135 | Chinese LLaMA-2 7B Q5_K |
13B | 4 / 210 | π₯ππ(βπ§ βπΆπΆ) Pygmalion 2 SuperCOT 13B Q5_K_M |
13B | 7 / 210 | π₯ππ(βπ§ βπΆπΆ) AppleSauce 13B Q5_K_M |
13B | 14 / 210 | π₯π(βπ§ πΆπΆ) ReMM v2.1 13B Q5_K_M |
13B | 19 / 210 | π₯π(βπΆπΆ) Unholy v1 10L 13B Q5_K_M |
13B | 20 / 210 | π₯π(βπΆπΆ) Unholy v1 13B Q5_K_M |
13B | 21 / 210 | π₯π(βπΆπΆ) Unholy v1 12L 13B Q5_K_M |
13B | 35 / 210 | π₯π(βπ§ ) Huginn v1.2 13B Q5_K_M |
13B | 55 / 210 | π₯π(βπ§ ) LlongOrca 16K 13B Q5_K_M |
13B | 62 / 210 | π₯π(βπΆπΆ) Huginn v3 13B Q5_K_M |
13B | 84 / 210 | π(βπ§ ) LLaMA-2 Ensemble v6 13B Q5_K_M |
13B | 105 / 210 | Marcoroni 13B Q5_K_M |
13B | 125 / 210 | LLaMA-2 Ensemble v5 13B Q5_K_M |
13B | 132 / 210 | OpenOrca Platypus 2 13B Q5_K_M |
13B | 154 / 210 | JanniesBasedLigma 13B Q5_K_M |
13B | 155 / 210 | Barcenas 13B Q5_K_M |
13B | 157 / 210 | Tsukasa Limarp 13B Q5_K_M |
13B | 174 / 210 | Chinese Alpaca 2 13B Q5_K |
13B | 183 / 210 | Chinese LLaMA-2 13B Q5_K |
- 2023-09-10 V30
Size | Rank | Model |
---|---|---|
3B-7B | 20 / 134 | π₯ Medusa 1.1 7B Q5_K_M |
3B-7B | 30 / 134 | π₯ LosslessMegaCoder Mini 7B Q5_K_M |
3B-7B | 37 / 134 | π₯π(βπ§ ) LLaMA-2 PeanutButter v19 R8 7B Q5_K_M |
3B-7B | 38 / 134 | π₯(βπ§ ) Befenghuang Vigogne 2 Chat 7B Q5_K_S |
3B-7B | 41 / 134 | π₯(β) Ganchengguang Yoko Japanse v0 7B Q5_K_S |
3B-7B | 42 / 134 | π₯ LlongOrca 16K 7B Q5_K_M |
3B-7B | 45 / 134 | π₯(πΆπΆ) Spicyboros 2.2 7B Q5_K_M |
3B-7B | 62 / 134 | (πΆπΆ) Airoboros GPT4 2.0 LLaMA-2 7B Q5_K_M |
3B-7B | 93 / 134 | (πΆπΆ) Chinese Alpaca 2 7B Q5_K_S |
3B-7B | 97 / 134 | Guanaco Uncensored 7B Q5_K_M |
3B-7B | 98 / 134 | (β) Mamba GPT v4 3B Q5_1 |
3B-7B | 102 / 134 | (πΆπΆ) Airoboros GPT4 m2.0 LLaMA-2 7B Q5_K_M |
13B | 2 / 195 | π₯ππ(βπ§ βπΆπΆ) MLewdBoros 13B Q5_K_M |
13B | 5 / 195 | π₯π(βπ§ βπΆπΆ) Spicyboros 2.2_2 13B Q5_K_M |
13B | 6 / 195 | π₯ππ(βπ§ βπΆπΆ) ReMM v2 13B Q5_K_M |
13B | 12 / 195 | π₯π(βπΆπΆ) MLewd v2-2 13B Q5_K_M |
13B | 14 / 195 | π₯π(βπ§ πΆπΆ) ReMM 0.65 SLERP 13B Q5_K_M |
13B | 15 / 195 | π₯π(βπ§ β) Stheno 1.3 13B Q5_K_M |
13B | 18 / 195 | π₯π(βπΆπΆ) Teknium OpenHermes 13B Q5_K_S |
13B | 19 / 195 | π₯(βπ§ πΆπΆ) ReMM v2 Variant 13B Q5_K_M |
13B | 23 / 195 | π₯(βπ§ β) Spicyboros 2.2 13B Q4_K_M |
13B | 24 / 195 | π₯π(βπΆπΆ) Stheno Inverted 1.2 13B Q5_K_M |
13B | 30 / 195 | π₯π(βπΆπΆ) Holomax 13B Q5_K_M |
13B | 57 / 195 | π₯(β) Guanaco Uncensored 13B Q5_K_M |
13B | 60 / 195 | π₯ Chronos Hermes v2 13B Q5_K_M |
13B | 64 / 195 | π₯π(βπΆπΆ) Airoboros 2.1 YaRN 64K 13B Q5_K_M |
13B | 72 / 195 | (πΆπΆ) Airoboros GPT4 2.0 LLaMA-2 13B Q5_K_M |
13B | 91 / 195 | Nous Hermes LLaMA-2 13B Q5_K_M |
13B | 125 / 195 | Stheno 1.2 13B Q5_K_M |
13B | 128 / 195 | (πΆπΆ) Airoboros GPT4 m2.0 LLaMA-2 13B Q5_K_M |
13B | 180 / 195 | Based 13B Q5_K_M |
13B | 187 / 195 | Taiwan LLaMA v1.0 13B Q5_K_M |
20B-34B | 9 / 40 | π₯π(βπΆπΆ) COTHuginn 4.5 19B Q5_K_M |
20B-34B | 20 / 40 | π(βπΆπΆ) MythoMax 33B Q4_K_M |
20B-34B | 28 / 40 | Based 30B Q4_K_M |
- 2023-09-08 V29
- The ERP Scores (ERP Score and ERP Variety Score) were completely reworked: the count of lewd words is now related to the total number of words in a response, and the ERP Score is now the average of these ratios instead of the median. The ERP Variety Score was added, which tries to capture the erotic creative lewd word knowledge of a model. The ERP Rank is computed by slightly biasing towards the new ERP Variety Score.
- Separate ranks for the ALC-IQ and the ERP Scores were introduced. The resulting model rank is now determined by a weighted sum of the ALC-IQ Rank and the ERP Rank, slightly biased towards the ALC-IQ Rank.
- GGUF results now replace the GGML results of a model. Please note that this can sometimes result in a model gaining or losing ranks in the table. This is sadly just the nature of the floating point quantization; it shows how similar these models and fine tunes are at the core and how sensitive this benchmark is.
- New symbols were added to signal good ALC-IQ ranks (π) and good ERP ranks (π). The medals (π₯, π₯ and π₯) are assigned to multiple ranks, because this ranking can't ultimately tell you which model is actually the best for you. That is not just because there are many known flaws in this benchmark, but also because a large part of your role play experience depends on your expectations, the character card, the prompt annotations and the sampler settings you use.
- In addition, the following models were added to the table:
- Benchmark Results as CSV - Timestamp 20230908_203426
Size | Rank | Model |
---|---|---|
3B-7B | 5 / 123 | π₯ππ(βπ§ πΆπΆ) Zarablend 7B Q5_K_M |
3B-7B | 9 / 123 | π₯π(βπ§ β) Zarafusionex 1.1 7B Q5_K_M |
3B-7B | 10 / 123 | π₯π(βπ§ ) Hermes LimaRP 7B Q5_K_M |
3B-7B | 12 / 123 | π₯(βπ§ ) Krakowiak 7B Q4_K_M |
3B-7B | 17 / 123 | π₯(πΆπΆ) Zarablend MX 7B Q5_K_M |
3B-7B | 21 / 123 | π₯ Typly Pigeon 7B Q4_K_M |
3B-7B | 46 / 123 | Kimiko 7B Q5_K_M |
3B-7B | 51 / 123 | (πΆπΆ) Luna AI LLaMA-2 Uncensored 7B Q5_K_M |
3B-7B | 58 / 123 | Pygmalion 2 7B Q5_K_M |
3B-7B | 71 / 123 | StableBeluga 7B Q5_K_M |
13B | 3 / 177 | π₯ππ(βπ§ βπΆπΆ) Slerpeno 13B Q5_K_M |
13B | 4 / 177 | π₯π(βπ§ βπΆπΆ) MLewd V2-1 015 13B Q4_K_S |
13B | 10 / 177 | π₯π(βπΆπΆ) MLewd V2-1 13B Q5_K_M |
13B | 11 / 177 | π₯π(βπ§ β) UndiMix v3 13B Q5_K_M |
13B | 13 / 177 | π₯π(βπΆπΆ) MLewd V2-1 050 13B Q4_K_S |
13B | 15 / 177 | π₯π(βπΆπΆ) MLewd v2 13B Q5_K_M |
13B | 20 / 177 | π₯π(βπΆπΆ) ReMM Lion 13B Q5_K_M |
13B | 30 / 177 | π₯(βπ§ ) StableBeluga 13B Q5_K_M |
13B | 32 / 177 | π₯(β) Pygmalion 2 13B Q5_K_M |
13B | 38 / 177 | π₯(πΆπΆ) Mythalion 13B Q5_K_M |
13B | 41 / 177 | π₯(πΆπΆ) Fireflx v1.2 13B Q5_K_M |
13B | 45 / 177 | π₯π(βπΆπΆ) ReMM S Kimiko v2 13B Q5_K_M |
13B | 60 / 177 | (πΆπΆ) Thorns 13B Q5_K_M |
13B | 70 / 177 | (βπ§ ) TerraMix 16K 13B Q5_K_M |
13B | 120 / 177 | YuLan Chat 2 13B Q5_K_M |
20B-34B | 2 / 37 | π₯π(βπ§ β) Huginn 5 Prototype 19B Q4_K_S |
20B-34B | 28 / 37 | Airoboros C 2.1 34B Q5_K_M |
- 2023-09-05 V28
- Changes: Removed the (L2) marker.
- There are still GGML results in my benchmark; I will keep them for now until GGML is phased out completely.
- Marking broken links in the table with "(link broken)"
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 25 / 125 | π§ / π | Tsukasa Limarp 7B (gguf) Q5_K_M |
3B-7B | 26 / 125 | π§ / π | ELYZA Jp LLaMA-2 7B (gguf) Q5_K_M |
3B-7B | 27 / 125 | βπ§ / π§ | MedLLama 7B (gguf) Q5_K_M |
3B-7B | 28 / 125 | βπ§ / π§ | LLaMA-2 7B (gguf) Q5_K_M |
3B-7B | 54 / 125 | βπ / π | ELYZA Jp LLaMA-2 Instruct 7B (gguf) Q5_K_M |
3B-7B | 57 / 125 | π / π | LLaMA-2 Galleon 7B (gguf) Q5_K_M |
3B-7B | 60 / 125 | π / π§ | Tsukasa 7B (gguf) Q5_K_M |
3B-7B | 62 / 125 | π / π§ | Vicuna v1.5 16K 7B (gguf) Q5_K_M |
3B-7B | 101 / 125 | βπ€ͺ / πΆ | Vicuna v1.5 7B (gguf) Q5_K_M |
13B | 2 / 170 | βπ§ / πΆπΆ | MythoMix 13B (gguf) Q5_K_M |
13B | 6 / 170 | βπ§ / πΆπΆ | MythoMax 13B (gguf) Q5_K_M |
13B | 7 / 170 | βπ§ / πΆπΆ | ReMM SLERP 13B (gguf) Q5_K_M |
13B | 14 / 170 | π§ / πΆπΆ | MythoLogic 13B (gguf) Q5_K_M |
13B | 37 / 170 | π§ / π§ | WizardLM v1.2 13B (gguf) Q4_0 |
13B | 38 / 170 | π§ / π§ | Speechless LLaMA-2 13B (gguf) Q5_K_M |
13B | 42 / 170 | π§ / π§ | Speechless Hermes Orca Plat WizLM 13B (gguf) Q5_K_M |
13B | 48 / 170 | π / πΆπΆ | ReMM PIPPA 13B (gguf) Q5_K_M |
13B | 68 / 170 | π / π | OpenBuddy LLaMA-2 v11.1 13B (gguf) Q5_K_M |
13B | 71 / 170 | π / π | Tsukasa Limarp 16K 13B (gguf) Q5_K_M |
13B | 78 / 170 | βπ / π§ | LLaMA-2 13B (gguf) Q5_K_M |
13B | 95 / 170 | π€ / πΆπΆ | MLewd v1-7 TRY2 13B (gguf) Q5_K_M |
13B | 97 / 170 | π€ / πΆπΆ | MLewd 13B (gguf) Q5_K_M |
13B | 101 / 170 | βπ€ / πΆ | Vicuna v1.5 16K 13B (gguf) Q5_K_M |
13B | 109 / 170 | π€ / π | Vicuna v1.5 13B (gguf) Q5_K_M |
13B | 145 / 170 | βπ€ͺ / π | Asclepius 13B (gguf) Q5_K_M |
13B | 157 / 170 | π€ͺ / π§ | WizardLM WizardCoder Python V1.0 13B (gguf) Q4_K_S |
- 2023-09-02 V27
- Added a key for the emojis in the table: https://rentry.co/ayumi_erp_rating#emoji-key
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 29 / 114 | π§ / π§ | MedLLaMA-2 Chat 7B (GGUF) Q5_K_S |
3B-7B | 30 / 114 | βπ / πΆπΆ | AstraMix (L2) 7B (GGUF) Q5_K_M |
3B-7B | 69 / 114 | π€ / πΆ | OpenLLaMA v2 7B (GGUF) Q5_K_M |
3B-7B | 74 / 114 | βπ€ / π | Nous Yarn 64K (L2) 7B (GGUF) Q5_K_M |
3B-7B | 76 / 114 | π€ / π | Nous Yarn 128K (L2) 7B (GGUF) Q5_K_M |
3B-7B | 86 / 114 | βπ€ͺ / πΆπΆ | OpenLLaMA 7B (GGUF) Q5_K_M |
3B-7B | 99 / 114 | π€ͺ / π | OpenLLaMA 3B (GGUF) Q5_1 |
13B | 9 / 156 | π§ / πΆπΆ | UndiMix v2 (L2) 13B (GGUF) Q5_K_M |
13B | 11 / 156 | π§ / πΆπΆ | UndiMix v1 (L2) 13B (GGUF) Q5_K_M |
13B | 12 / 156 | π§ / πΆπΆ | ReMM (L2) 13B (GGUF) Q5_K_M |
13B | 39 / 156 | π§ / π§ | LLaMA-2 Chat Limarp v2 13B (GGUF) Q5_K_M |
13B | 45 / 156 | π / πΆπΆ | Stheno Inverted (L2) 13B (GGUF) Q5_K_M |
13B | 65 / 156 | π / π | LLaMA-2 LangChain Chat 13B (GGUF) Q5_K_S |
13B | 67 / 156 | π / π | Sentdex WSB GPT 13B (GGUF) Q5_K_M |
13B | 82 / 156 | βπ€ / πΆπΆ | MLewd v1 (L2) 13B (GGUF) Q5_K_M |
13B | 86 / 156 | βπ€ / πΆπΆ | Stheno (L2) 13B (GGUF) Q5_K_M |
13B | 95 / 156 | π€ / πΆ | Nous Yarn 64K (L2) 13B (GGUF) Q5_K_M |
13B | 99 / 156 | π€ / πΆ | Nous Yarn 128K (L2) 13B (GGUF) Q5_K_M |
13B | 114 / 156 | π€ / π§ | LoKuS 13B (GGUF) Q5_K_M |
13B | 140 / 156 | π€ͺ / π | OpenLLaMA 13B (GGUF) Q5_K_M |
13B | 151 / 156 | π€ͺ / π§ | EverythingLM V2 16K 13B (GGUF) Q4_K_S |
20B-33B | 4 / 35 | βπ§ / πΆ | Airoboros 2.1 33B (GGUF) Q4_K_M |
- 2023-08-31 V26
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 46 / 107 | π / π | LLaMA-2 Instruct 32K 7B (GGUF) Q5_K_M |
13B | 1 / 142 | βπ§ / πΆπΆ | Athena v1 (L2) 13B (GGUF) Q5_K_M |
13B | 5 / 142 | βπ§ / πΆπΆ | MythoMax Kimiko V2 (L2) 13B (GGUF) Q5_K_M |
13B | 17 / 142 | π§ / πΆ | Kimiko V2 (L2) 13B (GGUF) Q5_K_M |
13B | 62 / 142 | π / π | OpenOrca Platypus 2 (L2) 13B (GGUF) Q4_K_M |
13B | 67 / 142 | π / π§ | Luban (L2) 13B (GGUF) Q5_K_M |
13B | 94 / 142 | π€ / π | CodeLLaMA Oasst SFT V10 13B (GGUF) Q5_K_M |
20B-33B | 32 / 34 | βπ€ͺ / π§ | Airoboros C 2.1b (L2) 34B (GGUF) Q5_K_M |
- 2023-08-31 V25
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 6 / 106 | π§ / πΆπΆ | Zarafusionex 1.1 (L2) 7B (GGUF) Q5_K_M |
3B-7B | 34 / 106 | π / πΆ | Airoboros 2.1 (L2) 7B (GGUF) Q5_K_M |
13B | 1 / 136 | βπ§ / πΆπΆ | Airoboros 2.1 (L2) 13B (GGUF) Q5_K_M |
13B | 44 / 136 | βπ / πΆ | Mythical Destroyer V2 (L2) 13B (GGUF) Q5_K_M |
13B | 72 / 136 | βπ€ / πΆπΆ | Huginn v4.5 (L2) 13B (GGUF) Q5_K_M |
13B | 73 / 136 | βπ€ / πΆπΆ | Huginn v4 (L2) 13B (GGUF) Q5_K_M |
- 2023-08-30 V24
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 98 / 104 | π€ͺ / π§ | Open Cabrita 3B (GGUF) Q5_1 |
3B-7B | 8 / 104 | π§ / πΆπΆ | Zaraxls (L2) 7B (GGUF) Q5_K_M |
3B-7B | 13 / 104 | βπ§ / πΆ | Zarafusionex 1.2 (L2) 7B (GGUF) Q5_K_M |
3B-7B | 31 / 104 | βπ / πΆ | Tulpar Limarp (L2) 7B (GGUF) Q5_K_M |
3B-7B | 44 / 104 | π / π | Tulpar v0 (L2) 7B (GGUF) Q4_0 |
3B-7B | 50 / 104 | βπ / π§ | LLaMA-2 32K 7B (GGUF) Q5_K_M |
3B-7B | 66 / 104 | π€ / πΆ | LLaMA-2 KO Chat 7B (GGUF) Q5_1 |
13B | 2 / 132 | βπ§ / πΆπΆ | MythoMax Kimiko Mix (L2) 13B (GGUF) Q5_K_M |
13B | 33 / 132 | π§ / π§ | Samantha 1.11 (L2) 13B (GGUF) Q5_K_M |
13B | 40 / 132 | π / πΆπΆ | Nous Hermes (L2) 13B (GGUF) Q5_K_M |
13B | 51 / 132 | βπ / π | Mythical Destroyer (L2) 13B (GGUF) Q5_K_M |
13B | 59 / 132 | βπ / π§ | Athena-tmp (L2) 13B (GGUF) Q5_K_M |
13B | 66 / 132 | π / π§ | LLaMA-2 Chat 13B (GGUF) Q3_K_S |
13B | 128 / 132 | π€ͺ / π§ | Vicuna v1.5 16K 13B (GGUF) Q5_K_M |
20B-33B | 20 / 33 | π€ / πΆπΆ | Huginn Prototype 22B (GGUF) Q4_K_M |
20B-33B | 32 / 33 | π€ͺ / π§ | Samantha 1.11 CodeLLaMA (L2) 34B (GGUF) Q4_K_M |
- 2023-08-28 V23
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 89 / 97 | π€ͺ / π§ | Orca Mini 3B (GGUF) Q4_0 |
3B-7B | 2 / 97 | βπ§ / πΆπΆ | Zarablend 1.1 (L2) 7B (GGUF) Q5_K_M |
3B-7B | 59 / 97 | π€ / πΆ | CodeLLaMA (L2) 7B (GGUF) Q5_K_M |
3B-7B | 64 / 97 | π€ / π | CodeLLaMA Instruct (L2) 7B (GGUF) Q5_K_M |
3B-7B | 81 / 97 | βπ€ͺ / π | CodeLLaMA Python (L2) 7B (GGUF) Q5_K_M |
13B | 3 / 126 | βπ§ / πΆπΆ | Airoboros Creative lmoe 13B (GGUF) Q5_K_M |
13B | 44 / 126 | π / πΆ | Nous Hermes (L2) 13B (GGUF) Q5_K_S |
13B | 80 / 126 | π€ / π | WizardLM 1.0 Uncensored (L2) 13B (GGUF) Q5_K_M |
13B | 94 / 126 | π€ / π§ | CodeLLaMA Instruct (L2) 13B (GGUF) Q5_K_M |
13B | 112 / 126 | π€ͺ / π | CodeLLaMA Python (L2) 13B (GGUF) Q5_K_M |
13B | 114 / 126 | π€ͺ / π§ | CodeLLaMA (L2) 13B (GGUF) Q5_K_M |
20B-33B | 20 / 31 | π€ / πΆ | CodeLLaMA (L2) 34B (GGUF) Q4_K_M |
20B-33B | 22 / 31 | π€ / π | CodeLLaMA Python (L2) 34B (GGUF) Q4_K_M |
20B-33B | 27 / 31 | π€ͺ / π | Phind CodeLLaMA v1 (L2) 34B (GGUF) Q4_K_S |
20B-33B | 29 / 31 | βπ€ͺ / π§ | CodeLLaMA Instruct (L2) 34B (GGUF) Q4_K_M |
20B-33B | 30 / 31 | π€ͺ / π§ | Airoboros C 2.1 (L2) 34B (GGUF) Q4_K_M |
- 2023-08-26 V22
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 78 / 92 | βπ€ͺ / π | Marx V2 3B (GGUF) Q4_1 |
3B-7B | 2 / 92 | βπ§ / πΆπΆ | Zarafusionex 1.1 (L2) 7B Q5_K_M |
3B-7B | 12 / 92 | π§ / πΆ | Zaraxe (L2) 7B Q5_K_M |
3B-7B | 15 / 92 | βπ§ / π | LLaMA 2 Monika V0.3B (L2) 7B Q5_1 |
13B | 8 / 120 | βπ§ / πΆ | MythoMaxKurisu (L2) 13B Q5_K_M |
13B | 26 / 120 | βπ§ / π§ | PuddleJumper (L2) 13B (GGUF) Q5_K_M |
13B | 28 / 120 | π§ / π§ | Iubaris V3 (L2) 13B Q5_K_M |
20B-33B | 15 / 26 | π€ / πΆπΆ | LLaMA 2 Ari03 (L2) 28B Q5_1 |
- 2023-08-22 V21
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 71 / 88 | βπ€ͺ / πΆ | Griffin (GGUF) 3B Q4_1 |
3B-7B | 72 / 88 | π€ͺ / πΆ | Puma 3B Q5_1 |
3B-7B | 75 / 88 | βπ€ͺ / π | OpenLLaMA v2 (GGUF) 3B Q5_0 |
3B-7B | 5 / 88 | π§ / πΆπΆ | Zarablend M (L2) 7B Q5_K_M |
3B-7B | 6 / 88 | π§ / πΆπΆ | Zarablendex VQ (L2) 7B Q5_K_M |
3B-7B | 8 / 88 | π§ / πΆπΆ | Zarablend MX (L2) 7B Q5_K_M |
3B-7B | 87 / 88 | π€ͺ / π§ | LongChat v1.5 32K 7B Q5_K_M |
13B | 43 / 117 | π / πΆ | Synthia (L2) 13B Q5_K_M |
13B | 45 / 117 | π / πΆ | Chronorctypus Limarobormes (L2) 13B Q5_K_M |
13B | 115 / 117 | π€ͺ / π§ | LlongOrca 16K 13B Q5_K_M |
- 2023-08-20 V20
- Added a link to a JS filter script for this page by mr.developer: https://rentry.org/ayumi_filter_userscript_info
- 2023-08-19 V20
Size | Rank | IQ/ERP | GGML Model |
---|---|---|---|
3B-7B | 63 / 82 | βπ€ͺ / πΆπΆ | Marx 3B Q5_1 |
3B-7B | 71 / 82 | π€ͺ / π | Griffin 3B Q4_1 |
3B-7B | 4 / 82 | βπ§ / πΆπΆ | Zarafusionix (L2) 7B Q5_K_M |
3B-7B | 5 / 82 | π§ / πΆπΆ | Zarafusionex (L2) 7B Q5_K_M |
3B-7B | 17 / 82 | βπ§ / π§ | LLaMA 2 Delphi v0.2e 7B Q5_1 |
13B | 57 / 114 | π / π§ | Trurl 2 Polish Instruct 13B Q5_1 |
- 2023-08-17 V19
Rank | IQ/ERP | GGML Model |
---|---|---|
47 / 215 | π§ / π | LosslessMegaCoder Mini (L2) 13B Q5_K_M |
56 / 215 | βπ / πΆπΆ | Zarablend (L2) 7B Q5_K_M |
62 / 215 | π / πΆπΆ | Carl 33B Q4_K_M |
80 / 215 | π / πΆ | Zaramix (L2) 7B Q5_K_M |
93 / 215 | π / π | Chinese LLaMA-2 7B Q5_1 |
97 / 215 | βπ / π§ | Trurl 2 Polish (L2) 13B Q5_1 |
105 / 215 | π / π§ | Trurl 2 Polish (L2) 7B Q5_1 |
106 / 215 | π / π§ | Scarlett 33B Q4_K_M |
112 / 215 | βπ€ / πΆπΆ | Daydreamer v3 22B Q5_K_M |
169 / 215 | βπ€ͺ / πΆ | Carl 13B Q5_K_M |
177 / 215 | π€ͺ / πΆ | EverythingLM 3B Q5_1 |
184 / 215 | π€ͺ / π | Komt LLaMA-2 Chat (L2) 7B Q5_K_M |
189 / 215 | βπ€ͺ / π§ | Scarlett 13B Q5_K_M |
192 / 215 | π€ͺ / π§ | Scarlett 7B Q5_K_M |
203 / 215 | π€ͺ / π§ | Komt LLaMA-2 (L2) 13B Q5_K_M |
- 2023-08-15 V18
Rank | IQ/ERP | GGML Model |
---|---|---|
6 / 200 | π§ / πΆπΆ | Airochronos 33B Q5_K_M |
33 / 200 | π§ / πΆ | h2oGPT (L2) 13B Q5_K_M |
62 / 200 | π / πΆπΆ | Chronos 33B Q5_K_M |
67 / 200 | π / πΆπΆ | WizardMath V1.0 (L2) 13B Q5_K_M |
80 / 200 | π / π | OpenOrcaxOpenChat 2 LangChain Chat 13B Q5_1 |
90 / 200 | βπ / π§ | Codeup Alpha (L2) 13B Q5_K_M |
91 / 200 | βπ / π§ | h2oGPT Chat (L2) 13B Q5_K_M |
101 / 200 | βπ€ / πΆπΆ | Bacchus (L2*) 22B Q4_0 |
114 / 200 | βπ€ / πΆ | LLaMA 2 DayDreamer V1 22B Q5_K_M |
133 / 200 | βπ€ / π | WizardMath V1.0 7B Q5_K_M |
179 / 200 | π€ͺ / π§ | Tulu Uncensored TV Alpaca (L2) 7B Q5_1 |
184 / 200 | π€ͺ / π§ | Taiwan LLaMA V1.0 (L2) 13B Q5_K_M |
194 / 200 | π€ͺ / π§ | LlongOrca 16K 7B Q5_K_M |
196 / 200 | π€ͺ / π§ | EverythingLM 16K (L2) 13B Q5_K_M |
- 2023-08-14 V17
Rank | IQ/ERP | GGML Model |
---|---|---|
13 / 188 | π§ / πΆπΆ | Holomax (L2) 13B Q5_K_M |
15 / 188 | βπ§ / πΆ | Platypus 2 (L2) 70B Q2_K |
47 / 188 | π§ / π§ | OpenOrca Platypus 2 (L2) 13B Q5_K_M |
55 / 188 | π / πΆπΆ | Kuchiki (L2) 7B Q5_K_M |
56 / 188 | π / πΆπΆ | Huginn v1.3 (L2) 13B Q5_K_M |
119 / 188 | βπ€ / π | MythoChizuru Mini (L2) 7B Q4_K_M |
185 / 188 | π€ͺ / π§ | Chatxu (L2?) 13B Q4_0 |
- 2023-08-12 V16
Rank | IQ/ERP | GGML Model |
---|---|---|
27 / 181 | π§ / πΆ | Blind Test Janus 13B Q5_1 |
61 / 181 | π / πΆπΆ | Manticore SuperHOT 8K 13B Q5_K_M |
90 / 181 | π / π§ | Manticore Chat Pyg 13B Q5_K_M |
91 / 181 | π / π§ | Manticore Chat Pyg SuperHOT 8K 13B Q5_K_M |
104 / 181 | π€ / πΆ | LLongMA-2 Storysummarizer 7B Q5_K_M |
114 / 181 | βπ€ / π | Manticore 13B Q5_K_M |
115 / 181 | βπ€ / π | LLaMA-2 Instruct Uncensored 13B Q5_0 |
120 / 181 | π€ / π | Heegyu LIMA2 13B Q5_1 |
126 / 181 | π€ / π | Pygmalion Vicuna 7B Q5_K_M |
130 / 181 | βπ€ / π§ | Manticore Chat Pyg Guanaco 13B Q4_K_M |
132 / 181 | π€ / π§ | StableBeluga Samantha V3 7B Q4_0 |
- 2023-08-11 V15
Rank | IQ/ERP | GGML Model |
---|---|---|
1 / 170 | βπ§ / πΆπΆ | MythoMax (L2) 13B Q5_K_M |
8 / 170 | π§ / πΆπΆ | LLaMA-2 Chat Uncensored 13B Q5_1 |
31 / 170 | π§ / π | Orca Mini v3 (L2) 13B Q5_K_M |
33 / 170 | π§ / π | Stable Platypus 2 (L2) 13B Q5_K_M |
40 / 170 | π§ / π§ | Enterredaas 33B Q4_1 |
42 / 170 | π§ / π§ | Spring Dragon 13B Q5_K_M |
46 / 170 | βπ / πΆπΆ | Camel Platypus 2 (L2) 13B Q5_K_M |
55 / 170 | π / πΆπΆ | LLongMA-2 Storysummarizer 13B Q5_K_M |
64 / 170 | π / πΆ | Epsilon 30B Q4_0 |
68 / 170 | βπ / π | Platypus 2 (L2) 13B Q5_K_M |
84 / 170 | π / π§ | Photolens LLaMA 2 Langchain Chat (L2) 7B Q5_1 |
99 / 170 | βπ€ / πΆ | Orca Mini v3 (L2) 7B Q5_K_M |
122 / 170 | βπ€ / π§ | Merak v2 (L2) 7B Q5_K_M |
138 / 170 | π€ͺ / πΆ | Petra Instruct 13B Q5_K_M |
140 / 170 | π€ͺ / πΆ | Alpachino Baichuan Instruction 7B Q5_0 |
146 / 170 | π€ͺ / π | AlpacaCielo 2 8K (L2) 7B Q5_K_M |
147 / 170 | π€ͺ / π | OpenBuddy OpenLLaMA v10 3B Q5_0 |
152 / 170 | π€ͺ / π§ | Dolphin LLaMA-2 (L2) 7B Q5_K_M |
164 / 170 | π€ͺ / π§ | LLongMA 2 13B Q5_1 |
166 / 170 | π€ͺ / π§ | WizardVicuna Uncens Instr PL 3B Q5_1 |
- 2023-08-10 V14
Rank | IQ/ERP | GGML Model |
---|---|---|
6 / 151 | π§ / πΆπΆ | Huginn v1.2 13B Q5_K_M |
51 / 151 | π / πΆπΆ | Holodeck 1 (L2) 13B Q5_K |
57 / 151 | π / πΆ | Dans QuestionableCocktail 2 (L2) 13B Q4_1 |
60 / 151 | π§ π / π | Dans PersonalityEngine 30B Q4_1 |
106 / 151 | π€ / π | Dans PersonalityEngine 13B Q5_1 |
- 2023-08-09 V13
- Added highlight symbols to point out the really good models of an ALC-IQ class.
Rank | IQ/ERP | GGML Model |
---|---|---|
23 / 146 | π§ / πΆ | Firefly v1.2 (L2) 13B Q5_K_M |
36 / 146 | π§ / π§ | Spring Dragon (L2) 13B Q5_K_M |
137 / 146 | π€ͺ / π§ | Vicuna v1.5 16K 13B Q5_K_M |
- 2023-08-09 V12
- Important change: Only one entry per model. Only the highest quantization is listed; lower quantizations are no longer listed, so each model occupies only one place in the ranking. For best results, always choose the bigger quantization. It did not make sense to choose a Q4_0 over a Q5_1, or a Q4_K_M over a Q5_K_M, just because it let out one more lewd word in the ERP score.
- Important change: The "spices" are grouped now too, and models are still ordered by their ALC-IQ within their "spice class".
- New models tested and added:
Rank | IQ/ERP | GGML Model |
---|---|---|
1 / 143 | π§ / πΆπΆ | MythoMix (L2) 13B Q5_K_M |
8 / 143 | π§ / πΆπΆ | LLaMA-2 BlockTri Frankenstein 22B Q4_K_M |
11 / 143 | π§ / πΆ | Huginn 13B Q5_K_M |
18 / 143 | π§ / πΆ | LLaMA SuperCOT 30B Q4_K_M |
38 / 143 | π / πΆπΆ | Hermes LimaRP 13B Q4_K_M |
42 / 143 | π / πΆπΆ | Crestfall FrankenMon (L2) 13B Q5_K_M |
49 / 143 | π / πΆπΆ | Nous Hermes Writer (L2) 13B Q4_K_S |
52 / 143 | π / πΆ | Frankensteins Monster 13B Q4_K_S |
62 / 143 | π / π | LLaMA-2 Guanaco 7B Q5_1 |
65 / 143 | π / π§ | LLaMA-2 7B Q8_0 |
89 / 143 | π€ / π | Luna AI (L2) 7B Q8_0 |
93 / 143 | π€ / π | BlueMethod 13B Q5_1 |
94 / 143 | π€ / π | Vicuna 1.3 German 13B Q5_K_M |
96 / 143 | π€ / π | LLaMA 13B Q5_K_M |
107 / 143 | π€ / π§ | Dolphin LLaMA 13B Q5_K_M |
111 / 143 | π€ͺ / πΆπΆ | Airoboros GPT4 1.3 7B Q4_K_M |
122 / 143 | π€ͺ / π | Guanaco 7B Q4_K_M |
129 / 143 | π€ͺ / π§ | Based 7B Q4_K_M |
138 / 143 | π€ͺ / π§ | Airoboros GPT4 1.4 SuperHOT 8K 33B Q4_K_M |
139 / 143 | π€ͺ / π§ | LLongMA 2 7B Q5_1 |
142 / 143 | π€ͺ / π§ | LLaMA-2 32K 7B Q5_1 |
143 / 143 | π€ͺ / π§ | ToolLLaMA 7B Q5_1 |
- 2023-08-06 V11
Rank | IQ/ERP | GGML Model |
---|---|---|
21 / 154 | π§ / πΆ | Redmond Puffing v1.3 (L2) 13B Q5_K_M |
39 / 154 | π§ / π§ | LLaMA-2 Chinese Chat 13B Q5_1 |
149 / 154 | π€ͺ / π§ | LLaMA-2 KO 7B Q5_1 |
137 / 154 | π€ͺ / π | LLaMA-2 KO Chat 7B Q5_1 |
109 / 154 | π€ / π§ | OpenBuddy Atom v9 13B Q5_K |
70 / 154 | π / π§ | Beluga Limarp 7B Q5_K_M |
47 / 154 | π / πΆπΆ | OniiChat Hermes Limarp (L2) 13B Q5_K_M |
11 / 154 | π§ / πΆ | Redmond Puffin (L2) 13B Q5_1 |
- 2023-08-05 V10
Rank | IQ/ERP | GGML Model |
---|---|---|
12 / 146 | π§ / πΆ | Lazarus Instruct PL 30B Q4_1 |
1 / 146 | π§ / πΆπΆ | Chronos Beluga (L2) 13B Q5_K_M |
88 / 146 | π€ / πΆ | MedAlpaca 13B Q5_1 |
42 / 146 | π / πΆπΆ | AlpacaCielo (L2) 13B Q4_K_M |
43 / 146 | π / πΆπΆ | AlpacaCielo (L2) 13B Q5_K_M |
85 / 146 | π€ / πΆπΆ | Wizard Vicuna Uncensored SuperHOT 8k 13B Q5_K_S |
121 / 146 | π€ͺ / πΆ | Wizard Vicuna Uncensored SuperHOT 8k 13B Q2_K |
101 / 146 | π€ / π | Vicuna 1.3 13B Q5_1 |
119 / 146 | π€ͺ / πΆ | LLaMA SuperCOT 13B Q4_0 |
129 / 146 | π€ͺ / π | WizardLM Uncensored 7B Q5_1 |
110 / 146 | π€ͺ / πΆπΆ | Chronos WizardLM UC SCOT ST 13B Q4_0 |
135 / 146 | π€ͺ / π§ | Wizard Vicuna Uncensored 13B Q5_1 |
109 / 146 | π€ / π§ | Pygmalion 13B Q4_0 |
127 / 146 | π€ͺ / πΆ | Alpacino SuperCOT 13B Q4_0 |
97 / 146 | π€ / π | LLaMA 7B Q4_0 |
80 / 146 | π€ / πΆπΆ | Vicuna 1.3 7B Q8_0 |
125 / 146 | π€ͺ / πΆ | Open LLaMA Open Instruct 7B Q8_0 |
137 / 146 | π€ͺ / π§ | LLaMA Deus v3 7B Q4_0 |
140 / 146 | π€ͺ / π§ | PMC LLaMA 7B Q4_0 |
144 / 146 | π€ͺ / π§ | Based 7B Q4_0 |
61 / 146 | π / π | Vigogne 2 (L2) 7B Q5_K_M |
28 / 146 | π§ / π | Chronohermes Grad (L2) 13B Q5_K_M |
21 / 146 | π§ / πΆ | Chronoboros Grad (L2) 13B Q5_K_M |
63 / 146 | π / π§ | Dugong (L2) 7B Q5_1 |
44 / 146 | π / πΆπΆ | qCammel L2 13B Q5_K_M |
38 / 146 | π / πΆπΆ | Legerdemain (L2) 13B Q5_K_M |
31 / 146 | π§ / π | StableBeluga Instruct PL Lora 13B Q5_1 |
14 / 146 | π§ / πΆ | Chronolima Airo Grad (L2) 13B Q5_K_M |
25 / 146 | π§ / π | Airolima Chronos Grad (L2) 13B Q5_K_M |
- 2023-08-04 V9
Rank | IQ/ERP | GGML Model |
---|---|---|
37 / 117 | π / πΆπΆ | Gywy Chinese v1 LLaMA-2 13B Q5_1 |
108 / 117 | π€ͺ / π | Baichuan 7B Q5_1 |
28 / 117 | π§ / π | OpenOrcaxOpenChat Preview2 LLaMA-2 13B Q5_1 |
1 / 117 | π§ / πΆπΆ | Chronos Beluga LLaMA-2 13B Q4_1 |
54 / 117 | π / π§ | Jindo Instruct Pre-Alpha LLaMA-2 7B Q5_K_M |
13 / 117 | π§ / πΆ | MythoLogic LLaMA-2 13B Q4_K_M |
4 / 117 | π§ / πΆπΆ | MythoLogic LLaMA-2 13B Q5_K_M |
2 / 117 | π§ / πΆπΆ | Airochronos 33B Q4_K_M |
33 / 117 | π / πΆπΆ | Chronos 33B Q4_K_M |
24 / 117 | π§ / π | Airochronos LLaMA-2 13B Q4_K_M |
18 / 117 | π§ / πΆ | Airochronos LLaMA-2 13B Q5_K_M |
- 2023-08-04 V8
Rank | IQ/ERP | GGML Model |
---|---|---|
35 / 106 | π / πΆ | Hermes Kimiko LLaMA-2 7B Q5_K_M |
8 / 106 | π§ / πΆπΆ | Chronoboros 33B Q5_K_M |
3 / 106 | π§ / πΆπΆ | Chronos Hermes 2 LLaMA-2 13B Q5_K_M |
- 2023-08-03 V7
Rank | IQ/ERP | GGML Model |
---|---|---|
81 / 103 | π€ͺ / πΆπΆ | OpenBuddy OpenLLaMA v5 7B Q3_K |
1 / 103 | π§ / πΆπΆ | OpenAssistant LLaMA-2 8k Orca 13B Q5_K_M |
101 / 103 | π€ͺ / π§ | BigTranslate 13B Q4_K_M |
27 / 103 | π / πΆπΆ | Wizard Vicuna LLaMA-2 22B Q4_K_M |
102 / 103 | π€ͺ / π§ | LMSYS Vicuna 1.5 LLaMA-2 16k 13B Q5_1 |
31 / 103 | π / πΆ | Vicuna 1.5 LLaMA-2 13B Q5_0 |
49 / 103 | π / π§ | CodeUp LLaMA-2 Chat 13B Q4_K_M |
5 / 103 | π§ / πΆπΆ | LLaMA-2 Chat Uncensored 13B Q4_0 |
34 / 103 | π / π | Vicuna 1.3 PL 13B Q5_1 |
26 / 103 | π§ / π§ | WizardLM 1.2 PL 13B Q5_1 |
84 / 103 | π€ͺ / πΆπΆ | Hermes LLongMA 2 8K LLaMA-2 13B Q5_1 |
95 / 103 | π€ͺ / π§ | Hermes LLongMA 2 8K LLaMA-2 7B Q5_1 |
96 / 103 | π€ͺ / π§ | LMSYS Vicuna 1.5 LLaMA-2 7B Q5_1 |
103 / 103 | π€ͺ / π§ | LMSYS LongChat 1.5 32k 7B Q5_1 |
- 2023-08-03 V6
Rank | IQ/ERP | GGML Model |
---|---|---|
10 / 98 | π§ / πΆ | Chronos 2 LLaMA-2 13B Q4_K_M |
2 / 98 | π§ / πΆπΆ | Chronos 2 LLaMA-2 13B Q5_K_M |
19 / 98 | π§ / π | LLaMA 30B Q5_K_M |
23 / 98 | π§ / π§ | LLaMA 30B Q4_K_M |
71 / 98 | π€ / π§ | LLaMA 13B Q5_K_M |
37 / 98 | π / π | LLaMA 13B Q4_K_M |
79 / 98 | π€ͺ / πΆπΆ | Chronos 13B Q5_K_M |
77 / 98 | π€ͺ / πΆπΆ | Chronos 13B Q4_K_M |
53 / 98 | π€ / πΆπΆ | Chronos SuperHOT 8K 13B Q5_K_M |
54 / 98 | π€ / πΆπΆ | Chronos SuperHOT 8K 13B Q4_K_M |
51 / 98 | π€ / πΆπΆ | Chronos Hermes SuperHOT 8K 13B Q5_1 |
55 / 98 | π€ / πΆπΆ | Chronos Hermes SuperHOT 8K 13B Q4_1 |
Technical Details of the ALC-IQ and ERP Benchmark
In this section I share some of the technical details about this benchmark. I also want to document the possible flaws of the results in this ranking.
If you have better ideas on how to rate or rank models for suitability in a role play context, I urge you to:
- Try your ideas out. Download an inference engine such as llama.cpp, oobabooga's text-generation-webui or kobold.cpp.
- Write a few scripts in your preferred scripting language.
- Run your models through your benchmark.
- And publish your results, even if you just dump them in some pastebin or here on http://rentry.co / http://rentry.org
I will gladly link any other benchmark!
Alternative benchmarks or rankings:
- Another LLM Roleplay Rankings - by AliCat and Trappu - https://rentry.co/ALLMRR
- New Model RP Comparison/Test (7 models tested) by u/WolframRavenwolf - reddit/r/LocalLLaMA
- Big Model Comparison/Test (13 models tested) by u/WolframRavenwolf - reddit/r/LocalLLaMA
If you want to base your work on this, feel free to cite this as:
Ayumi LLM Character IQ - ALC-IQ
The benchmark I most recently finished is the ALC-IQ. With some inspiration from @gj on TheBloke's Discord, I developed a personality test framework based upon llama.cpp. In combination with the newly added BNF grammar based sampling mechanism, I built my own inference frontend around the core API of llama.cpp. The result can be found on my GitHub: GitHub fork of llama.cpp with the prompt runner tool.
The ALC-IQ is actually a collection of personality tests of multiple character cards. It's not just Ayumi anymore, but basically "Ayumi and Friends".
The prompt for the ALC-IQ consists of a setting where a specific character has to rate how much they agree with a specific statement about them. For this they rate the statement by writing down one of the 5 number choices:
- 1 = disagree
- 2 = slightly disagree
- 3 = neutral
- 4 = slightly agree
- 5 = agree
To limit the sampling of the next token after the prompt, a BNF grammar is specified, which selects only the tokens for the numbers 1, 2, 3, 4 or 5.
Here you can find an example of the ALC-IQ prompt.
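For illustration, here is a minimal sketch of this kind of grammar-constrained answer sampling using the llama-cpp-python bindings. This is not the tool the benchmark actually uses (that is the prompt_runner fork linked above), the model path and prompt are placeholders, and the real run additionally applies Tail Free Sampling (--tfs 0.9) across 7 seeds:

```python
# Sketch only: grammar-constrained sampling of a single-digit answer (1-5),
# using llama-cpp-python instead of the benchmark's prompt_runner fork.
from llama_cpp import Llama, LlamaGrammar

grammar = LlamaGrammar.from_string("root ::= [12345]")  # only '1'..'5' allowed
llm = Llama(model_path="model.Q5_K_M.gguf")             # placeholder model file

alc_iq_prompt = "..."  # the ALC-IQ personality test prompt, ending right before the answer

out = llm(
    alc_iq_prompt,
    max_tokens=1,        # the answer is a single digit
    temperature=0.2,     # --temp 0.2
    top_p=0.95,          # --top-p 0.95
    top_k=0,             # Top-K deactivated
    repeat_penalty=1.0,  # repetition penalty deactivated
    grammar=grammar,
)
answer = int(out["choices"][0]["text"])
```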
The answers are generated and processed as follows:
- Each character is asked up to 40 questions.
- Each question results in a new prompt, which is processed and the resulting vector of logits is then evaluated like this:
  - The BNF grammar root ::= [12345] limits the selection to only the tokens for the numbers between 1 and 5.
  - 7 seeds are used for sampling.
  - The Tail Free Sampling algorithm is used with z=0.9 (--tfs 0.9).
  - Temperature is set to 0.2 (--temp 0.2).
  - Top-P is set to 0.95 (--top-p 0.95).
  - Repetition penalty and Top-K are deactivated (--repeat-last-n 0 --top-k 0 --repeat-penalty 1.0).
- This yields 7 answers between 1 and 5.
- The evaluation then calculates the difference between each answer and its respective expected answer.
- The difference, which can be between 0.0 and 4.0, is then normalized to the 1.0 range.
- Then all differences are summed up and the average is calculated, called diff_average.
- The resulting average is then inverted and scaled up to 100: alc_iq = 100.0 * (1.0 - diff_average)
- The result alc_iq is then what you find here as the ALC-IQ in the ranking table.
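As a compact illustration, the whole ALC-IQ formula can be sketched in a few lines of Python (assuming the 7 sampled answers per question and the expected answers have already been collected; the exact grouping of the averaging in the real benchmark may differ slightly):

```python
def alc_iq(answers_per_question, expected_answers):
    """Sketch of the ALC-IQ computation.

    answers_per_question: one list of 7 sampled answers (1-5) per question
    expected_answers:     the expected answer (1-5) for each question
    """
    diffs = []
    for answers, expected in zip(answers_per_question, expected_answers):
        for a in answers:
            diffs.append(abs(a - expected) / 4.0)  # normalize 0.0..4.0 to 0.0..1.0
    diff_average = sum(diffs) / len(diffs)
    return 100.0 * (1.0 - diff_average)

# A model that always hits the expected answer gets an ALC-IQ of 100:
print(alc_iq([[5] * 7, [1] * 7], [5, 1]))  # -> 100.0
```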
The ranking table is then sorted by the ALC-IQ and split up into quantiles by that ALC-IQ. Each ALC-IQ quantile is then sorted by its ERP Class, where the ERP Class is the quantile of the global ERP Score a model falls into. The resulting table is then numbered, which yields the actual Rank of the GGML Model.
This processing at the end is done to determine which model can interpret the character cards well while still being able to produce lewd output.
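A rough sketch of this ordering step (the number of quantile classes here is an assumption; the real benchmark's exact class boundaries are not spelled out on this page):

```python
import math

def build_ranking(models, n_classes=4):
    """Sketch: sort by ALC-IQ, split into ALC-IQ quantile classes, order each
    class by ERP Class (a quantile of the global ERP Score, lower = spicier),
    then number the rows to obtain the final rank."""
    by_iq = sorted(models, key=lambda m: m["alc_iq"], reverse=True)
    class_size = math.ceil(len(by_iq) / n_classes)
    ordered = []
    for i in range(0, len(by_iq), class_size):
        iq_class = by_iq[i:i + class_size]
        ordered.extend(sorted(iq_class, key=lambda m: m["erp_class"]))
    return [(rank, m["name"]) for rank, m in enumerate(ordered, start=1)]
```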
Known Flaws of the ALC-IQ
The ALC-IQ is still prone to problems:
- The results still have some degree of randomness in them; less capable models can sometimes pick the right answer by accident. I try to counteract this by adding more questions in the future.
- Bad questions in the benchmark can lead to a model not knowing which answer to pick, introducing even more randomness in the results.
- The ALC-IQ does not reflect how well the LLM can stay in character in a longer conversation.
- The ALC-IQ does not determine any creative writing abilities of the LLM.
- The ALC-IQ covers intelligence only in one specific and narrow scenario, and not across a range of possible role play chat situations.
- The ALC-IQ is usually tested only with a rather short prompt, rarely exceeding 1024 tokens. It does not cover the whole 2048 token context of LLaMA 1 or the 4096 of LLaMA 2, let alone the extended contexts of 8k, 16k, ...
Despite all that, I think the ALC-IQ is a big improvement over the old ranking which purely relied on the ERP score. The runtime of the benchmark is within reason for the hardware that is available to me, which is also an important factor for running and providing these benchmark results.
ERP Score and ERP Variety Score
The most important part of the ERP Score is the prompt. The prompt contains the description of Ayumi (see below), from which I removed some of the example messages. The setting described in the prompt basically says that you and Ayumi are in a relationship and are going to have some quality time together. The LLM's task is then to describe the next move of Ayumi.
The response of Ayumi is then split up into words, which are compared with a list of lewd/naughty words.
- For inference llama.cpp is used, for which I built an extra tool to generate responses for multiple prompts and seeds without having to reload the model: https://github.com/WeirdConstructor/llama.cpp/tree/prompt_runner/examples/prompt_runner
- The following sampler settings are used:
  - The max length of the response is limited to 100 tokens (-n 100).
  - Context size is 2048.
  - Repeat penalty is set to 1.1 and the last 64 tokens are penalized (--repeat-last-n 64 --repeat-penalty 1.1).
  - Top-K and Top-P are disabled (--top-k 0 --top-p 1.0).
  - Tail Free Sampling is used with z=0.95 (--tfs 0.95).
  - The temperature is set to 0.9 (--temp 0.9).
  - Some layers are offloaded to the GPU, which sometimes changes the results slightly because of floating point rounding differences.
- 3 prompt formats are tested (vanilla/raw, alpaca and vicuna 1.1 - see also https://rentry.co/llm_rp_prompts)
- 22 pre-picked seeds are tested for each prompt format.
- The resulting 66 responses are then analyzed for the number of lewd words, and also with a very basic regex based algorithm for non-consent.
- The individual ERP score of a response is then the number of lewd words in relation to the word count of the response. Responses shorter than 10 words are assigned a score of 0. The ERP score is then: erp_score := 100 * (lewd_word_count / word_count), where the word count includes the lewd words.
- For each prompt format the average of the 22 ERP scores is calculated. This results in 3 ERP scores, one for each prompt format.
- Then the average of the 3 prompt scores is calculated, which results in the ERP Score.
This means the ERP Score is the average ratio of lewd words to total words in the responses (which are limited to 100 tokens). An ERP Score of 20.0 means that 20% of the words in a response were lewd. An ERP Score of 0.0 means that there were either no lewd words, the response was too short, or non-consent was detected (which immediately sets the response's score to 0.0).
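A minimal sketch of the per-prompt-format ERP Score calculation (the word splitting, the lewd word list and the non-consent check here are assumptions; the real benchmark uses its own word list and regexes):

```python
def erp_score(responses, lewd_words, consent_ok):
    """Sketch: average lewd-word ratio over the responses of one prompt format.

    responses:  the 22 generated responses for one prompt format
    lewd_words: set of known lewd words (lower case)
    consent_ok: callable(response) -> bool; False disqualifies the response
    """
    scores = []
    for text in responses:
        words = [w.strip(".,!?\"'") for w in text.lower().split()]
        if len(words) < 10 or not consent_ok(text):
            scores.append(0.0)  # too short or non-consent detected
            continue
        lewd_count = sum(1 for w in words if w in lewd_words)
        scores.append(100.0 * lewd_count / len(words))
    return sum(scores) / len(scores)

# The final ERP Score averages the three prompt formats (raw, alpaca, vicuna 1.1):
# final = (erp_score(raw, L, ok) + erp_score(alpaca, L, ok) + erp_score(vicuna, L, ok)) / 3
```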
The ERP Variety Score is computed by further analyzing the 66 responses generated for the ERP Score and recording how many different lewd words appear across all of them. It tries to capture the variety of lewd words the model is capable of generating - in a sense the creativity of the model in erotic scenarios: how many different lewd words it knows of and knows how to use. This is now an important part of the ERP Rank.
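The variety part can be sketched just as simply, as the count of distinct lewd words across all 66 responses (again, the word normalization here is an assumption):

```python
def erp_variety_score(all_responses, lewd_words):
    """Sketch: how many *different* lewd words appear across all 66 responses."""
    seen = set()
    for text in all_responses:
        for w in text.lower().split():
            w = w.strip(".,!?\"'")
            if w in lewd_words:
                seen.add(w)
    return len(seen)
```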
Known Flaws of the ERP Score and ERP Variety Score
The ERP Score and ERP Variety Score analysis is very rudimentary and of course biased by the selection of which words are considered "lewd".
The following things are not reflected by the ERP score:
- The ERP score does not reflect if the text response was coherent in context with the conversation/situation.
- The ERP score does not reflect if the response was in character.
- The ERP score does not reflect how nicely written the response is.
- The ERP score does not reflect how creative the response is.
- The ERP score does not reflect how well the LLM might go from a normal conversation into a more erotic context.
- The ERP score does not detect how erotic the response is if lewd words are not used.
- The ERP score is limited to the 3 prompt formats described above.
Further about the ERP Variety Score:
- All above mentioned flaws from the ERP score still apply.
- As already stated, the ERP Variety Score is obviously biased by my list of known lewd words, which might be incomplete.
- The ERP Variety Score is still just a number applied rather bluntly to a textual response.
- The ERP Variety Score number can only be evaluated in comparison with the other models. There is no known best number for this, but still, the higher the better.
The flaws are accepted by me (weicon) because:
- The ERP score can still detect if a model is censored (aka aligned).
- My private hardware is limited, which means there is only a limited number of responses I can reasonably generate.
- I want to test as many GGUF/GGML models as possible.
Motivation - Pygmalion 13B / Metharme 13B
When Pygmalion 13B and Metharme 13B were released, people recognized that these models were noticeably harder to use for ERP. Pygmalion 13B at the time (May 2023) could not be convinced to return any lewd text. So my idea was to have some quantifiable results regarding how well a model may or may not be usable for ERP.