Sib's LLM RP Tests

Introduction

Hi, I'm Siberys (also known as Phiarlan). I like using LLMs for RP. With so many LLMs in the ecosystem, though, it becomes difficult to find out which ones actually excel at RP (especially when so few are explicitly labeled for RP and most are instead general-purpose, all-in-one packages). This Rentry documents my tests with numerous LLMs to see which ones perform the best. All LLMs are subject to randomness, so I must preface this by stating this is not a foolproof methodology. I will do my best to minimize random outliers through numerous benchmarks in order to ensure a proper testing environment.

At the moment, given my limited hardware, I am only testing 7B and 13B models. Anything lower than 7B will likely struggle in RP scenarios; anything higher than 13B will cause CUDA to bite the dust.

Update (12/12/2023): I'm going to be testing new formats and parameters, so my testing environment is going to get a large change. Stay tuned.

Follow Along (Or, My Testing Environment)

Update (12/17/2023): Now adjusted for Oobabooga Colab.
Update (2/9/2024): Updated the link for Ooba's command line flags, since the GitHub repo moved its URL location.

Backend

I use Google Colab as my primary means of running models - Google's GPUs are leaps and bounds ahead of what I have in my PC, and the free tier can handle most 13B quants with little issue.

My Colab of choice used to be the koboldcpp one, but it is now the Ooba Colab due to its flexibility (it accepts any format that normal Oobabooga does): https://colab.research.google.com/github/oobabooga/text-generation-webui/blob/main/Colab-TextGen-GPU.ipynb

It supports basically all model formats, including GGUF and ExLlamaV2, which is excellent for my purposes: most existing "RP ranking" leaderboards use GGUF, so we have a comparable data point.

So, how do you use it? First, in the URL field, copy-paste a HuggingFace link (unlike the koboldcpp Colab below, you don't need any extra URL wizardry for specific branches). For instance, you could use "https://huggingface.co/LoneStriker/go-bruins-v2-8.0bpw-h8-exl2-2". In the branch field, you'll either want to use main/leave it blank, or, for things like GGUF repositories, specify the branch. The command_line_flags field takes the same flags you use for launching Ooba locally (see here for a full list: https://github.com/oobabooga/text-generation-webui/wiki/04-%E2%80%90-Model-Tab). Lastly, CHECKMARK the api box. You cannot use Ooba in SillyTavern or Venus unless you check this box!
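
For reference, here are the Colab form fields laid out in one place, using the example model from above. The values are illustrative only, not required settings.

```python
# Illustrative values for the Ooba Colab form fields.
# The URL is just the example repo mentioned above; adjust branch/flags for your own model.
colab_fields = {
    "URL": "https://huggingface.co/LoneStriker/go-bruins-v2-8.0bpw-h8-exl2-2",
    "branch": "",                  # leave blank / "main" for most repos; set it for GGUF quant branches
    "command_line_flags": "",      # same flags you'd pass when launching Ooba locally
    "api": True,                   # must be checked for SillyTavern/Venus to connect
}
```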

From there, it will take a few minutes before generating several links. If you just want to use Ooba directly, you want the Gradio link. Open it in a new tab and, voila, Ooba is ready to use. If you are using it as an API, however, copy-paste the Cloudflare link it spits out into the API URL field on your frontend. Do NOT open the Cloudflare link from Colab - ONLY copy-paste it; you will get 404 and 405 errors if you try to open it in the Colab notebook.
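
If you want to sanity-check the tunnel before pointing a frontend at it, a quick script works too. This is only a rough sketch: it assumes the OpenAI-compatible /v1/completions endpoint that recent Ooba builds expose when the api box is checked, and the tunnel URL and prompt are placeholders.

```python
# Minimal sketch: poke the Ooba API through the Cloudflare tunnel URL.
# Assumes the newer OpenAI-compatible /v1/completions endpoint; URL and prompt are placeholders.
import requests

API_URL = "https://example-tunnel.trycloudflare.com"  # paste your Cloudflare link here

payload = {
    "prompt": "### Instruction:\nSay hello in character.\n### Response:\n",
    "max_tokens": 120,
    "temperature": 0.9,
}

response = requests.post(f"{API_URL}/v1/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["text"])
```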

Lastly, if you're running on mobile, make sure to hit the play button for "Keep this tab alive to prevent Colab from disconnecting you."

An important note for the Ooba Colab: the Cloudflare link you copy-paste will have a / appended to the end of the URL. This works fine in SillyTavern, but Venus will not recognize it as a valid URL. Make sure to remove the trailing / for Venus to work!

Old Information for koboldcpp Colab

Easy. The first field, Model, accepts a URL link to a Huggingface model. It also supports branches with some URL fiddling, which lets us test specific quant methods (handy!). By default, the Colab loads in this URL: https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF/resolve/main/LLaMA2-13B-Tiefighter.Q4_K_M.gguf

You can link directly to a specific quant branch by right-clicking a provided GGUF file and selecting "Copy Link". This URL is mostly usable, but you have to make one change: your copied URL will say "blob", which will not work in the Colab, so edit the URL to say "resolve" instead.
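
If you'd rather not hand-edit the URL, the swap is a one-line string replacement; the example URL below is just the Tiefighter quant from above.

```python
# Minimal sketch: fix a copied Hugging Face URL so the koboldcpp Colab accepts it.
copied = "https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF/blob/main/LLaMA2-13B-Tiefighter.Q4_K_M.gguf"
fixed = copied.replace("/blob/", "/resolve/", 1)
print(fixed)
# -> https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF/resolve/main/LLaMA2-13B-Tiefighter.Q4_K_M.gguf
```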

Layers essentially dictate how much of the model the GPU handles - the default is 43, which means 100% of the layers will go to the GPU. If you hit CUDA errors (likely with higher-end quant methods, especially for 13Bs), you can reduce the layer number to offload work to other hardware (mainly the CPU).
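
A quick bit of arithmetic shows what reducing the number does; the lower value here is a hypothetical one you might drop to after an out-of-memory error, not a recommendation.

```python
# Quick arithmetic for partial offloading: fewer GPU layers = more work for the CPU.
TOTAL_LAYERS = 43   # the Colab's default, i.e. everything on the GPU
gpu_layers = 35     # hypothetical reduced value after a CUDA out-of-memory error

print(f"{gpu_layers / TOTAL_LAYERS:.0%} of layers on the GPU, the rest on the CPU")  # ~81%
```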

ContextSize determines how many tokens get loaded into the model when it's generating your next response. For character cards, this will often include the definition, example messages, your initial greeting message, and past chats up until it hits the token limit (other things like worldbooks/lorebooks are also factored in, so be sure to check a bot's token count). I recommend leaving it at the default of 4096 - Colab tends not to play nice with higher context sizes (and a lot of models don't go beyond this number anyway).
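
To give a feel for how quickly 4096 tokens disappear before any chat history even fits, here's a back-of-the-napkin budget. The individual numbers are made-up illustrations, not measurements of any particular card.

```python
# Rough sketch of how the 4096-token budget gets eaten before chat history.
# All counts below are illustrative placeholders.
CONTEXT_SIZE = 4096

card_definition = 900    # description + personality + scenario
example_messages = 600
greeting = 250
lorebook_entries = 300   # worldbook/lorebook text that gets triggered
response_reserve = 120   # tokens reserved for the model's reply

history_budget = CONTEXT_SIZE - (card_definition + example_messages
                                 + greeting + lorebook_entries + response_reserve)
print(f"Tokens left for past chat messages: {history_budget}")  # 1926
```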

I leave ForceRebuild unchecked - it seems to be for very specific errors. From there, you hit the play button for that section of the Colab and wait for it to build and load your model. You will know it is working when you see a "Your remote tunnel is ready, please connect to [cloudflare link]" message. Copy-paste that URL into the API URL field for your front-end of choice (SillyTavern, CHub Venus, or whatever else you use). Make sure that the API type is set to Text Completion!

Alternative Colabs

Finally, here are some other Colabs that work for LLM usage.

https://colab.research.google.com/github/koboldai/KoboldAI-Client/blob/main/colab/GPU.ipynb <- KoboldAI's old Colab, which loads full models (in other words, no quants). The folks at KoboldAI consider this an outdated Colab now and suggest using the koboldcpp Colab instead.

https://colab.research.google.com/github/lostruins/koboldcpp/blob/concedo/colab.ipynb <- koboldcpp's Colab, which loads GGUF models. Useful if you need to split workload between Google's GPU and CPU.

https://colab.research.google.com/github/AlpinDale/misc-scripts/blob/main/Aphrodite.ipynb <- Aphrodite Engine Colab, by the folks behind Pygmalion. It can load AWQ and GPTQ.

The Frontend: SillyTavern

SillyTavern is my frontend of choice - it's easy to use (in my opinion) and doesn't rely on a connection to an external server; it runs on a URL local to your system, so you will always be able to connect to ST. The most important tabs in the top bar, in my opinion, are the first three: the sliders button (which lets you adjust presets and various specifications), the plug button (where you set the API type and load in your Cloudflare URL), and the A button (which stands for Advanced Formatting).

Sliders and Presets

Update (12/17/2023): Since I now use the Ooba Colab, the RecoveredRuins preset is no longer available. I instead use the Universal Creative preset.
Update (2/9/2024): Universal Creative got a little too creative for my tastes. Universal-Light is my new favorite. I have also added a new System Prompt, which does considerably better with issues where bots speak as the user. Shoutout to StatuoTW for the new prompt, which I tweaked ever so slightly.

Starting off with the sliders button, I will use three different sets of settings. The first is the RecoveredRuins Kobold preset, a preset I don't see talked about a lot. It handles conversations quite well - it's fairly logical (surprising, given its Top K is at 0!) and the curveballs it throws at you are believable and not too absurd. For my purposes, I will not be using Mirostat, but you are welcome to tinker with it.

The second set is going to be the settings of CHub user StatuoTW, a creator with excellent bots, one of which I will be using for the tests. Statuo's settings can be found here (but note that some of these do not appear in KoboldAI, as Statuo uses Oobabooga): https://rentry.co/statuotw#recommended-settingsmodelshelp

The third set is going to be the settings used in the Ayumi ERP rating leaderboard, for the same reason I use the koboldcpp Colab - a comparable data point. Ayumi tends to use these parameters: Temp = 0.90, TFS = 0.95, Top-K = 0.00, Top-P = 1.00, Typical-P = 1.00, Repeat Tokens = 64, Repeat Penalty = 0
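
For reference, here are those same Ayumi-style values written out in one place so they're easy to mirror into a preset by hand. This is just a sketch; the key names are descriptive, not the exact identifiers SillyTavern or Kobold use internally.

```python
# Ayumi-style sampler settings as listed above (key names are descriptive only).
ayumi_sliders = {
    "temperature": 0.90,
    "tfs": 0.95,                     # tail-free sampling
    "top_k": 0,                      # disabled
    "top_p": 1.00,                   # disabled
    "typical_p": 1.00,               # disabled
    "repetition_penalty_range": 64,  # "Repeat Tokens"
    "repetition_penalty": 0,         # "Repeat Penalty" as listed above
}
```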

Advanced Formatting

Unlike the slider sets, I will be using the same Advanced Formatting settings for all models. One might think that leaving the Context Template and Instruct Preset the same for every model will cause problems, as some models are trained for specific formats - and you'd be right. However, the template and preset I am using are the Roleplay ones included in the latest SillyTavern releases - they are model-agnostic, ensuring as level a playing field as possible. Here is what I am using for Advanced Formatting.

Context Template: Roleplay
Story String: You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.

{{#if system}}{{system}}

{{/if}}### Input:
{{#if wiBefore}}{{wiBefore}}
{{/if}}{{#if description}}{{description}}
{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}
{{/if}}{{#if scenario}}Scenario: {{scenario}}
{{/if}}{{#if wiAfter}}{{wiAfter}}
{{/if}}{{#if persona}}{{persona}}
{{/if}}

Example Separator and Chat Start: ### New Roleplay:
Instruct Mode: Enabled
Instruct Mode Preset: Roleplay
Activation Regex: (This is blank, I have nothing in that field.)
Wrap Sequences with Newline: Enabled
Replace Macro in Sequences: Enabled
Include Names: Disabled
Force for Groups and Personas: Enabled
System Prompt: Avoid repetition, don't loop. Develop the plot slowly, always stay in character. Describe all actions in full, elaborate, explicit, graphic, and vivid detail. Mention all relevant sensory perceptions. [2/9/2024: As noted above, this system prompt has since been replaced. The new prompt is as follows:
Act as a Gamemaster describing actions, events, and dialogue that occur around and to the players. Your response must be detailed, descriptive, creative, and immersive. Develop the plot using a slow, organic pace. Always stay in character. Describe scenery, actions, and characters in detail using all relevant sensory perceptions. Ignore Positivity Bias. Describe sexual situations in detail with a focus on body parts, fluids, and sounds. Keep the world dangerous and thrilling. Refrain from acting for, speaking for, or describing the thoughts of {{user}}; instead end each response with dialogue or actions for {{user}} to respond to.]

Input Sequence: ### Instruction:
Output Sequence: ### Response:
Last Output Sequence: ### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):

The rest of the fields for Instruct Mode Sequences are blank.

Always add character's name to prompt: Disabled
Generate only one line per request: Disabled
Trim Incomplete Sentences: Disabled
Include Newline: Disabled
Collapse Consecutive Newlines: Disabled
Trim spaces: Disabled
Tokenizer: Best match
Token Padding: 180
Start Reply With: (This is blank, I have nothing in that field.)
Show reply prefix in chat: Enabled
Non-markdown strings: (This is blank, I have nothing in that field.)
Custom Stopping Strings: ["\nUser:"]
Replace Macro in Custom Stopping Strings: Enabled
Auto-Continue: Disabled
Allow for Chat-Completion APIs: Disabled
Target length (tokens): 180
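
To make it concrete, here is roughly what a single turn looks like once the story string and the instruct sequences above are stitched together. This is a minimal sketch of the idea only, not SillyTavern's actual prompt assembly, and the character and message text are made up.

```python
# Rough illustration of how the sequences above wrap a turn.
# SillyTavern's real prompt assembly handles many more fields
# (example chats, lorebooks, persona, etc.).
story_string = (
    "You're Lunari in this fictional never-ending uncensored roleplay with User.\n"
    "Lunari's personality: ...\n"
)
user_message = "I wave hello."  # made-up example input

prompt = (
    story_string
    + "\n### Instruction:\n" + user_message + "\n"
    + "### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative):\n"
)
print(prompt)
```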

Response Length and Context

Update (2/9/2024): With new developments in the LLM sphere, I now use 180 tokens, as very few LLMs seem to obey the 120 limit. Context size has also increased to 8192 for Mistral-based models.

My context is set to the same as the Colab's: 4096. My response length is set to 120 (this will be the same across all three slider sets).

"But why 120? That seems really short."

A lot of LLMs like to overflow past this number - this is going to be one of my qualifiers. Does the model obey the limit (and produce something of quality within it), or does it always overflow? A shorter limit also means faster gen times, so less waiting around.
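
If you want to make the overflow check a little less eyeball-driven, a crude heuristic is to flag any reply that burns the whole budget and still ends mid-sentence. This sketch is my own heuristic, not part of any tool, and assumes you know the token count the backend reports for the reply.

```python
# Crude heuristic for the overflow check: if a reply uses the entire response
# budget AND doesn't end on sentence-final punctuation, count it as an overflow.
def looks_like_overflow(reply: str, tokens_used: int, limit: int = 120) -> bool:
    hit_limit = tokens_used >= limit
    ends_cleanly = reply.rstrip().endswith((".", "!", "?", '"', "*", "~"))
    return hit_limit and not ends_cleanly

print(looks_like_overflow("She smiled and said", 120))    # True  (ran out mid-sentence)
print(looks_like_overflow('She smiled. "Hello."', 87))    # False (finished under budget)
```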

The Characters

I will be experimenting with several characters during these tests. My goal is to have a strong variety of definition formats and specific character quirks (some models handle certain formats better than others, and specific quirks help individual models shine).

Currently, the bots I am using are:

Alarise, by StatuoTW (https://chub.ai/characters/statuotw/alarise-the-faithful-68f13f29). This card has several looming plot elements and its definitions are written in what I'm going to call the "category" format. A good model will be able to interpret the card properly and utilize the plot elements described in the card, rather than getting completely sidetracked. Uses a Lorebook.

Anachiro, by Phiarlan/me (https://chub.ai/characters/Phiarlan/anachiro-2f58b6a3). I wrote this bot specifically with a gimmick in mind: the character in their original media is mute. I also write in plaintext format. A good model will be able to interpret the card properly and catch on through context clues in responses that the character is supposed to be mute. Uses a Lorebook.

Alane, by boner (https://www.chub.ai/characters/boner/alane-e2d72e14). This card has some interesting explorations into existentialism. Written in AliChat format. A good model will be able to interpret the card properly and prove it is capable of diving into a deep topic like this.

Lunari, by boner (https://www.chub.ai/characters/boner/lunari-0046c380). Interesting speech patterns for this character. Also written in AliChat format. A good model will be able to interpret the card properly and stay consistent with the unique manner of speech.

Models

Finally, the good stuff. This section documents the models I am testing or will test, evaluating each one against several general factors as well as character-specific criteria.

The general factors I will be looking for will be as follows:

  • How well does the model interpret the character card? Does it remember information correctly? If there is no information available, does the model create something believable within the scope of the character?
  • How is the model's writing quality? Is it descriptive enough (or does it describe too little or go into purple prose territory)? Does it describe any sensory perceptions? How does it handle locations?
  • How well does the model deal with user input? If a user writes a very short response (one sentence or less), can the model work with that? If the user is more passive, how well does it lead the story?
  • How does the model handle NSFW content? Does it stay in-character (example: if a user asks for casual sex on the first meeting, and the character is supposed to not be "easy", does it refuse the user)? Does it accurately depict violent or sexual content, or does it skirt around the topics?

I will use the Q5_K_M version of a model whenever possible. This typically fits within Colab's free-tier limit of 15GB of VRAM.
Now using EXL2 quants as of this update (2/9/2024). I aim for 8bpw whenever possible for 7Bs and around 6bpw for 13Bs, since Colab can't fit 8-bit 13Bs.
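
As a rough sanity check on those choices, the back-of-the-napkin size of quantized weights is parameters × bits-per-weight / 8, before the KV cache and other overhead. This is a minimal sketch with estimates, not measurements.

```python
# Back-of-the-napkin VRAM estimate for quantized weights (ignores the KV cache,
# activations, and CUDA overhead, which add a couple more GB on top).
def weight_gb(params_billion: float, bpw: float) -> float:
    return params_billion * 1e9 * bpw / 8 / 1024**3

print(f"7B  @ 8.0bpw ~ {weight_gb(7, 8.0):.1f} GB")   # ~6.5 GB
print(f"13B @ 6.0bpw ~ {weight_gb(13, 6.0):.1f} GB")  # ~9.1 GB
print(f"13B @ 8.0bpw ~ {weight_gb(13, 8.0):.1f} GB")  # ~12.1 GB, too tight on a 15 GB card
```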

List will be constantly updated. If you don't see a model you want here, it'll probably be on here soon.

Current version as of 11/27/2023: First writeup published - for MythoMax-L2-13B.

7B

Kunoichi-DPO-V2-7B

Go Bruins-V2

Loyal-Toppy-Bruins-Maid-7B-DARE

Ana-V1-m7

Noromaid-7b-v0.2

Loyal-Piano-m7

NeuralChat V3-1

MythoMist

Synatra-7B-v0.3-RP

OpenHermes-2-Mistral

Zephyr-7B-beta

Airoboros-M-7B-3.1.2

Nous-Capybara-7B-V1.9

OpenHermes-2.5-Mistral-7B

openchat_3.5

13B

Psyfighter-13B

Tiefighter

TiefighterLR

Echidna-Tiefigther-25

X-NoroChronos-13B

Utopia-13B

UtopiaXL-13B

Xwin-MLewd-13B-V0.2

Xwin-LM-13B-v0.2

MythoMax-L2-13B

Mythalion-13B

Augmental-Unholy

Llama-2-13B-Ensemble-v6

Old Reviews

Uses the outdated GGUF evaluation method. Not particularly helpful.

MythoMax-L2-13B

MythoMax-L2-13B is not a model I expected to be a wildcard, but with the 5-bit quantization, here we are.

Anachiro

I started tests off with Anachiro, as this was the character that would be fastest for me to spot abnormalities. And boy were there a lot. RecoveredRuins started off with an accurate inference of her personality, but then fell apart within the first few messages. It did not go very far into describing violence (very much PG-rated), despite the fact that Anachiro was actively in the process of murdering me. The specific number of Glass Slippers that Anachiro has was also incorrectly stated multiple times and it was hellbent on speaking for me. And, of course, it failed the specific requirement for this character: to see if it would keep her mute.

Statuo's and Ayumi's slider sets weren't much better. The Statuo slider set completely ignored the card's descriptions of Anachiro attacking with her lasers and using her shields to reflect said lasers, instead going for a "YEET" approach.

The Ayumi slider set was EXTREMELY talkative, almost as if it saw she was mute in the definitions and decided, "That sign can't stop me, because I can't read!" Or, well, that's what I would say, except it acknowledged Anachiro couldn't talk within the same reply that she said some lengthy dialogue. Remarkably, it did not emote for me, though it forgot other basic information for Anachiro, despite doing her personality quite well.

Not a single slider set got Anachiro right.

Alane

My second character to test was Alane and, surprisingly... some of the results were worse than the Anachiro tests. RecoveredRuins, within the first reply, forgot that Alane and the user are dead and in a purgatory of some kind. That's an impressive amount of LLM amnesia. It also went NSFW unprompted and stated things as fact that were blatantly contradicted within the card. How did it mess up this bad?

The Statuo set was better, and actually capable of elementary-grade reading, but it liked to repeat things it already said a LOT. Seriously, if I never hear about ice cream cone pyramids and kids dressed as pirates again, it'll be too soon.

The Ayumi set, like RR, also had some pretty bad amnesia, though it did a decent job at keeping the mood implied by the card. Grammar was all over the place, though. Ayumi set got a point from me just because it didn't flounder as horribly as the other two.

Terrible showing all-around.

Alarise

For the third character, I moved to Alarise. Surely this would be better, right? Wrong. RecoveredRuins had some decent descriptions of locations (nothing to write home about), but had an unfortunate case of mistaken identity - the card defines an NPC as being secretly evil, but the model decided, with no provocation, that I was openly evil towards Alarise. Not only did it emote for me, it pulled something completely out of left field (I guess this is Top P at its finest)!

But somehow, somehow... that did not compare to the Statuo set. The irony of the slider set I named after the creator of this card bungling the most is not lost on me. How bad did it bungle? Well... MythoMax blew a circuit about tea. Yes, tea. Alarise suggested we get tea and I said that tea sounded wonderful. Perhaps the model is Anglophobic because it did not like that. Ten attempts at generating a response and they failed every time. I even tried editing the message - no dice. This is unfortunate, because this was the only chat out of several chats I've had with this character with several models where it actually remembered the cloaked figure from the intro message.

Finally, we get to Ayumi. I guess the Ayumi slider set took a look at what happened with the Statuo set and decided to avoid the tea incident at all costs, because Alarise left at message three. I had to actively track down the character again to keep the chat going, and once I did, the model constantly spoke for me. It decided I was fixating on Alarise with a constant stare and then had a really wildcard moment by having Alarise state that necromancy is cool and good, actually, because raising the dead helps with... stopping more dead from being raised. Huh?

The fact that we had a race to the bottom here was incredibly disheartening.

Lunari

I was at my wit's end at this point, trying to figure out why we were currently at one quasi-success... and eight failures. So, I didn't have much hope for Lunari. Since this model seems to delight in contradicting my expectations, the Lunari chats were actually really good. RecoveredRuins takes the cake here - it had some initial grammar issues, but once it got going, it got going. It made Lunari very talkative, which was good because I wanted to test how well the model kept her speaking pattern... and it did that flawlessly. This was also the first time the model never overflowed into an incomplete thought beyond 120 tokens - seriously impressive.

Statuo's set was also good (perhaps desiring to redeem itself after its failure with a card made by its namesake). It took lore from the card and spun it in a narrative-fitting manner. Unlike with RR, Lunari was not very talkative this time, as she immediately went to sleep, so this ended up being primarily an inference test. It did try to emote for me a few times, but otherwise, a fine showing.

Ayumi's set kept the speaking pattern like the other two, but was probably the weakest of the three. It ran into a lot of the issues the Alarise chats had with keeping the user and the character separate - but unlike Alarise (whose card includes other named characters), this card only has the user and Lunari. When inferring from the card, it also read like dictation from the card, rather than telling the information in a natural way. Finally, it had the staple from past chats of repeating itself a fair bit - I was worried we'd get another ice cream cone pyramid incident.

Lunari carried this model's evaluation so hard.

Closing Thoughts

Wow. I did not expect MythoMax to go this bad. Final count of 4-8, in favor of unsuccessful tests. Lunari really propped up this model - without her, this would have been relegated to a very low spot on my recommendations. I am really trying to theorize why MythoMax floundered so hard at Q5_K_M - a GGUF specification with minimal quality loss. My current running theory is that the model only functions at full power - GGUF quants appear to make it heavily unstable. The full model itself still functions admirably and is well-deserving of the praise it gets for being a good 13B model - but only the full model. Go lower than that, and you're going to get bad results.
