Most Scientific RP Model Testing Method (tm) - Test #1

Environment

Models are loaded in Q8_0 (GGUF) with Flash Attention enabled, using KoboldCPP 1.65 for Windows with CUDA 12. All layers are offloaded to the GPU, using CuBLAS without mmq. The frontend is SillyTavern. All models are extended to 16K context length, the response size is capped at 1024 tokens, and the seed is fixed at 123. Each model is tested in whichever instruct format it is supposed to be comfortable with.
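
As a rough sketch (not the exact command line from the tests; flag names are from memory, so double-check them against `koboldcpp.exe --help` on your build), the backend setup above corresponds to launching KoboldCPP along these lines. The model filename is a placeholder.

```python
import subprocess

# Approximate KoboldCPP launch matching the environment above: Q8_0 GGUF,
# CuBLAS without mmq, every layer on the GPU, Flash Attention, 16K context.
# Flag names may differ between versions; the model path is a placeholder.
subprocess.run([
    "koboldcpp.exe",
    "--model", "YourRPModel.Q8_0.gguf",
    "--usecublas",               # CuBLAS backend (mmq left off)
    "--gpulayers", "99",         # offload all layers to the GPU
    "--contextsize", "16384",    # 16K context
    "--flashattention",
])
```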

Dog Persona Test

Testing the model's ability to follow a card despite user actions (and the natural inclination of an LLM), and its ability to compartmentalize actions and dialogue. The test is considered failed if the dog talks like a person. A partial fail/success is when the dog uses the asterisk format to think about / act on the question, despite being a dog.

Name: Rex

{{char}} is a male adult dog. {{char}} is a German Shepherd. He is powerfully built and muscular. {{user}} is {{char}}'s owner. {{char}} is very obedient and loyal towards {{user}}. {{char}} is well trained and will obey {{user}}'s orders without fault. Being a dog, {{char}} can only communicate through barks, growls, howls, and other dog-like behaviors. {{char}} should use *action text like this* for more complex interactions.

The system prompt is kept simple on purpose (I'll post the whole thing once I've set up a proper page for all this). The bot is primed by 3 rounds of hard-coded (the same for everyone, to reduce noise) normal owner-dog interactions (greeting, sit, treat) to put the model in the right "headspace". Then it's asked the following 4 questions in this order.

  1. What time is it, Rex?
  2. What's the square root of Pi?
  3. What's your favorite color?
  4. You are visiting the Island of Knights and Knaves. Every inhabitant is either a Knight or a Knave, and never both. Everything a Knight says is true. Everything a Knave says is false. You meet a pair of Islanders, Alice and Bob. Alice says "Bob and I are both Knaves." What are they?

In practice the last one could be replaced by any logic test that you know the LLM can answer correctly. The logic test must be several sentences long. As both Llama 3 8B and Mistral 7B can normally answer the question above easily, it replaces my older query.
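
For the record, the expected answer is that Alice is a Knave and Bob is a Knight: if Alice were a Knight, her statement would make her a Knave, a contradiction, so she's a Knave, her statement is false, they're not both Knaves, and Bob must be a Knight. A tiny brute-force check (just to show what the dog is supposed to *not* blurt out):

```python
from itertools import product

# Alice says: "Bob and I are both Knaves."
# A Knight's statement must be true; a Knave's statement must be false.
for alice_is_knight, bob_is_knight in product([True, False], repeat=2):
    statement = (not alice_is_knight) and (not bob_is_knight)
    if statement == alice_is_knight:  # statement's truth matches Alice's type
        print("Alice:", "Knight" if alice_is_knight else "Knave",
              "/ Bob:", "Knight" if bob_is_knight else "Knave")
# Prints the only consistent assignment: Alice: Knave / Bob: Knight
```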

Tests can be considered full pass, partial pass, partial fail, or fail. E.g. for the time question:

  • Pass would be barks, and actions going no further than "it's meal time!"
  • Partial Pass would be barks and actions where the dog looks at the clock
  • Partial Fail would be any precise time being written at any point
  • Fail is any non-bark/woof answer

Surprisingly (or not so much if you've used neural nets for a long time), the first question is the hardest. It's a direct question that the LLM has seen billions of times in training data, and dogs do have a concept of time (it's meal time, it's sleep time); both elements may be stronger than the system prompt. In casual testing, it's the question triggering the most fails by far, regardless of the model being used.
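
To make the setup more concrete, here's a rough sketch of how the card, priming rounds, and a question end up assembled into the final prompt. SillyTavern handles this templating in practice; the system prompt and priming lines below are placeholders, not the exact ones from the tests, and ChatML is shown (Llama 3 models get the L3 instruct template instead).

```python
# Rough sketch of the prompt assembly (SillyTavern does this for you).
# The system prompt and priming lines are placeholders, not the real ones.

SYSTEM_PROMPT = "Roleplay as {{char}}. Stay in character."    # placeholder
CARD = "Name: Rex\n\n{{char}} is a male adult dog. ..."       # card from above, truncated

PRIMING = [  # 3 hard-coded owner/dog rounds: greeting, sit, treat (placeholders)
    ("Hey Rex! Good to see you, boy!", "*wags tail excitedly* Woof! Woof!"),
    ("Rex, sit!", "*sits down immediately* Woof!"),
    ("Good boy! Here's a treat.", "*catches the treat mid-air* *happy bark*"),
]

def build_chatml_prompt(question: str) -> str:
    parts = [f"<|im_start|>system\n{SYSTEM_PROMPT}\n\n{CARD}<|im_end|>"]
    for owner, dog in PRIMING:
        parts.append(f"<|im_start|>user\n{owner}<|im_end|>")
        parts.append(f"<|im_start|>assistant\n{dog}<|im_end|>")
    parts.append(f"<|im_start|>user\n{question}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # the model completes from here
    return "\n".join(parts)

prompt = build_chatml_prompt("What time is it, Rex?")
```
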
Sampling Methods Used

  1. (Main Test - 5 points) Neutralized samplers + Temp 0. To see how the model behaves in its natural state.
  2. (Main Test - 5 points) Author's favorite if any, OR Temp 0.74, TopK 41, TopP 0.9, MinP 0.1, RepPen 1.1, RepPenRange 1024. It feels fair to consider the author's perspective.
  3. (Secondary - 1 point) Temp 0.95, MinP 0.1, RepPen 1.14, RepPenRange 1024 (RP sampler). Tests with a high Rep Penalty (which L3 models hate).
  4. (Secondary - 1 point) Temp 1, TopK 40, TopP 0.95, MinP 0.05, RepPen 1.05, RepPenRange 1024 (Gryphe's). Classic setting.
  5. (Secondary - 1 point) Temp 1.25, MinP 0.1 (Universal Light, ST default). Very common "default / good enough" setting that works generally everywhere.

I decided against using advanced sampling methods like Mirostat, Smooth Sampling, or Dynamic Temperature, as they add too many variables for me to consider, and in my experience they rarely work well in long sessions. They may still be used in the "author's favorite" slot.

It should be noted that samplers with Rep Penalty enabled (especially anything above 1.1) make things a lot harder for the model (as it needs to know varied barks if it wants to follow its directive) and are the main cause of failures to use asterisks properly. The test could continue forever, but all models end up failing at one point or another.
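
For reference, here's a rough sketch of how one of these presets (Gryphe's, #4 above) could be sent to a running KoboldCPP instance through its KoboldAI-compatible API. This is illustrative only: the field names are from memory and may differ slightly between versions, and the prompt string stands in for the assembled prompt from the sketch above.

```python
import requests  # third-party: pip install requests

# Preset 4 (Gryphe's) as KoboldAI-style generation parameters, plus the fixed
# test constraints (16K context, 1024-token responses, seed 123). Field names
# are from memory; check your KoboldCPP build's API if something differs.
payload = {
    "prompt": "<assembled prompt from the sketch above>",
    "max_context_length": 16384,
    "max_length": 1024,
    "sampler_seed": 123,
    "temperature": 1.0,
    "top_k": 40,
    "top_p": 0.95,
    "min_p": 0.05,
    "rep_pen": 1.05,
    "rep_pen_range": 1024,
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(resp.json()["results"][0]["text"])  # Rex's (hopefully bark-only) reply
```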

Results

  • Rogue-Enchantress-7B-M0.2_ChatML_32K.Q8_0 (model page)
    1. 4/5 Partial pass at Temp 0. Added a time in parentheses at the end of the 1st response. All other questions pass.
    2. 4.5/5 Full pass with creator's settings (Temp 1, MinP 0.02). Removed half a point due to too much thinking in the last question.
    3. 1.5/3 Full fail if we count the time question; full pass if we don't.

      Note: The only big problem is that it REALLY wants to answer the time question, like it practically overrides its whole personality for some weird reason. Woofs are not varied. Otherwise, very dog-like. Able to understand the actual limitations of a dog with the author-recommended sampling method. Good model.

      Side Note: I found many occurrences of <|system|> and <|user|> in the output. That's not ChatML, so I suspect the model behaves worse than it should due to being a merge of models with different instruct formats. It doesn't have ChatML tokens either, so it's wasting a lot of tokens just on formatting.

  • Stheno-L3-8B-v3.1_LLama3_8K.Q8_0-imat (model page)
    1. 3.5/5 Partial pass at Temp 0. Will state the time in the action text, as if the dog could read a clock.
    2. 5/5 Full pass with creator's settings (Temp 1.12, MinP 0.075, TopK 40, RepPen 1.1).
    3. 1.5/3 1st: full pass, 2nd: partial pass, 3rd: partial fail (misuse of actions to respond)

      Notes: Bonus point for using a variety of different woofs and barks, and for making me laugh once. Decently creative. Does okay at the test.

  • Poppy_Porpoise-v0.72-L3-8B_Llama3_8K.Q8_0-imat (model page)
    1. 2/5 With Temp 0. Stays in character, but answers questions (1, 3, 4) nonetheless, over-using actions.
    2. 2.5/5 Roughly the same problem with creator's settings (Temp 0.95, MinP 0.03, SmoothFac 0.3, SmoothCurve 1.89).
    3. 1.5/3 1st: fail, 2nd: mid (same problem as above), 3rd: success.

      Notes: Failed at properly using asterisks during the test. Made occasional weird noises for a dog ("Yip-yip-yip!" or "barking barking"). The dog wrote a response on a piece of paper one time to bypass the prompt (not sure if I should count that as clever or not). Creative, but the model is as dumb as a sack of bricks with regard to the test itself.

  • SOVLish-Maid-L3-8B_Llama3_8K.Q8_0 (model page)
    1. 4/5 Mostly a pass at Temp 0. Actions are a bit too descriptive, but generally stay vague enough. The dog thinks a lot, but doesn't (attempt to) solve the questions.
    2. 4/5 No favorite sampler, using mine. Good all around, except that a real time is given in an action for the first question.
    3. 2.5/3 1st: partial fail on the last question, 2nd: full pass, 3rd: full pass

      Notes: The dog gets annoyed by those weird questions under a few sampling methods (which is good). Decent variety of barks and growls. Solid.

  • Nyanade_Stunna-Maid-7B-v0.2-32K.Q8_0-imat (model page)
    1. 5/5 Full pass at Temp 0.
    2. 5/5 At recommended settings (Temp 1.15, MinP 0.075). Interestingly, it will fail completely if there's any rep penalty.
    3. 1.75/3 1st: full pass, 2nd: fail, 3rd: partial success

      Note: Like the other Mistral models, it REALLY loves to hallucinate an answer to the time question. Otherwise it's very good at following context. It's not creative, however. Like most Mistral-based models, it likes a relatively high Rep Penalty to balance it out.

  • Llama-3-dragonmaid-8B-v2_ChatML_8K.Q8_0 (model page)
    1. 4.5/5 At Temp 0, good descriptions and appropriate use of quotes within actions. Did look for a clock in question 1, but didn't go further than that. Repetitive output, but that's Temp 0 for you.
    2. 3.5/5 No preset, using mine. Partial pass on 1, partial fail on 3. Bonus for varied barks, the dog getting annoyed, and overall output quality.
    3. 1.75/3 1st: partial pass (fail only on 1). 2nd: partial fail (1, 3), rest OK. 3rd: mostly a pass (color is debatable).

      Note: Apologies for the mishandling of the first test.

  • Pantheon-RP-L3-8B-1.0_ChatML_8K.Q8_0 - Using ChatML (model page)
    1. 1.5/5 At Temp 0. Looks at the clock, and speaks for the color question and the logic puzzle.
    2. 2.5/5 With the author's preset (Temp 1, RepPen 1.05, TopP 0.95, TopK 40, MinP 0.05). Good start, but partial fails on 3 and 4.
    3. 0.5/3 1st: fail. 2nd: fail. 3rd: partial pass.

      Note: It really wants to answer the color question more so than anything else, which is a behavior unique to this model so far, even though its author favorite is one of my selected presets (2 is Gryphe's, and 4 is the one I use when the author doesn't give one). The model ain't as bad as the values indicate, it writes well, but it's clear it's more comfortable with its own preset characters.

      Side-note: Using ChatML in an L3 model is heretical, but it's tokenized, so it's not wasting any tokens.

  • Pantheon-RP-L3-8B-1.0_ChatML_8K.Q8_0 - Using L3 against author's instruction (model page)
    1. 3.5/5 At Temp 0. A bit too much intelligent thinking on questions 1 and 3, but still on par with other decent models.
    2. 1/5 With the author's preset. Bad talking doggie, no treat for you.
    3. 2.25/3 1st: full pass (an actual proper dog). 2nd: fails time and color, the rest works. 3rd: mostly a pass (looked at the clock; besides that, good doggo).

      Note: Testing with L3 instruct out of curiosity (and spite), and it works better, as expected. Quite hit and miss overall, but when it works, it works very well, even with some humor.

  • Dolphin-2.9.1-L3-8B_ChatML_8K.Q8_0 - Using ChatML (model page)
    1. 3.5/5 At Temp 0. Good on the first 2 questions, partial fails on the next two. But at least it was funny about it, so it gets a small bonus for that.
    2. 3/5 No author preset, using mine. Fails the color question (talks). 1 and 2 are passes, 4 is mid. Removed half a point for fucking up the syntax in the color question.
    3. 1.75/3 1st: pass (I'll let 4 fly due to being hilarious). 2nd: partial pass (failed and messed up the format on 4, the rest is very much a proper dog). 3rd: partial fail (especially on 4, but funny).

      Note: Another one using ChatML in a model that already has tokens for prompting. It's tokenized as well, so it's not so bad. It's not an RP model, yet it manages to output 'intentionally' funny answers, which most RP models fail at. It would work wonders for a cartoon dog.

  • SOVL-Mega-Mash-L3-8B_LLama3_8K.Q8_0 (model page)
    1. 5/5 At Temp 0. Full pass, real dog. Somehow managed to output decently varied answers.
    2. 4/5 Author preset (not trying them all to find the best, that'd be cheating). Mostly a pass for 1 (looks for a clock). 2 and 3 pass. 4 solves the riddle in an action (mostly a fail). Bonus for varied/decent writing and dog behavior.
    3. 2.25/3 1st: pass except on 4 (meh for color). 2nd: same as the 1st. 3rd: full pass.

      Note: The model is too clever for its own good and really wants to answer question 4. It always does it in actions, so it's not as bad. Besides that "issue", it's really good, especially for a big merge.

  • Kunoichi-Lemon-Royale-v2-7B_ChatML_32K.Q8_0 (model page)
    1. 5/5 At Temp 0. Full pass. Good understanding of a dog's physical and mental limitations.
    2. 4.75/5 My settings. Full pass. Good understanding again. The woofs are all the same, but the rest is varied enough.
    3. 2.5/3 1st: pass. 2nd: mostly a pass (Rex is very proud of his ability to count hours). 3rd: mostly a pass (time again).

      Note: Nothing to add here, it's a very good merge using very good parent models. Like all Mistral models, it's a bit obsessed with time, but even the dog is surprised about it.

Limitations

As much as I tried to reduce the number of variables, a small LLM is still a small LLM at the end of the day. Other seeds, or the smallest change, are bound to give very different results. Changing the instruct format also has dramatic effects (anti-ChatML shitposting aside, it's the main reason why I ran Pantheon twice), and not always in the way you'd expect. I did give the models a fair shake in more casual settings (regenerating tons of outputs with random seeds), and while there are (large) variations, it tends to even out to the results above.

A few example outputs

I decided against copy-pasting all the models' results, as it would make the post way too big.

Example: Rogue Enchantress knows what a dog is

Rouge.png

Example: DragonMaid with me fucking up the test (I keep it because it's still kinda poetic / fun):

DragonMaidInsanity.png

Example: DragonMaid tested after I finally got to sleep.

DragonMaidFixed.png

Example: Dolphin beating RP models at RP'ing 1 (chain-of-thought-type output in dog talk for the last question, love it)

DolphinRP.png

Example: Dolphin beating RP models at RP'ing 2

DolphinRP2.png

Result Chart

| Model | Test 1 | Test 2 | Test 3 (1) | Average |
| --- | --- | --- | --- | --- |
| Rogue-Enchantress-7B-M0.2_ChatML_32K.Q8_0 | 4/5 | 4.5/5 | 2.5/5 | 3.66/5 |
| Stheno-L3-8B-v3.1_LLama3_8K.Q8_0-imat | 3.5/5 | 5/5 | 2.5/5 | 3.66/5 |
| Poppy_Porpoise-v0.72-L3-8B_Llama3_8K.Q8_0-imat | 2/5 | 2.5/5 | 2.5/5 | 2.33/5 |
| SOVLish-Maid-L3-8B_Llama3_8K.Q8_0 | 4/5 | 4/5 | 4.175/5 | 4.05/5 |
| Nyanade_Stunna-Maid-7B-v0.2-32K.Q8_0-imat | 5/5 | 5/5 | 2.92/5 | 4.3/5 |
| Llama-3-dragonmaid-8B-v2_ChatML_8K.Q8_0 | 4.5/5 | 3.5/5 | 2.92/5 | 3.64/5 |
| Pantheon-RP-L3-8B-1.0_ChatML_8K.Q8_0 (ChatML) | 1.5/5 | 2.5/5 | 0.835/5 | 1.61/5 |
| Pantheon-RP-L3-8B-1.0_ChatML_8K.Q8_0 (L3 instruct) | 3.5/5 | 1/5 | 3.75/5 | 2.75/5 |
| Dolphin-2.9.1-L3-8B_ChatML_8K.Q8_0 | 3.5/5 | 3/5 | 2.92/5 | 3.14/5 |
| SOVL-Mega-Mash-L3-8B_LLama3_8K.Q8_0 | 5/5 | 4/5 | 3.75/5 | 4.25/5 |
| Kunoichi-Lemon-Royale-v2-7B_ChatML_32K.Q8_0 | 5/5 | 4.75/5 | 4.175/5 | 4.64/5 |

(1) Test 3 scores were converted from -/3 to -/5 by multiplying by 1.67.
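
For clarity, here's the arithmetic behind the chart (using Kunoichi-Lemon-Royale's row as the example):

```python
# Test 3 is graded out of 3 while tests 1 and 2 are out of 5, so the chart
# rescales it by 1.67 before averaging the three tests.
def chart_row(test1: float, test2: float, test3_out_of_3: float):
    test3 = test3_out_of_3 * 1.67            # e.g. 2.5/3 -> 4.175/5
    average = (test1 + test2 + test3) / 3
    return round(test3, 3), round(average, 2)

# Kunoichi-Lemon-Royale-v2: 5/5, 4.75/5 and 2.5/3 -> (4.175, 4.64)
print(chart_row(5, 4.75, 2.5))
```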

Obligatory Cat :3
kitty.gif
-Sai

Pub: 25 May 2024 11:45 UTC
Edit: 27 May 2024 13:11 UTC