A simple dictionary for AI Roleplay

Hello, hello! If you're here, it's probably because you've recently developed an interest in AI roleplay/chatbots, and you're feeling a bit lost. That's completely understandable. The world of LLM (AI, if you prefer) roleplay can be overwhelming when you're just starting out. There are so many new terms, quirks, and things to learn. Typically, people learn by poking around and reading various documentation. But here's the problem: much of that documentation can be too technical for beginners.

But fear not! I've made this dictionary as user-friendly as possible. It's designed to be a simple starting point for newcomers. Of course, as I mentioned, it's just that, a starting point. I won't delve into too many details, but you'll find more specific resources if needed. I encourage you to keep reading and exploring. I've aimed to make it as universal as possible, although it's tailored with Venus Chub in mind. Don't worry, the concepts remain largely the same with other front ends. So, are you ready? Let's dive in!

First steps

Front end
This is the place where you chat. Usually a website, like Venus Chub.

Back end
The behind-the-scenes. In this context, it's usually the API and the model. Not something you can access unless you run it locally... but if you were running it locally you wouldn't be here.

API
Probably one of the terms you will see the most when searching about AI roleplay. But what is it? Good question. It's simply the thing that connects you to the model. You can find API keys on various websites, and you need a key to be able to chat with a bot.
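
To make this less abstract, here is a rough sketch of what a front end does behind the scenes when you hit send, assuming an OpenAI-compatible API. The URL, key, and model name are placeholders, not real values:

```python
import requests

# Hypothetical OpenAI-compatible endpoint. Your front end fills these
# in from your settings; the URL and model name here are placeholders.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "your-secret-key"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "some-model-name",
        "messages": [
            {"role": "system", "content": "You are Aria, a cheerful innkeeper."},
            {"role": "user", "content": "Hi! Do you have a room for the night?"},
        ],
        "max_tokens": 300,   # see 'Max new tokens' further down
        "temperature": 0.8,  # see 'Temperature' further down
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

In short: the front end packs your chat into a request, the API key proves you're allowed to ask, and the model sends an answer back.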

Model
If I say 'the AI', it may be easier to understand. Even if, technically, AI is not the right term. It would be LLM (for Large Language Model). Basically, it's what makes everything work. The model generates the answers you get from the bots. You have a lot of different models available, free or paid, censored or uncensored. If you don't want to bother with an API key, Venus Chub does have its own models, paid, with the Mars and Mercury subscriptions. If you want more information on models, there is a guide written by StatuoTW about available models, their pros, and their cons.

Character cards
It's your character, or your bot, if you prefer. All the information about your character lives in its character card.

Preset
A preset is a premade set of prompt structure and generation parameters.

Input
The input is what is sent to the model.

Output
The output is what is received from the model.

V2 Card
A V2 card is a card that allows you to use alternate greetings, Pre/Post History overrides, and embedded lorebooks.

Prompt

The prompt is what is sent to the LLM. It includes:

  • Pre-History
  • Scenario
  • Persona
  • Lorebooks
  • Chat history
  • Author Note
  • Post History

And everything else.
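
If you're curious how those pieces come together, here is a very simplified sketch of how a front end might glue them into one prompt. The exact order and formatting depend on your front end and preset; the function below is purely illustrative:

```python
# Purely illustrative: a front end assembling the prompt pieces in
# order before sending them to the model. Real front ends use presets
# to control the exact order and formatting.
def build_prompt(pre_history, scenario, persona, lorebook_entries,
                 chat_history, author_note, post_history):
    parts = [
        pre_history,               # system prompt / writing instructions
        scenario,
        persona,
        "\n".join(lorebook_entries),
        "\n".join(chat_history),   # oldest messages get trimmed first
        author_note,
        post_history,              # 'strong' instructions / jailbreak
    ]
    return "\n\n".join(part for part in parts if part)
```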

About the models

Context size
The context size of a model is like its memory. If you set it higher than the model's limit, it won't work properly. You will get errors, weirdness, in a word: nonsense. We count the context size in tokens. However, you can continue chatting even if you go over the limit. Let me explain! In the majority of front ends, you have access to generation settings. This is where you need to keep your context size under the limit, or you will have problems. If your limit is correct in the settings, you will be able to chat indefinitely. The model will simply not remember the oldest temporary tokens.

Tokens
Nice transition! What is a token? It's a unit of information. In other words, it's a piece of a word, or a symbol. Usually, a word breaks into one to a few tokens, but there isn't a strict rule. It depends on the word, on the symbol, even on the model. Every model has a different way to count tokens. There are two kinds of tokens: permanent and temporary.
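
If you want to see tokenization with your own eyes, OpenAI's tiktoken library can show you where a sentence gets cut. Other models use different tokenizers, so treat the counts as a rough guide:

```python
import tiktoken  # pip install tiktoken

# Tokenizer used by several OpenAI models; other models count differently.
enc = tiktoken.get_encoding("cl100k_base")

text = "Hello! Tokenization is surprisingly uneven."
tokens = enc.encode(text)
print(len(tokens), "tokens")
# Decode each token individually to see where the cuts fall.
print([enc.decode([t]) for t in tokens])
```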

Permanent tokens:

These are the tokens that will always be sent to the model. They consist of several elements:

  • System prompt or pre-history
  • Jailbreak or post-history
  • Persona
  • Character definition
  • Scenario
  • Character note
  • Prompt note

Temporary tokens

As you guessed, these are what get removed when you exceed the context size. The oldest temporary tokens are removed first (see the sketch after this list). These are the parts using temporary tokens:

  • Chat history
  • Example messages
  • Lorebook (usually)
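
Here is the promised sketch: a toy version of the 'forgetting' described above, assuming a count_tokens helper like the tiktoken example earlier. The oldest messages are dropped until everything fits:

```python
# Toy illustration of context trimming: drop the oldest chat messages
# until the temporary tokens fit in whatever room the permanent tokens left.
def fit_to_context(permanent_tokens, chat_messages, count_tokens,
                   context_size):
    budget = context_size - permanent_tokens
    kept = list(chat_messages)
    while kept and sum(count_tokens(m) for m in kept) > budget:
        kept.pop(0)  # the oldest temporary tokens go first
    return kept
```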

The number of parameters
If you've already searched for information about models, you may have come across terms like 7B parameters, or 70B parameters, or any number of parameters. This refers to how complex a model is. A model with a higher number of parameters can understand more and stay more coherent, but that doesn't necessarily mean it's better.

Prompt structure

Pre History Instructions / System prompt
It's where you instruct the bot on how it should write.

Post History Instruction / Jailbreak
This is a bit more complex, because post history instructions and jailbreaks are not the same thing. They don't serve the same purpose:

  • Post history instructions are 'strong' instructions. The most important ones. And, no, you can't put all your instructions here. Well, you could, in theory. No one will stop you, but the messages you will get from the model may not have the best quality.
  • Jailbreaks are specific to censored models. Basically, it's gaslighting. It's used to allow the chat to walk into censored territory by making the model think it is allowed to go there.

Impersonation prompt
Sometimes, you may want to take a break and let the model talk for you. Well, when you press the right button, this prompt will be sent, and the model will write a message as you. You can keep the default one.

Prompt note
This is where you can write any text you want, at any depth you choose. It is a little technical, but it's not absolutely necessary. Just remember, it counts toward your permanent tokens, so don't fill it completely. There are many ways to use it, like nudging the chat in a certain direction.

Depth
The order in which the parts of your prompt are sent to the model has a big influence on the output. The things closer to 0 are more important for the bot, with 0 being the message you just sent. Depth simply refers to how deep in the prompt something sits.
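
A small hypothetical illustration, since depth is easier to see than to describe. Inserting a note at depth 2 means it lands two messages before the end of the chat history:

```python
# Hypothetical example: a prompt note placed at depth 2 lands two
# messages before the end of the chat history (depth 0 = the very end).
chat = ["msg 1", "msg 2", "msg 3", "msg 4", "msg 5"]
note = "[The tavern suddenly goes quiet.]"
depth = 2

chat.insert(len(chat) - depth, note)
print(chat)
# ['msg 1', 'msg 2', 'msg 3', '[The tavern suddenly goes quiet.]',
#  'msg 4', 'msg 5']
```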

Generation parameters

Now we are getting a bit more technical. I will try to stay as beginner-friendly as possible. If you need generation settings to start with, you already have Glub's recommended settings for Mars and for Mercury, Blue's recommended settings for NovelAI, and some presets from the Chub Discord's moderators and server helpers.

Temperature
Temperature determines how random an output will be. With a lower (colder) temperature, the model plays it safe and sticks to the most likely words from its training. A higher (hotter) temperature allows the model to be more 'creative' by giving less likely words a chance. If you set it too high, the messages you get may stop making sense, or the bot may start rambling.
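
Under the hood, temperature divides the model's raw scores before they are turned into probabilities: low values sharpen the distribution, high values flatten it. A minimal sketch with made-up scores:

```python
import math

# Temperature reshapes token probabilities: raw scores ('logits') are
# divided by the temperature before the softmax, so a low temperature
# sharpens the distribution and a high one flattens it.
def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # sharp: the favourite wins
print(softmax_with_temperature(logits, 1.5))  # flat: more randomness
```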

Repetition penalty
This parameter penalizes the direct repetition of words within the generated text. When the repetition penalty is set too low, the model may produce repetitive sentences, sometimes even in succession. Increasing the repetition penalty encourages the model to use more varied vocabulary and avoid repetitive language. However, setting it too high may lead to overly flowery or poetic speech, just like Shakespeare.

Frequency penalty
The frequency penalty parameter penalizes words based on how often they have been used in the text so far. Each word is assigned a score that decreases each time it appears, making it less likely to be used again. Setting the frequency penalty too high can cause the model to avoid common words such as "the" and "is," negatively impacting the coherence and fluency of the generated text.

Presence penalty
The presence penalty parameter tracks whether a word has already been used once in the input text. If a word has already appeared, the model will attempt to avoid using it again. This penalty encourages diversity in vocabulary usage and helps prevent the repetition of words within the generated text.
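
To tie the three penalties together, here is a sketch in the spirit of common implementations (OpenAI documents frequency and presence penalties as simple subtractions; many local back ends implement repetition penalty as a divide/multiply). Exact formulas vary per back end:

```python
# How the three penalties might adjust a token's raw score ('logit'),
# given how many times that token already appeared ('count').
def apply_penalties(logit, count, rep_penalty, freq_penalty, pres_penalty):
    if count > 0:
        # Repetition penalty: divide positive scores, multiply negative
        # ones, so any already-used token becomes less likely.
        logit = logit / rep_penalty if logit > 0 else logit * rep_penalty
    # Frequency penalty: subtract more the more often the token appeared.
    logit -= count * freq_penalty
    # Presence penalty: one flat subtraction once the token has appeared.
    if count > 0:
        logit -= pres_penalty
    return logit
```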

Top P
Each token has a probability of appearing. Top P (also called nucleus sampling) controls how many tokens the model will consider based on their cumulative probability. The model ranks the tokens from the most likely to the least likely, adds up their probabilities until the total reaches the Top P, and then picks randomly among the tokens it kept. Need an example? No problem! For roleplay, just replace the students' answers with tokens.

Let's say you have five students who had to answer the same question. The model needs to select one answer, with a Top P of 0.70.

Student A's answer has a probability of 0.03, student B's = 0.50, C's = 0.07, D's = 0.15, and E's = 0.25.

The model will first rank them from the highest probability to the lowest: B's = 0.50, E's = 0.25, D's = 0.15, C's = 0.07, A's = 0.03.

Then, it will add the probabilities in that order until it reaches the Top P, here, 0.70.

So, 0.50 (B) + 0.25 (E) = 0.75

Since 0.75 reaches 0.70, the model stops there and picks randomly between B and E. It will not use the answers of students D, C, and A.
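
The same selection, as a short sketch with the students standing in for tokens:

```python
import random

# Nucleus (Top P) sampling: rank tokens by probability, keep the
# smallest group whose probabilities add up to at least top_p, then
# pick randomly inside that group.
def top_p_sample(probs, top_p):
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        total += p
        if total >= top_p:
            break
    tokens, weights = zip(*kept)
    return random.choices(tokens, weights=weights)[0]

probs = {"A": 0.03, "B": 0.50, "C": 0.07, "D": 0.15, "E": 0.25}
print(top_p_sample(probs, 0.70))  # only B or E can ever come out
```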

Top A
Each token has a probability of appearing. Top A also limits how many tokens the model will consider, but instead of a cumulative sum, it uses a cutoff based on the single most likely token: any token whose probability is below Top A × (highest probability)² is discarded. An example?

Let's say you have five students who had to answer the same question. The model needs to select one answer, with a Top A of 0.70.

Student A's answer has a probability of 0.03, student B's = 0.50, C's = 0.07, D's = 0.15, and E's = 0.25.

The highest probability is B's 0.50, so the cutoff is 0.70 × 0.50² = 0.175.

Only B (0.50) and E (0.25) are above the cutoff, so the model will pick between them. It will not use the answers of students D, C, and A.
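
And the same cutoff in code, following the commonly used Top A formula (a token's probability must be at least Top A times the highest probability squared):

```python
# Top A filtering: keep only tokens whose probability is at least
# top_a * (highest probability) ** 2.
def top_a_filter(probs, top_a):
    cutoff = top_a * max(probs.values()) ** 2
    return {token: p for token, p in probs.items() if p >= cutoff}

probs = {"A": 0.03, "B": 0.50, "C": 0.07, "D": 0.15, "E": 0.25}
print(top_a_filter(probs, 0.70))  # {'B': 0.5, 'E': 0.25}
```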

Top K
The Top K controls how many tokens the model will consider based on how likely they are. It selects the top K tokens with the highest probabilities. Example ?

Let's say you have five students who had to answer the same question. The model needs to select one answer, with a Top K of 3.

Student A's answer has a probability of 0.03, student B's = 0.50, C's = 0.07, D's = 0.15, and E's = 0.25.

The model will first rank them from the highest probability to the lowest: B's = 0.50, E's = 0.25, D's = 0.15, C's = 0.07, A's = 0.03.

Then, the model will choose the 3 possibilities with the highest probabilities (because Top K = 3).

So, B, E and D.

The model will not consider the answers of students A and C, as their probabilities are lower than those selected in the Top K.
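
Top K is the simplest of the three to sketch:

```python
# Top K: keep the K most likely tokens and ignore the rest.
def top_k_filter(probs, k):
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

probs = {"A": 0.03, "B": 0.50, "C": 0.07, "D": 0.15, "E": 0.25}
print(top_k_filter(probs, 3))  # {'B': 0.5, 'E': 0.25, 'D': 0.15}
```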

Minimum length
Venus Chub's APIs (Mars and Mercury) allow messages to have a minimum length. In other words, how many tokens the bot's messages should contain at the very least.

Max new tokens
This determines the maximum length of a character's message. Again, be cautious. You can set it very high, but the model may try to fill the space with nonsense or end the roleplay prematurely.

Context size
The context size of a model is like its memory. It determines how many tokens it can process before forgetting information. If you set it higher than the model's limit, it won't work properly, if at all. If you set it correctly, only the oldest part of your chat history will be forgotten.

Chat settings

Lorebooks
An encyclopedia for your world that your character can use.

Group chat
Quite obvious. When you want to talk to more than one bot at once, you create a group chat. The characters can also interact with each other.

Persona
The persona is your character. Not the character, as in the bot, but the character as you. It's what you look like, the things everyone knows about you. Your own little character card.

And... this is all! Remember, I tried to keep everything as simple as possible, without going into details. It's just to avoid feeling completely lost while learning about AI roleplay. You may have noticed that I didn't go deeper into character cards or lorebooks; it's simply because you already have a lot of resources about both.

Have fun!
