Helpful things to remember when choosing a model

For those new to the Large Language Model (LLM) scene, it can seem daunting when people throw around fancy tech-speak left and right, and all you wanna do is talk to the funny robot. This rentry seeks to help you understand some of these concepts. Because everyone deserves to know what they're downloading.


For a list of open-source LLMs, see TheBloke's HuggingFace page; there are hundreds to choose from. If you want to know which ones perform the best, visit the Open LLM Leaderboard.

Let's start off with an example. Take a look at this model name:

Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GPTQ

You might be wondering, "what the heck is this word salad I'm looking at?" But everything here has a meaning. Let's go over each piece of this model name.

Wizard-Vicuna - The base model name. In this case, it's WizardVicuna, a model that combines the data-handling of WizardLM with the conversational ability of VicunaLM.

13B - The number of Parameters, in billions (see the Parameters section below). It's essentially how many "brain cells" this model has.

Uncensored - If you've ever chatted with a censored/filtered model, you've certainly had the pleasure of running into "As an AI model..." now and then. An uncensored model is built without these moral limitations in mind. Use it at your own risk.

SuperHOT 8K - This model can use 8K context tokens. See the Token section and SuperHOT section below.

GPTQ - This model is quantized, which means it's compressed into a much smaller form than its original. This lets you run larger models with only a fraction of the GPU requirements.



Model name

  • Falcon, Vicuna, Orca, Guanaco, Alpaca, etc. These are the base model names. They indicate what the models are trained on and what they're typically used for (chatting, instruction, etc.); see this page for more details. Many of them are derivatives of the LLaMA model, hence the animal names.
  • Many are also combined models, which is why you'll often see multiple names in one (Pyg-Guanaco, Wizard-Vicuna, etc.)
  • Sometimes you will see models with a version number in the model names - for example, Vicuna 1.3 or Airoboros 1.4.

Parameters

  • The B in the model name (6B, 7B, 13B, etc.) refers to the number (in Billions) of parameters. Think of them like neurons in a brain. Parameters consist of weights (these control neuron connection strengths) and biases (these control neuron output). The weights and biases of a model work together to control its overall output. Essentially, the number of parameters relates to how complex the model is, and its general "intelligence".
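
If you're curious, you can count a model's parameters yourself. A minimal sketch using the Transformers library (gpt2 is used here only because it's small and quick to download):

```python
# Count the parameters of a Hugging Face model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # tiny model, used purely as an example
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")            # gpt2 prints roughly 0.12B
```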

Precision

  • When a model mentions 4bits, 8bits, etc., this is the level of precision. A lower number means the model will have slightly less accurate outputs, but will have a far smaller file size and far lower GPU (VRAM) requirements. Most times, the size-accuracy trade-off is worth it, which is why you will see a lot of models quantized (compressed) from 16bits down to 8bits or 4bits.
  • For example: a full-precision 6B model typically requires around 16GB of VRAM. A 6B model quantized to 8bits requires around 10GB of VRAM. And a 6B model quantized to 4bits requires around 6GB of VRAM. (A rough way to estimate this yourself is sketched below.)
  • fp16 and bf16 (bfloat16): These are floating-point storage formats. GPUs use these for complex calculations during model training. These are half-precision, compared to FP32, which is full-precision. (A mix of precision levels can also be used; this is called Mixed Precision training.) Each format has its own benefits and drawbacks, see this post for details.
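
Here's the rough arithmetic behind those VRAM numbers: each parameter takes (bits / 8) bytes, so the weights alone shrink linearly with precision. A minimal sketch (weights only; real VRAM usage is higher because of context, activations, and overhead, which is why the figures above are larger):

```python
# Back-of-the-envelope estimate of how much memory a model's weights take.
def approx_weight_gb(params_billions: float, bits: int) -> float:
    total_bytes = params_billions * 1e9 * bits / 8
    return total_bytes / 1024**3

for bits in (16, 8, 4):
    print(f"6B model at {bits}-bit: ~{approx_weight_gb(6, bits):.1f} GB of weights")
# 16-bit: ~11.2 GB, 8-bit: ~5.6 GB, 4-bit: ~2.8 GB (weights only, before overhead)
```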

HF

  • HF stands for Hugging Face, the site where most of these models are hosted and downloaded. A model with the HF label is in the standard Hugging Face Transformers format (pytorch_model.bin or model.safetensors files), which means it can be loaded with the Transformers library (see the Loaders section below).

Quantization

  • When something has been quantized, its data has been compressed to reduce its file size and system requirements.
  • q4_0, q5_1, q8_0, q8_1 etc:
    • q refers to the fact that the model is quantized to reduce its size.
    • 4, 5, 8, etc. refers to the number of bits of precision.
    • _0, _1, etc. refers to the quantization version. These are minor adjustments that slightly increase the accuracy of the model but also its size.
  • k_quants
    • You'll see this represented as q2_K, q3_K_M, q4_K_S, etc. These are simply further adjustments to the quantization method, which help reduce its size footprint even further.
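
To get an intuition for what quantization actually does, here's a toy sketch: round a block of weights to a handful of integer levels plus one scale factor. This only illustrates the general idea, not the real q4_0 or k-quant algorithms:

```python
import numpy as np

weights = np.random.randn(8).astype(np.float32)   # pretend these are model weights
bits = 4
levels = 2 ** (bits - 1) - 1                      # 7 usable levels per sign for 4-bit

scale = np.abs(weights).max() / levels                   # one scale shared by the whole block
quantized = np.round(weights / scale).astype(np.int8)    # what gets stored on disk
restored = quantized.astype(np.float32) * scale          # what the model computes with

print("original:", weights)
print("restored:", restored)                      # close to the original, slightly less precise
```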

GPTQ

  • A model with the GPTQ label means it has been quantized to 4bits using either AutoGPTQ or GPTQ-for-LLaMa. This dramatically lowers the GPU requirements, at a small precision cost.
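
A minimal sketch of loading and running a GPTQ model with AutoGPTQ (the repo name just assumes TheBloke hosts the example model from the top of this page; some repos also need a model_basename argument):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GPTQ"  # placeholder repo name
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```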

GGML

  • Models with this label use GGML quantization, which allow the model to run on CPU (RAM), using llama.cpp. No expensive GPU necessary. A 6B model, for example, will require about 4GB of RAM, but no VRAM is required. Recent developments in llama.cpp have also allowed GGML models to utilize a GPU/CPU split, which means you can run large models (13B+) on even lower-end machines at a decent speed.
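
A minimal sketch using the llama-cpp-python bindings for llama.cpp (the file name is a placeholder; n_gpu_layers=0 keeps everything on the CPU, higher values offload layers to the GPU):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./wizard-vicuna-13b.ggmlv3.q4_0.bin",  # placeholder GGML file
    n_ctx=2048,        # context tokens
    n_gpu_layers=20,   # set to 0 for CPU-only
)
out = llm("Q: Name three animals that appear in LLM model names. A:", max_tokens=64)
print(out["choices"][0]["text"])
```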

QLoRA

  • QLoRA uses bitsandbytes to quantize a base model down to 4 bits and then fine-tunes a small LoRA adapter on top of it, which greatly reduces the memory needed and makes fine-tuning easier.
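
A minimal sketch of the QLoRA idea using transformers, bitsandbytes, and peft (the model name and LoRA hyperparameters are just example values): the base model is loaded frozen in 4-bit, and only a small LoRA adapter gets trained.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize the frozen base model to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                   # example base model
    quantization_config=bnb_config,
    device_map="auto",
)

lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora)           # only the tiny LoRA weights are trainable
model.print_trainable_parameters()
```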

Perplexity

  • Perplexity measures how "confused" a model is while predicting text, so lower values are better. For example, a perplexity of 8 roughly means that, on average, the model is as uncertain as if it were choosing between 8 equally likely next words. A model with a lower value is able to predict the next word with greater accuracy.
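
Under the hood, perplexity is just the exponential of the average negative log-likelihood the model assigns to the correct next token. A toy sketch with made-up probabilities:

```python
import math

# P(correct next word) that a hypothetical model assigned at each step
probs = [0.25, 0.10, 0.50, 0.05]
avg_nll = sum(-math.log(p) for p in probs) / len(probs)
perplexity = math.exp(avg_nll)
print(perplexity)   # ~6.3, i.e. "about 6 plausible choices per word on average"
```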

Fine-tuning

  • This means to train and customize a base model, either to improve on it or to make something new and different. LoRAs are the easiest way to do so.

LoRA

  • LoRA (Low-Rank Adaptation) is a lightweight fine-tuning method. Instead of retraining all of a model's weights, it trains a small set of extra "adapter" weights on top of the base model, which takes far less time and VRAM. The resulting LoRA can be loaded alongside the base model or merged into it (see Merged weights below).

Epoch

  • An epoch is one full pass through a given data set when training a model. Depending on the training tool, an "epoch" can be defined as either the entire data set or just a portion of it.

Learning rate

  • This controls how fast the model is changed during training. It's written in scientific notation: for example, 3e-4 means 3 * 10^-4, which is 0.0003. The higher the value, the faster the model learns, but the more likely it is to lose prior data in the model.
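
In a single, very simplified training step, the learning rate just scales how far each weight moves:

```python
learning_rate = 3e-4    # the same number as 0.0003
weight = 0.5            # one parameter inside the model
gradient = 2.0          # direction/size of the change that would reduce the loss

weight = weight - learning_rate * gradient
print(weight)           # 0.4994 - a tiny nudge; a higher rate makes bigger, riskier nudges
```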

Loss

  • When training a model, loss is the number that measures the difference between the output of the model you're training and the dataset you're feeding it. If it's 0, that means the model has forgotten its original data and is outputting exactly what you fed into it, which is bad if you want the model to have some form of creativity and originality in its answers. Generally, try to keep the loss above 1.0, unless you're training a very precise model.

Merged weights

  • This means to take the weights of the original model, and the weights of the LoRA, and combine them into a single set of weights.
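
A minimal sketch of doing this with the peft library (both names are placeholders for whatever base model and LoRA you're merging):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")   # placeholder base model
model = PeftModel.from_pretrained(base, "path/to/your-lora")         # attach the LoRA adapter
merged = model.merge_and_unload()                                    # fold it into one set of weights
merged.save_pretrained("./merged-model")
```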

SFW bias

  • Some models include "Uncensored" in their name to indicate that their training has not been biased away from NSFW results. Because they are not restricted in their results (no "As an AI model..."), uncensored models are generally more intelligent and responsive.

CoT

  • Chain of Thought (CoT) prompting encourages the bot to explain its reasoning when giving answers.

SuperCOT

  • SuperCOT is a LoRA that utilizes CoT to improve the reasoning capability of models.

Mode

  • Instruct/Chat/Storytelling: When a model has one of these labels, it means the model is specialized for that purpose. Instruct models specialize in Q/A formats, similar to ChatGPT. Chat / dialogue / roleplay models specialize in natural language conversations, similar to Character.AI. And storytelling models specialize in crafting long in-depth narratives similar to NovelAI/AIDungeon.

Token

  • In the context of language models, a token is a common sequence of text characters. 1 token is approximately 4 characters. This tokenizer will help you understand how a piece of text translates into tokens used by language models.
    • Context tokens - this is the number of tokens that a model can keep in memory. Most open-source models use about 2K context tokens. This is why, when making a character for a chatbot for example, it's best to keep descriptions short and simple, in order to use fewer context tokens. Recent advancements have allowed open models to use 8K context tokens and beyond (when you see a model with "Long" in the name, this is what it refers to), although more context requires more GPU.
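
If you'd rather count tokens locally, here's a minimal sketch with a Hugging Face tokenizer (the LLaMA tokenizer is used as an example; different models split text slightly differently):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
text = "A wizard and a vicuna walk into a bar."
tokens = tokenizer.encode(text)
print(len(tokens))                              # roughly one token per word or word-piece
print(tokenizer.convert_ids_to_tokens(tokens))  # see how the text was split up
```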

Shard

  • When a model is sharded, its data has been divided into smaller blocks, so that loading the model uses a lot less memory.

Loaders

These are the libraries that the models use to run. If you are wondering which one you should use: ExLlama or ExLlama_HF if you have a big enough GPU, and llama.cpp otherwise. Use the other ones only if you have a reason to.

Transformers

  • [Transformers](https://github.com/huggingface/transformers) is the biggest and most famous library for running Large Language Models, and possibly one of the oldest. It was created by a company called Hugging Face, which is where we usually download our models from. It supports many models and has many features, but it's slow and wastes GPU memory.
  • GPTQ-for-LLaMa is a GitHub project where a guy called qwopqwop200 brilliantly showed how we can run the LLaMA models in 4-bit precision using only 25% of the original GPU memory requirements, while retaining most of their accuracy. GPTQ was invented by a group of academics, and qwop adapted it for LLaMa and made it practical to use.
  • AutoGPTQ is an attempt at standardizing GPTQ-for-LLaMa and turning it into a library that is easier to install and use, and that supports more models. Unfortunately, it is quite a bit slower than GPTQ-for-LLaMa and ExLlama.
  • ExLlama is a highly optimized library for running GPTQ models. The author is very knowledgeable in low-level GPU programming, and the result is an implementation that is VERY fast and uses much less memory than GPTQ-for-LLaMa or AutoGPTQ.
  • ExLlama_HF is a way to use ExLlama as if it was a transformers model. Transformers implements many parameters like top_k, top_p, etc, that this library reuses without any modifications. It was contributed in a recent Github Pull Request (PR) by Larryvrh.
  • llama.cpp is a library created by a guy named Georgi Gerganov showing that you can run Large Language Models with good speed without any GPU. Whereas all of the above loaders use GPTQ, this one uses its own model file format (GGML). Recently, llama.cpp was updated to include GPU offloading, which lets GGML models split work between the GPU and CPU; this means you can run large models (13B+) on even lower-end machines at a decent speed.

Transformers loads 16-bit or 32-bit models that look like this: pytorch_model.bin or model.safetensors. GPTQ-for-LLaMa/AutoGPTQ/ExLlama/ExLlama_HF all load GPTQ models (a different format).
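
For reference, a minimal sketch of the Transformers path (the model ID is just an example; this loads the 16-bit weights and generates a short reply):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lmsys/vicuna-13b-v1.3"   # example model in the Transformers format
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("USER: Hello!\nASSISTANT:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```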

Inference

  • Inference is simply the model processing your input and turning it into an output. When people talk about inference times, this is what they're referring to.

Monkeypatch

  • Monkeypatches are small bits of code that you apply to a program to resolve a bug or extend functionality, but only to the program on your own machine. It's essentially the equivalent of modding a video game, as opposed to the game devs pushing out an update for all machines. It's a quick and dirty fix and may have unexpected results.

LangChain

  • LLMs by themselves are "isolated". They can only guess when it comes to very specific things (like programming or medical knowledge) that lie outside of their dataset. LangChain helps improve LLMs by connecting them to outside sources of information.

CUDA

  • CUDA is the toolkit necessary to run an AI model on NVIDIA GPU cards.

ROCm

  • ROCm is the equivalent toolkit for AMD GPUs. Note: it cannot be used on Windows.

Pip

  • Pip is a tool used to install Python packages from the Python Package Index (PyPI). It's included with Python by default. You'll see this a lot when installing models manually.

Conda

  • Conda is an open-source installer that manages packages from the Anaconda repository and the Anaconda cloud. It's language-agnostic, meaning it can be used with languages outside of Python. A Windows-friendly minimal installer is available at miniconda.

PyTorch

  • PyTorch is a tensor library used for model training using GPU and CPU. An alternative to TensorFlow.

Tensor

  • In data science, a tensor is (to boil it down as simply as possible) an array of data values.

Safetensors

  • A way to store tensor data safely in .safetensors files. The older and less-safe way is to use .pt or .ckpt files, which are zip archives containing pickled Python objects ("pickle" is Python's serialization format); a malicious pickle can execute arbitrary code when the file is loaded. Read more about .safetensors and .ckpt files here.
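
A minimal sketch of writing and reading a .safetensors file; it stores plain tensor data, with no pickled code that could run on load:

```python
import torch
from safetensors.torch import save_file, load_file

save_file({"weight": torch.randn(2, 2)}, "example.safetensors")  # just numbers, no code
tensors = load_file("example.safetensors")
print(tensors["weight"])
```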

TheBloke

  • The absolute GOAT.