An Explanation of Stable Diffusion Textual Inversion Embeddings and How to Create Them
After answering numerous questions about textual inversion embeddings, I am putting this document together to serve as yet another source of information. Compiled here is the information and understanding of what textual inversion embeddings are and how to create them that I have acquired over many weeks of research, investigation and trial and error.
What is a prompt? And, by extension, a textual inversion embedding
-
Introduction to the prompt
The simple answer is that a prompt is the string of words input into Stable Diffusion as the description of the image to be generated. At its very core, "a textual inversion embedding is a prompt": the structure and function of an embedding and a prompt are the same. The key difference is that an embedding is a prompt whose meaning (the description it provides) has been fine tuned using reference images provided during the training of the embedding. Since there is so little difference between the two, understanding how the prompt functions in the process of creating an image with Stable Diffusion will also help with understanding how textual inversion embeddings are trained, and how they function within the prompt to generate an image.
-
Stable Diffusion structure and image generation process
Shown below is a simplified diagram of the overall structure of Stable Diffusion.
A quick explanation of the process of generating an image using Stable Diffusion is as follows:
- First, the user prompt is converted into a text embedding using the CLIP text encoder (this will be covered in the next section), and the seed value is used to generate the random noise that fills the initial latents
- Second, the text embedding and initial latents are fed into the UNet, the large block in the center of the diagram. The UNet is the main model, i.e. the ckpt weights
- Third, the output from the UNet (ckpt weights), labeled in the diagram as the "conditioned latents", is passed into the scheduler algorithm, which a Stable Diffusion user would know as the sampler or sampling method; Euler A and DDIM are examples of scheduler algorithms
- Fourth, the output latents from the scheduler algorithm (sampler) are fed back into the UNet (ckpt weights), and this process is repeated in a loop for the number of scheduler (sampling) steps the user has chosen
- Finally, once the chosen number of scheduler (sampling) steps has been completed, the conditioned latents are instead fed into the variational autoencoder decoder, or VAE, which converts them into the final output image
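The five steps above can be sketched in code. The following is a minimal sketch using the Hugging Face diffusers and transformers libraries; the checkpoint id, prompt, step count and seed are illustrative assumptions, and classifier-free guidance (the negative prompt) is left out to keep the loop readable.

import torch
from diffusers import AutoencoderKL, EulerAncestralDiscreteScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"   # assumed SD 1.5 checkpoint in diffusers format
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)          # the main model / ckpt weights
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") # the sampler (Euler A here)

prompt, steps, seed = "An astronaut riding a horse", 20, 42

# Step 1: prompt -> token ids -> text embedding, and seed -> random initial latents
token_ids = tokenizer(prompt, padding="max_length", max_length=tokenizer.model_max_length,
                      truncation=True, return_tensors="pt").input_ids.to(device)
with torch.no_grad():
    text_embedding = text_encoder(token_ids).last_hidden_state        # shape [1, 77, 768] for SD 1.x

generator = torch.Generator(device=device).manual_seed(seed)
latents = torch.randn((1, unet.config.in_channels, 64, 64), generator=generator, device=device)
scheduler.set_timesteps(steps)
latents = latents * scheduler.init_noise_sigma

# Steps 2-4: the UNet predicts noise from the latents + text embedding, the scheduler
# (sampler) uses that prediction to update the latents, and the loop repeats
for t in scheduler.timesteps:
    latent_input = scheduler.scale_model_input(latents, t)
    with torch.no_grad():
        noise_pred = unet(latent_input, t, encoder_hidden_states=text_embedding).sample
    latents = scheduler.step(noise_pred, t, latents).prev_sample

# Step 5: the VAE decoder converts the final conditioned latents into the output image
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample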
With an overview of the process for generating an image using Stable Diffusion out of the way, it's time to dive even deeper into the section of the diagram contained within the red box: the user prompt, CLIP text encoder and text embedding.
-
The prompt explained
The prompt is the string of words that the user provides as the description of the image they want generated. In order for Stable Diffusion to generate an image based on the user prompt, the prompt must first undergo a number of steps to transform it from words into a form that the diffusion model can use.
-
Tokenizer
The first process that the user prompt undergoes in its transformation to be used in the Stable Diffusion system is tokenization. Tokenization is a simple process where the text of the prompt is converted into numerical values representing whole words, sequences of letters, numbers or symbols. This is done by the tokenizer, which can be simply described as a thesaurus providing a numerical id value as an alternative to whatever term is being looked up.
The following image is a demonstration of the tokenizer in action utilizing the tokenizer extension for the Automatic1111 webui. You can find the extension in the extension manager for the webui, or at this link https://github.com/AUTOMATIC1111/stable-diffusion-webui-tokenizer.git. The extension allows you to enter a line of text and convert that text to its corresponding token ids, or enter token ids and see the text associated with those id values.
The leftmost section of the image shows the text "An astronaut riding a horse", in the middle is the same text but with lines connecting the words of the prompt to the corresponding token id values (550, 18376, 6765, 320, 4558), and on the right is a much longer string of text, "a cat sitting at a desk typing on a computer". One important thing to notice about the second prompt is that not all words have a direct token representation. The text "a cat sitting at a desk typing on a computer", which is ten words long, is broken into eleven token id values (320, 2368, 4919, 536, 320, 6550, 20102, 525, 320, 11639, 652). This happens because the word "computer" is a compound term which gets split into two token ids, (11639) and (652), corresponding to the character sequences "compu" and "ter".
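The same tokenization can be reproduced outside the webui. Below is a small sketch assuming the Hugging Face transformers library and the openai/clip-vit-large-patch14 tokenizer (the same BPE vocabulary used by Stable Diffusion 1.x); the printed ids should match what the tokenizer extension shows.

from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

for prompt in ["An astronaut riding a horse",
               "a cat sitting at a desk typing on a computer"]:
    # add_special_tokens=False drops the <|startoftext|> / <|endoftext|> markers so that
    # only the ids for the prompt text itself are printed
    ids = tokenizer.encode(prompt, add_special_tokens=False)
    pieces = tokenizer.convert_ids_to_tokens(ids)
    print(prompt)
    print(list(zip(pieces, ids)))   # compound words like "computer" appear as multiple pieces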
Tokenization in Stable Diffusion 1.4/1.5 models and derivatives vs Stable Diffusion 2.0/2.1 models and derivatives
While Stable Diffusion 1.4/1.5 and models derived from those versions use the CLIP ViT-L/14 text encoder, with the introduction of Stable Diffusion 2.0/2.1 the text encoding system was changed to use OpenCLIP-ViT/H. For the tokenizer component there is no difference and the token ids for both versions of CLIP are the same.
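A quick, hedged way to check this claim is to load the tokenizer from a 1.x checkpoint and a 2.x checkpoint and compare the ids they produce; the repository ids below are assumptions, and any diffusers-format checkpoints of each version should behave the same.

from transformers import CLIPTokenizer

tok_v1 = CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="tokenizer")
tok_v2 = CLIPTokenizer.from_pretrained("stabilityai/stable-diffusion-2-1", subfolder="tokenizer")

prompt = "a cat sitting at a desk typing on a computer"
ids_v1 = tok_v1.encode(prompt, add_special_tokens=False)
ids_v2 = tok_v2.encode(prompt, add_special_tokens=False)
print(ids_v1 == ids_v2)   # expected True: both versions share the same BPE vocabulary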
-
CLIP
The CLIP encoding is the second process, after tokenization, that the user prompt undergoes before it is passed to the main model. CLIP, which stands for Contrastive Language-Image Pre-training, is a neural network model all on its own. This neural network is trained on text-image pairs, with its primary goal being to form a mapping between text and image via intermediary output values. To explain that a little more: once the CLIP encoder has been trained, the values the network outputs when given an image will be similar to the values it would output when given a text description of that same image (the image and the text both map to the same or similar output values). To train the CLIP model, it is given a large number of text-image pairs along with an answer key as to whether or not the text is an accurate description of the image. The CLIP model then proceeds to learn by guessing whether or not the input text is an accurate description of the paired input image, checking how it did against the provided answer, and updating itself based on the results.
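The shared text/image mapping can be seen with the full CLIP model: give it one image and several candidate descriptions, and the description whose output values sit closest to the image's output values scores highest. A short sketch, assuming the transformers library; the image path is a placeholder.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("example.png")          # placeholder: any image you want to test
captions = ["an astronaut riding a horse",
            "a cat sitting at a desk typing on a computer"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# Higher probability = that caption maps to output values closer to the image's output values
print(outputs.logits_per_image.softmax(dim=-1))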
In the case of Stable Diffusion 1.4/1.5, the output from the CLIP encoder takes the form of 768 weight values for each token passed from the tokenizer step. The image from the previous section showing the structure of Stable Diffusion refers to the "Frozen CLIP Text Encoder"; this is because once the CLIP model has been trained it is locked or "frozen" and the values within the model are no longer changed. This essentially turns it into a dictionary where looking up a particular token id returns a "definition" of sorts, in the form of the output weight values for that token.
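As a minimal sketch of that "frozen dictionary" behaviour (the checkpoint id is an assumption), the text encoder can be loaded with its weights locked, and the same token ids will always look up the same 768 output weight values per token.

import torch
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"   # assumed SD 1.5 checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
text_encoder.requires_grad_(False)            # "frozen": the encoder weights are never updated

ids = tokenizer("An astronaut riding a horse", padding="max_length",
                max_length=tokenizer.model_max_length, return_tensors="pt").input_ids
with torch.no_grad():
    first = text_encoder(ids).last_hidden_state
    second = text_encoder(ids).last_hidden_state

print(first.shape)                 # torch.Size([1, 77, 768]): 768 weight values per token position
print(torch.equal(first, second))  # True: the same token ids always return the same values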
CLIP Encoder in Stable Diffusion 1.4/1.5 models and derivatives vs Stable Diffusion 2.0/2.1 models and derivatives
The CLIP ViT-L/14 text encoder used in Stable Diffusion versions 1.4 and 1.5 returns 768 weight values per token id. OpenCLIP-ViT/H, the version adopted for use in Stable Diffusion 2.0 and 2.1, instead outputs 1024 weight values per token id. This is the primary reason why textual inversion embeddings created in different base versions (1.4/1.5 vs 2.0/2.1) are incompatible. An embedding created in Stable Diffusion 1.4/1.5 will work in either 1.4 or 1.5 as well as any models derived from either, but will be incompatible with any 2.0/2.1 model. The same holds true in the other direction, with embeddings created using 2.0/2.1 models not functioning with 1.4/1.5 models.
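This incompatibility is visible if you open an embedding file and look at the width of its vectors. The sketch below is hedged: the filename is a placeholder, and the exact dictionary layout varies between training tools (embeddings trained in the Automatic1111 webui typically keep their vectors under a 'string_to_param' key, while other trainers may map the token name directly to a tensor).

import torch

data = torch.load("my-embedding.pt", map_location="cpu")   # placeholder filename
# Automatic1111-style files keep their vectors under 'string_to_param'; fall back to the
# top-level dict for trainers that map the token name straight to a tensor
params = data.get("string_to_param", data)

for name, tensor in params.items():
    if torch.is_tensor(tensor):
        # the last dimension is the per-token width: 768 -> a 1.4/1.5 embedding, 1024 -> a 2.0/2.1 embedding
        print(name, tuple(tensor.shape))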
Automatic1111 WebUi Img2Img interrogate CLIP
For those familiar with the Automatic1111 webui: on the Img2Img tab, with an image loaded, the "Interrogate CLIP" button allows you to use the image input side of the CLIP model and return a closely approximated text description. This demonstrates the reverse direction of the CLIP mapping: an image is provided, the intermediary output values discussed above are calculated, and from those a text string describing the image is derived.
-
Text Embedding
-
-
The training process for a textual inversion embedding