Transformation using the NovelAI image generator

What and Why?

This tutorial covers a technique I've been developing that lets you transform one character into another in a relatively smooth and consistent way. It can change a guy into a girl over a series of images, turn a girl into a different girl, slowly change poses, or even turn a man into a werewolf (I haven't tried that last one, but I can't see why it wouldn't work). Here, let me show you an example of it in action ...

[Example image sequence: the transformation, step by step]

That's impressive! However, there are a few important things to note.

  • It's not perfect. Remember that you're working with a volatile AI whose main mechanism of creation is random chance. Specifically, each image is tied to a seed that, combined with your prompt, determines a lot of the result. This matters because you aren't going to get exactly what you want every time.
  • This technique struggles with several things: finger gestures, some clothing items, and other small details you'll discover based on your individual needs.

So let's get started at the beginning ...

Pre-technique jitters

This works best if you have a good system for retaining a consistent style across your generations. That's a big ask, and you don't need to get 100% of the way there, but the closer the better. You need something that helps lasso the generations into keeping the same style. So let's start there ...

You have several tools to rein in a specific style, but you need to be careful:

  • image2image is needed for something else, so you can't use it here.
  • Vibe transfer is really helpful; however, I'd suggest not using it for this because it will often pull in details from the reference art. You're welcome to use it if you can get it to work, but keep in mind it can cause weirdness. It's also used for another thing in this technique, but multi-vibe is a thing, so it's up to you really.
  • Artist and style tags are really where it's at, and to help with that I have a few resources.

I've found that when combining artists, you can get a consistent style with three or fewer; above that you start getting weird details in different styles. Consistency is also easier if you use artists and styles that often feature the kinds of things you're trying to create. Bottom line: try your best for something consistent, and keep consistency in mind as we go forward. This section is really all up to you and your own tastes, though.

Vibe transfer information

When using vibe transfer, it's best to pair it with tags to guide it a bit more; using both together gives you a better shot at a consistent style.

For me, I'm going to use these artists and style tags:
{{{artist:slugbox}}}, artist:aleriia_v, {{artist:julie_bell}}, toon

At this point you can craft the entirety of the origin character's prompt. There's nothing specific to say here ... the starting point is whatever you want it to be. I like to include empty hands and white background to make sure I get both of those.

My entire initial prompt looks like this:
1girl, solo, full body, mature, straight-on, spread arms, airplane arms, dynamic pose, beautiful eyes, large eyes, blue_eyes, small iris, nervous, eyeglasses, :D, long twin braids, brown hair, {{empty hands}}, {{{White background}}}, {{{artist:slugbox}}}, artist:aleriia_v, {{artist:julie_bell}}, toon, flat chest, wide shoulders, wide waist, weak, frail, emaciated, baggy aran sweater, suspender jean skirt, shorts under skirt, sneakers, detailed, beautiful lighting, ambient occlusion, best quality, amazing quality, highres, absurdres, incredibly_absurdres

UC information

Use whatever undesired content tags you want. UC can keep things from showing up, but I often just go with whatever gets me the best results. Use your judgement on this.

Here is the resulting character:
[Image: the resulting origin character]
Isn't she adorable!

We're almost there ... we just need to do one more thing.

We have to craft the end prompt. For me, I'm going to turn the above character into a sexy punk rock girl. Therefore my endpoint prompt will be this:
1girl, solo, full body, teen, straight-on, air guitar, airplane arms, contrapposto, dynamic pose, beautiful eyes, large eyes, blue_eyes, small iris, red eyeshadow, blue lipstick, long mohawk, black hair, {{empty hands}}, {{{White background}}}, {{{artist:slugbox}}}, artist:aleriia_v, {{artist:julie_bell}}, toon, large breast, absurdly narrow waist, {{{black shirt, punk rock t-shirt}}}, dark blue jeans, high heeled boots, chains, spikes, grunge, detailed, beautiful lighting, ambient occlusion, best quality, amazing quality, highres, absurdres, incredibly_absurdres

End prompt information

It's important to note that you won't get EXACTLY what you want. The AI works like it does any other time, just restricted to sticking to the character you've created, which can keep some things from being quite what you want.
Sometimes you'll want certain things to appear earlier in the transformation process. To do this, wrap the tag in "{{{}}}". In my prompt you can see that I did this with the shirt. I usually do this with things that I suspect will be hard to change. Figuring out this balance takes practice.

Now we're getting started

  1. The first step is to generate an origin image that you like using your initial prompt.
  2. Once you have that, take your end prompt and replace your initial prompt with it. From this point on, don't change this prompt unless absolutely needed. You can, but it's risky.
  3. Now take your origin image and load it into Image2image with these stats:
    [Screenshot: image2image settings]

Why?

This is what we're using to guide the images forward: it ensures that changes take place while staying within the constraints of the character. Image2image is more restrictive than vibe transfer because the AI is trying to directly recreate what's there while modifying it by an amount that you control.

  4. With the same origin image, load it into vibe transfer with these stats:
    [Screenshot: vibe transfer settings]

Why?

This plays a very similar role to image2image, except it's not as restrictive. With its stats set as low as we have them, we're in the realm of retaining only the style with as little of the actual content as possible. It acts as a kind of anchor that keeps the style while allowing the changes from image2image. In the image the reference strength is 0.24, but adjust this to your liking so that you get your desired result.
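If it helps to see the two mechanisms side by side, here's a minimal sketch of a single step in Python. This is purely illustrative: the real workflow happens in the NovelAI web UI, and `generate_image` (and `one_step`) are hypothetical names I'm using to stand in for whatever access you have. The point is which knob does what.

```python
# A minimal sketch, assuming a hypothetical generate_image() wrapper around
# your own NovelAI access -- the real workflow is done in the web UI.

END_PROMPT = "1girl, solo, full body, teen, ..."  # the end prompt from above, never changed

def one_step(previous_image, origin_image, strength, generate_image):
    """Produce one candidate for the next image in the transformation."""
    return generate_image(
        prompt=END_PROMPT,              # stays fixed for the whole run
        init_image=previous_image,      # image2image: recreates what's there, restricting composition
        strength=strength,              # how far the result is allowed to drift from the init image
        vibe_image=origin_image,        # vibe transfer: always the ORIGIN image
        vibe_reference_strength=0.24,   # kept low so it anchors style, not content
    )
```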

  5. Generate images until you get one that seems like the next step in the transformation process.

Things to look out for

You want to keep a few things in mind. Keep an eye on the lines of the image and try to keep them as close to the same thickness as possible; as you continue the process there will be a little style drift if you aren't careful. The same goes for colors and clothing, except there you aren't looking for an exact match but for a point in-between the image you fed into image2image and the next step. If you get part of the way there and spot a detail that, once changed, would make the image a good option, use inpainting to fix it and move on.

  6. Save that image. I suggest naming it something that tells you which step of the process it's from. I usually keep the same name and put a number after it to keep track of things.
  7. Increase the image2image strength by 0.05. This is a trick that allows the changes to ramp up.
  8. Put the image you just created into image2image. Do not change anything else.
  9. Go back to step 5 and work through the steps until you get back here. Keep doing this until image2image would go over 0.9 (the sketch after this list shows the whole loop).
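Putting steps 3 through 9 together, the whole loop looks roughly like this. Again, a sketch rather than a real script: `one_step` is the hypothetical helper from earlier, and the 0.3 starting strength is only a placeholder for whatever you actually set in the image2image panel.

```python
def run_transformation(origin_image, generate_image,
                       start_strength=0.3,   # placeholder: whatever you set in the image2image panel
                       step=0.05,
                       max_strength=0.9):
    """Iterate one_step(), ramping the image2image strength, and collect the series."""
    frames = [origin_image]
    current = origin_image
    strength = start_strength
    while strength <= max_strength:     # stop once image2image would go over 0.9
        # In practice you regenerate (and inpaint) until a result looks like a
        # believable next step; this sketch just keeps the first candidate.
        current = one_step(current, origin_image, strength, generate_image)
        frames.append(current)          # save each frame, e.g. "name_01.png", "name_02.png", ...
        strength = round(strength + step, 2)   # the +0.05 ramp from step 7
    return frames
```

Note that only two things change between iterations: the image fed into image2image and its strength. The prompt and the vibe transfer reference stay fixed the whole way through.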

And that's it. At the end you should have a series of images that depict the character's transformation into their alternative form, which is what you see at the top of this page.

Additional stuff

Something I've been toying with is starting with a vibe transfer reference strength of 0.15 and then adding 0.01 for every generation. Why? Because, unless you're very diligent, the possibility of style drift increases as time goes on. Increasing the reference strength has its drawbacks, however. For one, it can restrict what changes take place and how much is changed. Second, it's tedious and boring. So it's up to you in the end.
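If you want to see how the two ramps line up before committing to the tedium, a quick loop like this prints the paired schedule (the 0.30 starting image2image strength is, again, just a placeholder value).

```python
# Print the paired schedules: image2image strength +0.05 per step,
# vibe reference strength starting at 0.15 and +0.01 per step.
i2i_start, vibe_start = 0.30, 0.15   # 0.30 is a placeholder starting strength
steps = int(round((0.90 - i2i_start) / 0.05)) + 1
for n in range(steps):
    i2i = i2i_start + 0.05 * n
    vibe = vibe_start + 0.01 * n
    print(f"step {n + 1:2d}: image2image {i2i:.2f}, vibe reference {vibe:.2f}")
```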

I hope this was helpful!
