AI MMD
wip
Multi-frame rendering
https://xanthius.itch.io/multi-frame-rendering-for-stablediffusion
https://github.com/OedoSoldier/sd-webui-image-sequence-toolkit
By a Chinese developer. Probably what everyone currently doing AI MMD uses. Tons of related videos, tutorials included: https://www.bilibili.com/video/BV1R54y1M7u5/
I still struggle with getting results as good as these.
https://www.bilibili.com/video/BV1cX4y1z7Cb/
Workflow:
1. Use Premiere Pro (PR) to automatically reframe the video to a vertical format, then use ffmpeg to convert the video to an image sequence at a frame rate of 18.
2. Use Grounding DINO + Segment Anything (https://github.com/continue-revolution/sd-webui-segment-anything) to cut out the main character. Using "girl" as the segmentation prompt would occasionally lose the twintails during large movements, so segmentation was run again with "twintails" as the prompt and the masks were merged (see the mask-merging sketch after this list).
3. Use the WD 1.4 tagger (https://github.com/toriato/stable-diffusion-webui-wd14-tagger) to extract prompt tags for each frame (threshold 0.65), then batch-edit them with the dataset tag editor (https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor). The main edits:
- Correct misidentified thighhighs, black thighhighs to thigh boots, black thigh boots
- Remove panties, underwear, pantyshot and similar tags
- Add smile, otherwise the expression looks blank
- Add black background, simple background
4. Use the updated multi-frame rendering plugin (https://github.com/OedoSoldier/sd-webui-image-sequence-toolkit); both multi-frame rendering and the earlier enhanced img2img now support the ControlNet 1.1 inpaint model.
5. Specific parameters:
- Model: Aniflatmix (https://civitai.com/models/24387, https://huggingface.co/OedoSoldier/aniflatmix)
- Default cues: masterpiece, best quality, anime screencap
- Negative embeddings used: EasyNegative (https://civitai.com/models/7808), badhandv4 (https://civitai.com/models/16993), verybadimagenegative_v1.3 (https://civitai.com/models/11772)
- Generated resolution: 768 * 1360
- CFG Scale: 4
- Denoising strength (redraw amplitude): 0.75
- ControlNet: enable Pixel Perfect and no-prompt mode for all units, with a weight of 1 for each. Set the resize mode to "Just Resize" (stretch)
- inpaint (preprocessor: inpaint global harmonious)
- ip2p (preprocessor: none)
- shuffle (preprocessor: none)
- lineart anime (preprocessor: lineart anime)
- softedge (preprocessor: softedge_pidinet)
- Multi-frame rendering: initial denoising strength set to 0.75, ControlNet inpaint enabled, prompts read from file, other parameters left at default
6. Fix some of the drawing errors, then composite the video in Premiere Pro.
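For the mask merging in step 2, a minimal sketch assuming the "girl" and "twintails" masks were exported from the Segment Anything extension as black-and-white PNGs (the directory names here are made up):

```python
import glob
import os

import numpy as np
from PIL import Image

# Union of the "girl" mask and the "twintails" mask for every frame.
# Directory layout and file naming are assumptions, not from the original post.
os.makedirs("masks_merged", exist_ok=True)
for girl_path in sorted(glob.glob("masks_girl/*.png")):
    frame_name = os.path.basename(girl_path)
    girl = np.array(Image.open(girl_path).convert("L")) > 127
    tails = np.array(Image.open(os.path.join("masks_twintails", frame_name)).convert("L")) > 127
    merged = np.logical_or(girl, tails)
    Image.fromarray((merged * 255).astype(np.uint8)).save(os.path.join("masks_merged", frame_name))
```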
Incidentally, I started the run last night before going to bed, but for some reason it had only gotten through about two hundred frames by the time I got up this morning; otherwise it would have finished long ago. The seven-hundred-plus frames took about eight hours in total.
https://www.bilibili.com/video/BV1tX4y1m7pK/
Output the frames at 20 fps, then interpolate up to 60 fps in Premiere Pro afterwards. Multi-frame rendering without LoRA. Model: meinamix_meinaV9. ControlNet-0 module: normal_bae, ControlNet-1 module: lineart_anime, ControlNet-2 module: canny
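The extract-then-interpolate part of that pipeline can also be done with ffmpeg alone instead of Premiere Pro; a sketch under that assumption (paths are placeholders, and the frames/ and processed/ directories must already exist), using ffmpeg's minterpolate filter for the 20 to 60 fps step:

```python
import subprocess

# Extract frames from the source video at 20 fps (placeholder paths).
subprocess.run(["ffmpeg", "-i", "input.mp4", "-vf", "fps=20", "frames/%05d.png"], check=True)

# ... run multi-frame rendering / img2img over frames/, writing results to processed/ ...

# Reassemble the processed frames and interpolate 20 -> 60 fps.
# minterpolate is one option; Premiere Pro or Flowframes does the same job.
subprocess.run([
    "ffmpeg", "-framerate", "20", "-i", "processed/%05d.png",
    "-vf", "minterpolate=fps=60", "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "output_60fps.mp4",
], check=True)
```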
https://www.bilibili.com/video/BV1sk4y1Y7hw/
Model: cetusMix_cetusVersion3. Script: Multi Frame Rendering. Plugin: ControlNet (HED)
https://www.bilibili.com/video/BV118411o7nv/
Seems like this goes through a bunch of different steps, maybe worth trying
PBRemTools (accurate background removal plugin) installation address: https://gitcode.net/ranting8323/PBRemTools.git
The plugin installation path: stable-diffusion-webui/extensions/PBRemTools/models
Adobe PR download: https://t.1yb.co/IpAJ
isnet pro (@star pupil poison only integrated version) https://github.com/ClockZinc/sd-webui-IS-NET-pro.git
cutoff https://gitcode.net/ranting8323/sd-webui-cutoff.git
adetailer https://github.com/Bing-su/adetailer.git
Multi-frame rendering plugin: https://github.com/OedoSoldier/sd-webui-image-sequence-toolkit
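All of these are ordinary A1111 webui extensions, so they can be installed through the "Install from URL" tab or cloned straight into the extensions folder; a minimal sketch assuming the default webui directory layout:

```python
import subprocess
from pathlib import Path

# Assumed location of the stable-diffusion-webui install.
EXT_DIR = Path("stable-diffusion-webui/extensions")

REPOS = [
    "https://gitcode.net/ranting8323/PBRemTools.git",
    "https://github.com/ClockZinc/sd-webui-IS-NET-pro.git",
    "https://gitcode.net/ranting8323/sd-webui-cutoff.git",
    "https://github.com/Bing-su/adetailer.git",
    "https://github.com/OedoSoldier/sd-webui-image-sequence-toolkit",
]

for url in REPOS:
    # Derive the folder name from the repo URL, dropping any trailing ".git".
    name = url.rstrip("/").split("/")[-1]
    if name.endswith(".git"):
        name = name[:-4]
    target = EXT_DIR / name
    if not target.exists():
        subprocess.run(["git", "clone", url, str(target)], check=True)
```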
https://www.bilibili.com/video/BV1RV4y127WC/
Only lineart and tile are used
https://www.bilibili.com/video/BV13k4y1J7Pd/
Kinda similar to the video two entries above
controlnet model: softedge and depth
isnetpro plugin: github.com/ClockZinc/sd-webui-IS-NET-pro
MeinaMix model: civitai.com/models/7240/meinamix
https://www.bilibili.com/video/BV1fh411F7is
Another tutorial video
tutorial on youtube on multiple methods (maybe duplicate)
The first two chapters cover the older methods; you can start watching from Chapter 3.
Chapter 1.
Original video from BV1mY411Y7Fb
Test method:
- 1. Prompts generated by WD v1.4 SwinV2 tagger v2
- 2. Mask generated by CLIPSeg (see the sketch after this list)
- 3. ControlNet with canny and normal enabled
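The mask step runs inside the webui; as a rough standalone approximation (an assumption, not the extension's actual code), the Hugging Face CLIPSeg checkpoint can produce a character mask from a text prompt:

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# CLIPSeg checkpoint published on Hugging Face by the CIDAS group.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("frames/00001.png").convert("RGB")   # placeholder frame path
inputs = processor(text=["girl"], images=[image], padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                      # low-resolution logit map

mask = torch.sigmoid(logits).squeeze()                   # probabilities in [0, 1]
mask_img = Image.fromarray((mask.numpy() * 255).astype("uint8")).resize(image.size)
mask_img.save("mask_00001.png")
```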
Test results:
- 1. Total of 384 frames (12 fps), rendering time 23 minutes 47 seconds
- 2. With ControlNet added, the accuracy of the redraw is greatly improved
- 3. The WD 1.4 tagger identifies tags more accurately than DeepDanbooru
- 4. Points 2 and 3 together noticeably improve expressions and bodies without manual correction
- 5. The Deflicker used in the production of "Rock Paper Scissors" is not effective at low frame rates
Test conclusion:
AI animation still needs to wait for new technology
Chapter 2.
Tool address: https://github.com/OedoSoldier/enhanced-img2img
For AI 3D-to-2D conversion, I recommend a frame rate of 12 fps. After all the processing is complete, you can import the image sequence directly into Premiere Pro or other video editing software; just make sure the frame rate is set to match.
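If you just want a quick preview without the editing software, the image sequence can also be assembled with ffmpeg at the matching frame rate; a minimal sketch (input pattern and output name are placeholders):

```python
import subprocess

# Assemble a 12 fps image sequence into a preview video.
subprocess.run([
    "ffmpeg", "-framerate", "12", "-i", "output_frames/%05d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "preview_12fps.mp4",
], check=True)
```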
Chapter 3.
The original video: BV1C54y1N7h6
The second half is a comparison: with the same parameters, the same seed, and the same ControlNet settings, the overall flicker is clearly reduced compared to the default method (the background is particularly obvious).
But the cost is a longer generation time: 1:32:52 vs 27:49, roughly 3 times as long. I will make a separate video later on the specific principle.
Plugin: https://xanthius.itch.io/multi-frame-rendering-for-stablediffusion
Chapter 4.
Denoising strength 0.8, mainly testing the effect at medium and long range; the ControlNet control weight is lower (0.6). There are still small defects when limbs cross.
- Slightly modified the code for multi-frame rendering; a tutorial will follow
- Model: A-SOUL
- Motion: hino
Chapter 5.
My previous plugin on GitHub also updated this feature: https://github.com/OedoSoldier/enhanced-img2img
Surprisingly, my workflow for changing a character into anime style also works very well in reverse. I only changed the checkpoint to a realistic one and made a few adjustments to the prompts. I tried adding a LoRA to make her face look more like Tifa from FF7 Remake, but it only made her look more artificial, so I dropped the idea.
Tools: A1111, Img2Img, ControlNet, Davinci Resolve, Topaz Video Ai
Model: Realisian
Positive prompt: (masterpiece), (best quality:1.3), 8k wallpaper, smooth gradients, soft shadows, 1girl, clean background, detailed face, beautiful brown eyes, smile, red boots, (realistic:1.5)
Negative prompt: (worst quality, low quality, normal quality:1.8), nude, nsfw, [add your favourite negative prompts]
Settings: Steps: 20, Sampler: Euler a, CFG scale: 7, Denoising strength: 0.4, Clip skip: 2, ControlNet: softedge_hed, weight: 1.5, My prompt is more important, [Loopback]; TemporalNet, preprocessor: none, weight: 0.7, My prompt is more important, [Loopback]
Workflow: First resize and crop the video to the desired size; keep in mind that SD works better with square images. Next, extract the video to images; 30 fps is good for dynamic videos.
After that I took one frame and began prototyping prompts and settings in img2img. Some useful observations: generally, with img2img, higher denoising means more changes and more flickering; to reduce it, balance the denoising strength and CFG scale. The sampler and ControlNet also matter; I got the best results with Euler a and softedge_hed.
Once I achieve the style I'm looking for, it's time for batch generation. This is a long process and will take a few hours. You can sneak a peek to check that the generation is going well, and interrupt it if something goes wrong.
After generation completes, it's time to render the video. I won't go into the details of how to use video editing software, but here are some important notes: use deflickering, even a few times if needed. For low-fps image sequences (under 30 fps), use frame interpolation methods like optical flow.
The last step is enhancing the video in Topaz Video AI. It is a very simple program to use; just change the frame rate to 60 and upscale to the desired resolution.
Source video: https://www.youtube.com/shorts/LT3nauFn1NY The original video seems to be animation (3D?) transformed into anime style using AI, but I don't know how it was made.
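The batch generation step can also be driven through the webui's API (launched with --api) instead of the img2img batch tab; the sketch below is my assumption about that route using the settings listed above, and the ControlNet part of the payload in particular depends on the extension version. The TemporalNet loopback is omitted, since it needs the previous generated frame fed back in:

```python
import base64
import glob
import os

import requests

API = "http://127.0.0.1:7860"  # default A1111 address when started with --api (assumption)

PROMPT = ("(masterpiece), (best quality:1.3), 8k wallpaper, smooth gradients, soft shadows, "
          "1girl, clean background, detailed face, beautiful brown eyes, smile, red boots, (realistic:1.5)")
NEGATIVE = "(worst quality, low quality, normal quality:1.8), nude, nsfw"

os.makedirs("out", exist_ok=True)
for i, path in enumerate(sorted(glob.glob("frames/*.png"))):
    with open(path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [frame_b64],
        "prompt": PROMPT,
        "negative_prompt": NEGATIVE,
        "steps": 20,
        "sampler_name": "Euler a",
        "cfg_scale": 7,
        "denoising_strength": 0.4,
        "override_settings": {"CLIP_stop_at_last_layers": 2},
        # Field names below are an assumption and vary between ControlNet extension versions.
        "alwayson_scripts": {"controlnet": {"args": [
            {"module": "softedge_hed", "model": "control_v11p_sd15_softedge", "weight": 1.5},
        ]}},
    }
    result = requests.post(f"{API}/sdapi/v1/img2img", json=payload).json()
    with open(os.path.join("out", f"{i:05d}.png"), "wb") as f:
        f.write(base64.b64decode(result["images"][0]))
```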
ADetailer use
Tools: A1111, Img2Img, ControlNet, ADetailer, Davinci Resolve, Topaz Video Ai
Model: Dark Sushi mix
Positive prompt: (masterpiece), (best quality:1.3), 8k wallpaper, smooth gradients, soft shadows, 1girl, clean background, detailed face, realistic eyes, smile
Negative prompt: (worst quality, low quality, normal quality:1.8), nude, nsfw, [add your favorite negative prompts]
Settings: Steps: 20, Sampler: Euler a, CFG scale: 7, Denoising strength: 0.35, Clip skip: 2, ADetailer: model: face_yolov8n.pt, denoising strength: 0.3, inpaint only masked: True, ControlNet: softedge_hed, weight: 1.5, My prompt is more important, [Loopback]; TemporalNet, preprocessor: none, weight: 0.7, My prompt is more important, [Loopback]
Workflow: First resize and crop the video to the desired size; keep in mind that SD works better with square images. Next, extract the video to images; 30 fps is good for dynamic videos.
After that I took one frame and began prototyping prompts and settings in img2img. Some useful observations: generally, with img2img, higher denoising means more changes and more flickering; to reduce it, balance the denoising strength and CFG scale. The sampler and ControlNet also matter; I got the best results with Euler a and softedge_hed or Tile. ADetailer is great for improving faces.
Once I achieve the style I'm looking for, it's time for batch generation. This is a long process and will take a few hours. You can sneak a peek to check that the generation is going well, and interrupt it if something goes wrong.
After generation completes, it's time to render the video. I won't go into the details of how to use video editing software, but here are some important notes: use deflickering, even a few times if needed. Crop the video to the desired proportions. Recolor if needed.
The last step is enhancing the video in Topaz Video AI. It is a very simple program to use; just change the frame rate to 60 and upscale to the desired resolution. Alternatively, you can use free software like Flowframes.
Source video: https://www.douyin.com/video/7212553852455882040
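The deflickering above is done inside the editing software (DaVinci Resolve has a dedicated deflicker effect); as a crude standalone fallback, and purely my assumption rather than the author's method, a temporal exponential moving average over the generated frames already suppresses some flicker at the cost of slight ghosting:

```python
import glob
import os

import cv2
import numpy as np

ALPHA = 0.6  # weight of the current frame; lower values smooth more but ghost more
os.makedirs("deflickered", exist_ok=True)

prev = None
for i, path in enumerate(sorted(glob.glob("out/*.png"))):
    frame = cv2.imread(path).astype(np.float32)
    # Blend the current frame with the running average of previous frames.
    smoothed = frame if prev is None else ALPHA * frame + (1.0 - ALPHA) * prev
    prev = smoothed
    cv2.imwrite(os.path.join("deflickered", f"{i:05d}.png"), smoothed.astype(np.uint8))
```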
https://www.reddit.com/r/StableDiffusion/comments/16cktll/ai_anime_with_temporalnet2/
ControlNet modules: ip2p, temporalnet2, lineart