The Definitive AI Cover Guide (For Local)
Before you even think about starting, you will need the following: Mangio RVC, TensorBoard, Ultimate Vocal Remover 5, Adobe Audition, and a really kickass GPU.
Step I: Installing Mangio RVC
Download and extract the latest version of Mangio RVC INFER TRAIN into a new folder. Make sure there are no spaces in the directory path!
(https://huggingface.co/MangioRVC/Mangio-RVC-Huggingface/tree/main)
The release ending in _INFER is for inference only (inference means converting audio with a voice model) and doesn't support training (making your own models). It exists so people who only want to make RVC audio conversions, and have no desire to train their own voice models, can get the smaller download.
The release ending in _INFER_TRAIN also includes the pretrained training weights and is the full package for both making audio and training voice models.
If you are confused by this explanation:
Download the _INFER.7z if you have no desire to train your own voice models, or you don't have an NVIDIA GPU capable of doing so.
Download the _INFER_TRAIN.7z if you want the full package capable of creating new voice models.

Step II: Installing TensorBoard
Make sure you have Python 3.10.11 installed. Once it's installed, relaunch the Python setup and click 'Modify', click 'Next' through the optional features, and under "Advanced Options" check "Add Python to environment variables" (a.k.a. PATH). (This is the easiest way for newcomers to add Python to their environment variables.)
(https://www.python.org/downloads/release/python-31011/)
Once that is done put both of these files in your Mangio RVC folder:
https://files.catbox.moe/eyqisx.bat (Rename to TBI.bat)
https://files.catbox.moe/epiuep.bat (Rename to tbL.bat)
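Those two .bat files are presumably just convenience scripts (one installs TensorBoard, the other launches it). If the catbox links ever die, a rough Python equivalent of the launch step looks like the sketch below. This is an assumption-heavy fallback, not the official script: it assumes TensorBoard is installed via pip and that Mangio RVC writes its event files to the ./logs folder.

# Hypothetical fallback for the launch script; assumes "pip install tensorboard"
# was run and that training logs end up in ./logs inside the Mangio-RVC folder.
import time
from tensorboard import program

tb = program.TensorBoard()
tb.configure(argv=[None, "--logdir", "./logs", "--port", "6006"])
url = tb.launch()                      # starts TensorBoard on a background thread
print(f"TensorBoard is running at {url}")

while True:                            # keep the process alive so the server stays up
    time.sleep(60)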
Step III: Installing Ultimate Vocal Remover
Download the latest version of UVR.
(https://ultimatevocalremover.com/)
Once installed, run UVR and click on the wrench icon and navigate to "Download Center". You will be downloading AI models for the purpose of isolating and/or removing vocals, instrumentals, reverbs, and harmonies from soundtracks. This will take a while to download everything.
Click on the "VR Arch" radio button. Select "VR Arc Single Model V5: 7_HP2 UVR" from the dropdown list and click the download button. Once you finished downloading, repeat this same process to download "5_HP-Karaoke-UVR" AND "6_HP-Karaoke-UVR".
Click on the "MDX-Net" radio button. Select "MDX-Net: UVR-MDX-NET Inst 3" from the dropdown list and click the download button. Once you finished downloading, repeat this same process to download "UVR-MDX-NET Inst Main" AND "UVR-MDX-NET Main" AND "UVR-MDX-NET Karaoke 2" AND "Reverb HQ" AND "UVR-MDX-NET Inst HQ 1" AND Kim Vocal 1.
Click on the "Demucs" radio button. Select "Demucs v4: htdemucs_ft" from the dropdown list and click the download button.
Step IV: Installing Adobe Audition
Download Audition for vocal segmenting and making noise gates. You will also need it to combine vocals and instrumentals. I hear that Audacity works as well, but I've never used it.
(https://1337x.to/torrent/4358437/Adobe-Audition-2020-v13-0-4-39-x64-Multilingual-Pre-Activated-FileCR/)
Step V: Isolate Voice Audio
Now that you have all the prerequisites installed, you need to create a voice model. Find a Vtuber stream that doesn't have too much background noise; a zatsudan (chatting) stream is ideal for this. You don't need too much audio, but the more you have, the better. When I train models, I like to have about 30 minutes of voice audio for training.
Once you have downloaded the stream audio, open Ultimate Vocal Remover. Select the stream audio as the input and choose a place where you want the output to go. The goal is to remove the BGM and isolate ONLY the audio of the Vtuber speaking. You do NOT want any other noise whatsoever; it is very important to keep the dialogue as clean as possible!
Choose a process method and a corresponding model that works well for you. Each model is different and your mileage may vary with them. I find Inst HQ 1 and Kim Vocal 2 to be very good at extracting voice audio, but you may find a different one to fit your taste. Remember, your goal is to isolate the voice audio as cleanly as possible. Play around with different models and process methods until you feel satisfied with the result. You can also go back to the download center to get more models, too.
Make sure that either the WAV or FLAC radio button is selected, and that "GPU Conversion" and "Vocals only" are checked. Settings like "Window size", "Aggression setting", "Batch size", "Volume compensation", "Choose stems" and "Segment" do not need to be touched. However, if you are using Ensemble Mode, be sure to select "Vocals/Instrumental" under "Main stem pair" AND "Max spec/max spec" under "Ensemble algorithm".
From my testing, you can have one huge file as the only file in your dataset and RVC will chop it up properly for you when you follow this guide's instructions. RVC chops audio into roughly 4-second slices, so make sure your samples are at least 4 seconds long for consistency (or merge shorter samples into one long file). If you want to stay on the safe side, you can split the audio into 1-minute intervals.
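If you do want to pre-split, here is a minimal sketch using pydub (my own choice of library - it is not part of the RVC install, so "pip install pydub" first; the file and folder names are placeholders):

# Splits one long vocal file into 1-minute WAV chunks for the training folder.
import os
from pydub import AudioSegment

audio = AudioSegment.from_wav("stream_vocals.wav")    # your UVR/Audition output
chunk_ms = 60 * 1000                                  # 1 minute per chunk

os.makedirs("dataset", exist_ok=True)
for i, start in enumerate(range(0, len(audio), chunk_ms)):
    chunk = audio[start:start + chunk_ms]
    if len(chunk) >= 4000:                            # drop leftovers shorter than ~4 s
        chunk.export(os.path.join("dataset", f"chunk_{i:03d}.wav"), format="wav")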
Noise Gating
Now that you have your voice audio processed by UVR, you may still want to conduct more cleaning. This is where Audition comes in. What you want to do here is create a noise gate. This will erase any sound that is under a certain decibel limit.
Open Adobe Audition and import your audio file that was outputted by UVR earlier. Now select the entirety of the audio, go to "Window > Amplitude statistics". Now, a new amplitude statistics window should open up on the left side. Expand the window such that everything is visible and click on the "Scan selection" button. Take note of the "Average RMS Amplitude" values.
Now, navigate to "Effects > Amplitude and Compression > Dynamics". In the new window that just opened up, click on the "Autogate" checkmark box. The "Threshold" value should be about the same as the average RMS amplitude values. In the image above, left is -24.97 and right is -25.78. Thus, I have set the threshold value to -25.
Now, play your audio to make sure that it sounds good. If so, apply the changes and export the audio as a WAV file. If you don't like how it sounds, play around with the threshold value more.
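If you would rather script the gate than use Audition, the sketch below shows the same idea in Python. It is only an illustration of what a gate does (mute everything quieter than the threshold), assuming numpy and soundfile are installed; Audition's Dynamics effect is more sophisticated, and the file names here are placeholders.

# Crude frame-based noise gate.
import numpy as np
import soundfile as sf

audio, sr = sf.read("uvr_vocals.wav")
threshold_db = -25.0                       # roughly your average RMS amplitude
frame = 2048

gated = audio.copy()
for start in range(0, len(gated), frame):
    block = gated[start:start + frame]
    rms = np.sqrt(np.mean(block ** 2)) + 1e-12
    if 20 * np.log10(rms) < threshold_db:
        gated[start:start + frame] = 0.0   # mute frames below the threshold

sf.write("gated_vocals.wav", gated, sr)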
Step VI: Creating a Model
Now that you have your perfected audio sample for training, you are ready to actually create your model.
Go to your Mangio-RVC folder and run "go-web.bat" to open the GUI web interface. Click on the "Train" tab when the GUI opens.
Enter the experiment name:
This is the name for your model. Name it after the Vtuber whom you're creating a model of.
Target sample rate:
This is the target sample rate of the audio file you exported from Adobe Audition. Typically 48k.
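If you are not sure what sample rate your exported file actually has, here is a quick check (assuming the soundfile package is installed; the path is a placeholder):

import soundfile as sf
print(sf.info("training_audio.wav").samplerate)   # e.g. 48000 -> choose 48k here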
Whether the model has pitch guidance:
Keep this checked.
Version
V2 always.
Number of CPU processes used for pitch extraction and data processing:
If you own an old or low-end CPU, you MUST turn this value down; otherwise, your computer will absolutely blue screen and crash.
Enter the path of the training folder:
This is the path to wherever you specified Adobe Audition to export your WAV file. Make sure there are no spaces in the path name.
Please specify the speaker/singer ID:
N/A
Ensure that everything is set correctly and click "Process data". Wait until the command prompt window tells you that it has completed.
Enter the GPU index(es) separated by '-', e.g., 0-1-2 to use GPU 0, 1, and 2:
This is where you select which GPUs you will be using. I only have one GPU, so I would enter 0. If you happen to have two GPUs, enter 0-1. If you have three, enter 0-1-2. Remember that in computer science, everything starts at zero.
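If you are not sure which index belongs to which GPU, here is a quick check (this assumes a CUDA build of PyTorch is available in whichever Python you run it with):

import torch
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
# One GPU prints only index 0, so enter 0; two GPUs -> enter 0-1, and so on.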
Select the pitch extraction algorithm:
Mangio-crepe and rmvpe are the best options for training.
IF YOUR DATASET QUALITY IS GREAT AND MOSTLY FREE OF NOISE: Use Mangio-Crepe, though rmvpe training is newer and seems promising based on early tests!
IF YOUR DATASET QUALITY ISN'T GREAT: Use rmvpe, or harvest if rmvpe somehow gives you problems.
Mangio-Crepe Hop Length:
Lower hop lengths are more pitch-precise and therefore take longer to train. If you go for a lower hop length, your dataset should be free of any major noise, because the higher pitch accuracy increases the risk of the model latching onto bad data in your dataset. I personally choose 64. This value should always be a power of 2.
Ensure that everything is set correctly and click "Feature extraction". Wait until the command prompt window tells you that it has completed.
Save frequency (save_every_epoch):
This saves a small .pth snapshot of the model to the /weights/ folder of your Mangio-RVC directory every N epochs. To get an accurate (early) preview, generate the feature index before training; of course, you must complete the first two steps (data processing + feature extraction) before training the index. You can also generate the feature index afterwards if you forget. Saving regularly lets you test the model at each saved epoch if necessary, or fall back to an earlier snapshot if you overtrained too far. Another fantastic thing is that if (when) your computer crashes during training, you don't really lose much. I set this to 5 for that reason.
Total training epochs (total_epoch):
Set a generous number of epochs to cover yourself in case of overtraining (400-600). Fewer epochs generally means the model will be less accurate, rather than necessarily 'sounding worse'. However, if your dataset isn't very high quality or lacks data, you might want to experiment later on and see which saved epoch strikes the best balance between accuracy and sounding good. In some rarer cases, fewer epochs might sound better to your ears. It's trial and error at this phase of making a good model. If you want to stay on the safe side, go for a 'slightly undertrained' model.
Batch size per GPU:
Batch size is how much data it processes at a time (a speed option, not a quality setting). It depends on your GPU's VRAM; for an RTX 2070 with 8 GB of VRAM, for example, use batch size 7. Always go a bit lower than the maximum to prevent a system crash.
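A quick sketch of that "VRAM minus one" rule of thumb, again assuming a CUDA build of PyTorch is available; treat the result as a starting point, not gospel:

import torch
vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
suggested_batch = max(1, round(vram_gb) - 1)   # e.g. an 8 GB card -> batch size 7
print(f"~{vram_gb:.1f} GB VRAM, try batch size {suggested_batch}")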
Whether to save only the latest .ckpt file to save hard drive space
Always yes.
Cache all training sets to GPU memory. Caching small datasets (less than 10 minutes) can speed up training, but caching large datasets will consume a lot of GPU memory and may not provide much speed improvement
Self-explanatory.
Save a small final model to the 'weights' folder at each save point
Always yes.
Load pre-trained base model G path:
Do not touch.
Load pre-trained base model D path:
Do not touch.
Ensure that everything is set correctly and click "Train feature index". Wait until the command prompt window tells you that it has completed.
Ensure that everything is set correctly and click "Train Model". This WILL take a LONG time. Ensure that you will not have to do anything strenuous on your PC for the next several hours. Model training for me took about 5 hours for 320 epochs. Obviously this will vary based on your system specs, but it will be time consuming regardless. Monitor your PC regularly to check for system crashes.
TensorBoard (Important!)
You WILL need to regularly check TensorBoard to monitor your training progress. Use the TensorBoard logs to identify when the model starts overtraining.
In step II, you should have put TBI.bat and tbL.bat in your Mangio-RVC directory. Run "TBI.bat" and then "tbL.bat". A command prompt window will open and give you a URL for accessing the TensorBoard web interface.
If you go to that URL and see a screen that says "No scalar data was found", that means there was an error. In this case, open up a new command prompt terminal as an admin, and enter one of the following commands:
pip install --upgrade "protobuf<=3.20.1"
pip install "protobuf<=3.20.1" --force-reinstall
Now, re-run "tbL.bat".
Once you get into the TensorBoard GUI, click the "Scalars" tab and search for "g/total" in the filter box at the top.
Once you find the ideal step count, do some basic math to figure out the ideal epoch count. For example, say overtraining starts at 10k steps, you trained all the way to 20k steps, and your model is currently at 600 epochs. Since 600 epochs is 20k steps, 10k/20k = 50%, and 50% of 600 is ~300 epochs; that is the ideal epoch value in that scenario. Alternatively, you can find the timestamp of the best value in TensorBoard, then check the model's train.log file in the /logs/ folder for the matching timestamp to find exactly which epoch it was. You can also check the timestamps in the command prompt window. If you see overtraining begin and you're confident of it, hit the "Stop training" button on the Mangio-RVC web GUI.
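The same arithmetic, written out (all numbers are placeholders for whatever you read off TensorBoard and the web GUI):

def ideal_epochs(best_step, total_steps, total_epochs):
    # epochs and steps scale together, so the ratio carries over directly
    return round(total_epochs * best_step / total_steps)

print(ideal_epochs(best_step=10_000, total_steps=20_000, total_epochs=600))  # -> 300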
Remember to hit the refresh button to update the graphs when needed, and to fit the data to the graph.
Recovering From a Crash
In the event of a system crash, you must run the installer steps again, and then proceed as normal, launching your GUI server again.
During a retrain, to continue where you left off, use the exact same name (with the same capitalization) and sample rate (the default is 40k if you never changed it). Use the same settings that you had before for batch size, version, etc. - make them match.
Do not re-process the files and do not redo feature extraction. Basically, avoid pressing "Process data" or "Feature extraction" again, because you don't want it to redo the pitch analysis it has already done.
Only keep the latest pair of .pth files in the model's /logs/ folder, based on their date modified. If there is a "G_23333" and a "D_23333" file in your model's logs folder, they represent the latest checkpoint, provided you ticked 'Save only latest ckpt' (which this guide already recommends). If that wasn't on for some reason, remove all .pth files in the folder that aren't the latest pair, to avoid inaccuracies.
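If you would rather not eyeball the date-modified column, the sketch below keeps only the newest G_/D_ pair in the model's logs folder. The folder name is a placeholder, and deleting checkpoints is irreversible, so back them up first if you are unsure.

from pathlib import Path

log_dir = Path("logs/MyModel")                 # placeholder model folder
for prefix in ("G_", "D_"):
    ckpts = sorted(log_dir.glob(f"{prefix}*.pth"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    for old in ckpts[1:]:                      # everything except the newest one
        print("removing", old)
        old.unlink()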
Now you can start training again by pressing 'Train model', with the same batch size and settings as before. If training starts from the beginning again (at epoch 1 rather than your last saved epoch), immediately use CTRL+C in the command prompt window to kill the GUI server, and try starting the GUI again.
Step VII: Fixing a Song
Congratulations! You completed your very first model, but you're not done yet. Now, you need to fix a song you want your AI model to sing. I recommend "That Poppy - Lowlife (Acoustic)" for beginners because it's super easy.
What you want to do is extract the vocals and the instrumental, such that you have two files - One file of instrumental ONLY and another of vocals ONLY.
If you recall what we did in step V, it's very similar.
Once you have your song file ready - preferably as a FLAC or WAV - select it as the input in Ultimate Vocal Remover and specify an output location. I recommend creating a designated folder for this.
It is important to note once again that when using UVR, you must play around with different models and process methods until you feel satisfied with the result. Every song is different and models that work well for one song may not necessarily work well for another song. Your end goal is for each of your two audio files to be as clean as possible. If you bullshit during this step, your cover song will sound terrible.
Ensure that the vocal track does not have any interference, as some models will distort the voice. You also want to make sure there is no instrumental bleeding into your vocal track.
Ensure that the instrumental track does not have any interference. For some songs, it is very difficult to remove reverb and background harmonies, but try as much as you can. Ultimately, there should not be any vocals in your instrumental track. Do some good ol' fashioned manual editing in Audition if UVR isn't doing the trick - Audition by itself is incredibly powerful.
Kim Vocal 1 or Inst HQ 1 is the best 'general' vocal model. Kim Vocal 2 will sometimes isolate non-vocals, but it can sound better overall (you can run it and then run a karaoke model on its output to deal with this... sometimes).
It is necessary to remove reverb/echo from the dataset for the best results. Ideally there is as little of it as possible in the first place, since isolating reverb can obviously reduce the quality of the vocal. If you do need to do this, under MDX-Net you can find Reverb HQ, which exports the reverb-less audio as the 'No Other' option. Oftentimes this isn't enough. If it did nothing (or just didn't do enough), you can process the vocal output through the VR Architecture models in UVR using De-Echo-DeReverb to remove the echo and reverb that remains. If that still wasn't enough, somehow, you can run the normal De-Echo model on the output, which is the most aggressive echo-removal model of them all.
In some cases, harmonies are too hard to isolate in UVR without the result sounding poor. If you want to try anyway, the best UVR models for it are 5_HP-Karaoke (VR Architecture) or Karaoke 2 (MDX-Net). 6_HP is supposedly a more aggressive 5_HP, I think? Dunno. YMMV, so try the other karaoke options unless it literally just isn't working no matter what.
A very powerful but often overlooked feature in UVR is "Ensemble mode", allowing you to mix different models together. I've found UVR-MDX-NET Inst 3, UVR-MDX-NET Inst Main, and UVR-MDX-NET Main to be very good at isolating instrumentals only. Select those three models and be sure to select "Vocals/Instrumental" underneath "Main stem pair" AND "Max spec/max spec" underneath "Ensemble algorithm". The output will include a sound track for each model by itself, as well as a soundtrack for the models mixed together.
Unfortunately, Ensemble mode isn't quite as good at isolating vocals. As you work with this, you'll often need to use one model to isolate vocals, and a separate model to isolate instrumentals. I do not recommend using the same model for both without testing it first.
Step VIII: Creating Your AI Cover
Now that you finally have your completed model and your song's vocals and instrumental separated, you're ready to create your AI cover, which is actually the easiest step so far. Go back to the Mangio RVC web GUI and click on the first tab, "Model inference".

The "Inferencing voice" is your model's .pth file, which is located in the /weights/ folder of the Mangio RVC directory. Refresh the voice list if the file isn't being displayed in the dropdown. Then click on "Unload voice to save GPU memory".

Transpose (integer, number of semitones, raise by an octave: 12, lower by an octave: -12):
-12 and 12 both stay in the original key, just at a different octave (down or up respectively). Good for extreme differences, like a male voice covering a song sung by a female. For more subtle shifts, -5 and -7 are the least dissonant settings, as they represent a perfect fourth and fifth respectively, but they may still feel 'off'.
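If you want to see why those numbers behave that way: each semitone multiplies the pitch by 2^(1/12), so ±12 is exactly a factor of 2 (one octave), while -5 and -7 land a perfect fourth and fifth below.

for n in (12, -12, -5, -7):
    print(n, round(2 ** (n / 12), 4))
# 12 -> 2.0 (octave up), -12 -> 0.5 (octave down),
# -5 -> 0.7492 (perfect fourth down), -7 -> 0.6674 (perfect fifth down)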
Add audio's name to the path to the audio file to be processed
This is the path to the audio only file that you want to use. (The one you got from UVR and maybe edited in Audition)
Select the pitch extraction algorithm:
The best option here is usually rmvpe. However, mangio-crepe with a hop size of 64 or 128 is also acceptable. The other algorithms are all garbage; only rmvpe and mangio-crepe are worth using.
The Mangio-Crepe hop length controls how often it checks for pitch changes, in milliseconds, when using crepe specifically. The higher the value, the faster the conversion and the lower the risk of voice cracks, but pitch accuracy suffers. The default value is 128, which means it checks roughly 8 times a second for pitch changes. Anything lower than 64 is almost always pointless from my tests. Pick powers of two for this value.
If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness.
Higher = more ‘blurred’, or smoothed out outputs. Might help slight cracking issues, but potentially makes the pronunciation worse. Play around with this value until you settle on a value that you personally like.
Path to your added.index file
Self explanatory. This is the path to your model's index file in the /logs/[MODEL NAME] folder of the Mangio RVC directory.
Search feature ratio:
This value controls how much influence the .index file has on the voice model’s output. (The index controls mainly the ‘accent’ for the model.)
If your model’s dataset isn’t very long or it’s not very high quality, (or both), this should be lower. If it’s a high quality model, you can afford to go a bit higher. Generally, my recommended value would be 0.6-0.75, and reduce it if you think it’s truly necessary.
Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling:
Do you want to resample the audio? Probably not, but the option is here anyway.
Use the volume envelope of the input to replace or mix with the volume envelope of the output.
The lower you set this, the more it will capture the volume range of the original vocal. A value of 1.0 will be equally loud throughout the whole conversion; 0 will mimic the volume range of the original as closely as possible. I would recommend setting this to a decently low value such as 0.25 or 0.2.
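To make the setting less abstract, here is a simplified illustration of envelope mixing (my own sketch of the idea, not RVC's actual code; it assumes numpy and that both signals are mono arrays of the same length):

import numpy as np

def mix_envelope(source, converted, rate, frame=1024):
    # rate = 1.0 -> keep the converted vocal's own loudness
    # rate = 0.0 -> follow the source vocal's loudness as closely as possible
    out = converted.copy()
    for start in range(0, len(out), frame):
        src, cnv = source[start:start + frame], out[start:start + frame]
        if len(src) == 0 or len(cnv) == 0:
            break
        src_rms = np.sqrt(np.mean(src ** 2)) + 1e-8
        cnv_rms = np.sqrt(np.mean(cnv ** 2)) + 1e-8
        target = rate * cnv_rms + (1 - rate) * src_rms
        out[start:start + frame] = cnv * (target / cnv_rms)
    return out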
Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music.
The default is fine, but play around with this value until you settle on a value that you personally like.
EXPERIMENTAL Formant shift inference audio:
This is experimental, so I have limited experience with it. If you're doing male-to-female (or vice-versa) conversions, try it out and see if it helps.
F0 Curve File
I've never used this.
Ensure that everything is set correctly and click "Convert". Wait until the command prompt window tells you that it has completed. The first time you run this, it may take a while because the model is being loaded. It will execute much faster on consecutive conversions (within seconds on a decent GPU).
Again, ensure that you play around with the above options and values. Only you know what works best for your project. It's a matter of trial and error.
If you have long periods of silence in your vocal track, removing them in Audition, converting the vocal pieces separately, and then manually merging them back together later can improve the output, but the improvement can be somewhat subtle (it depends on the amount of silence).

Once you convert, output information will be displayed on the left, and your completed WAV file will be available on the right. Download it, listen to it, and see how it sounds. If you don't like it, mess with some of the values and re-convert until you do.
Finishing your AI song cover
Now that you have created your best AI vocal only soundtrack for the song, you can merge it with your instrumental soundtrack from UVR.

Import your two soundtracks into Audition - your AI vocal-only file from RVC and your instrumental-only file from UVR. To lay them out on separate tracks, click on "Multitrack".
Before you export your AI cover song, now is the time to let your creative energy flow. Add reverb, effects, harmony, et cetera as you see fit. As stated earlier, Adobe Audition is incredibly powerful software used by industry professionals. Do whatever YOU want to do to make your finalized song as kino as possible. A detailed guide on how to use Audition is out of scope for this guide. However, you can find a plethora of information online dedicated to teaching the ins and outs of the software.
Good tutorials to get started:
https://youtu.be/UMrRqXCqP_k
https://youtu.be/58QRedZRA2A
UPDATE 7 Oct 2023

This is important information about how to interpret the TensorBoard graphs:
Smoothed: This represents the smoothed version of your data. TensorBoard uses a method called Exponential Moving Average (EMA) to smooth the data and make the trends more apparent. The smoothing is controlled by a weight parameter that ranges between 0 and 1. The smoothed value is calculated as a weighted average of the current value and the previous smoothed value.
Value: This is the actual value of your metric at a given step. In our case, it’s the total loss for the generator ‘g’ at a specific training step.
Step: This represents the particular iteration in the training process and is reflected in the command prompt.
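For reference, the "Smoothed" curve described above is just an exponential moving average. Here is a rough sketch of the idea (not TensorBoard's exact implementation, which also debiases the first few points):

def ema_smooth(values, weight=0.6):
    # weight is the smoothing slider in the TensorBoard UI (0 = raw, ~1 = very smooth)
    smoothed, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        smoothed.append(last)
    return smoothed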
The "loss/g/total" graph quantifies how well the generator 'g' is performing. This is usually calculated as a weighted sum of various individual losses during each iteration. The objective of the training process is to minimize this loss function.
In the context of the "loss/g/total" graph, the "value" is the loss value. The goal of training is to minimize this loss value. A lower loss value indicates that the model’s predictions are close to the actual values, which means the model has learned the underlying patterns in your data effectively.
So, when you’re choosing a final model to use after training, it’s ideal to choose the model state (or checkpoint) that had the lowest validation loss during training. This is because this state of the model had the best performance on the validation set, and is therefore expected to perform well on unseen data too.
The "loss/d/total" graph quantifies how well the discriminator 'd' is performing. This is usually calculated as a weighted sum of various individual losses during each iteration.
In the context of Generative Adversarial Networks (GANs), both the generator (g) and the discriminator (d) have their own loss functions that we try to minimize during training.
Generator (g): The generator’s loss (loss/g/total) quantifies how well it’s able to fool the discriminator into thinking the generated data is real. A lower generator loss means the generator is getting better at producing realistic data. If the generator loss (loss/g/total) is low, it means the generator is doing a good job at creating realistic data that can fool the discriminator.
Discriminator (d): The discriminator’s loss (loss/d/total) quantifies how well it’s able to distinguish real data from fake data. A lower discriminator loss means the discriminator is getting better at telling real and fake data apart. If the discriminator loss (loss/d/total) is high, it means the discriminator is having a hard time distinguishing between real and fake data, which could imply that the generator is “winning” the adversarial game.
The goal is to find a balance where both the generator and the discriminator improve together. If the discriminator loss becomes too high, it might mean that the discriminator is not learning effectively. This could lead to issues such as mode collapse, where the generator starts producing limited varieties of samples. Both the generator and discriminator should be improving over time for the GAN to learn effectively.