RVC 101
guide by freesmart
Note: VoiceChangerGuide is linked here! Credit goes to Raven for writing it!
All contributors welcome!
Learn about RVC, w-okada and more!
About RVC
RVC (Retrieval-based Voice Conversion) is an AI project that, as per the name, converts voices. RVC handles:
- making your favorite AI covers you have heard!
- preserving people's voices to archive and enjoy forever!
- voice changing!
All within a single program!
Want to get started? Jump to the section for what you want to do.
RVC
Section 1
Get RVC
Before doing ANYTHING in here, you need to get an RVC installation.
Colabs:
These run RVC in Google Colab through its form-based pseudo-UI.
Hugging Face
These run on CPU, so they're not very efficient. However, if you have a credit card, you can duplicate one of these Spaces and use a GPU!
Local
Download and extract the files. You can also use an installation script if provided.
Other ways
You can go to the AI Hub server and use the wordband bot or astra labs. These are NOT listed in the Inference section.
CUDA (for the NVIDIA users)
macOS
macOS does NOT have CUDA.
(By the way, I've only tested part of this myself. Contributors are welcome!)
CUDA is NVIDIA's own GPU toolkit. RVC has support for CUDA because it uses the "torch" library. If you don't have it, download CUDA from here and install it:
https://developer.nvidia.com/cuda-downloads
Then run the commands for your setup:
For CUDA 12.1:
- Windows (pip):
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
- Linux (pip):
pip3 install torch torchvision torchaudio
- Windows and Linux (conda):
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
For CUDA 11.8:
- Windows and Linux (pip):
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
- Windows and Linux (conda):
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
The next time you launch RVC, CUDA will be installed and RVC can use your NVIDIA GPU!
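Want to check that the CUDA build of torch actually installed? Here's a quick sanity check you can run in the same Python environment RVC uses (it only uses standard torch calls):

import torch

# True means torch was built with CUDA support and can see your GPU
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Name of the first detected NVIDIA GPU
    print("GPU:", torch.cuda.get_device_name(0))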
Models
Go to one of these sites to find a model:
Then download a model. Extract the zip, and put the files in their respective folders (there's a scripted version after this list):
- .pth: (RVC Root)/weights
- .index: (RVC Root)/logs
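If you'd rather script the file sorting, here's a minimal sketch that unzips a downloaded model and drops the files into those folders. The zip name and RVC root path are placeholders; adjust them to your setup:

import shutil
import zipfile
from pathlib import Path

RVC_ROOT = Path("C:/RVC")          # placeholder: your RVC install folder
MODEL_ZIP = Path("my_model.zip")   # placeholder: the model zip you downloaded

extract_dir = Path("model_tmp")
with zipfile.ZipFile(MODEL_ZIP) as zf:
    zf.extractall(extract_dir)

# .pth files go to weights/, .index files go to logs/
for f in extract_dir.rglob("*"):
    if f.suffix == ".pth":
        shutil.copy(f, RVC_ROOT / "weights" / f.name)
    elif f.suffix == ".index":
        shutil.copy(f, RVC_ROOT / "logs" / f.name)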
Covering
You're now ready to make a cover, aka an inference!
Run the "go-web.bat" file or "go-applio.bat" file. Open the local link that gradio shows. On Colab, just follow the cells. Note that free users cannot have the UIs. For hugging face, just open the space.
Normal UI
If you're downloading a song directly, make sure you rip out the vocals first and plug those in! See the "For characters" section under Training for how.
First, make sure you've downloaded a model. Select the model from the dropdown. If you were running the UI while downloading, refresh the model list.
Download an acapella/vocals from YouTube or somewhere else, and copy its path. Put it in the audio path field, and press convert or infer. You can change the transpose and other settings in the remaining panels, but we won't get into that here.
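If you'd like to grab audio from YouTube with a script instead of a site, one option (not part of RVC; this assumes you have the yt-dlp package and ffmpeg installed) is:

import yt_dlp  # assumption: pip install yt-dlp, ffmpeg on PATH

opts = {
    "format": "bestaudio/best",
    "outtmpl": "song.%(ext)s",
    # re-encode the downloaded audio to WAV via ffmpeg
    "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "wav"}],
}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=PLACEHOLDER"])  # placeholder URL

Remember: this grabs the full mix, so you'll still need to separate the vocals (see the UVR section under Training).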
EasyGUI
First, make sure you've downloaded a model. Select the model from the dropdown. If you were running the UI while downloading, refresh the model list.
Download an acapella/vocals from YouTube or somewhere else, and upload it! You can also create TTS audio, if you want. Then, you can change the pitch and convert it!
You can then download the output, and use it for whatever purpose!
POST IT PUBLICLY AT YOUR OWN RISK!
Training
You might need a new model because:
- It doesn't exist yet
- The model you want is just straight garbage
- Unspecified reason :troll:
Well, you can train it! There's a 3-step process:
- Create a dataset
- Train off the dataset
- Publish
1. Creating (and uploading) a dataset
First, you want to download some audio samples of the character/sound you want (e.g. controller noises).
If you're doing a character, you can rip the audio out!
Then cut the clips down to just the character talking; it's best to leave out silence and other noises. (Don't do this part if you're doing a sound.)
Then you can export the clips as WAV files. (Want to automate this? See the sketch below the next question.)
"Why wav files? Wouldn't mp3s work better?"
Kind of, but in my testing, WAV files always make it through feature extraction.
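Here's a minimal sketch for automating the cutting and WAV export, using the pydub package (an assumption; it isn't part of RVC, and it needs ffmpeg installed to decode mp3/m4a):

from pathlib import Path
from pydub import AudioSegment
from pydub.silence import split_on_silence

# Load whatever you downloaded (mp3, m4a, etc.)
audio = AudioSegment.from_file("raw_clip.mp3")  # placeholder file name

# Split on stretches of silence so the dataset is mostly speech
chunks = split_on_silence(
    audio,
    min_silence_len=500,   # at least 0.5 s of quiet counts as a gap
    silence_thresh=-40,    # anything below -40 dBFS counts as silent
)

# Export each chunk as a WAV file into the dataset folder
Path("dataset").mkdir(exist_ok=True)
for i, chunk in enumerate(chunks):
    chunk.export(f"dataset/clip_{i:03}.wav", format="wav")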
For characters:
Maybe a stray sound or two slips in; no worries! You can rip the vocals out of the dataset audio!
You can use a site like Vocal Remover, or UVR!
UVR
First get UVR from the website.
There's a UVR tab in my RVC installer. Won't you mention this?
no.
Then, download the audio you want.
Open UVR and then select the audio you downloaded for input. (1, top)
Then, find a folder you want to put the output audio in, and select it. (1, bottom)
Next, click on the wrench icon to the left of "Start Processing" and go to the "Download Center" tab.
- To separate the vocals from the instrumental, download and use one of the following (Process Method: MDX-Net): Kim Vocal 2, UVR-MDX-NET-Voc_HQ 1, UVR-MDX-NET-Voc_FT (best for less powerful GPUs), or MDX23C-InstVoc HQ (the best one, but it needs a more powerful GPU than the others).
- To remove harmonies from vocals, use one of the following (put the vocals as input): MDX-Net Karaoke 2 or VR-Arch 5_HP-Karaoke.
- To remove reverb from vocals (a close echo), use one of the following (put the vocals as input): MDX-Net Reverb HQ, or VR-Arch DeReverb / VR-Arch DeEcho-DeReverb.
- To separate 2 or more voices singing at the same time, use the following one or more times (put the vocals as input): (Process Method: VR Arch) 6_HP-Karaoke-UVR.
(Reference: the official AI Hub docs.)
Choose a process method, and a model. Start processing and go!
Vocal Remover
First, input the file. Then turn the vocal volume up so you can hear everything. Save the "Vocal" output. You can use that as a dataset!
Ilaria Audio Analyzer
Ilaria Audio Analyzer is a tool that can tell you about the dataset you're using! Plug in your dataset, and see the specifics!
Dataset Maker
Refer below.
- https://colab.research.google.com/drive/1NA2GuJ2y-zRfoG3NearNiMCQa8NbidSh
- https://github.com/dubverse-ai/rvc-data-prep
Colab Users, do this!
Go to your google drive, and create a folder called "rvcDisconnected".
Then, find your dataset folder, and zip it up. (If you want to script this, see the sketch below.)
You should end up with a Drive structure just like this:
MyDrive/rvcDisconnected/yourDataset.zip
Upload your dataset zip to the folder you just created.
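If you'd rather zip the dataset from Python than your file manager, one call does it (the folder name is a placeholder):

import shutil

# Zips everything inside ./my_dataset into my_dataset.zip
shutil.make_archive("my_dataset", "zip", "my_dataset")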
2. Train
Colab (RVC Disconnected)
Run the Dependencies cell. This is important; if you skip it, you won't be able to train your model.
Then set the training variables as shown.
Color | Action
---|---
Green | Model name
Red | Leave alone
Blue | Optional to change
Then, in the next cell, put the name of the zip file you uploaded above and load the dataset.
Preprocess the data, and extract the features.
I'm using WAV files, but when I extract features, it says no-feature-todo! Why?
You might need to run it through cloudconvert.
This commonly happens with CapCut audio.
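A local alternative to cloudconvert (my suggestion, not from the original guide) is re-encoding the file to plain PCM WAV with ffmpeg, which usually fixes oddly encoded audio:

import subprocess

# Re-encode to 16-bit PCM WAV; assumes ffmpeg is on your PATH
subprocess.run(
    ["ffmpeg", "-i", "weird_capcut_audio.wav", "-acodec", "pcm_s16le", "clean_audio.wav"],
    check=True,
)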
If you need to retrain your model, save the preprocessed dataset!
How do I reload it?
The next cell will do that!
Along with that, you have a model import cell.
Now you can run the Training cell!
After the program closes successfully, you can export your model from the notebook to your drive.
You can now train your index file!
I get an error saying "No features exist for this model yet. Did you run Feature Extraction?", what did I do wrong?
Either run Feature Extraction (which you may have already done),
or see "I'm using WAV files, but when I extract features, it says no-feature-todo! Why?" above.
Local
WAIT!!!1!!!1!!!1!!1
You're gonna need a BEEFY GPU for this.
Enter the model's name in the respective field, and leave the next 4 settings alone if you don't know what you're doing.
Then find your dataset folder, and copy its path.
In step 2a, enter your path, and click the process data button.
In step 2b, extract the features.
Finally, in step 3, click train model. Once that's done, train feature index.
HuggingFace
you don't. EVER.
3. Publish
Make a sample with your newly trained model. Here's a table of samples:
(Create a sample by inferencing)
"Why did you leave out the instrumentals?"
AI Hub does not want you posting samples with instrumentals.
Zip the model's .index and .pth files together (sketch after this list); you can find them here:
- .pth: (RVC Root)/weights
- .index: (RVC Root)/logs
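Here's a minimal sketch for bundling the two files into one zip (the file names and the index path are placeholders; the index usually sits in a subfolder of logs named after the model):

import zipfile

# Bundle the trained weights and index into one zip for upload
with zipfile.ZipFile("MyModel.zip", "w") as zf:
    zf.write("weights/MyModel.pth", arcname="MyModel.pth")
    zf.write("logs/MyModel/added_MyModel.index", arcname="added_MyModel.index")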
Create a Hugging Face account, create a model repo, and upload your model zip there.
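You can upload through the browser, or, if you prefer Python, the huggingface_hub package can do it (the repo name is a placeholder; you'll need a write token from your account settings):

from huggingface_hub import HfApi  # assumption: pip install huggingface_hub

api = HfApi(token="hf_your_token_here")  # write token from your HF settings
api.upload_file(
    path_or_fileobj="MyModel.zip",
    path_in_repo="MyModel.zip",
    repo_id="your-username/MyModel",  # placeholder: the model repo you created
    repo_type="model",
)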
Join the AI Hub server, then submit your model and sample in the #get-model-maker channel using the /submit command.
Model Makers:
Post your model and samples in the voice-models channel.
W-okada (Voice changer)
Section 2
Getting started
Here's the download page, neatly on HF!
You will also need a virtual microphone. Find some here. (this leads to VoiceChangerGuide)
The UI
The important things are marked in red and/or with numbers; refer to the table below.
Number | Note
---|---
1 | CHANGE THIS TO GPU!
2 | Put your MICROPHONE
3 | Put your virtual microphone's input device. For example: CABLE Input (VB-Audio Virtual Cable)
Once done, hit start!
Using the Voice Changer
Change the input microphone in your app to the virtual microphone. For example: CABLE Output (VB-Audio Virtual Cable)
guide by freesmart