EOL - No further Updates
Github - Blanc-dot
Discord User ID - https://discord.com/users/824922747423031359
Despite this guide being end of life, most if not all of the information has not changed, so it should stay accurate until genuinely new stuff comes out.

Installing Mainline RVC

The quick and dirty explanation is as follows:

  • First, navigate to their official releases page.
    • You will see "complete packages" without even needing to scroll; click the one matching your GPU and the download should start immediately.
  • If you would like my custom UI, which removes a ton of useless features (you can still use the normal UI if you need it) and sets better defaults, go to my GitHub repo, press the Code button, and download as ZIP, or go to Releases and download the version matching your current RVC (if more come out).
    • Alternatively, you can look at this RENTRY and make your own custom UI; it tells you the line numbers and what to change, but it won't be updated past the 1006 version.
  • You will now need to wait for the official RVC to finish downloading, then extract it, ideally to your desktop, for the fewest issues.
    • After it has been extracted, if you downloaded my RVC edits, unzip them into the root of the RVC folder, i.e. where you see all the other files.
      • If you're also using my edits, make a folder called datasets inside the root of RVC; this is where you'll just dump the audio files for the model you are currently training. My UI defaults to this path, so you don't have to enter it manually each time (a minimal sketch of this setup follows the list).
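
A minimal sketch of that folder setup in Python, assuming you extracted RVC to your desktop (the folder name "RVC" is an assumption, adjust it to match yours):

```python
# create the "datasets" dump folder in the RVC root
from pathlib import Path

RVC_ROOT = Path.home() / "Desktop" / "RVC"  # assumption: extracted to the desktop
(RVC_ROOT / "datasets").mkdir(parents=True, exist_ok=True)
print(f"Drop your audio files into {RVC_ROOT / 'datasets'}")
```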

To launch the official UI, run go-web.bat; if you are using my edit, run RavenUI.bat or EnropUI.bat (depending on the version). File extensions may be hidden, so look at the Type column to the right of the file name: if it says "Windows Batch File", that's the one you want to run.

Alternative way to install Mainline RVC

Grab the bat file(s).
Read through the readme file.
Place the bat file on your desktop or in its own folder, run it, and let the magic happen as it fully installs the modified mainline RVC. Alternatively, edit the file and replace the REPO_URL with the official repo (a rough sketch of what the installer does follows).
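
If you'd rather see what the installer is doing than trust the magic, here's a rough Python equivalent of the bat file's job (clone, venv, requirements); this is a sketch only, and the real bat may also fetch pretrained models and other assets:

```python
# rough sketch of the installer's steps; the actual bat may do more
import subprocess

# swap this URL the same way the guide says to swap REPO_URL in the bat file
REPO_URL = "https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI"

subprocess.run(["git", "clone", REPO_URL, "RVC"], check=True)
subprocess.run(["python", "-m", "venv", "RVC/venv"], check=True)
subprocess.run(["RVC/venv/Scripts/pip", "install", "-r", "RVC/requirements.txt"], check=True)
```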

Model Training

This will cover the following:

  • Datasetting
    • Gathering
    • Processing
  • Training
    • Local
    • Colab

Gathering Your Dataset

Datasets are just audio files that have been processed to remove the things you do not want in them.
https://stacher.io/ is an easy way to get datasets.
It isolates each download into your Videos folder and creates a new subfolder so you can stay organized.

8-15 minutes is usually more than enough data after it has been processed, or 20-30 minutes before anything has been done to it (a quick duration-checking sketch follows).
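
If you'd rather not eyeball it, here's a quick way to total up how many minutes a folder of audio actually contains; soundfile is my choice here, not something RVC requires, and the folder name assumes the datasets setup from earlier:

```python
# total the duration of every wav/flac/ogg under the datasets folder
# requires: pip install soundfile
from pathlib import Path
import soundfile as sf

total_seconds = sum(
    sf.info(str(p)).duration
    for p in Path("datasets").rglob("*")
    if p.suffix.lower() in {".wav", ".flac", ".ogg"}
)
print(f"{total_seconds / 60:.1f} minutes of audio")
```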

Processing Your Dataset

  • For Beginners
    • We will be using UVR, or any website that can pull just the voices out of your dataset; if your audio doesn't have any external noise you can skip this part.
    • In UVR (or whatever site), try using MDX23C; if you can't, use Voc FT. This will split your audio so it has little to no background noise.
    • Assuming you don't have a DAW or RX Pro, just use Audacity; it'll work, but you should still grab some plugins: TDR Nova, Kilohearts Dynamics, Bertom Denoiser, T-De-Esser, and any free noise-suppression plugin you can find. If you can't find one, just use Audacity's Generate Silence.
    • In Audacity, assuming you found noise suppression of some sort, run it. Next, use Kilohearts Dynamics to compress at a 2.0:1 ratio from -22 dB, and at 1:2.0 from around -50 dB (the values aren't exact, they're just reference numbers; the point is to compress the peaks so the level is more in line, and to cut the background noise down a bit more). Now normalize to -5 dB; RVC will normalize the audio anyway, so you might as well raise it up and remove even more noise yourself.
    • Now that you have normalized, run your noise suppression again if there's still background noise. You can then run your de-esser if you think the "s" sounds are too harsh; just use one of its presets. Next, slowly sweep through the audio and generate silence over gaps that don't include any talking, or over nonsense noises. It doesn't need to be perfect. If you have a de-reverb plugin and there is reverb, this is where you'd use it as well.
    • You can now run Truncate Silence with a -60 dB threshold, 0.050 s detection, and 0.125 s duration. OR, instead of truncating, you can use Label Sounds (found in the Analyze menu): threshold -42.5 dB, minimum silence duration 0.125 s, minimum label interval 1.000 s, leading 0.035 s, trailing 0.090 s.
    • If you used Truncate Silence, just export to your dataset folder like you normally would: one large file.
    • If you used Label Sounds, do Export Multiple, split by labels, and put the output in your dataset folder. This creates many, many files, sometimes thousands (a script version of this step follows the list).
      • This is not a be-all and end-all, as there will always be something different you have to do; this just gives you a good starting point. If you want better results, I suggest getting a proper DAW and learning how to edit audio inside it. UVR + Audacity with free plugins is just a way of making models that are decent at best.
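
If you want the Label Sounds + Export Multiple step as a script instead of clicking through Audacity, here's a minimal sketch with pydub; the input file name is a placeholder, and the thresholds mirror the guide's numbers but are starting points, not gospel:

```python
# split a cleaned dataset file on silence, roughly like Audacity's Label Sounds
# requires: pip install pydub (and ffmpeg on your PATH)
from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_file("cleaned.wav")  # placeholder: your processed export

chunks = split_on_silence(
    audio,
    min_silence_len=125,    # ms; ~0.125 s, the guide's minimum silence duration
    silence_thresh=-42.5,   # dBFS; same threshold as the Label Sounds step
    keep_silence=90,        # ms of padding, close to the leading/trailing values
)

for i, chunk in enumerate(chunks):
    chunk.export(f"datasets/chunk_{i:04d}.wav", format="wav")
print(f"wrote {len(chunks)} clips")
```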

If you are using Audacity, get VSTs; otherwise you are not going to have a fun experience, as the default effects are really not that good. That is why I have already recommended some, but there are plenty out there that can do far more, and with a good plugin set, UVR is shit in comparison. The only reason you want to use UVR is to separate music from vocals, and that's it.

  • For more advanced users
    • I'm not going to fully guide you, since you should be able to figure out most of what's listed here on your own, but you will have a really good starting point, with 80% of your work done just from following the processing order.
    • You can use the Demucs CLI, or UVR's MDX23C if you still want to. RX 10's Music Rebalance works too, but the other options are better.
    • Open RX Pro or your DAW of choice and process. Here's an example order: de-phase (adaptive phase rotation) -> initial de-noise (typically spectral de-noise) -> compress to average everything out and get rid of peaks (a 2:1 ratio works fine) -> normalize to -3 dB -> final de-noise (spectral de-noise again; target another silent section, and it should remove the last of the noise you don't really want in your models) -> then process however you need: de-reverb, de-ess, de-click, etc.
    • You can now run Truncate Silence with a -60 dB threshold, 0.050 s detection, and 0.125 s duration. OR, instead of truncating, you can use Label Sounds (found in the Analyze menu): threshold -42.5 dB, minimum silence duration 0.125 s, minimum label interval 1.000 s, leading 0.035 s, trailing 0.090 s.
    • If you used Truncate Silence, just export to your dataset folder like you normally would: one large file.
    • If you used Label Sounds, do Export Multiple, split by labels, and put the output in your dataset folder. This creates many, many files, sometimes thousands if your dataset is extremely long.
      • This is not a be-all and end-all, as there will always be something different you have to do; this just gives you a good starting point. Over time you'll learn almost purely from the visuals what to do, but always remember to listen to your dataset: it should look and sound good.
  • If you are using Colab, zip your dataset and follow the instructions to upload it where it needs to be (a one-line zip sketch follows); if local, ignore this.
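
The zip step is a one-liner with the standard library; "datasets" here is the dump folder from earlier, adjust if yours differs:

```python
# produces dataset.zip in the current folder, ready for Google Drive
import shutil

shutil.make_archive("dataset", "zip", "datasets")
```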

Training Locally

I recommend Mainline RVC, edited using https://rentry.co/Main-RVC-Edits (the edits might not be accurate for Mangio or RVC Studio, but they might work for Applio).
If you want to use Mangio-Crepe as your f0 method, then use the Mangio fork, RVC Studio, or Applio.

  • Name your model and choose its sample rate (most models end up as 32k, rarely will one push 40k, and maybe 1 in a million will be 48k; you can verify with a spectrogram. If the content consistently reaches 20 kHz it's a 40k model, consistently 24 kHz means a 48k model, and for anything below 18 kHz on the spectrogram just make a 32k model, you'll get much better results. A rough script for this check follows the list.)
  • Enter the path to your dataset (if you didn't edit RVC) and press Process Data.
    • While it is processing, choose what you want to train with. RMVPE_GPU is typically the best option here; if you have Mangio-Crepe you can also use that, but for the most part RMVPE_GPU should be what you use.
  • Do your feature extraction once "end preprocess" appears in the Process Data output.
  • Always train the index first, it's faster; then train your actual model.
    • If you edited the file and set your default to 48k, it will sometimes error. To fix it, go back to the top, change your model to 40k, then back to 48k after a few seconds.
  • Batch size is how many clips it grabs at once to learn the similarities between them. If you have a bad dataset (i.e. you were too lazy to clean a lot out), higher works better; if you have a really good dataset, lower works better.
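
Here's a rough way to automate the spectrogram check from the first step, using librosa's spectral rolloff; the file path is an example, and the cutoffs are the guide's rules of thumb, not hard limits:

```python
# estimate where the audio's energy ceiling sits, to pick 32k/40k/48k
# requires: pip install librosa
import librosa
import numpy as np

y, sr = librosa.load("datasets/sample.wav", sr=None)  # keep the native rate
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.99)
ceiling_khz = float(np.median(rolloff)) / 1000
print(f"~99% of the energy sits below {ceiling_khz:.1f} kHz")
# per the guide: consistently ~20 kHz -> 40k model, ~24 kHz -> 48k,
# anything below 18 kHz -> just make a 32k model
```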

The files you want in mainline RVC will appear in the following folders:

assets/weights
logs/<model name>

  • If you plan on uploading your models to AI Hub, you'll need to upload to Hugging Face a zipped folder that contains your added_* .index file and your Model.pth (a sketch of building the zip follows).
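
A minimal sketch of building that upload zip; the model name below is a placeholder, and your real index file will follow RVC's added_*_v2.index naming, so substitute your actual file names:

```python
# bundle the weights and index for an AI Hub / Hugging Face upload
import zipfile

MODEL = "MyModel"  # placeholder name
with zipfile.ZipFile(f"{MODEL}.zip", "w") as z:
    z.write(f"assets/weights/{MODEL}.pth", f"{MODEL}.pth")
    z.write(f"logs/{MODEL}/added_IVF256_Flat_nprobe_1_{MODEL}_v2.index",
            f"added_IVF256_Flat_nprobe_1_{MODEL}_v2.index")
```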

Training on Colab

Check AI Hub to see which Colab is being used; to my knowledge it is RVC Disconnected.
https://colab.research.google.com/drive/1XIPCP9ken63S7M6b5ui1b36Cs17sP-NS#scrollTo=ZodNcumpg-JM (if it gets updated I will not update the link, and if GUIs come back, just follow the local instructions after you upload your dataset to Drive)

  • Run the dependency box to install everything needed, and while it is doing that, fill out Step 2, the training variables.
    • The experiment name is the model name, i.e. who you are making your dataset of.
    • Most models will tend to be 32k; rarely will they be 40k, and an extremely rare outcome is a 48k model. You can look at a spectrogram view to figure it out: take whatever frequency the content consistently averages, say 18 kHz; doubled, that gives you 36 kHz, which means you can risk training your model at 40k. If it's unable to actually reach that, your S, TH, and F sounds will be semi-distorted; alternatively you can slightly kneecap your model but guarantee it works by training at 32k.
    • You can also change the pitch extraction algorithm if you want, but RMVPE and Mangio-Crepe are the most common, as they are just good.
  • When Step 1 is done, run Step 2, which uses everything you just filled out.
    • Upload your zipped dataset to your Google Drive; whatever you named it is what you put here.
    • Run preprocessing, then feature extraction, then save the preprocessed data to your Drive.
  • Load your preprocessed data.
    • Skip ahead to index training and train the index first; it's really fast.
  • Scroll back up and set up everything you need for training; the defaults are fine...
    • You can also change them, since some models will need to be trained even more than what is listed.
  • Finally, export your model.
    • If you plan on uploading your models to AI Hub, you'll need to upload to Hugging Face a zipped folder that contains your added_* .index file and your Model.pth (an upload sketch follows).
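
And a hedged sketch of the Hugging Face upload itself, assuming you've already created the repo and logged in with huggingface-cli login; the zip name and repo id are hypothetical:

```python
# push the zipped model to a Hugging Face model repo
# requires: pip install huggingface_hub
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="MyModel.zip",
    path_in_repo="MyModel.zip",
    repo_id="your-username/MyModel",  # hypothetical repo id
)
```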