Guide for deiteris' optimized W-Okada RealTime Voice Changer Client (Fork)


Credits:

  • Shad: made this guide
  • vtarcelia: corrections
  • Nick088: contributions & reorganization
  • Lyery: some other info.
  • deiteris: most technical information & making the fork version.
  • wok: making the original wokada

Latest Version b2332 from December 2024

If you need HELP, ask in the AI HUB Discord Server; we, along with others, will help you in the support channels



System & Hardware Requirements


System requirements

Either one of those:

  • Windows 10 or later
  • macOS 12 Monterey or later, with an Apple Silicon or Intel CPU
  • Linux

and

  • RAM: at least 6GB.
  • Disk space: at least 6GB of free disk space. For fast model loading, SSD is recommended.

For GPU-conversion

TLDR: make sure you have an Nvidia RTX 20xx or AMD Radeon RX 5xxx or better. An RTX 4070 or better is usually ideal if you also want to run GPU-intensive games alongside it. A GTX 10xx or RX 580 will also work, but may run into issues with games and higher delay. If you only have an iGPU (mostly AMD Radeon Graphics or Vega), use the online hosted alternative instead.

--

Longer answer:

Minimum:

  • An integrated graphics card: AMD Radeon Vega 7 (with AMD Ryzen 5 5600G) or later, with 2GB VRAM (in FP32 mode) or ~1GB VRAM (in FP16 mode, if supported). This is NOT recommended at all, and we will most likely advise against running the voice changer on an iGPU.
  • A dedicated graphics card: Nvidia GeForce GTX 900 Series or later, or AMD Radeon RX 400 series or later, or Intel Arc A300 series or later.

Recommended:

  • A dedicated graphics card: Nvidia GeForce RTX 20 Series or later, or AMD Radeon RX 5000 series or later, or Intel Arc A500 series or later.

For a realistic experience, you should use a dedicated NVIDIA RTX, AMD Radeon RX or Intel Arc graphics card.

Integrated graphics cards such as Radeon Vega, especially on laptops, are already busy driving your monitor display, so they are "strained" in that sense and will need larger chunks (more delay) when running a voice changer on top, even more so if you also try to run a game.


For CPU-conversion

TLDR: don't bother. You can't run games; Discord usage might be the only thing that works decently, and you might potentially damage your CPU from the sustained load. People with no GPU usually have old CPUs, so the delay will be high too. Not worth it.

--

Longer answer:

Minimum:
Intel Core i5-4690K or AMD FX-6300.

Recommended:
Intel Core i5-10400F or AMD Ryzen 5 1600X.

CPU-Conversion is not recommended at all

If you plan on playing games at the same time, do not use CPU conversion. With the CPU, the delay will be massive and your PC will not run smoothly at all. If you have a higher-end CPU you can make it work, but those with higher-end CPUs most likely also have higher-end GPUs, so you should use your GPU if possible.


Online Alternatives [Colab/Kaggle]


This section is not finished

You can join the AI Hub Discord Server and ask for help in the #help-w-okada channel.

Online/cloud alternatives run on remote high-end GPUs. We offer 2 free options, but you could also use your preferred cloud computing site.

Colab Link

Google Colab | Google Colab Guide
Google Colab offers a maximum of about 4 GPU hours per day on the free tier (not guaranteed). You run all the needed cells one by one and create a free ngrok account. After you start "server ngrok", a URL will be created; open that URL, and that's where the voice changer will be. There's a rough tutorial included inside the link, and also a more simplified tutorial which might be easier.

Kaggle Link

Kaggle | Kaggle Guide
Kaggle offers 30 GPU hours per week (guaranteed) and requires phone number verification. It takes longer to set up, but once you have it, it is worth it. On Kaggle there should be an option to import from Colab, which is the fastest way to set it up.

You will need a virtual audio cable in both cases. Don't worry, it will run fine, as it's not an intensive task like running the AI on your PC.


Virtual Audio Cable


Virtual Cable is what you will need to use the voice changer on Discord & Games!!!

  • Run setup64, not setup64a, after extracting the zip to a new folder
  • After installing the Virtual Cable, it changes your default audio devices. Click Yes when it asks you to open the audio device settings (or press WIN+R and type "mmsys.cpl" if you closed it already), and change your Recording and Playback devices back to your usual devices. Do the same for the default communication device as well (right click -> Set as Default Communication Device)

VB-Audio Cable has been reported by users to cause random issues; it's best to use VAC Lite on Windows


Mac Virtual Cables

Blackhole Virtual Audio Cable
VB-Audio
You only need one of these.


Linux Virtual Cables

You can forward audio from an application to a device with PulseAudio. Basically, choose Pipewire for input and output in the voice changer, then forward the output in PulseAudio. Google it if needed.
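One common way to do this is to create a null sink and use its monitor as a virtual cable. A minimal sketch, assuming pactl is available (the sink name "VoiceChanger" is just an example, not from the original guide):

pactl load-module module-null-sink sink_name=VoiceChanger sink_properties=device.description=VoiceChanger

Point the voice changer's output at the VoiceChanger sink, then select "Monitor of VoiceChanger" as the input device in Discord or your game.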


Windows


Which one do I download?

You download based on your GPU. If you don't know what GPU you have, open Task Manager > Performance tab and check your GPU 0 and GPU 1 names. Prioritize the Nvidia one if you have one, otherwise use the other.
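If you prefer a command, you can also list your GPUs from PowerShell (an optional extra, not required by the guide):

Get-CimInstance Win32_VideoController | Select-Object Name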

Use the Online Hosted alternative if you have only an integrated GPU (AMD Radeon Graphics; AMD Radeon Vega; Intel UHD) or if you do not have a GPU at all



Download for NVIDIA GPU on Windows


NVIDIA

Latest as of 7th December 2024: nvidia-b2332 (click here to download)

PLEASE READ: If you get a "Failed to download or verify pretrains" error, head over to this section

If you have a GTX 700 series card or below, use the AMD/Intel version instead.


Download for NVIDIA RTX 5000-series GPU on Windows


The NVIDIA RTX 5000 series, the newest generation of GPUs, requires a separate download. You do not need it if you have an older GPU; follow the normal Nvidia link in that case

https://github.com/IllIlIlIllIl/voice-changer/releases/tag/b2335

PLEASE READ: Download all 3 files, then extract the .zip file; it will automatically extract ALL 3 FILES into one folder. Then open the MMVCServerSIO folder and run MMVCServerSIO.exe (shown as just MMVCServerSIO if file extensions are hidden).


Download for AMD, INTEL and CPU on Windows


AMD & INTEL ARC / CPU

Latest as of 7th December 2024: dml-b2332 (click here to download)

PLEASE READ: If you get a "Failed to download or verify pretrains" error, head over to this section

Intel UHD Graphics do NOT work at this point in time. Use Online Alternative.


Opening on Windows


Make sure you have 7-Zip or WinRAR for extracting/unzipping. You can also use other programs, but those 2 are the most used ones.

After the download, extract the zip file. Open the folders until you see an exe application called MMVCServerSIO and run that.

If nothing opens after a while of code loading in, open a browser and type in http://127.0.0.1:18888/. This is a local URL; the voice changer runs in a WebUI.


The next step for Windows is uploading your voice model


Mac



Download for Mac Silicon


Apple Silicon (Apple M1, etc.) users

Latest as of 7th December 2024: arm-b2332 (click here to download)

PLEASE READ: After the download, proceed to Opening Mac step because of Mac's paranoid security


Download for Mac Intel


Apple Intel users

Latest as of 7th December 2024: macos-amd-b2332 (click here to download)

PLEASE READ: After the download, proceed to Opening Mac step because of Mac's paranoid security


Opening on Mac


  • Double-click the voice-changer-macos-arm64-cpu.tar.gz file if you're on Apple Silicon, or the corresponding macos-amd .tar.gz file if you're on Intel. The voice changer will unpack and the MMVCServerSIO folder will appear.
  • Open the extracted MMVCServerSIO folder.
  • Double-click MMVCServerSIO to run the voice changer.

Apple quarantine stops you from running the voice changer

You do not get a popup notification for this, so if it does not open or says "Pytorch is damaged", do the following:

  1. Open Terminal
  2. Run the following command: xattr -dr com.apple.quarantine <PUT IN THE PATH TO YOUR MMVCServerSIO FOLDER HERE>
    For example, if you extracted the voice changer to your desktop, the command may look as follows:
    xattr -dr com.apple.quarantine ~/Desktop/MMVCServerSIO
  3. Now, open the extracted MMVCServerSIO folder and run MMVCServerSIO to run the voice changer.

If nothing opens, open a browser and type in http://127.0.0.1:18888/. This is a local URL; the voice changer runs in a WebUI.


Linux



Download for NVIDIA GPU on Linux



Download for AMD GPU on Linux



Opening on Linux


Install portaudio with sudo yum -y install portaudio. Installing the CUDA Toolkit or AMD HIP SDK is NOT REQUIRED; all other necessary libraries are bundled with the application.
I'm not sure about the capabilities of UI tar archive extractors, but you can extract these archive parts with the following command, which will merge and extract them: cat voice-changer-linux-amd64-cuda.tar.gz.* | tar xzf - (change cuda to rocm if necessary).
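A note on the portaudio install above: the yum command is for Fedora/RHEL-style distros. On Debian/Ubuntu-based distros the portaudio runtime package is usually called libportaudio2 (this is an assumption, check your distro's repositories), for example:

sudo apt-get install -y libportaudio2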


Opening on Multi PC setups


This is for when you have two PCs, for example, and wish to run the voice changer on one PC and use it from the other.

Explanation on GitHub can be found here: https://github.com/deiteris/voice-changer/issues/180#issuecomment-2359166278

In short:
Create a file named .env in the same folder where MMVCServerSIO.exe is located. Open it with Notepad and copy-paste the settings from the GitHub link.
After that, create another file with the .bat file extension, open it with Notepad, and copy-paste what is needed there, again from the GitHub link.

Now run the .bat file. After it starts, you should be able to open the link. For example, if you specified HOST=192.168.0.1 and ALLOWED_ORIGINS='["https://192.168.0.1:18888"]', you should be able to open https://192.168.0.1:18888 in your browser and use the voice changer UI from other machines on your local network.
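As a rough sketch, using the example values from the paragraph above (the complete and authoritative list of variables is in the linked GitHub issue), the .env could look like this:

HOST=192.168.0.1
ALLOWED_ORIGINS='["https://192.168.0.1:18888"]'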


Voice Models


Uploading models


  • Click Edit on the small blue square located near the top left side
  • Pick any slot you want, click upload
  • Only RVC models will work. If you have a gpt-sovits one or any other, they will not work.
  • Select Type: RVC, then select file on the Model slot and upload your .pth file.
  • No need for an Index file, but you can upload it. This controls the accent of the voice model

Voice Models to try out

Head over to this section for voice models to try out

Deleting models

If you wish to delete a model, you can overwrite the slot with a new model. If you insist on fully emptying a slot for whatever reason, head over to the model_dir folder, open the folder of the slot number you want to delete, and delete the model from that folder


Audio Setup


Discord & Games


In the voice changer app (w-okada), you select:

  • Input: Your microphone
  • Output: Virtual Cable
  • Monitor (if you wish to hear the voice changer on your headphones as well): Your headphones

On Discord and in games, you select:

  • Input / Microphone: the Virtual Cable

For Linux, read the Virtual Cable step

For the next step, you may read the rest of the guide below, or head straight to Settings


Explanation of CLIENT and SERVER mode


Audio: CLIENT

  • uses MME (normal audio processed through Windows; you use this automatically with every application)
  • You can use the echo, sup1 and sup2 boxes in this mode

Audio: SERVER

  • use S.R. 48000
  • I recommend using the [Windows WASAPI] prefix everywhere for less delay, because it accesses your audio devices (e.g. microphone) directly, before they are processed through Windows.
  • Both Input and Output have to use the same prefix (Windows WASAPI); you can't use MME for input and then Windows WASAPI for output. You may also use ASIO.
  • You can not use the in-built noise suppressions in this mode.

--

ASIO > WASAPI > MME as a general rule of thumb (this also affects delay)

--

Sometimes CLIENT does not work; in that case use SERVER with the "MME" or "Windows WASAPI" prefix. You can not use the in-built noise suppression and echo fix if you use SERVER.


Explanation of all functions


PASSTHRU button: sends your actual voice, not the voice changer, through the virtual cable. You want this to be GLOWING GREEN or GREY (grey for dark mode users) for the voice changer to work.
F0 det.: pitch detection algorithm. Don't use crepe just because a YouTube tutorial told you to; recommended settings are further down in this guide.
Chunk: controls the delay (a lower number means less delay, but please check the recommended settings for what your GPU is capable of).
Extra: controls voice model quality. 2.7s is the maximum worth using; anything above is not worth it and can cause issues for no benefit.


VOL:
In: raises the microphone volume before it goes into the voice changer (recommended to leave it at the default or, if needed, not go too high, otherwise it increases background noise)
Out: raises the voice changer's volume on the output
Mon: raises the volume of the monitor device ("mon") if you selected to hear yourself with the voice changer


Pitch: This is the pitch. Going negative makes the voice lower, going higher makes it higher. If you have a male voice and use a female voice model, aim for 10 - 14; this depends on your voice, so try around those numbers until you find a sweet spot.
Formant Shift: Alters harmonic frequencies and changes the voice timbre without affecting the pitch.
Index: Controls the accent of the voice model. In most cases, using Index on a realtime voice changer can add an autotune-like sound. If you have a heavy foreign accent, you may use this at a low rate. Beware, this increases CPU/GPU usage.


In. Sens.: microphone threshold; increasing this makes less background noise get picked up, if that's a problem.
Sup2: Noise suppression on your microphone.
Sup1: Weaker noise suppression; not recommended at all, because it barely has any impact while reportedly making the voice inconsistent.
Echo: If you experience echo issues despite having Sup2 on, In. Sens. to the right, and having lowered your Windows system volume, this will help you as a last resort.


Settings


Chunk: the delay in ms; this depends on your GPU. Read the known working settings for your GPU.
Extra: the voice model quality in seconds; 2.7 is recommended. Anything higher can improve quality, but sometimes cuts off words at the start. If you notice it cutting off words, use 2.7 max.

F0 Det.: rmvpe for Nvidia; rmvpe_onnx on the other versions unless specified otherwise in this guide. I recommend fcpe for older GPUs like the Nvidia GTX series or the AMD RX 580. rmvpe has the best quality and is the most robust, while fcpe is lower quality but faster. The other options are not suggested since they are outdated.


Advanced Settings


Advanced Settings

Protocol: rest
Crossfade length: 0.1 or 0.15 (0.1 for the fastest response, 0.15 for improved quality but about 50 ms more delay. It is NOT recommended to go below 0.1).
SilenceFront: Reduces GPU usage when idle. This only reduces GPU resources when you're not talking or making sounds.
Force FP32 mode: on (THIS IS OFF BY DEFAULT!). Turning this on improves stability and significantly reduces glitching/artifacting, but increases VRAM usage by about 200 MB.
Disable JIT compilation: off for faster program loading, on for slightly better performance (10-15 ms, Nvidia only).
Convert to ONNX: Reduces delay and slightly reduces GPU usage. Enabling this increases CPU usage by around 5-10% and reduces voice quality a bit. If you decide to enable this, pair it with rmvpe_onnx for even less delay.
Protect: Reduces the occurrence of robotic sibilants and robotic breathing, but also reduces the effect of the index file. Lower values increase the protection, higher values decrease it. The default value of 0.5 means the protection is disabled; reduce it to 0.33 to enable it.


Finding my own settings for Chunk


Finding my own settings

Start with a 512 ms chunk & 2.7s extra. Check what your perf number is and move your chunk closer to that number, but not below it.
Example: if your perf is 200, go down to a 260 ms chunk.
Chunk affects the perf value, and so does Extra.

if you don't see a picture here, report it in the #help-w-okada channel on the AI Hub server
If your perf value is green, your selected chunk is stable. You can experiment and go down in chunk for less delay, or increase Extra for more quality (I would not recommend going above 2.7s extra; anything above uses more resources for no clear benefit).

if you don't see a picture here, report it in the #help-w-okada channel on the AI Hub server
If your perf value is yellow, your selected chunk is enough, but audio may be unstable if you run other processes at the same time. Operation in this range also incurs high GPU usage. Increasing the chunk size or reducing Extra is recommended.

if you don't see a picture here, report it in the #help-w-okada channel on the AI Hub server
If your perf value is red, the voice changer is unstable. Increase chunk size or reduce Extra.


Known working settings for Chunk and Extra


These settings are intentionally higher than what your GPU is capable of

If you are playing a video game with the voice changer, you will have to increase the chunk beyond what you can usually handle.
This is because both the game and the voice changer run on the GPU, and the game always takes higher priority by default, so the listed settings are safe options that should work with most games.
If you run into issues, you will need to lower the game's quality and limit your FPS, or increase the chunk. It is best to tweak your game's settings first.

It is recommended to move on to Finding my own settings once you are comfortable with the program

If you are on Linux, I suggest reading the Finding my own settings part above instead, since Linux performs much better than Windows.

for NVIDIA on Windows

F0 det: rmvpe

NVIDIA GPU: Recommended settings for gaming
RTX xx90 (e.g. 3090): 72 ms chunk + 2.7s extra
RTX xx80 Ti (e.g. 3080 Ti): 72 ms chunk + 2.7s extra
RTX xx80 (e.g. 3080): 192 ms chunk + 2.7s extra
RTX xx70 Ti (e.g. 3070 Ti): 128 ms chunk + 2.7s extra
RTX xx70 (e.g. 3070): 128 ms chunk + 2.7s extra
RTX xx60 Ti (e.g. 3060 Ti): 192 ms chunk + 2.7s extra
RTX xx60 (e.g. 3060): 192 ms chunk + 2.7s extra
RTX xx50 (e.g. 3050): 192 ms chunk + 2.7s extra
GTX 16xx series: 256 ms chunk + 2.7s extra
GTX 10xx series: 320 ms chunk + 2.0s extra
GTX 900 series: 384 ms chunk + 1.0s extra
MX 330: 704 ms chunk + 0.6s extra

for AMD RX on Windows

F0 det: rmvpe_onnx

AMD GPU: Max settings
7xxx XT cards: 128 ms chunk + 2.7s extra
6xxx XT cards: 128 ms chunk + 2.7s extra
5xxx XT cards: 192 ms chunk + 2.7s extra
7xxx cards (bugged): 256 ms chunk + 2.7s extra
6xxx cards: 192 ms chunk + 2.7s extra
5xxx cards: 256 ms chunk + 2.0s extra (not tested)
RX 6600M: 192 ms chunk + 2.7s extra
RX 580: perf number + 60 ms chunk
RX 570: perf number + 60 ms chunk
RX 560: perf number + 80 ms chunk

for AMD integrated graphics on Windows

F0 det: rmvpe_onnx, try fcpe_onnx as well

AMD iGPU: Chunk + Extra
AMD Radeon(TM) Graphics (with Ryzen 7 5800H): 256 ms + 2.7s extra
AMD Radeon RX Vega 10 (with Ryzen 7 3700U): 600 ms + 0.6s extra
AMD Radeon RX Vega 8 (with Ryzen 3 3200G): 700 ms + 1.0s extra
AMD Radeon Vega 8 (with Ryzen 5 8500G): 500 ms + 1.0s extra

Mac & CPUs

Mac & CPU: F0 Det. + Chunk + Extra
Mac M1: fcpe; for chunk, check the perf number and add 50 to it; 1.0s extra
Mac M1 Air: fcpe + 230 ms + 2.7s extra
Mac M1 Pro: rmvpe_onnx + 450 ms + 2.7s extra
Mac M2: rmvpe_onnx + 650 ms + 1.0s extra
Mac M2 Air: fcpe; for chunk, check the perf number and add 50 to it; 2.7s extra
Ryzen 7 5800X: rmvpe_onnx + 260 ms + 0.6s extra

Extras


Information


Best choice for AMD GPU users

This fork is a lot better for AMD GPUs compared to the original w-okada. The original requires converting models to ONNX, which is annoying, uses more CPU and GPU resources, has a lot more delay, and has other little inconveniences/bugs.

Example: an AMD RX 6650 XT's lowest latency is a 298 ms chunk on the original w-okada. On this fork the lowest latency is around a 60 - 80 ms chunk.


Better for NVIDIA than w-okada

This fork is better for NVIDIA users who normally use the prebuilt w-okada version, because this version processes Extra on the GPU, while the original uses the CPU.

For RTX GPUs the difference in delay is minimal, but quality is better. For older cards like the GTX or MX series, this fork performs better in all aspects.

Example: an NVIDIA RTX 3070 on prebuilt w-okada reaches 170 - 213 ms chunk latency. A manually set up w-okada environment reaches 42 ms chunk latency. On this fork it can reach 30 - 38 ms chunk latency, depending on the Extra set. Keep in mind these are settings pushed to the maximum, without a video game or intense operations running in the background.


  • Less delay on all hardware compared to the w-okada prebuilt
  • Better voice model quality (the original prebuilt is bugged)
  • No ONNX struggles for AMD/Intel/CPU users
  • Frequent, user-friendly updates

Reduce more delay


Reducing more delay with WASAPI guide

https://rentry.co/LessDelayWasapi

Reducing more delay with the ASIO guide. This can decrease delay slightly more, but takes more effort to set up

https://rentry.co/lessdelayasio


Voice Models to try out


You might need to connect your account to weights.com to be able to download these models

To download a model from Weights.com:

  1. Log in
  2. Click the 3 dots at the right of the model's image
  3. Click Download
  4. Click Download Anyway
  5. Unzip the zip; you may want to rename the .pth and .index files, since all models on Weights are named 'model'

The default voice models from the original:
https://huggingface.co/wok000/vcclient_model/resolve/main/rvc_v2_alpha/tsukuyomi-chan/tsukuyomi_v2_40k_e100.pth
https://huggingface.co/wok000/vcclient_model/resolve/main/rvc_v2_alpha/kikoto_kurage/kikoto_kurage_v2_40k_e100.pth
https://huggingface.co/wok000/vcclient_model/resolve/main/rvc_v2_alpha/tokina_shigure/tokina_shigure_v2_40k_e100.pth
https://huggingface.co/wok000/vcclient_model/resolve/main/rvc_v2_alpha/amitaro/amitaro_v2_40k_e100.pth

Our suggestions:
Female:
Duckus Egirl voice made by lusbert
Psych2Go voice made by dan

Male:
Bob Ross voice made by dieseldog34
Markiplier voice made by hobqueer


Help


Press Enter to continue / Failed to download or verify files


After you start the program for the first time and it finishes downloading files: if it says Failed to download or verify: ... followed by "Press Enter to continue" at the end, the pretrain download failed. This can happen randomly. Here is what you will need to do:

Fix

Go to the "pretrain" folder in the MMVCServerSIO folder.
Delete everything inside it if there is anything.

Download ALL THE FILES from this drive:
https://drive.google.com/drive/folders/1OFfM9rmxCZkiYjxoK_yzhRbcXpt0TiJ0?usp=drive_link

Copy paste everything from this Google Drive inside the pretrain folder.

Then run MMVCServerSIO.exe again; this time it should work.


Troubleshooting


There is an in-depth section on troubleshooting, please check out Troubleshooting on Github

Crackle Fix


Open Task Manager > Details

Right click audiodg.exe and set priority to High

Right click audiodg.exe again > set affinity > uncheck everything except CPU 2 (only ✅ CPU 2, turn off the rest)

With a program called Process Lasso you can automate this so it stays active, since Windows sometimes resets these settings.
Or you can open CMD/PowerShell (or make a bat file) and type in:

powershell "ForEach($PROCESS in GET-PROCESS audiodg) { $PROCESS.ProcessorAffinity=4; $PROCESS.PriorityClass='High' }" 

Discord Crackle Fix


Make sure to do the Crackle Fixes in the step above first, to see if that already fixes your issue

If the voice sounds fine in the app AND it sounds fine in games, but ONLY sounds weird on discord, then:

  • Turn off Echo Cancellation
  • Turn off Noise Suppression (sometimes causes issues, maybe not. Check for yourself)

Discord Server


AI Hub: https://discord.gg/aihub


FAQ



Why does it run in a browser and not its own window?


Because it uses a Web User Interface (WebUI) coded in JavaScript & TypeScript. The majority of AI programs are designed to run in the browser, since they can be used both in the cloud and locally (on your PC), and it can also avoid some performance issues. The original w-okada also ran on a WebUI, it just opened its own window.


What browser should I use?


It's better to try and test for yourself; some people had issues on Chrome, others on Firefox. It might depend on the settings you use and on JavaScript/TypeScript issues. The browser most often reported to have issues is OperaGX, which is why we don't suggest it much.


Why are most YouTube (Video) Tutorials old? Is there going to be an updated one?


YouTube tutorials take much more time to make and get outdated easily in this case, as AI progresses fast and keeps changing for the better, with different settings and versions. Written guides are easier to update, since you don't have to remake an entire video. It's unknown if we will ever release a video, since they get outdated easily, but if we do, it will be linked inside this guide.


Do I need an extremely expensive mic for good quality?


We had a conversation about this in https://discord.com/channels/1159260121998827560/1159290161683767298/1352325982689951765 & https://discord.com/channels/1159260121998827560/1159290161683767298/1356265862704926907.
RVC works by downsampling your voice audio to 16 kHz, because the f0 estimators only work at that sample rate; after that, the model outputs the result at its original sample rate (without any upscaling). So there is no need for a super expensive mic; a decent one should do the job.


Is Wokada Deiteris Fork Safe? Are the RVC Models Safe?


Short answer: Yes

Detailed answer:
The Wokada Deiteris Fork is open source; you can check the source code and see that there isn't anything bad in it.
RVC models are PyTorch models; PyTorch is a Python library used for AI.
PyTorch serializes models via Python's pickle module, converting the model to a file.
Since pickle can execute arbitrary code when loading a model, it could theoretically be used for malware, but the Wokada Deiteris Fork has a built-in feature to prevent code execution when loading a model.
Also, HuggingFace has a security scanner which checks for unsafe pickle exploits and also uses ClamAV to scan for dangerous files.

Pub: 31 Jul 2024 12:56 UTC
Edit: 24 Apr 2025 18:38 UTC