[GUIDE] Voice cloning with TalkNet
TalkNet is currently the most versatile FOSS method for cloning a voice. Not only can you create great-sounding TTS models, but those models can also follow reference audio, which makes it perfect for deepfaking.
There are alternatives to TalkNet such as Tacotron2, which you can train with the Voice Cloning App. It's not as flexible as TalkNet but it's still a great option, though not so much for deepfakes as it cannot follow a reference. The Voice Cloning App allows you to create datasets (a feature you may want to use even if you train with TalkNet), train a model, and synthesize audio in just a few clicks. If you're interested in learning how to use the Voice Cloning App, please check out @TMBDF's guide.
If you just want the best results regardless of cost or privacy, there are proprietary options such as ElevenLabs.io. They offer the highest-quality and simplest way to clone a voice.
Voice Cloning with TalkNet
I will try to explain all necessary steps below, but you may also find this guide from "Awesome Face" @ UberDuck.ai useful - https://uberduck-guide.gitbook.io/uberduck-written-tutorial/extra-stuff/talknet-usage
The method above uses Google Colab, which doesn't require a local GPU for any part of the process. There are ways to train locally (WSL or Ubuntu), which I will explain below.
Creating a voiceset
A voiceset usually consists of a folder of 1-10 second .wav files and a single text file that lists their transcripts, such as -
pokimane.txt
Pokimane_00001.wav|Hi it's Poki, today we're gonna relax and watch some videos.
Pokimane_00002.wav|I don't know, what do you think.
Pokimane_00003.wav|Blah blah blah.
etc.
To create a voiceset, you need transcribed audio of the voice you want to clone. The easiest sources are YouTube videos with YouTube subtitles, or audiobooks with their ebooks.
You can download a YouTube video/audio with a tool such as YT-DLP, and the subs from a site like DownSub.com.
Manually written YouTube subtitles are usually the most accurate transcriptions, but tools like Whisper & WhisperX can also transcribe extremely well.
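As a rough example, a yt-dlp command along these lines should pull the audio as a .wav plus the English subtitles (the URL is a placeholder, and flag names can change between yt-dlp versions) -
yt-dlp -x --audio-format wav --write-subs --sub-langs en "https://www.youtube.com/watch?v=VIDEO_ID"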
I train with TalkNet, but I still use the Voice Cloning App to create voicesets. Once I have the audio/video, I pass it to the VCA along with the transcript and the app does the rest. You should now have a voiceset folder with a bunch of .wav files & a file called metadata.csv. I rename this file to "celebname.txt" for use with TalkNet, but it's the exact same file.
This file will have transcription errors. For the best models you'll want to go through it line by line, listening to each clip and checking that the text matches the audio. You can get away without doing this step & it might still sound okay, but it definitely won't sound as good as it could. Similar to not checking a faceset you've just made: it will still work, but you really should do it.
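If you want a quick sanity check before (not instead of) that manual pass, a small shell loop like this - assuming the example names from above, with pokimane.txt sitting next to the .wav files - will at least flag any transcript lines whose audio file is missing:
cd pokimane
while IFS='|' read -r wav text; do
    [ -f "$wav" ] || echo "missing: $wav"
done < pokimane.txt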
How much data you need depends on the quality of the data and how accurate you want it to sound. 15 minutes might be okay, but I aim for at least an hour, ideally more.
The good news is you can potentially skip this step because voicesets can be shared just like facesets.
Training
You can train a model on Colab or locally with your own hardware. Training works on Ubuntu or Windows WSL (Win 10/11). It's also possible to train on Windows natively, but it can be very painful to set up. WSL is much easier and I've found the performance to be fine. After setup and training, my WSL install takes up around 30GB.
Ubuntu -
These are only rough instructions. There may be mistakes or things you'll have to do differently.
First, install Anaconda. Then -
git clone https://github.com/SortAnon/ControllableTalkNet.git
cd ControllableTalkNet
git clone https://github.com/SortAnon/hifi-gan.git
conda create -n talknet python=3.8
conda activate talknet
pip install tensorflow==2.4.1
Save the list below as req.txt in your ControllableTalkNet directory.
Spoiler: req.txt
numpy==1.21.0
scipy==1.7.0
dash==1.21.0
pytorch-lightning==1.3.8
dash-bootstrap-components==0.13.0
jupyter-dash==0.4.0
psola==0.0.1
wget==3.2
unidecode==1.2.0
pysptk==0.1.18
frozendict==2.0.3
torchaudio==0.8.1
torchtext==0.9.1
torch_stft==0.1.4
kaldiio==2.17.2
pydub==0.25.1
torchmetrics==0.6.0
hydra-core==1.2.0
pyannote.audio==1.1.2
g2p_en==2.1.0
pesq==0.0.2
pystoi==0.3.3
crepe==0.0.12
resampy==0.2.2
ffmpeg-python==0.2.0
tqdm
gdown
editdistance==0.5.3
ipywidgets==7.7.2
torchcrepe==0.0.15
taming-transformers-rom1504==0.0.6
einops==0.3.2
tensorflow-hub==0.12.0
omegaconf==2.2.3
hmmlearn==0.2.6
pip install -r req.txt
conda install -c conda-forge cudatoolkit=11.0 cudnn=8.0
pip install jupyterlab
jupyter-lab
Windows WSL -
These are only rough instructions. There may be mistakes or things you'll have to do differently.
Turn on "Windows subsytem for Linux" in your "Turn Windows features on or off" options.
Make sure virtualization is enabled in your bios.
Restart.
Run powershell as admin
wsl.exe --update
wsl.exe --install -d ubuntu
wsl ~ -u root
git clone https://github.com/SortAnon/ControllableTalkNet.git
cd ControllableTalkNet
git clone https://github.com/SortAnon/hifi-gan.git
wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh
bash Anaconda3-2022.10-Linux-x86_64.sh
conda create -n talknet python=3.8
conda activate talknet
pip install tensorflow==2.4.1
pip install jupyterlab
pip install ffmpeg-python
sudo apt install build-essential
sudo apt install zip unzip
Save the list below as req.txt in your ControllableTalkNet directory.
Spoiler: req.txt
numpy==1.21.0
scipy==1.7.0
dash==1.21.0
pytorch-lightning==1.3.8
dash-bootstrap-components==0.13.0
jupyter-dash==0.4.0
psola==0.0.1
wget==3.2
unidecode==1.2.0
pysptk==0.1.18
frozendict==2.0.3
torchaudio==0.8.1
torchtext==0.9.1
torch_stft==0.1.4
kaldiio==2.17.2
pydub==0.25.1
torchmetrics==0.6.0
hydra-core==1.2.0
pyannote.audio==1.1.2
g2p_en==2.1.0
pesq==0.0.2
pystoi==0.3.3
crepe==0.0.12
resampy==0.2.2
ffmpeg-python==0.2.0
tqdm
gdown
editdistance==0.5.3
ipywidgets==7.7.2
torchcrepe==0.0.15
taming-transformers-rom1504==0.0.6
einops==0.3.2
tensorflow-hub==0.12.0
omegaconf==2.2.3
hmmlearn==0.2.6
pip install -r req.txt
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.c...repo-wsl-ubuntu-12-0-local_12.0.0-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-12-0-local_12.0.0-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-12-0-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda
Exit out and restart WSL
jupyter-lab --allow-root
Colab -
https://colab.research.google.com/g...alkNET-colab/blob/main/TalkNet_Training.ipynb
Make sure to "Copy to Drive" OR "file>Save a copy in Drive".
The steps are all well annotated.
I would advise against using step 12. It just packages up your model, which stops you from being able to train any further. The files it packages up are already accessible in step 11. Just download these four files (the rename is shown after the list) -
g_00000000 > rename this to "hifiganmodel"
TalkNetDurs.nemo
TalkNetPitch.nemo
TalkNetSpect.nemo
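If you've downloaded them to a local folder, the only change needed is the rename (assuming the files are in your current directory) -
mv g_00000000 hifiganmodel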
Synthesis
The fun part. If you have an Nvidia GPU, you have the option to synthesize locally with this app - https://github.com/SortAnon/ControllableTalkNet
If you don't have a GPU, you'll need to use Colab - https://colab.research.google.com/g...ET-colab/blob/main/Controllable_TalkNet.ipynb
How to use your custom model -
Local -
You'll want to put the above model files inside a folder such as "pokimane", and then put that folder in the "models" folder inside the "ControllableTalkNet" directory (create the "models" folder if it doesn't already exist). You can then select "custom model" and enter the name of the folder, or point to a zip on your Google Drive. You can also edit "sortanon_models.json" in "model_lists": delete the existing models & add in your own categories & models. This will let you pick your models from the drop-down menu instead of typing in the name every time.
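As a concrete sketch, assuming the folder names above, that you're one level above the ControllableTalkNet directory, and that the four model files are in your current folder -
mkdir -p ControllableTalkNet/models/pokimane
cp hifiganmodel TalkNetDurs.nemo TalkNetPitch.nemo TalkNetSpect.nemo ControllableTalkNet/models/pokimane/
You would then enter "pokimane" as the custom model name.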
Colab -
You'll want to add the above model files to a zip file such as "pokimane.zip" and upload it to your Google Drive. Make sure the files inside the zip are not in a folder; zip the bare files.
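For example, if the four model files are sitting in a folder called "pokimane", the -j flag drops the folder path so the zip contains the bare files -
zip -j pokimane.zip pokimane/*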
Once uploaded, click "Get a link", set the link to "Anyone with a link" and copy it. You only need the unique part of the link.
For example, with the link below you would need to copy 1Wm-7gqws0B3j8mtSQa77Tdk3RZwHEwT5 and paste it inside the "Custom Model" field in the model selection drop-down.
drive.google.com/file/d/1Wm-7gqws0B3j8mtSQa77Tdk3RZwHEwT5
With TalkNet you also have the ability to use a reference file, meaning you can emulate the natural pacing & emotion of the original speaker instead of getting the more robotic text-to-speech. To do this, just place the audio file in the root ControllableTalkNet folder and select it from the drop-down. You will also need its transcription. I would use a short clip at first to test. See below for an example of using a reference file.
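If your reference recording is long, you can cut a short test clip with ffmpeg first (file names here are just placeholders; -ss is the start time and -t the length in seconds) -
ffmpeg -i reference_full.wav -ss 0 -t 10 reference_clip.wav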
Examples
Gibi ASMR -
https://soundgasm.net/u/deepfakes/Gibi-ASMR-TalkNet-Harry-Potter-test
https://mrdeepfakes.com/video/26073/mommy-gibi-ruins-you-voice-clone