PyTorch with Python's wave module is definitely a good approach. Here are the main architectures you'd want to consider:
For learning improvisation patterns:

RNNs/LSTMs: Good for learning melodic sequences and phrase structures (a minimal PyTorch sketch follows this list)
Transformers: Excellent for capturing long-range musical dependencies
VAEs (Variational Autoencoders): Great for learning a "style space" you can sample from
GANs: Can generate very realistic audio but trickier to train
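To make the RNN/LSTM option concrete, here's a minimal sketch of a next-note predictor. Everything in it is an assumption for illustration: the NoteLSTM name, the 128-token vocabulary (one token per MIDI pitch), and the layer sizes.

```python
import torch
import torch.nn as nn

class NoteLSTM(nn.Module):
    """Predicts the next note token given the notes played so far."""
    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # one token per MIDI pitch
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)      # logits over the next note

    def forward(self, notes, hidden=None):
        x = self.embed(notes)                # (batch, seq_len, embed_dim)
        out, hidden = self.lstm(x, hidden)   # (batch, seq_len, hidden_dim)
        return self.head(out), hidden        # (batch, seq_len, vocab_size)
```

At generation time you feed the model its own sampled output one note at a time, carrying the hidden state forward, which is how it produces a continuation in your style.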

For audio generation:

WaveNet/WaveRNN: Generate raw waveforms directly
Mel-spectrogram + vocoder: Generate spectrograms, then convert to audio (feature-extraction sketch after this list)
MIDI-based: Learn note sequences, then synthesize
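For the mel-spectrogram route, torchaudio (a separate install, but part of the PyTorch ecosystem) handles the feature extraction in a few lines. The filename and STFT settings below are placeholders, not recommendations:

```python
import torch
import torchaudio

# Hypothetical file name; any WAV of one of your improvisations works here.
waveform, sample_rate = torchaudio.load("improv_take_01.wav")

to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=1024,
    hop_length=256,
    n_mels=80,
)
mel = to_mel(waveform)              # (channels, n_mels, time_frames)
log_mel = torch.log(mel + 1e-6)     # log-compress to a roughly perceptual scale
```

A model trained on these log-mel frames then needs a vocoder (Griffin-Lim, which torchaudio also provides, or a neural vocoder) to turn generated spectrograms back into a waveform.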

A hybrid approach might work well for your use case:

Convert your improvisations to a more structured representation (MIDI, note sequences, or mel-spectrograms)
Train a generative model on these representations (see the training-loop sketch after this list)
Use the wave module to synthesize the output back to audio
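Here's a rough sketch of the training step, assuming you've already tokenized your recordings into note sequences and are using something like the NoteLSTM above; `sequences` is a hypothetical LongTensor of shape (num_phrases, seq_len):

```python
import torch
import torch.nn as nn

def train(model, sequences, epochs=50, lr=1e-3):
    """Teacher-forced next-note prediction over your tokenized improvisations."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        inputs, targets = sequences[:, :-1], sequences[:, 1:]   # predict the next token
        logits, _ = model(inputs)
        loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In practice you'd batch the phrases with a DataLoader instead of pushing the whole tensor through at once, but the shape of the loop is the same.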

The wave module is perfect for the I/O part - reading your recorded improvisations and writing the generated audio. PyTorch handles the learning and generation logic.
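As a sketch of that I/O layer (assuming 16-bit mono recordings; NumPy handles the int16/float conversion):

```python
import wave
import numpy as np

def read_wav(path):
    """Load a 16-bit mono WAV into a float32 array scaled to [-1, 1]."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        frames = wf.readframes(wf.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16).astype(np.float32) / 32768.0
    return samples, rate

def write_wav(path, samples, rate):
    """Write a float array in [-1, 1] back out as 16-bit mono PCM."""
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)          # 2 bytes per sample = 16-bit
        wf.setframerate(rate)
        wf.writeframes(pcm.tobytes())
```

torch.from_numpy on the returned array gets you straight into PyTorch, and calling .numpy() on your model's output tensor feeds write_wav with whatever it generates.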
Would you like me to show you a basic framework for this? I could create an example that loads audio with wave, processes it for training, and sets up a simple generative model architecture.
