Ornette is an OSC-based command-line application for continuous music generation with ML models. You can control ML-based music generation and playback, manually or programmatically, via text commands, OSC messages, or command-line arguments, and you can try out different models through the same interface.
The preview above shows Ornette running the PerformanceRNN model in the left panel, sending data to the SuperCollider instance on the right, which then generates sound (except when the screen is recorded as an SVG file 🤷).
Ornette is an interactive, container-based host for music-generation machine learning models and a MIDI data workstation. Audio playback is delegated entirely to external systems, currently SuperCollider with the SuperDirt quark, while Docker handles environment management for each model.
Ornette allows you to:
- Load MIDI files as prompts.
To run Ornette, you will need:
- Python > 3.0
- Pip
- Docker
- SuperCollider
- SuperDirt (a SuperCollider Quark)
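As a quick sanity check before installing, the sketch below (my own suggestion, not part of Ornette) verifies the Python version and looks for the required command-line tools on your PATH:

```python
import shutil
import sys

# The Ornette host runs on Python 3.
assert sys.version_info >= (3, 0), "Python 3 or newer is required"

# Report whether each required CLI tool is installed and on PATH.
# sclang is SuperCollider's interpreter; SuperDirt is installed inside it.
for tool in ("pip", "docker", "sclang"):
    status = "found" if shutil.which(tool) else "MISSING"
    print(f"{tool}: {status}")
```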
- Clone the repo: `git clone git://github.com/ghalestrilo/ornette.git`
- Run `pip install -r requirements.txt`
- Run the host with `python .`. You'll be prompted to choose an RNN model and bundle.
- Optionally, pass `--model=<desired model> --checkpoint=<desired checkpoint>` to skip these prompts.
- To hear playback, make sure SuperCollider is running SuperDirt (either `sclang` or `scide` is fine).
Use the following command to start running MelodyRNN:

```shell
python . --model=melody_rnn --checkpoint=basic_rnn
```
Once started, you can issue commands to the server, such as:

- `start` to begin playing
- `pause` to stop playing
- `reset` to clear the current track
- `save <filename>` to save the current track to a MIDI file in the `output` folder
- `generate 1 bars` to generate a single bar ("1" can be replaced by any positive integer)
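These text commands can also be sent remotely over OSC. As a rough sketch of what an OSC client could look like, here is a stdlib-only Python snippet that encodes a minimal OSC message by hand; the address `/start` and port `57120` are assumptions, so check your Ornette configuration for the actual values:

```python
import socket


def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad a byte string to a multiple of 4, as OSC requires."""
    return b + b"\x00" * (4 - len(b) % 4)


def osc_message(address: str, *args: str) -> bytes:
    """Encode a minimal OSC message with string arguments only."""
    msg = osc_pad(address.encode())                      # address pattern
    msg += osc_pad(("," + "s" * len(args)).encode())     # type tag string
    for a in args:
        msg += osc_pad(a.encode())                       # string arguments
    return msg


# Hypothetical address and port; not confirmed against the Ornette source.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/start"), ("127.0.0.1", 57120))
sock.close()
```

A dedicated library such as python-osc would do the same encoding for you; the manual version is shown only to make the wire format concrete.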
Modules are listed in the `modules` folder, and their bundles can be found in the `.ornette.yml` file. Most of the models were developed by Magenta Research. These are the currently implemented modules:
| modelname | checkpoint |
| --- | --- |
| melody_rnn | basic_rnn |
| melody_rnn | mono_rnn |
| melody_rnn | lookback_rnn |
| melody_rnn | attention_rnn |
| performance_rnn | polyphony_rnn |
| performance_rnn | performance |
| performance_rnn | performance_with_dynamics |
| performance_rnn | performance_with_dynamics_and_modulo_encoding |
| performance_rnn | pitch_conditioned_performance_with_dynamics |
| performance_rnn | multiconditioned_performance_with_dynamics |
| pianoroll_rnn_nade | rnn-nade_attn |
| polyphony_rnn | polyphony_rnn |
- Fix generation bugs with current models
- Integrate new models
- (maybe) Core engine rewrite in Elixir with ratatouille as a front-end framework
- Improve UI
- Improve and extend controls
- Help tooltips
- Improved error reporting
- Model selection menu
- Refactor and improve module API
- Decouple modules from server using websockets for real-time data transfer
- Extend `.ornette.yml` functionality
- Send/receive MIDI Clock
- Implement/use MIDI backend alternatives to SuperCollider/SuperDirt