Flash forward a few months from now: you could stream this in real time, analyze and sync with a microphone, and have the AI jam along while we play live music.
Please let me know your estimate of when we could expect something like this; from this project, it looks like it's only a couple of steps away.
I can't wait!
Thanks and all the best!
Amazing project.
PS: on the practical side, to achieve this you would need to:

- listen to the mic and, optionally, other live MIDI inputs
- automatically determine the musical parameters
- generate a bar, then wait for and sync with the beat
- keep listening, and generate each next bar with respect to the ongoing music
- listen for key/chord changes, and harmonize existing patterns and motifs
- predict drops, suspense, and silence (these can also be cued by dedicated signals from a human player)
- play along over live MIDI

Essentially, an AI live band.
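The steps above could be sketched roughly as a bar-synced listen/generate loop. This is only an illustration of the timing logic, not anything from this project: `detect_key`, `generate_bar`, and `jam` are all hypothetical placeholder names, and the analysis/generation bodies are stubs where a real system would call the model.

```python
import time

BPM = 120
BEATS_PER_BAR = 4
SECONDS_PER_BAR = 60.0 / BPM * BEATS_PER_BAR  # 2.0 s per bar at 120 BPM

def detect_key(audio_frames):
    # Placeholder: a real system would run key/chord estimation
    # (e.g. chroma analysis) on the incoming mic audio.
    return "C major"

def generate_bar(key, history):
    # Placeholder: a real system would condition the model on the
    # detected key and the bars heard/generated so far.
    return {"key": key, "index": len(history)}

def jam(num_bars, realtime=False, clock=time.monotonic, sleep=time.sleep):
    """Generate one bar per bar length, synced to a simple beat clock.

    With realtime=True, each bar's playback waits for the next downbeat;
    with realtime=False the loop runs as fast as possible (for testing).
    """
    history = []
    start = clock()
    for i in range(num_bars):
        key = detect_key(audio_frames=None)   # keep listening
        bar = generate_bar(key, history)      # generate the next bar
        history.append(bar)
        if realtime:
            # Wait for the downbeat of the next bar before playing on.
            next_downbeat = start + (i + 1) * SECONDS_PER_BAR
            sleep(max(0.0, next_downbeat - clock()))
    return history

bars = jam(4)
```

The key design point is that generation of bar *n+1* happens while bar *n* is sounding, so the model's latency budget is one bar length (2 seconds at 120 BPM here).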
Thanks for the suggestion! Unfortunately, we don't have a milestone for such a feature at the moment. It is indeed a use case we have in mind, but a (very) ambitious one 🙂
Thanks, @mehdizatar! Happy to hear you have it in mind.
Though it will take effort, I think the main work on your end is to optimize the core and make real-time generation/streaming possible; from there, I think the community, myself included, will contribute the rest :)