
Programs with Baked Events vs Performances #16

Closed
Tiedye opened this issue Jun 3, 2020 · 14 comments
Labels
discussion Discussing a topic

Comments

@Tiedye (Collaborator) commented Jun 3, 2020

We've committed to supporting the following use case:

  1. Bob creates a "program" (analogous to a programming plate)
  2. Bob sends the program to Alice
  3. Alice creates a performance using the program

To make this workflow more user-friendly, Bob should be able to indicate where he expects certain performance events to happen, like a hi-hat opening or closing, a certain fingering on the bass, etc.

Looking at the bass fingering case: currently this is supported by a fret parameter on the BassDropEvent, which allows a program to include melodic bass without requiring performance events inside the program. When a program with BassDropEvents is used to create a new Performance, the playback engine will "bake" these drop events into the performance's list of events during playback. These baked events are marked as such by setting their bakeType property to EventBakeType::AUTO, and can then be overridden by other PerformanceDropEvents that Alice creates.
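
A minimal TypeScript sketch of the shapes being discussed follows; only fret, bakeType, and the EventBakeType values come from this thread, everything else is an illustrative assumption:

```ts
// Hypothetical shapes for illustration; not the actual schema.
enum EventBakeType {
  AUTO,          // baked from the program by the playback engine
  MODIFIED_AUTO, // baked, then edited by the performer (see below)
}

interface BassDropEvent {
  tick: number;  // position on the programming wheel, in ticks
  fret?: number; // the performance hint carried by the program
}

interface PerformanceDropEvent {
  time: number;             // performance events are time-based
  fret: number;
  bakeType?: EventBakeType; // unset for events Alice records herself
}
```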

Note that this process requires information that is not implicit in the layout of the schema; it requires logic that is external to it. Also, if other performance suggestions are to be included in a program and then "baked" into a performance on load, each type of special data would require its own special handling, which would lead to a significant rise in complexity as the number of different types of performance "suggestions" increased.

Alternatively, when Bob is creating a program to send to Alice, he can instead create a proper Performance that includes all of the state changes he expects to happen inside the performance's list of events. Then, when Alice receives the performance with the preexisting events, there is no need for a baking process to properly initialize the performance before she can begin working on it.

As far as I am aware, these are the reasons cited for the event baking approach:

  • Implementing a proper performance editor is hard: since performance events are based on time rather than ticks, it would be hard to make a program editor that works nicely
    I don't think making an intuitive performance editor is significantly more challenging than building a program editor; I'll look into creating a proof of concept to demonstrate (changing the time of performance events based on changes in tick rate is easy; see the sketch after this list)
  • It is not simple to update a program after it has been used in a performance
    Since the baking approach is roughly equivalent to having a pre-programmed performance, there is no advantage to it. There may actually be an advantage to the pre-programmed performance, as it would be possible to apply diffs to properly formatted lists of pre-programmed events (this is just an idea, not important to this proposal)
  • If there are any more feel free to mention them
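
To illustrate the claim in the first point above that retiming performance events under tick-rate changes is easy, here is a rough TypeScript sketch; the piecewise-constant tempo map is my assumption about how rate changes would be represented:

```ts
// Assumed representation: tempo changes stored as a piecewise-constant
// tempo map, with the first segment starting at tick 0.
interface TempoChange {
  tick: number;           // wheel position where this rate takes effect
  secondsPerTick: number; // playback rate for this segment
}

// Convert a wheel position (in ticks) to absolute performance time by
// summing the duration of each constant-rate segment before it. After
// a rate change, rerunning this over all events retimes them.
function tickToTime(tick: number, tempoMap: TempoChange[]): number {
  let time = 0;
  for (let i = 0; i < tempoMap.length; i++) {
    const start = tempoMap[i].tick;
    if (tick <= start) break;
    const end = i + 1 < tempoMap.length ? Math.min(tempoMap[i + 1].tick, tick) : tick;
    time += (end - start) * tempoMap[i].secondsPerTick;
  }
  return time;
}
```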

The reasons against the event baking approach:

  • Complexity: it is not easily extendable, as for full "prebake" compatibility the performance events effectively have to be duplicated inside the DropEvents
  • Moves away from accurately representing the capabilities of the machine (a fret cannot be encoded on a programming plate)
  • Behaviour is no longer contained entirely within the schema; performance editors/recorders will each have to define their own well-defined behaviour for how to bake events and handle baked events

Thus, removing the entire idea of baking from the schema will simplify the current schema (and greatly simplify the future schema and implementations), simplify the implementation of playback (and probably editing) tools for performances and programs, and allow the schema to continue to be as accurate a virtual representation of the MMX as possible.

Edit: previously this assumed the baking happens on program load; it is actually planned to happen during playback, and more kinds of baked events are not currently expected.

ollpu: To clarify, baking wouldn't happen when "Alice" loads the program; it would happen while recording the performance, as the programming wheel turns.
And I don't think we want to put any more performance hints into the program. Bass is an exception where it is very convenient to compose a loop with an actual melody.

@micahswitzer (Collaborator) commented:

I agree entirely.

I also think that if we want to move the fret property outside of the bass drop event, then we can simply add a fret pressed event to the set of timed events (performance events). I think this is actually the best solution as it keeps the schema representing real life (the bass marbles are dropped by the program, and the frets pressed by the performer), and it would not be that difficult at all to implement.
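
As a sketch of what such a timed fret event might look like (the name and fields here are hypothetical, not an agreed part of the schema):

```ts
// Hypothetical timed performance event: the performer presses a fret,
// which stays pressed until the next fret event on the same string.
interface FretPressedEvent {
  time: number;   // timed, like other performance events
  string: number; // which bass string is being fretted
  fret: number;   // 0 could represent the open string
}
```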

@Tiedye (Collaborator, Author) commented Jun 3, 2020

@micahswitzer That's exactly what I was thinking; we could possibly treat the finger position as a kind of second capo.

@ollpu (Contributor) commented Jun 3, 2020

I also initially supported not having fretting in the program to be more true to the MMX, however:

I think the Alice-Bob protocol analogy is a little misleading here; the design applies just as well when there is only one user. Anyway, here is what I see as a more accurate description of the workflow we want to support, which, at least at first, is the main workflow overall:

  1. The program (as you said, analogous to the programming plate) is first programmed by the user. This is the main musical loop.
    • While programming, all the controls that can be recorded into the performance (such as muting, tempo, etc.) are available to the user, but they are not recorded onto a timeline. This simply allows the user to gauge how different states of the machine sound while still composing the music. More concretely, you might want to mute some channels, try listening at another tempo, or change the state of the hi-hat machine while composing.
  2. Once the program is more or less finished, the user can start recording a performance of it. The performance can include many revolutions of the programming wheel, changing tempo, muting/unmuting instruments, etc. throughout the recording. The user can pause the recording to change many parameters at once. It is also possible to change the bass capos or override fingering.
  3. Once recording the performance is done, it can potentially be edited later to some extent. This is a future feature.
    • This would allow for more sophisticated rewrites of the bassline between revolutions as well, since that's not very convenient to do during recording. A bass note that has a fretting different from the original program would then have EventBakeType::MODIFIED_AUTO. This additional flag isn't strictly necessary but makes things a little clearer.

Since phase 1 is largely about composing, it is very convenient to be able to hear the actual bassline along with the rest of the song. Having the user change capos on the fly (or any other way of dialing the notes in) for each note of the bassline whilst just wanting to hear the composition in real time is not reasonable. Fretting is strictly tied to notes on the programming wheel, and we don't want the program to include other "performance hints". They belong in the performance.
In addition, as I understand it, our initial MVP only really includes phase 1. A lot would be missing if you couldn't compose a bass melody.

Here baking program events into the performance is done to simplify playback (and also editing) of a performance. If the events are not baked, the original program always has to be read along with the performance, and deducing what a bass note does requires looking into both event timelines.
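
A sketch of what "looking into both event timelines" means in code, reusing the illustrative shapes from earlier (none of these names are the actual schema):

```ts
// Without baking, resolving what a bass drop plays means reading the
// program (for the drop and its tick) and the performance (for any
// fret override active at that moment).
function resolveFret(
  drop: BassDropEvent,
  performance: PerformanceDropEvent[], // assumed sorted by time
  tickToTime: (tick: number) => number
): number | undefined {
  const dropTime = tickToTime(drop.tick);
  // The latest performance fret event at or before the drop wins.
  const override = performance.filter(e => e.time <= dropTime).pop();
  return override !== undefined ? override.fret : drop.fret;
}
```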

@Tiedye (Collaborator, Author) commented Jun 3, 2020

I think the assumption that editing a performance is a future feature is a poor one; there is no reason not to develop the initial composing application to support composing a program and a performance simultaneously. I alluded to this in the issue (I'll see if I can find time to make a prototype in the next couple of days). Thus the idea that a more accurate schema would put obstacles in the way of a user composing a bassline becomes inapplicable. Physically, fretting is not strictly tied to notes on the programming wheel, so that should not be a requirement of the schema when an easy alternative is available.

Regarding the timelines: a performance really can't be considered without a program, so I don't quite follow this point.

The reason I chose this use case is that it best supports the idea of baking; if someone is composing everything locally, then there is no reason to have the frets indicated on the drops, as the editor will have full access to the performance event list.

@micahswitzer (Collaborator) commented Jun 3, 2020

Once again, @Tiedye made my point exactly. Here's me saying pretty much the same thing:

If the events are not baked, the original program always has to be read along with the performance

I think that this is not an issue at all. At some point we need to decide what we care about more: the ease of software development, or the ease of using the actual software. Sure, baking in every program wheel event might make it easier to write the playback code, but what it sacrifices is the ability to modify the program separately from the performance. Now, like you already said, editing performances is not a part of phase 1, and I'm not saying it should be. I do, however, believe that if we're careful with how we write the software and design the schema, we should be able to build a system that can support new features (such as performance editing) without requiring users to "redo" their performances.

@ollpu (Contributor) commented Jun 4, 2020

The reason we earlier decided to sacrifice the ability to edit programs after recording performances is the multitude of issues it brings up. If the sacrifice is assumed, baking does make more sense, but I agree that the benefit isn't enough to warrant the sacrifice if it turns out combined program-performance editing is reasonable.

As an example, here is my "curveball", which has to do with how tempo is handled. Either:

  • The musical time of the performance and the time of the programming wheel are linked. In this case, what happens when you want to stop the programming wheel and e.g. play a solo? Time isn't running so no events can be dispatched. Solutions to this get messy.
  • The performance time and programming wheel time are not linked. Now maintaining constraints of the programming wheel (each revolution playing the same notes) gets hard since tempo can vary in all sorts of different ways.

It would be interesting to see how you solve these sorts of issues. If you think you can, leaving baking out of the equation might make more sense after all.

@ollpu (Contributor) commented Jun 4, 2020

On the other hand, leaving baking in doesn't totally close the door for program editing. You can always replace the baked notes in the performance once you're done. Depending on how you implement the editor, those baked notes would practically already be visible in the editor.

@iansedano (Contributor) commented:

Thanks for the great discussion, all; it's a very interesting read.

I personally supported baking only insofar as it moved us closer to representing a full performance on the marble machine; previously, I felt that the bass performance aspect of the machine was not really dealt with.

[image: bass]

I believe that the overall goal of the project is to allow people to compose music that Martin could conceivably play live. Martin would also have to be convinced of the quality of the composition. As an ex-musician and composer, I cannot emphasize enough the role of the bass in an overall composition.

I know that we have now moved well beyond the discussion of whether to only program the wheel, and are starting to think seriously about the performance. I welcome this as an essential discussion! I am not a particularly big fan of the "baking" solution either, but didn't see much alternative at the time, and I felt that energy on the development of the schema was quickly running out!

With all that said, I do also believe it would be beneficial for now to aim for a simplified version of the app, with just the programming aspect of the marble machine and none of the performance stuff (but keeping the door wide open for performance of course).

So am I right in thinking that the suggestion is to remove baking and extend a "performance events" class? Is someone able to make a pull request for this so we can see what it would look like?


Regarding timing:
I also agree that the timing solutions so far, with the programming wheel operating on ticks and the performance operating on absolute time, seem odd. Would it make more sense to have a global time operating on pulses and ticks and derive the time for the marble machine from that?


Would you guys agree that the bass performance aspect and the idea of having the marble machine "stopped dead" are the most complicated features? Any ideas for how we can leave those features for last without totally caging ourselves out of them?

@mozi-h added the discussion label Jun 4, 2020
@mozi-h pinned this issue Jun 4, 2020
@FelixWohlfrom (Contributor) commented:

Hm, maybe I can also add my two cents.

My proposal would be to split programming wheel and performance.
The reason: The programming wheel has fixed parameters (number of ticks available, repetition every x ticks, all ticks with the same speed) whereas performance events might happen any time and e.g. also change the speed of the programming wheel.

So on UI level I would suggest two modes:

  1. Programming wheel mode - You see only the programming wheel with only the instrument channels for only a single rotation of the wheel.
    Playback is now tick-based.
    You can select any instrument, mute and unmute channels however you want during playback, or manually play an instrument.
    BUT: these events are only for playback and are not stored, until you switch to:
  2. Performance mode - The programming wheel data is now locked, and any events that you perform will be stored on a time basis.
    Instead of having only a single rotation of the wheel and repeating all changes automatically after a full rotation, it is more like an endless band of multiple wheel rotations below each other. Depending on the machine configuration, in one repetition of the wheel some channels might be muted, and in the middle of the playback the timing might also change. All these events affect only the playback of the programmed melody, never the program itself.

As @micahswitzer mentioned, we won't be able to update the program once the time-based events have also been baked in. I personally would prefer a solution where you could, e.g., download the existing marble machine performance and then add a fancy bass solo to it (but please don't ask me for the solo once the implementation is done ;D).

An additional benefit of separately storing events and the programming wheel would simply be storage efficiency. If we split the programming wheel from other events, we need to store the programming wheel data only for one full rotation of the wheel. If we store the full performance instead, we need to store multiple rotations, depending on the length of the performance, since during each rotation we could have different channels muted, different notes played manually, etc.

A big advantage of the split approach would also be that we can continue with the existing programming wheel implementation and "simply" extend it with the performance information.

Regarding the global timing mentioned by @iansedano: this makes sense during playback in performance mode. But I would prefer to calculate this information at runtime (e.g. while switching from mode 1 to mode 2) and not store it in the exported files. So we should either create different schemas for export/import and "internal" usage, or keep the global timing as an implementation detail. Different timing models for different purposes may seem weird, but they are needed to represent the correct physics of the represented element.

TL;DR:
Split between the tick-based programming wheel and the time-based performance. Recalculate programming wheel events at runtime and merge them with the stored performance events.
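
A rough TypeScript sketch of that recalculate-and-merge step; the names and the unrolling strategy are assumptions, not settled design:

```ts
// Assumed: the program stores one wheel rotation in ticks; the
// performance stores timed events plus how many rotations were played.
interface TimedEvent { time: number; kind: string; }
interface ProgramEvent { tick: number; kind: string; }

// Unroll the wheel: the same program event repeats every rotation.
function expandProgram(
  program: ProgramEvent[],
  rotations: number,
  ticksPerRotation: number,
  tickToTime: (tick: number) => number
): TimedEvent[] {
  const out: TimedEvent[] = [];
  for (let r = 0; r < rotations; r++) {
    for (const e of program) {
      out.push({ time: tickToTime(e.tick + r * ticksPerRotation), kind: e.kind });
    }
  }
  return out;
}

// Merge the recalculated wheel events with the stored performance
// events into a single playback timeline.
function mergeForPlayback(wheel: TimedEvent[], performance: TimedEvent[]): TimedEvent[] {
  return [...wheel, ...performance].sort((a, b) => a.time - b.time);
}
```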

@FelixWohlfrom (Contributor) commented:

Addition from @ollpu in discord:
Maybe we can also add an initial configuration for the marble machine, that sets the muting state and bass/vibraphone settings and can be used while programming the machine in mode 1.
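
A possible shape for such an initial configuration (purely illustrative; none of these field names are settled):

```ts
// Illustrative only: an initial machine state applied when entering
// mode 1, instead of recording these settings onto a timeline.
interface InitialConfiguration {
  mutedChannels: string[];           // e.g. ["snare", "crash"]
  bassCapos: number[];               // capo position per bass string
  vibraphone: { dampened: boolean }; // placeholder vibraphone setting
  hihatOpen: boolean;
}
```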

@Zakgriffin (Contributor) commented:

I don't think this idea of saving performance events in "time" units while saving program events in "tick" units is the best idea. I really think everything should be stored in terms of ticks, since we shouldn't have to recalculate all events after a bpm change; we should just be reading through the saved events at different rates (with ticks as the only unit). This also works much, much better with Tone.
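
A minimal tick-only playback loop to illustrate the idea (a generic sketch, not actual Tone integration code):

```ts
// All events are stored in ticks; a bpm change only alters how fast
// the tick counter advances, never the stored event data.
interface TickEvent { tick: number; fire: () => void; }

function makeTickPlayer(events: TickEvent[]) {
  const sorted = [...events].sort((a, b) => a.tick - b.tick);
  let next = 0;     // index of the next unfired event
  let position = 0; // playhead position, in ticks
  return {
    // Driven by the audio clock; deltaTicks shrinks or grows with bpm.
    advance(deltaTicks: number) {
      position += deltaTicks;
      while (next < sorted.length && sorted[next].tick <= position) {
        sorted[next].fire();
        next++;
      }
    },
  };
}
```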

I also agree repeated bass fretting should be saved in the DropEvent. I think the place to draw the line between what data should or shouldn't be stored in program drop events comes down to: "should the event be repeated for each programming wheel cycle?" and "does the event pertain only to that specific drop (no lasting state change)?" This can be generalized not only to repeated bass fretting, but, for example, to repeated open hi-hat strikes. These events could be edited with a long press or right click on the programming pin.

Handling the "complete stop" issue i think is the only reason we're considuring a tick and time combo approach, but for that case, i think it would make the most sense if we just spawn a seperate ticked clock. That way we can consistantly say "slow down bpm to 0 by x tick, at x tick start solo event stream" and at the end of the solo event stream, essentially "return" or "pop" back to the previous stream.
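
A sketch of that push/pop idea (the stack-of-clocks structure is my reading of the proposal, not a settled design):

```ts
// Each event stream advances on its own tick clock. Stopping the wheel
// pushes a solo stream; popping it returns to the wheel's stream.
interface EventStream {
  advance(deltaTicks: number): void; // move this stream's playhead
}

class ClockStack {
  private stack: EventStream[] = [];

  push(stream: EventStream) { this.stack.push(stream); } // start a solo
  pop() { this.stack.pop(); }                            // "return" to the wheel

  // Only the topmost stream receives ticks, so the wheel underneath is
  // effectively stopped dead until the solo stream is popped.
  tick(deltaTicks: number) {
    const top = this.stack[this.stack.length - 1];
    if (top) top.advance(deltaTicks);
  }
}
```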

@FelixWohlfrom (Contributor) commented:

Since there hasn't really been any progress recently, I took some time to update the schema to use ticks and created a PR.
I fully agree with @Zakgriffin: having only ticks makes life a lot easier, and as long as we split the performance and the programming wheel, we should be able to handle all use cases with his approach.

@Tiedye (Collaborator, Author) commented Jul 21, 2020

I stand by my previous statements regarding not including superfluous data in DropEvents. The tick vs. time discussion is somewhat outside the scope of this issue, but it is my opinion that time for performance events is easier to implement, as using ticks for performance events creates special cases that need to be handled when the programming wheel stops.

@FelixWohlfrom (Contributor) commented:

Yes, you are right; the original discussion was about whether we should store the final performance with "merged" information from the programming wheel and the performance data. I would still separate them, to keep the possibility of later changing the "programmed" events independently of the performance events.

Having ticks for both will make it a lot easier to keep both kinds of events (drop + performance) in sync and, as far as I understood, would also make the implementation with our audio framework a lot easier. Special handling of the programming wheel running vs. stopped is needed either way, so I would not see this as an issue of same vs. different timing systems.

@mozi-h closed this as completed in e8088fd Sep 23, 2020